---
abstract: 'We report on the infrared limit of the quenched lattice Landau gauge gluon propagator computed from large asymmetric lattices. In particular, the compatibility of the pure power law infrared solution $( q^2 )^{2\kappa}$ of the Dyson-Schwinger equations is investigated and the exponent $\kappa$ is measured. Some results for the ghost propagator and for the running coupling constant will also be shown.'
author:
- 'P. J. Silva'
- 'O. Oliveira'
title: Studying the infrared behaviour of gluon and ghost propagators using large asymmetric lattices
---
Centro de Física Computacional, Departamento de Física, Universidade de Coimbra, P-3004-516 Coimbra, Portugal
Despite the success of Quantum Chromodynamics (QCD) as *the* theory of the strong interaction, a full understanding of the confinement mechanism is still missing. One line of research, very active in recent years, is the study of the QCD propagators at low momenta. Indeed, some works (for details, see [@AlkLvS01] and references therein) relate the infrared behaviour of the gluon and ghost propagators in the Landau gauge to gluon confinement. In particular, the Zwanziger horizon condition implies a vanishing zero-momentum gluon propagator $D(q^2)$, and the Kugo-Ojima confinement mechanism requires an infinite zero-momentum ghost propagator $G(q^2)$.
An investigation of the infrared behaviour of the gluon and ghost propagators must be carried out in a non-perturbative framework. At present, two first-principles approaches are available for this task, namely Dyson-Schwinger equations (DSE) and lattice QCD methods. Given the different nature of these approaches, a comparison between their results is necessary.
A solution of the DSE [@lerche] predicting pure power laws for the gluon and ghost dressing functions, $$Z_{gluon}(q^2)\sim(q^2)^{2\kappa}\,,\,Z_{ghost}(q^2)\sim(q^2)^{-\kappa},$$ with $\kappa\sim0.595$, has been used extensively in subsequent works (see [@fischer06] for a recent review). As shown in figure 1 of [@brasil], these power laws are only valid for very low momenta, $q < 200$ MeV (see also [@fispaw]). Testing this solution of the DSE with lattice QCD on a symmetric lattice would require a lattice volume much larger than in a typical present-day simulation (see, for example, [@sternbeck]).
Large asymmetric lattices, of the form $L_s^3 \times L_t$ with $L_t \gg L_s$, offer the possibility of testing these power laws on the lattice. In this paper we briefly report on our results [@sardenha; @dublin; @finvol; @nossoprd; @madrid; @brasil; @tucson], obtained from large asymmetric lattices with $L_s=8,10,\ldots,18$ and $L_t=256$, for the infrared behaviour of the gluon and ghost propagators and for the strong running coupling defined from these propagators. Despite the finite volume effects caused by the small spatial extension, the large temporal size of these lattices allows access to momenta as low as 48 MeV.
Concerning the gluon propagator, our results [@nossoprd] show that the propagator depends smoothly on the spatial volume. Indeed, for the smallest momenta, the bare gluon propagator decreases with the lattice volume, while it increases for higher momenta. The values of the infrared exponent extracted from our lattices increase with the lattice volume. Although almost all values of $\kappa$ are below 0.5 (see table 1 in [@nossoprd]), extrapolating the $\kappa$ values to infinite volume gives values above 0.5, with a weighted mean of the various estimates giving $\bar{\kappa}_{\infty}= 0.5246(46)$.
![On the left, the gluon propagator for $16^3\times 128$ and $16^3\times 256$ lattices, considering only pure temporal momenta. Note the logarithmic scale in the vertical axis. On the right, the gluon propagator for all lattices $L_{s}^{3}\times256$. For comparison, we also show the $16^3\times 48$ and $32^3\times 64$ propagators computed in [@lein99]. []{data-label="asygluon"}](raw_Ls_16.eps){width="6.5cm"}
![On the left, the gluon propagator for $16^3\times 128$ and $16^3\times 256$ lattices, considering only pure temporal momenta. Note the logarithmic scale in the vertical axis. On the right, the gluon propagator for all lattices $L_{s}^{3}\times256$. For comparison, we also show the $16^3\times 48$ and $32^3\times 64$ propagators computed in [@lein99]. []{data-label="asygluon"}](prop_all.V_Lein.eps){width="6.5cm"}
Considering the gluon propagator as a function of the spatial volume, we can also extrapolate it to infinite volume and fit the resulting propagator to a pure power law. In this way, we obtain values of $\kappa \in [0.49,0.53]$. Note that the lattice data favours values in the upper part of this interval.
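As an illustration of this kind of power-law fit (a minimal sketch, not our analysis code), the snippet below fits $D(q^2)= z\,(q^2)^{2\kappa-1}$, which corresponds to $Z_{gluon}(q^2)=q^2 D(q^2)\sim(q^2)^{2\kappa}$, to hypothetical infrared data; the momentum and propagator arrays are placeholders, not our data sets.

```python
# Minimal sketch: fit a pure power law D(q^2) = z (q^2)^(2*kappa - 1) to
# infrared gluon propagator data. The arrays below are hypothetical placeholders.
import numpy as np
from scipy.optimize import curve_fit

q2 = np.array([0.0023, 0.0052, 0.0092, 0.0144, 0.0210])   # q^2 in GeV^2 (illustrative)
D = np.array([47.1, 48.6, 49.7, 50.6, 51.4])              # bare propagator (illustrative)
D_err = 0.05 * D                                          # illustrative statistical errors

def power_law(q2, z, kappa):
    return z * q2 ** (2.0 * kappa - 1.0)

popt, pcov = curve_fit(power_law, q2, D, p0=(50.0, 0.5), sigma=D_err, absolute_sigma=True)
z_fit, kappa_fit = popt
print(f"kappa = {kappa_fit:.3f} +/- {np.sqrt(pcov[1, 1]):.3f}")
```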
The reader should also be aware that fits to our data considering higher momenta and other model functions give higher values of $\kappa$ [@brasil; @poster].
As in other studies, our gluon data can be used to verify positivity violation for the gluon propagator [@tucson; @poster].
We have also computed the ghost propagator and the strong coupling constant $\alpha_S(q^2)$ defined from these propagators, for our smallest lattices [@madrid]. Our lattice data for these quantities also show a sizeable dependence on the spatial volume of the lattices involved in our calculations. We have also found visible Gribov copy effects in the ghost propagator as well as in the strong coupling constant.
![ On the left, the bare ghost dressing function in the infrared region computed from a plane wave source. On the right, the strong coupling constant. Here, we only consider pure temporal momenta. \[ghostalpha\]](cucc.eps){width="6.5cm"}
![ On the left, the bare ghost dressing function in the infrared region computed from a plane wave source. On the right, the strong coupling constant. Here, we only consider pure temporal momenta. \[ghostalpha\]](alpha.cucc.eps){width="6.5cm"}
Concerning the infrared behaviour of the ghost propagator, we were unable to extract an infrared exponent from our results. Possible reasons for this negative result are the finite volume effects associated with the small spatial volume of the lattices involved in the computation, or the lack of lattice data in the infrared region (recall that the DSE ghost power law is only valid well below 200 MeV).
In the infrared region, $\alpha_S(q^2)$ decreases for the smallest momenta, in apparent contradiction with the continuum DSE prediction of an infrared fixed point, but in agreement with other lattice studies [@sternbeck] and with the solution of the DSE on a torus [@dsetorus]. However, the reader should be aware that $\alpha_S(q^2)$, for the smallest momenta, seems to increase with the volume.
In the near future, we will improve the statistics for our larger lattices and the extrapolations to the infinite volume limit. We also plan to perform simulations with larger lattices.
This work was supported by FCT via grant SFRH/BD/10740/2002, and project POCI/FP/63436/2005.
[9]{}
R. Alkofer, L. von Smekal, *Phys. Rept.* **353** (2001) 281 \[hep-ph/0007355\].
C. Lerche, L. von Smekal, *Phys. Rev.* **D65** (2002) 125006 \[hep-ph/0202194\].
C. S. Fischer, *J. Phys.* **G32** (2006) R253-R291 \[hep-ph/0605173\].
C. S. Fischer, J. M. Pawlowski, hep-th/0609009.
A. Sternbeck, E.-M. Ilgenfritz, M. Müller-Preussker, A. Schiller, I. L. Bogolubsky, PoS(LAT2006)076 \[hep-lat/0610053\].
O. Oliveira, P. J. Silva, *AIP Conf. Proc.* **756** (2005) 290 \[hep-lat/0410048\].
P. J. Silva, O. Oliveira, PoS(LAT2005)286 \[hep-lat/0509034\].
O. Oliveira, P. J. Silva, PoS(LAT2005)287 \[hep-lat/0509037\].
P. J. Silva, O. Oliveira, *Phys. Rev.* **D74** (2006) 034513 \[hep-lat/0511043\].
O. Oliveira, P. J. Silva, hep-lat/0609027.
O. Oliveira, P. J. Silva, hep-lat/0609036, to appear in Brazilian Journal of Physics.
P. J. Silva, O. Oliveira, PoS(LAT2006)075 \[hep-lat/0609069\].
P. J. Silva, O. Oliveira, “Fitting the lattice gluon propagator and the question of positivity violation”, these proceedings.
D. B. Leinweber, J. I. Skullerud, A. G. Williams, C. Parrinello, *Phys. Rev.* **D60** (1999) 094507; *Phys. Rev.* **D61** (2000) 079901 \[hep-lat/9811027\].
C. S. Fischer, B. Gruter, R. Alkofer, *Annals Phys.* **321** (2006) 1918 \[hep-ph/0506053\].
---
author:
- 'A. Miroshnikov, K. Kotsiopoulos, E. M. Conlon'
title:
- 'Asymptotic properties and approximation of Bayesian logspline density estimators for communication-free parallel computing methods'
---
**Keywords:** logspline density estimation, asymptotic properties, error analysis, parallel algorithms.
**Mathematics Subject Classification (2000):** 62G07, 62G20, 68W10.
Introduction {#intro}
============
Recent advances in data science and big data research have brought challenges in analyzing large data sets in full. These massive data sets may be too large to read into a computer’s memory in full, and they may be distributed across different machines. In addition, processing them can take a prohibitively long time. To alleviate these difficulties, many parallel computing methods have recently been developed. One such approach partitions large data sets into subsets, where each subset is analyzed on a separate machine using parallel Markov chain Monte Carlo (MCMC) methods [@Langford; @Newman; @Smola]; here, communication between machines is required at each MCMC iteration, increasing computation time.
Due to the limitations of methods requiring communication between machines, a number of alternative communication-free parallel MCMC methods have been developed for Bayesian analysis of big data [@Neiswanger; @Miroshnikov]. In these approaches, Bayesian MCMC analysis is performed on each subset independently, and the subset posterior samples are combined to estimate the full data posterior distribution. Neiswanger, Wang and Xing [@Neiswanger] introduced a parallel kernel density estimator that first approximates each subset posterior density and then estimates the full data posterior by multiplying the subset posterior estimators together. The authors of [@Neiswanger] show that their estimator is asymptotically exact; they then develop an algorithm that generates samples from the posterior distribution approximating the full data posterior estimator. Although the estimator is asymptotically exact, the algorithm of [@Neiswanger] does not perform well for posteriors with non-Gaussian shape. This under-performance is attributed to the way the subset posterior densities are constructed, which produces near-Gaussian posteriors even when the true underlying distribution is non-Gaussian. Another limitation of the method of Neiswanger, Wang and Xing is its behaviour in high-dimensional parameter spaces, since the method becomes impractical as the number of model parameters increases.
Miroshnikov and Conlon [@Miroshnikov] introduced a new approach for parallel MCMC that addresses the limitations of [@Neiswanger]. Their method performs well for non-Gaussian posterior distributions and analyzes densities only marginally for each parameter, so that the size of the parameter space is not a limitation. The authors use logspline density estimation for each subset posterior, and the subset posteriors are combined by a direct numerical product of the subset posterior estimates. Note, however, that unlike [@Neiswanger], this technique does not produce joint posterior estimates.
The estimator introduced in [@Miroshnikov] follows the ideas of Neiswanger et al. [@Neiswanger]. Specifically, let $p(\textbf{x}|\theta)$ be the likelihood of the full data given the parameter $\theta\in {\mathbb{R}}$. We partition **x** into $M$ disjoint subsets $\textbf{x}_{m}$, with $m \in \{1,2,...,M\}$. For each subset we draw $N$ samples $\theta^{m}_{1}, \theta^{m}_{2},...,\theta^{m}_{N}$ whose distribution is given by the subset posterior density $p(\theta|\textbf{x}_{m})$. Given the prior $p(\theta)$ and assuming that the datasets $\textbf{x}_{1},\textbf{x}_{2},\dots,\textbf{x}_{M}$ are independent of each other, the full data posterior density, see [@Neiswanger], is expressed by $$\label{fulldens}
p(\theta|\textbf{x})\propto p(\theta)\prod_{m=1}^{M}p(\textbf{x}_{m}|\theta)={\displaystyle}\prod_{m=1}^{M}p_{m}(\theta)=: p^*(\theta)\,, \quad \text{where} \quad p(\theta|\textbf{x}_{m}):=p_{m}(\theta)=p(\textbf{x}_m | \theta)p(\theta)^{1/M}.
$$ In our work, we investigate the properties of the estimator ${\hat{p}}(\theta|\textbf{x})$, defined in [@Miroshnikov], that has the form $$\label{fullestmod}
\hat{p}(\theta|\textbf{x})\propto {\displaystyle}\prod_{m=1}^{M}\hat{p}_{m}(\theta)=: \hat{p}^*(\theta)\,,
$$ where ${\hat{p}}_{m}(\theta)$ is the logspline density estimator of $p_{m}(\theta)$ and where we suppress the dependence on the data $\textbf{x}$.
The estimated product ${\hat{p}}^{*}$ of the subset posterior densities is, in general, unnormalized. This motivates us to define the normalization constant $\hat{c}$ for the estimated product ${\hat{p}}^{*}$. Thus, the normalized density ${\hat{p}}$, one of the main objects of interest in our work, is given by $${\hat{p}}(\theta)=\hat{c}^{-1}{\hat{p}}^{*}(\theta), \quad \text{where} \quad \hat{c}=\int {\hat{p}}^{*}(\theta)\, d\theta.$$ Computing the normalization constant analytically is a difficult task since the subset posterior densities are not explicitly available, except at a finite number of points $\big(\theta_{i},{\hat{p}}_{m}^{*}(\theta_{i})\big),$ where $i\in \{1,\dots,n\}$. By taking the product of these values for each $i$ we obtain the value of ${\hat{p}}^{*}(\theta_{i})$. This allows us to approximate the unnormalized product ${\hat{p}}^{*}$ numerically using Lagrange interpolation polynomials. This approximation is denoted by ${\tilde{p}}^{*}$. We then approximate the constant $\hat{c}$ by numerically integrating ${\tilde{p}}^{*}$. The approximation of the normalization constant $\hat{c}$ is denoted by $\tilde{c}$, given by $$\tilde{c}=\int {\tilde{p}}^{*}(\theta)\,d\theta, \ \text{and we set} \ {\tilde{p}}(\theta):=\tilde{c}^{-1}{\tilde{p}}^{*}(\theta).$$ The newly defined density ${\tilde{p}}$ acts as the estimator for the full-data posterior $p$.
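As a minimal illustration of this pointwise combination step (a sketch under simplifying assumptions, not the implementation of [@Miroshnikov]), the snippet below evaluates $\log {\hat{p}}^{*}(\theta_i)=\sum_{m}\log {\hat{p}}_{m}(\theta_i)$ on a common grid; the subset estimators used here are Gaussian placeholders standing in for the logspline fits.

```python
# Minimal sketch of the pointwise combination of subset posterior estimates:
# log p_hat_star(theta_i) = sum_m log p_hat_m(theta_i), evaluated on a grid.
# The subset estimators below are Gaussian placeholders, not logspline fits.
import numpy as np

theta = np.linspace(-3.0, 3.0, 201)                # common grid theta_1, ..., theta_n
M = 4                                              # number of data subsets

# Placeholder subset log-densities log p_hat_m (Gaussians with nearby means).
subset_log_densities = [
    (lambda t, mu=mu: -0.5 * (t - mu) ** 2 - 0.5 * np.log(2.0 * np.pi))
    for mu in np.linspace(-0.2, 0.2, M)
]

log_p_star = sum(f(theta) for f in subset_log_densities)   # log of the unnormalized product
p_star = np.exp(log_p_star - log_p_star.max())             # rescaled to avoid underflow
```

Working with sums of logarithms avoids the numerical underflow that a direct product of $M$ small density values would cause.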
In this paper, we establish error estimates between the three densities via the mean integrated squared error or MISE, defined for two functions $f,g$ as $$\label{MISEdef}
{{\mbox{\rm MISE}}}(f,g):=\mathbb{E}\int \big(f(\theta)-g(\theta) \big) ^{2}d\theta.$$ Thus, our work involves two types of approximations: 1) the construction of ${\hat{p}}^{*}$ using logspline density estimators and 2) the construction of the interpolation polynomial ${\tilde{p}}^{*}$. The methodology of logspline density estimation was introduced in [@StoneKoo], and corresponding error estimates between the estimator and the density it approximates are presented in [@Stone89; @Stone90]. These error estimates depend on three factors: i) the number $N_m$ of samples drawn from the subset posterior density, ii) the number $K_m+1$ of knots used to create the order-$k$ B-splines, and iii) the step size of those knots, which we denote by $h_m$.
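For concreteness, the sketch below approximates the MISE defined above numerically, by averaging the integrated squared error over replicate estimates on a grid; the reference density and the replicate estimator are hypothetical stand-ins.

```python
# Illustrative Monte Carlo approximation of MISE(f, g) = E \int (f - g)^2 dtheta:
# average the integrated squared error over replicate estimates on a grid.
# The reference density g and the replicate estimator are placeholders.
import numpy as np

rng = np.random.default_rng(0)
theta = np.linspace(-5.0, 5.0, 1001)
dtheta = theta[1] - theta[0]
g = np.exp(-0.5 * theta**2) / np.sqrt(2.0 * np.pi)      # fixed reference density

def replicate_estimate():
    """Placeholder estimator: a Gaussian with a randomly perturbed mean."""
    mu = rng.normal(scale=0.05)
    return np.exp(-0.5 * (theta - mu)**2) / np.sqrt(2.0 * np.pi)

ise = [np.sum((replicate_estimate() - g)**2) * dtheta for _ in range(200)]
mise_hat = float(np.mean(ise))                          # Monte Carlo estimate of MISE
```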
In our work we estimate the MISE between the functions ${\hat{p}}^{*}$ and $p^{*}$ by adapting the estimation techniques introduced in [@Stone89; @Stone90]. We then utilize this analysis to establish a similar estimate for the normalized densities ${\hat{p}}$ and $p$, $${\mbox{\rm MISE}}(p^{*}\,,{\hat{p}}^{*})=O\left[ \left( \exp\left\{\sum_{m=1}^{M}\dfrac{K_{m}+1-k}{N_m^{1/2}}+\ h_{max}^{j+1} \ \sum_{m=1}^{M}\left\| \frac{d^{j+1}\log({p_m})}{d\theta^{j+1}} \right\|_{\infty} \right\}-1 \right)^{2}\right],$$ where $h_{max}=\max_{m}\{h_m\}$ and $j+1$ is the number of continuous derivatives of $p$. Notice that the exponential contains two terms, where the first depends on the number of samples and the number of knots and the other depends on the placement of the spline knots. Both terms converge to zero and for MISE to scale optimally both terms must converge at the same rate. To this end, we choose $h_{max}$ and each $K_m$ to be functions of the vector ${\textbf{N}}=\big\{N_{1},\dots,N_{M}\big\}$ and scale appropriately with the norm $\|{\textbf{N}}\|$. This simplifies the above estimate to $${\mbox{\rm MISE}}(p^{*},{\hat{p}}^{*})=O\left(M^{2-2\beta}\|{\textbf{N}}\|^{-2\beta}\right)$$ where the parameter $\beta\in (0,1/2)$ is related to the convergence of the logspline density estimators.
The estimate for MISE between ${\tilde{p}}^{*}$ and ${\hat{p}}^{*}$ is obtained in a similar way by utilizing Lagrange interpolation error bounds, as described in [@Atkinson]. This error depends on two factors: i) the step-size $\Delta x$ of the grid points chosen to construct the polynomial, where the grid points correspond to the coordinates $\big(\theta_{i},{\hat{p}}_{m}^{*}(\theta_{i})\big)$ discussed earlier, and ii) the degree $l$ of the Lagrange polynomial. The estimate obtained is also shown to hold for the normalized densities ${\tilde{p}}$ and ${\hat{p}}$. $${\mbox{\rm MISE}}({\hat{p}}^{*},{\tilde{p}}^{*})=O\left[ \Bigg(\frac{\Delta x}{h_{min}({\textbf{N}})} M \Bigg)^{2(l+1)} \right]$$ where $h_{min}({\textbf{N}})$ is the minimal distance between the spline knots and is chosen to asymptotically scale with the norm of the vector of samples ${\textbf{N}}$, see Section \[section:not\_hyp\].
We then combine both estimates to obtain a bound for MISE for the densities $p$ and ${\tilde{p}}$. We obtain $${\mbox{\rm MISE}}(p\,,{\tilde{p}})=O\left[ M^{2-2\beta}\|{\textbf{N}}\|^{-2\beta}+\Bigg(\frac{\Delta x}{h_{min}({\textbf{N}})} M \Bigg)^{2(l+1)} \right].$$ In order for MISE to scale optimally the two terms in the sum must converge to zero at the same rate. As before with the distance between ${\hat{p}}^{*}$ and $p^{*}$, we choose $\Delta x$ to scale appropriately with the norm of the vector ${\textbf{N}}$. This leads to the optimum error bound for the distance between the estimator ${\tilde{p}}$ and the density $p$, $${\mbox{\rm MISE}}(p\,,{\tilde{p}})=O\left(\|{\textbf{N}}\|^{-2\beta}\right) \quad \text{where we choose} \quad \Delta x = O\bigg( \|{\textbf{N}}\|^{-\beta\left(\frac{1}{l+1}+\frac{1}{j+1}\right)}\bigg).$$
The paper is arranged as follows. In Section \[section:not\_hyp\] we set the notation and hypotheses that form the foundation of the analysis. In Section \[section:Mise\_pbar\_anl\] we derive an asymptotic expansion for the MISE of the unnormalized estimator, which is central to the analysis performed in subsequent sections. There we also analyze the MISE of the full data set posterior density estimator ${\hat{p}}$. In Section \[section:num\_error\], we perform the analysis for the numerical estimator ${\tilde{p}}$. In Section \[section:num\_exp\] we showcase our simulated experiments and discuss the results. Finally, in the appendix we provide supplementary lemmas and theorems employed in Section \[section:Mise\_pbar\_anl\] and Section \[section:num\_error\].
Notation and hypotheses {#section:not_hyp}
=======================
For the convenience of the reader we collect in this section all hypotheses and results relevant to our analysis and present the notation that is utilized throughout the article.
1. \[model\] Motivated by the form of the posterior density in Neiswanger et al. [@Neiswanger], we consider a probability density function of the form $$ p(\theta) \propto p^*(\theta) \quad \text{where} \quad p^*(\theta):=\prod_{m=1}^M p_m(\theta)$$ where we assume that the $p_m(\theta), \ m\in \{1,\dots,M\}$, have compact support on the interval $[a,b]$.
2. \[hyp2\] For each $m \in \{1,\dots,M \}$ $p_m(\theta)$ is a probability density function. We consider the estimator of $p$ in the form $$\label{estmodel}\tag{H2-a}
\hat{p}(\theta)\propto
{\displaystyle}\hat{p}^*(\theta) \quad \text{where} \quad \hat{p}^*(\theta):=\prod_{m=1}^{M}\hat{p}_m(\theta)$$ and for each $m \in \{1,\dots,M\}$ $\hat{p}_m(\theta)$ is the logspline density estimator of the probability density $p_m(\theta)$ that has the form $$\label{subpost}\tag{H2-b}
{\hat{p}}_m: {\mathbb{R}}\times \Omega_{n_m}^{m} \to {\mathbb{R}}\quad \text{defined by} \quad {\hat{p}}_{m}(\theta,\omega)=f_m(\theta, {\hat{y}}(\theta_1^m,\dots,\theta_{n_m}^{m})), \ \omega \in \Omega_{n_m}^{m}.$$ We also consider the additional estimators ${\bar{p}}_m$ of $p_m$ as defined in and $${\bar{p}}^*(\theta):=\prod_{m=1}^{M}{\bar{p}}_m(\theta).$$ Here $\theta_1^m, \theta_2^m, \dots,\theta^m_{n_m} \sim p_m$ are independent identically distributed random variables and $f_m$ is the logspline density estimate introduced in Definition , with $N_m$ knots and B-splines of order $k_m$.
$$\label{omeganm}
\Omega_{n_m}^{m} = \bigg\{ \omega \in \Omega: {\hat{y}}={\hat{y}}(\theta_1^{m},\dots,\theta_{n_m}^{m}) \in {\mathbb{R}}^{L_m+1} \;\; \text{exists} \bigg\}.$$
where $L_m:=N_m-k_m$.
The mean integrated square error of the estimator $\hat{p}^*$ of the product $p^*$ is defined by $$\label{mise}
{\mbox{\rm MISE}}_{[{\textbf{N}}]} := {\mbox{\rm MISE}}(p^*, \hat{p}^*) = {\mathbb{E}}\int (\hat{p}^*(\theta;\omega) - p^*(\theta))^2 \, d\theta\,\quad$$ where we use the notation ${\textbf{N}}= (N_m)_{m=1}^M$.
We assume that the probability density functions $p_1,\dots,p_M$ satisfy the following hypotheses:
3. \[unifconvNh\] The number of samples for each subset are parameterized by a governing parameter $n$ as follows: $$\begin{aligned}
{\textbf{N}}(n)&=\{N_1(n),N_2(n),N_3(n),\ldots,N_M(n)\}:\mathbb{N}\to\mathbb{N}^M\\
\end{aligned}$$ such that for all $m \in \{1,2,\ldots, M\}$ $$ \begin{aligned}
&D_1\leq \frac{N_m}{n}\leq D_2 \\
&\lim_{n\to\infty}{ N_m(n)}=\infty\,.\\
\end{aligned}$$ Note that $C_1\|{\textbf{N}}(n)\| \leq N_{m}(n) \leq C_2\|{\textbf{N}}(n)\| $.
4. \[Lchoice\] For each $m\in \{1,\dots,M\}$, $k_1=k_2=\dots=k_M=k$ for some fixed $k$ in ${\mathbb{N}}$. The number of knots for each $m$ is parameterized by $n$ as follows: $${\textbf{K}}(n)=\{K_1(n),K_2(n),K_3(n),\ldots,K_M(n)\}:\mathbb{N}\to\mathbb{N}^M\\$$ where $K_m(n) + 1$ is the number of knots for B-splines on the interval $[a,b]$ and thus $${\textbf{L}}(n)=\{L_1(n),L_2(n),L_3(n),\ldots,L_M(n)\}:\mathbb{N}\to\mathbb{N}^M \quad \text{with} \quad
L_m(n)=K_m(n)-k$$
and we require $$\lim_{n\to \infty}L_m(n)=\infty \quad \text{and} \quad \lim_{n\to \infty}\frac{L_m(n)}{N_m(n)^{1/2-\beta}}=0, \ 0<\beta<\frac{1}{2}.$$
5. \[hnotation\] For the knots $T_{K_m(n)}=(t_{i}^{m})_{i=0}^{K_m(n)}$, we write $$\bar{h}_m=\max_{k-1 \leq i \leq K_{m}(n)-k}(t_{i+1}^{m}-t_{i}^{m}) \quad \text{and} \quad \underline{h}_{m}=\min_{k-1 \leq i \leq K_{m}(n)-k}(t_{i+1}^{m}-t_{i}^{m}).$$
6. \[pcond1\] For each $m \in \{1,\dots,M\}$ and $j \in \{0,\dots,k-1\}$, the density $p_m \in C^{j+1}([a,b])$ and there exists $C_{m,s} \geq 0$ such that $$\left|\frac{d^{j+1}\log{(p_m(\theta))}}{d\theta^{j+1}}\right|< C_{m,s} \;\;\;\text{for all} \;\;\; \theta.$$
7. \[pcond2\] Let $\|\cdot\|_{2}$ denote the $L^{2}$-norm on $[a,b]$. For $p^{*}$ defined as in \[model\], there exists $C^{*}\geq 0$ such that $$\|p^{*}\|_{2}^{2}=\int (p^{*}(\theta))^{2}\,d\theta <C^{*}\,.$$
8. \[hmaxcond\] For each subset $\textbf{x}_{m}$, the B-splines are created by choosing a uniform knot sequence. Thus, $$\bar{h}_{m}=\underline{h}_{m}=h_{m}, \ \text{for} \ m\in \{1,\dots,M\}.$$ Let $$h_{min}={\displaystyle}\min_{1\leq m\leq M}\{h_{m}\} \quad \text{and} \quad h_{max}={\displaystyle}\max_{1\leq m\leq M}\{h_{m}\}.$$
We assume that $h_{min},h_{max}$ scale in a similar way to the number of samples, i.e. $$c_1\|{\textbf{N}}(n)\|^{-\beta}\leq h_{min}^{j+1}(n)\leq h_{max}^{j+1}(n)\leq c_2\|{\textbf{N}}(n)\|^{-\beta}$$ where $j\in \{0,\dots,k-1\}$ is the same as in hypothesis  (an illustrative instantiation of this scaling is sketched after this list).
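The following sketch shows one possible instantiation of the knot hypotheses above: a uniform knot sequence on $[a,b]$ whose spacing obeys $h^{j+1}\sim\|{\textbf{N}}(n)\|^{-\beta}$. The interval, constants, and sample sizes are arbitrary placeholders chosen only to illustrate the scaling.

```python
# Illustrative choice of uniform knots consistent with the scaling hypotheses:
# h_m^(j+1) ~ ||N(n)||^(-beta), i.e. h_m ~ ||N(n)||^(-beta/(j+1)).
# All constants below are arbitrary placeholders.
import numpy as np

a, b = 0.0, 1.0
beta, j, k = 0.45, 1, 4                    # beta in (0, 1/2); k = order of the B-splines
N = np.array([10_000, 12_000, 9_500])      # samples per subset, N = (N_1, ..., N_M)
norm_N = np.linalg.norm(N)

h = (b - a) * norm_N ** (-beta / (j + 1))  # common knot spacing h_m = h
K = int(np.ceil((b - a) / h))              # so that K_m + 1 uniform knots cover [a, b]
knots = np.linspace(a, b, K + 1)           # uniform knot sequence t_0 = a, ..., t_K = b
L = K - k                                  # L_m = K_m - k, the B-spline coefficient index bound
```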
Analysis of ${\mbox{\rm MISE}}$ for ${\hat{p}}$ {#section:Mise_pbar_anl}
===============================================
Error analysis for unnormalized estimator
-----------------------------------------
Suppose we are given a data set $\textbf{x}$ that is partitioned into $M\geq 1$ disjoint subsets $\textbf{x}_{m}, \ m\in \{1,\dots,M\}$. We are interested in the subset posterior densities $p_{m}(\theta)=p(\theta |\textbf{x}_{m})$. To each such density we apply the preceding analysis. Let ${\hat{p}}_{m}$ and ${\bar{p}}_{m}, \ m\in \{1,\dots,M\}$, be the corresponding logspline estimators as defined in and respectively. By definition, ${\hat{p}}_{m}$ equals the logspline density estimate on $\Omega_{n_m}^{m}\subset \Omega$, where $\Omega_{n_m}^{m}$ is the set defined in for ${\hat{p}}_{m}$.
\[interOmega\] For $m\in\{1,\dots,M\}$, let $\Omega_{n_m}^{m}$ be the set defined in . We then set $${\underline{\Omega}}^{M,{\textbf{N}}}:=\bigcap_{m=1}^{M}\Omega_{n_m}^{m} \quad \text{where} \quad {\textbf{N}}=(n_1,\dots,n_M),$$ which is the set where the maximizer of the log-likelihood exists for each data subset and thus all logspline density estimators ${\hat{p}}_{m}$ exist.
\[interOmegaprob\] Suppose the conditions in and hold. Given the previous definition, we have that $$\lim_{n\to\infty}{\mathbb{P}}\left({\underline{\Omega}}^{M,{\textbf{N}}(n)}\right)=1$$
By Theorem \[Aboundthm\] we have that $${\mathbb{P}}\left( \Omega \setminus {\underline{\Omega}}^{M,{\textbf{N}}(n)} \right)={\mathbb{P}}\left( \bigcup_{m=1}^{M}(\Omega_{N_m(n)}^{m})^{c} \right)\leq \sum_{m=1}^{M}{\mathbb{P}}\left( (\Omega_{N_m(n)}^{m})^{c} \right)\leq \sum_{m=1}^{M} 2e^{{-N_m(n)^{2\epsilon}(L_m(n)+1)\delta_m(D)}}$$ and the result follows by taking $n$ to infinity.
Since the probability of the set where the estimators ${\hat{p}}_m$ exist for all $m\in \{1,\dots,M\}$ tends to 1, it makes sense to carry out our analysis for the conditional ${\mbox{\rm MISE}}$ on the set ${\underline{\Omega}}^{M,{\textbf{N}}(n)}$. From a practical point of view, we essentially never encounter the set where the maximizer of the log-likelihood does not exist.
At this point, let us state a bound for $|{\hat{p}}^{*}(\theta;\omega)-p^{*}(\theta)|$ which will be essential in our analysis of ${\mbox{\rm MISE}}$.
\[boundM>1\] Suppose the hypotheses - hold and that we are restricted to the sample subspace ${\underline{\Omega}}^{M,{\textbf{N}}(n)}$. We then have the following:
- There exists a positive constant $R_1=R_1(M)$ such that $$\|\log({\hat{p}}^{*}(\cdot,\omega))-\log({\bar{p}}^{*}(\cdot))\|_{\infty} \leq R_1\sum_{m=1}^{M}\dfrac{L_m(n)+1}{\sqrt{N_m(n)}}.$$
- There exists a positive constant $R_{2}=R_2(M,k,j,\mathcal{F}_p,\gamma(T_{K_{1}(n)}),\dots,\gamma(T_{K_{M}(n)}))$ such that $$\|\log{(p^{*})}-\log{({\bar{p}}^{*})}\|_{\infty}\leq R_2 \ \bar{h}_{max}^{j+1} \ \sum_{m=1}^{M}\left\| \frac{d^{j+1}\log({p_m})}{d\theta^{j+1}} \right\|_{\infty} \quad \text{where} \quad \bar{h}_{max}=\max_{m}\{\bar{h}_{m}\}.$$
- Using the bounds from $(a)$ and $(b)$ we have $$|{\hat{p}}^{*}(\theta;\omega)-p^{*}(\theta)|\leq \left( \exp\left\{R_1\sum_{m=1}^{M}\dfrac{L_m(n)+1}{\sqrt{N_m(n)}}+R_2 \ \bar{h}_{max}^{j+1} \ \sum_{m=1}^{M}\left\| \frac{d^{j+1}\log({p_m})}{d\theta^{j+1}} \right\|_{\infty} \right\}-1 \right)p^{*}(\theta).$$
- The bound can be shown by writing $$\begin{aligned}
\|\log({\hat{p}}^{*}(\cdot,\omega))-\log({\bar{p}}^{*}(\cdot))\|_{\infty}&=\| \log(\prod_{m=1}^{M}\hat{p}_m(\cdot;\omega))-\log(\prod_{m=1}^{M}{\bar{p}}_m(\cdot)) \|_{\infty} \\
&\leq \sum_{m=1}^{M}\|\log(\hat{p}_m(\cdot;\omega))-\log({\bar{p}}_m(\cdot))\|_{\infty}
\end{aligned}$$ and then applying Theorem \[estonOmegan\]. For each $m\in \{1,\dots,M\}$ there will be an $M_{3}^{m}$ appearing in the bound and we can take $R_1=\max_{m}\{M_{3}^{m}\}$.
- Similar to part $(a)$ we can write $$\begin{aligned}
\|\log(p^{*}(\cdot))-\log({\bar{p}}^{*}(\cdot))\|_{\infty}&=\| \log(\prod_{m=1}^{M}p_m(\cdot))-\log(\prod_{m=1}^{M}{\bar{p}}_m(\cdot)) \|_{\infty} \\
&\leq \sum_{m=1}^{M}\|\log(p_m(\cdot))-\log({\bar{p}}_m(\cdot))\|_{\infty}
\end{aligned}$$ and then we apply Lemma \[pbarsupnormbound\]. For each $m\in \{1,\dots,M\}$ there will be constants $M'_{m}$ and $C_{m}(k,j)$ appearing and we can take $R_{2}=\max_{m}\{M'_{m}C_{m}(k,j)\}$.
- To see why this is true, we write $$|{\hat{p}}^{*}(\theta;\omega)-p^{*}(\theta)|=p^{*}(\theta)\left|\frac{{\hat{p}}^{*}(\theta;\omega)}{p^{*}(\theta)}-1\right|
=p^{*}(\theta)\left|\exp\{\log({\hat{p}}^{*}(\theta;\omega))-\log(p^{*}(\theta))\}-1\right|.$$
If ${\hat{p}}^{*}(\theta;\omega)\geq p^{*}(\theta)$ then $$\left|\exp\{\log({\hat{p}}^{*}(\theta;\omega))-\log(p^{*}(\theta))\}-1\right|=\exp\{\log({\hat{p}}^{*}(\theta;\omega))-\log(p^{*}(\theta))\}-1.$$
If ${\hat{p}}^{*}(\theta;\omega)< p^{*}(\theta)$ then $$\begin{aligned}
\left|\exp\{\log({\hat{p}}^{*}(\theta;\omega))-\log(p^{*}(\theta))\}-1\right|&=1-\exp\{\log({\hat{p}}^{*}(\theta;\omega))-\log(p^{*}(\theta))\}\\
&\leq \exp\{\log(p^{*}(\theta))-\log({\hat{p}}^{*}(\theta;\omega))\}-1
\end{aligned}$$ where the last step is justified by the fact that $1-e^{-x}\leq e^{x}-1, \ \text{for any} \ x\geq 0$. This implies $$\begin{aligned}
|{\hat{p}}^{*}(\theta;\omega)-p^{*}(\theta)|&\leq p^{*}(\theta)\left(\exp\{|\log({\hat{p}}^{*}(\theta;\omega))-\log(p^{*}(\theta))|\}-1\right)\\
&\leq p^{*}(\theta)\left(\exp\{|\log({\hat{p}}^{*}(\theta;\omega))-\log({\bar{p}}^{*}(\theta))|+|\log({\bar{p}}^{*}(\theta))-\log(p^{*}(\theta))|\}-1\right)
\end{aligned}$$ and then we apply the bounds from the previous two parts.
This leads us directly to the theorem for the conditional ${\mbox{\rm MISE}}$ of the unnormalized densities $p^{*}$ and ${\hat{p}}^{*}$.
(Conditional ${\mbox{\rm MISE}}$ for unnormalized $p^{*},{\hat{p}}^{*}$ and $M\geq 1$) Assume the conditions - hold. Given $M\geq 1$ we have $$\label{MISEnotH8}
\begin{aligned}
&\mkern-18mu {\mbox{\rm MISE}}(p^{*},{\hat{p}}^{*} \ | \ \underline{\Omega}^{M,{\textbf{N}}(n)})\\
&\leq \left( \exp\left\{R_1\sum_{m=1}^{M}\dfrac{L_m(n)+1}{\sqrt{N_m(n)}}+R_2 \ \bar{h}_{max}^{j+1} \ \sum_{m=1}^{M}\left\| \frac{d^{j+1}\log({p_m})}{d\theta^{j+1}} \right\|_{\infty} \right\}-1 \right)^{2}\|p^{*}\|_{2}^{2}
\end{aligned}$$ where $R_{1},R_{2}$ are as in Lemma \[boundM>1\].
In addition, if holds, then ${\mbox{\rm MISE}}$ scales optimally with respect to the number of samples, $$\label{MISEH8}
\sqrt{{\mbox{\rm MISE}}(p^{*},{\hat{p}}^{*})}=O(M n^{-\beta})=O(M^{1-\beta}\| {\textbf{N}}(n) \|^{-\beta})$$
By definition of the conditional ${\mbox{\rm MISE}}$ and Lemma \[boundM>1\], we have $$\begin{aligned}
{\mbox{\rm MISE}}(p^{*},{\hat{p}}^{*}\ |& \ \underline{\Omega}^{M,{\textbf{N}}(n)})={\mathbb{E}}_{\underline{\Omega}^{M,{\textbf{N}}(n)}}\int ({\hat{p}}^{*}(\theta;\omega)-p^{*}(\theta))^{2}\,d\theta\\
&\leq {\mathbb{E}}_{\underline{\Omega}^{M,{\textbf{N}}(n)}}\int \left[\left( \exp\left\{R_1\sum_{m=1}^{M}\dfrac{L_m(n)+1}{\sqrt{N_m(n)}}+R_2 \ \bar{h}_{max}^{j+1} \ \sum_{m=1}^{M}\left\| \frac{d^{j+1}\log({p_m})}{d\theta^{j+1}} \right\|_{\infty} \right\}-1 \right)p^{*}(\theta)\right]^{2}\,d\theta
\end{aligned}$$ which implies . Next, if holds, then follows directly.
It is interesting to note how the number of knots, their placement and the number of samples all enter the above bound. To obtain an accurate estimate, the parameters $L_{m}(n)$, $N_{m}(n)$ and $\bar{h}_{max}$ must all be chosen appropriately.
Analysis for renormalization constant {#AnalysisRenormConst}
-------------------------------------
We will now consider the error that arises for ${\mbox{\rm MISE}}$ when one renormalizes the product of the estimators so it can be a probability density. The renormalization can affect the error since $p^{*}$ and ${\hat{p}}^{*}$ are rescaled. We define the renormalization constant and its estimator to be $$\label{lambdahat}
\lambda=\int p^{*}(\theta)\,d\theta \quad \text{and} \quad {\hat{\lambda}}={\hat{\lambda}}(\omega)=\int {\hat{p}}^{*}(\theta;\omega)\,d\theta$$ Therefore, we are interested in analyzing $${\mbox{\rm MISE}}(p,{\hat{p}})={\mbox{\rm MISE}}(cp^{*},\hat{c}{\hat{p}}^{*}), \quad \text{where} \quad c=\lambda^{-1}, \ \hat{c}={\hat{\lambda}}^{-1}.$$ We first state the following lemma for $\lambda$ and ${\hat{\lambda}}(\omega)$.
\[lambdahaterror\] Let $\lambda$ and ${\hat{\lambda}}(\omega)$ be defined as in . Suppose that holds and we are restricted to the sample subspace ${\underline{\Omega}}^{M,{\textbf{N}}(n)}$. Then we have $$\label{lambdahatscale}
\left|\frac{{\hat{\lambda}}(\omega)}{\lambda}-1\right|=O(M^{1-\beta}\| {\textbf{N}}(n) \|^{-\beta})$$
By definition of $\lambda$ and ${\hat{\lambda}}(\omega)$, we have $$\begin{aligned}
|\lambda-{\hat{\lambda}}(\omega)|&=\left| \int p^{*}(\theta)\, d\theta-\int {\hat{p}}^{*}(\theta;\omega)\, d\theta \right|\\
&\leq \left( \exp\left\{R_1\sum_{m=1}^{M}\dfrac{L_m(n)+1}{\sqrt{N_m(n)}}+R_2 \ \bar{h}_{max}^{j+1} \ \sum_{m=1}^{M}\left\| \frac{d^{j+1}\log({p_m})}{d\theta^{j+1}} \right\|_{\infty} \right\}-1 \right)\lambda
\end{aligned}$$ where the inequality is justified by Lemma \[boundM>1\](c). Dividing by $\lambda$, the result then follows from hypothesis .
The above lemma thus shows that, when restricted to the sample subspace $\underline{\Omega}^{M,{\textbf{N}}(n)}$, the space where the logspline density estimators ${\hat{p}}_{m}$, $m\in \{1,\dots,M\}$, are all defined, the renormalization constant $\hat{c}$ of the product of the estimators approximates the true renormalization constant $c$.
Knowing how ${\hat{\lambda}}(\omega)$ scales, we can begin analyzing ${\mbox{\rm MISE}}(p,{\hat{p}})$ on the sample subspace. However, to make the analysis slightly easier we introduce a new functional, denoted $\overline{{\mbox{\rm MISE}}}$. As we will show, this functional is asymptotically equivalent to ${\mbox{\rm MISE}}$, which lets us determine how ${\mbox{\rm MISE}}$ scales without analyzing it directly.
\[MISEbar\] Suppose $M\geq 1$ and hypotheses - hold. Given the sample subspace $\underline{\Omega}^{M,{\textbf{N}}(n)}$ we define the functional $$\overline{{\mbox{\rm MISE}}}\left(p,{\hat{p}}\ | \ \underline{\Omega}^{M,{\textbf{N}}(n)}\right)={\mathbb{E}}_{\underline{\Omega}^{M,{\textbf{N}}(n)}}\left[ \left(\frac{{\hat{\lambda}}(\omega)}{\lambda}\right)^{2}\int ({\hat{p}}(\theta;\omega)-p(\theta))^{2}\,d\theta \right]$$
\[MISEbarasym\] The functional $\overline{{\mbox{\rm MISE}}}$ is asymptotically equivalent to ${\mbox{\rm MISE}}$ on $\underline{\Omega}^{M,{\textbf{N}}(n)}$, in the sense that $$\lim_{\|{\textbf{N}}(n)\|\to\infty}\frac{\overline{{\mbox{\rm MISE}}}\left(p,{\hat{p}}\ | \ \underline{\Omega}^{M,{\textbf{N}}(n)}\right)}{{\mbox{\rm MISE}}\left(p,{\hat{p}}\ | \ \underline{\Omega}^{M,{\textbf{N}}(n)}\right)}=1$$
Notice that $\overline{{\mbox{\rm MISE}}}$ can be written as $$\begin{aligned}
\overline{{\mbox{\rm MISE}}}\left(p,{\hat{p}}\ | \ \underline{\Omega}^{M,{\textbf{N}}(n)}\right)&={\mathbb{E}}_{\underline{\Omega}^{M,{\textbf{N}}(n)}}\left[ \left(\frac{{\hat{\lambda}}}{\lambda}-1+1\right)^{2}\int ({\hat{p}}(\theta;\omega)-p(\theta))^{2}\,d\theta \right]\\
&={\mathbb{E}}_{\underline{\Omega}^{M,{\textbf{N}}(n)}}\left[ \left[\left(\frac{{\hat{\lambda}}}{\lambda}-1\right)^{2}+2\left(\frac{{\hat{\lambda}}}{\lambda}-1\right)+1\right]\int ({\hat{p}}(\theta;\omega)-p(\theta))^{2}\,d\theta \right]
\end{aligned}$$
and thus by Lemma \[lambdahaterror\] $$\overline{{\mbox{\rm MISE}}}\left(p,{\hat{p}}\ | \ \underline{\Omega}^{M,{\textbf{N}}(n)}\right)=(1+\mathcal{E}(n)){\mbox{\rm MISE}}\left(p,{\hat{p}}\ | \ \underline{\Omega}^{M,{\textbf{N}}(n)}\right)$$ $$\text{where} \quad \mathcal{E}(n)=O(M^{1-\beta}\|{\textbf{N}}(n)\|^{-\beta})$$ which then implies the result.
We conclude our analysis with the next theorem, which states how ${\mbox{\rm MISE}}$ scales for the renormalized estimators.
\[MISEpphat\] Let $M\geq 1$. Assume the conditions - hold. Then $${\mbox{\rm MISE}}\left( p,{\hat{p}}\ | \ \underline{\Omega}^{M,{\textbf{N}}(n)} \right)=O(M^{2-2\beta}\|{\textbf{N}}(n)\|^{-2\beta}).$$
We will do the work for $\overline{{\mbox{\rm MISE}}}$ and the result will follow from Proposition \[MISEbarasym\]. Notice that $\overline{{\mbox{\rm MISE}}}$ can be written as below. Also, let ${\mathbb{E}}_n(\cdot)={\mathbb{E}}(\cdot|\underline{\Omega}^{M,{\textbf{N}}(n)})$ $$\begin{aligned}
\overline{{\mbox{\rm MISE}}}\left( p,{\hat{p}}\ | \ \underline{\Omega}^{M,{\textbf{N}}(n)} \right)&={\mathbb{E}}_n\left[ \left(\frac{{\hat{\lambda}}}{\lambda}\right)^{2}\int (p-{\hat{p}})^{2}\,d\theta \right]\\
&=\|p\|_{2}^{2}\ {\mathbb{E}}_n\left[\left(\frac{{\hat{\lambda}}}{\lambda}-1\right)^{2}\right]+\lambda^{-2}\ {\mbox{\rm MISE}}_n(p^{*},{\hat{p}}^{*})\\
&\quad -2\lambda^{-1}\ {\mathbb{E}}_n\int\left(\frac{{\hat{\lambda}}}{\lambda}-1\right)({\hat{p}}^{*}-p^{*})p\,d\theta\\
&=J_{1}+J_{2}+J_{3}
\end{aligned}$$ We now determine how each of the $J_{i}$, $i\in \{1,2,3\}$ scale. For $J_1$ by Lemma \[lambdahaterror\] we have $$J_1=O(M^{2-2\beta}\|{\textbf{N}}(n)\|^{-2\beta}),$$ for $J_2$ we have from $$J_2=O(M^{2-2\beta}\|{\textbf{N}}(n)\|^{-2\beta})$$ and for $J_3$ we have from Lemmas \[boundM>1\](c) and \[lambdahaterror\] $$\begin{aligned}
|J_3|^{2}&\leq 4\lambda^{-2}\ \left({\mathbb{E}}_n\int\left|\frac{{\hat{\lambda}}}{\lambda}-1\right||{\hat{p}}^{*}-p^{*}|p\,d\theta\right)^{2}\\
&\leq 4\lambda^{-2}\ {\mathbb{E}}_n\left[\left(\frac{{\hat{\lambda}}}{\lambda}-1\right)^{2}\int p^{2}\,d\theta\right]\cdot {\mbox{\rm MISE}}_n(p^{*},{\hat{p}}^{*}).
\end{aligned}$$ Thus, by hypotheses - $$|J_3|=O(M^{2-2\beta}\|{\textbf{N}}(n)\|^{-2\beta}).$$
Numerical Error {#section:num_error}
===============
So far we have estimated the error between the unknown density $p$ and the full-data estimator ${\hat{p}}$. However, in practice it is difficult to evaluate the renormalization constant $${\hat{\lambda}}(\omega)=\int{\hat{p}}^{*}(\theta)\,d\theta= \int\prod_{m=1}^{M}{\hat{p}}_{m}(\theta)\,d\theta$$ defined in . The difficulty arises because the subset posteriors are accessed only through MCMC samples, so ${\hat{p}}^{*}$ is not explicitly known. To circumvent this issue, our idea is to approximate the integral above numerically. To accomplish this, we interpolate ${\hat{p}}^{*}$ using Lagrange polynomials. This procedure yields an interpolant estimator ${\tilde{p}}^{*}$, which we then integrate numerically. We then normalize ${\tilde{p}}^{*}$ and use it as a density estimator for $p$. Unfortunately, estimating the error of this kind of approximation on an arbitrary grid of Lagrange interpolation points, independent of the B-spline knot set $(t_{i})$, imposes a stringent condition on the smoothness of the B-splines we incorporate. It turns out that we have to utilize B-splines of order at least $k=4$. For this reason we consider Lagrange polynomials of order $l+1$ satisfying $l<k-2$.
Interpolation of an estimator: preliminaries
--------------------------------------------
We remind the reader of the model we deal with throughout our work. We recall that the (marginal) posterior of the parameter $\theta \in \RR$ (which is a component of a multidimensional parameter $ { \pmb{\theta}\in \RR^d}$) given the data $${\bf x} = \{ {\bf x}_1, {\bf x}_2, \dots, {\bf x}_M \}$$ partitioned into $M$ disjoint sets ${\bf x}_m$, $m=1,\dots,M$, is assumed to have the form $$\label{margmodel}
p(\theta|{\bf x}) \propto \prod_{m=1}^M p_{m}(\theta)$$ with $p(\theta| {\bf x}_m)$ denoting the (marginal) posterior density of $\theta$ given data ${\bf x}_m$.
The estimator $\hat{p}(\theta|{\bf x})$ of the posterior $p(\theta|{\bf x})$ is taken to be $$\label{estmod}
\hat{p}(\theta|{\bf x})\propto
{\displaystyle}\prod_{m=1}^{M}\hat{p}_m(\theta)$$ where $\hat{p}_m(\theta)$ stands for the logspline density estimator of the sub-posterior density $p_m(\theta)$. Recall from Definition \[LogsplineEst\] and hypotheses - that for each $m\in\{1,\dots,M\}$, the estimator ${\hat{p}}_{m}$ has the form $${\hat{p}}_{m}(\theta)=\exp\left( B_{m}(\theta;{\hat{y}}^{m})-c({\hat{y}}^{m}) \right)$$ where $$\begin{aligned}
&B_{m}(\theta;{\hat{y}}^{m})=\sum_{j=0}^{L_{m}(n)}{\hat{y}}_{j}^{m}B_{j,k,T_{K_{m}(n)}}(\theta)\\
\text{and} \quad &c({\hat{y}}^{m})=\log\left( \int \exp\left( B_{m}(\theta;{\hat{y}}^{m}) \right) d\theta \right)\end{aligned}$$ The vector ${\hat{y}}^{m}=({\hat{y}}_{0}^{m},\dots,{\hat{y}}_{L_{m}(n)}^{m})$ is the argument that maximizes the log-likelihood, as described in equation , and we remind the reader that this maximizer exists for all $m\in\{1,\dots,M\}$, since we carry out our analysis on the sample subspace $\underline{\Omega}^{M,{\textbf{N}}(n)}$.
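For a concrete picture of an estimator of this form, the sketch below evaluates $\exp\big(B_{m}(\theta;{\hat{y}}^{m})-c({\hat{y}}^{m})\big)$ with `scipy`'s `BSpline`; the knot vector and coefficient vector are hypothetical placeholders, since in practice ${\hat{y}}^{m}$ is obtained by maximizing the log-likelihood of the MCMC samples.

```python
# Minimal sketch: evaluate a logspline density exp(B(theta; y) - c(y)) on [a, b].
# Knots and coefficients y are hypothetical placeholders; in practice y maximizes
# the log-likelihood of the subset MCMC samples.
import numpy as np
from scipy.interpolate import BSpline

a, b, k = 0.0, 1.0, 4                                  # interval and B-spline order (degree k - 1)
interior = np.linspace(a, b, 8)                        # uniform knots including a and b
t = np.concatenate([[a] * (k - 1), interior, [b] * (k - 1)])  # multiplicity k at a and b
y = np.array([0.1, 0.3, 0.8, 1.2, 1.0, 0.5, 0.2, 0.05, -0.1, 0.0])  # placeholder coefficients

B = BSpline(t, y, k - 1)                               # the spline B(theta; y)

theta = np.linspace(a, b, 2001)
dtheta = theta[1] - theta[0]
log_density = B(theta)
c = np.log(np.sum(np.exp(log_density)) * dtheta)       # c(y) = log \int exp(B(theta; y)) dtheta
p_hat_m = np.exp(log_density - c)                      # normalized logspline density estimate
```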
Together with the hypotheses stated in section 3, we now add the next proposition which will be necessary for our work later on.
\[essboundphatder\] Suppose hypotheses - hold. Given the space $\underline{\Omega}^{M,{\textbf{N}}(n)}$, we have that the estimator ${\hat{p}}_{m}$ is bounded and its derivatives of all orders satisfy $$\left| {\hat{p}}_{m}^{(\alpha)}(\theta) \right|\leq C(\alpha,k,p_{m})\|{\textbf{N}}(n)\|^{\alpha \beta/(j+1)} \quad \text{for} \quad \theta \in (a,b) \ \text{and} \ \alpha<k-1$$ where the constant $C(\alpha,k,p_{m})$ depends on the order $k$ of the B-splines, the order $\alpha$ of the derivative and the density $p_{m}$.
Observe that the estimator ${\hat{p}}_{m}$ can be expressed as $${\hat{p}}_{m}(\theta)=\exp{\big[ \sum_{j=0}^{L_{m}(n)}{\hat{y}}_{j}^{m}B_{j,k}(\theta)-c({\hat{y}}^{m}) \big]}=\exp{\big[\sum_{j=0}^{L_{m}(n)}({\hat{y}}_{j}^{m}-c({\hat{y}}^{m}))B_{j,k}(\theta) \big]}$$
Then, applying Faa di Bruno’s formula, we obtain $$|{\hat{p}}_{m}^{(\alpha)}(\theta)|\leq {\hat{p}}_{m}(\theta)\sum_{k_1+2k_2+\dots+\alpha k_{\alpha}=\alpha}\frac{\alpha !}{k_1!k_2!\dots k_{\alpha}!}\prod_{i=1}^{\alpha}\left(\frac{\left|\frac{d^{i}}{d\theta^{i}}\sum_{j=0}^{L_{m}(n)}({\hat{y}}_{j}^{m}-c({\hat{y}}^{m}))B_{j,k}(\theta)\right|}{i!}\right)^{k_i}, \ \text{for} \ \theta\in [t_{i},t_{i+1}].$$ where $k_1,\dots,k_{\alpha}$ are nonnegative integers and if $k_{i}>0$ with $i\geq k$ then that term in the sum above will be zero since almost everywhere $B_{j,k}^{(i)}(\theta)=0$. By De Boor’s formula [@de; @Boor p.132], we can estimate the derivative of a spline as follows $$\left|\frac{d^{i}}{d\theta^{i}}\sum_{j=0}^{L_{m}(n)}({\hat{y}}_{j}^{m}-c({\hat{y}}^{m}))B_{j,k}(\theta)\right|=\left|\frac{d^{i}}{d\theta^{i}}\log{{\hat{p}}_{m}(\theta)}\right|\leq C\frac{\|\log{{\hat{p}}}\|_{\infty}}{\underline{h}_{m}^{i}}.$$ where the constant $C$ depends only on the order $k$ of the B-splines. Therefore, we can bound $|{\hat{p}}_{m}^{(\alpha)}(\theta)|$ as follows $$\begin{aligned}
| {\hat{p}}_{m}^{(\alpha)}(\theta)|&\leq {\hat{p}}_{m}(\theta)\sum_{k_1+2k_2+\dots+\alpha k_{\alpha}=\alpha}\frac{\alpha !}{k_1!k_2!\dots k_{\alpha}!}\prod_{i=1}^{\alpha}\left(C\frac{\|\log{{\hat{p}}}_{m}\|_{\infty}}{i!\, \underline{h}_{m}^{i}}\right)^{k_i}\\
&\leq {\hat{p}}_{m}(\theta) \bigg( \frac{1+C^{\alpha}\|\log{{\hat{p}}}_{m}\|_{\infty}^{\alpha}}{\underline{h}_{m}^{\alpha}}\bigg)\sum_{k_1+2k_2+\dots+\alpha k_{\alpha}=\alpha}\frac{\alpha !}{k_1!k_2!\dots k_{\alpha}!}.
\end{aligned}$$ The above leads to the following bound: $$\begin{aligned}
\left|{\hat{p}}_{m}^{(\alpha)}(\theta)\right|&\leq {\hat{p}}_{m}(\theta)\frac{1+C^{\alpha}\|\log{{\hat{p}}}_{m}\|_{\infty}^{\alpha}}{\underline{h}_{m}^{\alpha}}\sum_{\zeta=1}^{\alpha}\frac{\alpha !}{\zeta !}(\alpha-\zeta+1)^{\zeta}\\
&\leq C(k,\alpha)\,{\hat{p}}_{m}(\theta)\frac{1+\|\log{{\hat{p}}}_{m}\|_{\infty}^{\alpha}}{\underline{h}_{m}^{\alpha}}
\end{aligned}$$ where $C(k,\alpha)$ is a constant that depends on the order $k$ and the $\alpha$. Next, recalling the hypotheses , , and , we obtain $${\hat{p}}_{m}(\theta)\leq |{\hat{p}}_{m}(\theta)-p_{m}(\theta)|+p_{m}(\theta)\leq \|p_{m}\|_{\infty}(1+c\|{\textbf{N}}(n)\|^{-\beta})$$ and $$\begin{aligned}
\|\log{{\hat{p}}_{m}}\|_{\infty}&\leq \|\log{{\hat{p}}_{m}}-\log{{\bar{p}}_{m}}\|_{\infty}+\|\log{{\bar{p}}_{m}}-\log{p_{m}}\|_{\infty}+\|\log{p_{m}}\|_{\infty}\\
&\leq c\|{\textbf{N}}(n)\|^{-\beta}+\|\log{p_{m}}\|_{\infty}
\end{aligned}$$ where we also used Lemma \[pbarsupnormbound\], Theorem \[estonOmegan\], Lemma \[boundM>1\]. Therefore, $$\begin{aligned}
\left|{\hat{p}}_{m}^{(\alpha)}(\theta)\right|&\leq C(k,\alpha)\,\|p_{m}\|_{\infty}(1+\|{\textbf{N}}(n)\|^{-\beta})\frac{1+\|{\textbf{N}}(n)\|^{-\alpha\beta}+\|\log{p_{m}}\|_{\infty}^{\alpha}}{\underline{h}_{m}^{\alpha}}\\
&\leq C(\alpha,k,p_{m})\frac{1}{\underline{h}_{m}^{\alpha}}\\
&= C(\alpha,k,p_{m})(\underline{h}_{m}^{j+1})^{-\alpha/(j+1)} \ \sim \ C(\alpha,k,p_{m})\|{\textbf{N}}(n)\|^{\alpha \beta/(j+1)}
\end{aligned}$$ The final result follows immediately; since the index $i$ was chosen arbitrarily and all interior knots are simple, this concludes the proof.
Remark \[CurryScho\], in Section \[section:appendix\], allowed us to extend the bound for all $\theta \in (a,b)$ in the proof above. In reality, we can also extend the bound to the closed interval $[a,b]$. Since $a=t_{0}$ and $b=t_{K_{m}(n)}$ are knots with multiplicity $k$, any B-spline that isn’t continuous at those knots will just be a polynomial that has been cut off, which means there is no blow-up. Thus, we can extend the bound by considering right-hand and left-hand limits of derivatives at $a$ and $b$, respectively. From this point on we consider the bound in Proposition \[essboundphatder\] holds for all $\theta \in [a,b]$.
\[subpostboundlmm\] Assume hypotheses - hold. Suppose that for each $m=1,\dots,M$ the sub-posterior estimator $\hat{p}_m(\theta)$ is $\alpha$-times differentiable on $[a,b]$ for some positive integer $\alpha<k-1$.\
Then, the estimator $\hat{p}^*$ satisfies $$\label{subpostboundest}
\Big|\frac{d^{\alpha}}{d \theta^{\alpha}}\hat{p}^*(\theta)\Big| =
\big|(\hat{p}_{1}...\hat{p}_{M})^{(\alpha)}(\theta)\big| \leq C(\alpha,k,p_{1},\dots,p_{M})\|{\textbf{N}}(n)\|^{\alpha \beta/(j+1)}M^{\alpha} \quad \text{for} \quad \theta \in [a,b],$$ where $C(\alpha,k,p_1,\dots,p_M)$ depends on the order $k$ of the B-splines, the order $\alpha$ of the derivative and the densities $p_{1},\dots,p_{M}$.
Let $\theta\in [a,b]$. By Proposition we have $$|\hat{p}_m^{(\alpha)}(\theta)| \, \leq \, C(\alpha,k,p_{m})\|{\textbf{N}}(n)\|^{\alpha \beta/(j+1)}.$$ Then, using the general Leibnitz rule and employing the above inequality we obtain $$\begin{aligned}
\Big|\frac{d^{\alpha}}{d \theta^{\alpha}}\hat{p}^*(\theta)\Big| &=
\big|(\hat{p}_{1}...\hat{p}_{M})^{(\alpha)}(\theta)\big| = \\
& = \bigg|\sum_{i_{1}+\dots+i_{M}=\alpha}\dfrac{\alpha!}{i_{1}! \dots i_{M}!}\hat{p}_{1}^{(i_{1})}...\hat{p}_{M}^{(i_{M})}\bigg|
\\
& \leq \sum_{i_{1}+...+i_{M}=\alpha}\dfrac{\alpha!}{i_{1}!...i_{M}!}C(i_{1},k,p_{1})\|{\textbf{N}}(n)\|^{i_{1} \beta/(j+1)}\,...\,C(i_{M},k,p_{M})\|{\textbf{N}}(n)\|^{i_{M} \beta/(j+1)}\\
&=\|{\textbf{N}}(n)\|^{\alpha \beta/(j+1)}\sum_{i_{1}+...+i_{M}=\alpha}\dfrac{\alpha!}{i_{1}!...i_{M}!}C(i_{1},k,p_{1})\,...\,C(i_{M},k,p_{M})
\end{aligned}$$ From the proof of Proposition \[essboundphatder\], notice that $C(i,k,p_{m})\leq C(j,k,p_{m})$ for positive integers $i\leq j$. Therefore, we have $$\Big|\frac{d^{\alpha}}{d \theta^{\alpha}}\hat{p}^*(\theta)\Big|\leq C(\alpha,k,p_{1},\dots,p_{M})\|{\textbf{N}}(n)\|^{\alpha \beta/(j+1)}\sum_{i_{1}+...+i_{M}=\alpha}\dfrac{\alpha!}{i_{1}!...i_{M}!}$$ where $C(\alpha,k,p_{1},\dots,p_{M})=C(\alpha,k,p_{1})\dots C(\alpha,k,p_{M})$ and the result follows from the multinomial theorem. This concludes the proof.
Numerical approximation of the renormalization constant $\hat{c}={{\hat{\lambda}}}^{-1}$
----------------------------------------------------------------------------------------
By Remark \[CurryScho\], in Section \[section:appendix\], B-splines of order $k$, and therefore any splines built from them, have $k-2$ continuous derivatives on $(a,b)$. Thus, in order to utilize Lemma \[estinterrlmm\], the order of the Lagrange polynomials must be at most $k-2$, i.e. $l\leq k-3$. Since $l\geq 1$, this implies that the B-splines used in the construction of the logspline estimators must be at least cubic. Thus, assume $k\geq 4$ and let $1\leq l\leq k-3$ be a positive integer denoting the degree of the interpolating polynomials. Let $N\in {\mathbb{N}}$ be the number of sub-intervals of $[a,b]$ on each of which we interpolate the product of estimators by a polynomial of degree $l$. Thus each sub-interval has to be further subdivided into $l$ intervals. Define the partition $\mathcal{X}$ of $[a,b]$ such that $$\label{meshX}
\mathcal{X} = \{a=x_0 < x_1 < x_2 < \dots < x_{Nl}=b \} \, \quad \text{and} \quad x_{i+1}-x_i = \frac{b-a}{Nl} = \Delta x.$$ For each $i=0,\dots, N-1$, recalling the formula , we define the (random) Lagrange polynomial $$\label{rlagrp}
\hat{q}_i(\theta) := \sum_{\tau=0}^l {\hat{p}}^{*}(x_{il+\tau}) l_{\tau,i}(\theta)
\quad \text{with} \quad
\hspace{0.2 in}l_{\tau,i}(\theta):={\displaystyle}\prod_{j \in \{0, \dots, l\}\backslash\{\tau\}}
\left(\frac{\theta-x_{il+j}}{x_{il+\tau}-x_{il+j}}\right)\,,$$ which is a polynomial that interpolates the estimator ${\hat{p}}^{*}(\theta)$ on the interval $[x_{il},x_{(i+1)l}]$. We next define an interpolant estimator ${\tilde{p}}^{*}$ to be a [*random*]{} composite polynomial given by $$\label{compintp}
{\tilde{p}}^{*}(\theta):= \left\{
\begin{aligned}
& 0,& & \theta \in \RR \backslash [a,b] \\
& \hat{q}_i(\theta),& & \theta \in [x_{il},x_{(i+1)l}]
\end{aligned}\right.$$ which approximates the estimator ${\hat{p}}^{*}$ on the whole interval $[a,b]$.
We are now ready to estimate the mean integrated squared error given by $$\label{MISE}
\begin{aligned}
{\mbox{\rm MISE}}\big(p^{*},{\tilde{p}}^{*}\ | \ \underline{\Omega}^{M,{\textbf{N}}(n)}\big) &= \mathbb{E}\int \big( p^{*}(\theta)-{\tilde{p}}^{*}(\theta)\big)^{2}d\theta \\
\end{aligned}$$
\[MISEpptil\] Assume that hypotheses - hold and ${\tilde{p}}^{*}$ is the estimator of ${\hat{p}}^{*}$ as defined in given the partition $\mathcal{X}$ from respectively. The following estimate holds provided $1\leq l\leq k-3$. $$\label{MISEpptilest}
\begin{aligned}
{\mbox{\rm MISE}}({\hat{p}}^{*}\,,{\tilde{p}}^{*} \ | \ \underline{\Omega}^{M,{\textbf{N}}(n)}) &
\, =\, \mathbb{E}\int_{a}^{b}\big({\hat{p}}^{*}(\theta)-{\tilde{p}}^{*}(\theta)\big)^{2}d\theta \, \\
& \leq \Bigg( \frac{(\Delta x)^{l+1}}{4(l+1)}\|{\textbf{N}}(n)\|^{(l+1) \beta/(j+1)} M^{l+1} \Bigg)^2 \mathcal{C}(l+1,k,p_{1},\dots,p_{M},(a,b))
\end{aligned}$$ where the constant $\mathcal{C}(l+1,k,p_{1},\dots,p_{M},(a,b))$ depends on the order $l+1$ of the Lagrange polynomials, the order $k$ of the B-splines, the densities $p_{1},\dots,p_{M}$ and the length of the interval $(a,b)$.
Let $i \in \{0,\dots,N-1\}$. By Lemma \[estinterrlmm\], Lemma \[subpostboundlmm\], and for any $\theta \in [x_{il},x_{(i+1)l}]$ we have $$\label{locerr}
\begin{aligned}
\big|{\hat{p}}^{*}(\theta) - {\tilde{p}}^{*}(\theta)\big| &= \big|{\hat{p}}^{*}(\theta) - \hat{q}_i(\theta)\big| \\[2pt]
& \leq \bigg( \sup_{\theta \in [x_{il},x_{(i+1)l}]} \Big|\frac{d^{l+1}}{d \theta^{l+1}}\hat{p}^*(\theta)\Big|\bigg) \dfrac{(\Delta x)^{l+1}}{4(l+1)} \\
& \leq \frac{(\Delta x)^{l+1}}{4(l+1)} C(l+1,k,p_{1},\dots,p_{M})\|{\textbf{N}}(n)\|^{(l+1) \beta/(j+1)}M^{l+1}.
\end{aligned}$$
Thus we conclude that $$\begin{aligned}
\mathbb{E}\int_{a}^{b}\big({\hat{p}}^{*}(\theta)-{\tilde{p}}^{*}(\theta)\big)^{2}d\theta & = \sum_{i=0}^{N-1} \mathbb{E}\int_{x_{il}}^{x_{(i+1)l}} \big( {\hat{p}}^{*}(\theta)-\hat{q}_i(\theta)\big)^2 d\theta \\
& \leq \Bigg( \frac{(\Delta x)^{l+1}}{4(l+1)}\|{\textbf{N}}(n)\|^{(l+1) \beta/(j+1)} M^{l+1} \Bigg)^2 \mathcal{C}(l+1,k,p_{1},\dots,p_{M},(a,b)).
\end{aligned}$$ where $\mathcal{C}(l+1,k,p_{1},\dots,p_{M},(a,b))=C^{2}(l+1,k,p_{1},\dots,p_{M})(b-a)$.
Now that we have bounded the error between ${\hat{p}}^{*}$ and ${\tilde{p}}^{*}$, we define the renormalization constant $\tilde{c}$ and the density estimator ${\tilde{p}}$ of ${\hat{p}}$. $$\label{lamtil}
\frac{1}{\tilde{c}}=\tilde{\lambda}=\int_{a}^{b}{\tilde{p}}^{*}(\theta)\, d\theta \quad \text{and} \quad {\tilde{p}}:=\tilde{c}{\tilde{p}}^{*}.$$ The question now is how close $\tilde{\lambda}$ is to ${\hat{\lambda}}$; this is answered in the following lemma.
\[lamtilerror\] Given the definitions of ${\hat{\lambda}}$ and $\tilde{\lambda}$ in and respectively, we have that the distance between the two renormalization constants is bounded by $$|{\hat{\lambda}}-\tilde{\lambda}|\leq \Bigg( \frac{(\Delta x)^{l+1}}{4(l+1)}\|{\textbf{N}}(n)\|^{(l+1) \beta/(j+1)} M^{l+1} \Bigg) \mathcal{R}(l+1,k,p_{1},\dots,p_{M},(a,b))$$ where the constant $\mathcal{R}(l+1,k,p_{1},\dots,p_{M},(a,b))=C(l+1,k,p_{1},\dots,p_{M})(b-a)$.
We write $$|{\hat{\lambda}}-\tilde{\lambda}|\leq \int_{a}^{b}|{\hat{p}}^{*}(\theta)-{\tilde{p}}^{*}(\theta)|\, d\theta$$ and then we just apply the Lagrange interpolation error from Lemma \[estinterrlmm\].
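As a brief, self-contained illustration of this step, the sketch below approximates $\tilde{\lambda}=\int_a^b{\tilde{p}}^{*}(\theta)\,d\theta$ by composite quadrature on the interpolation grid and then renormalizes; `p_hat_star` and the interval are again placeholders.

```python
# Minimal sketch: approximate lambda_tilde = \int \tilde{p}^* and renormalize.
# For degree l = 2 on a uniform grid, integrating the piecewise quadratic
# interpolant block by block is exactly composite Simpson's rule.
# `p_hat_star` and the interval are placeholders.
import numpy as np
from scipy.integrate import simpson

def p_hat_star(theta):
    """Placeholder for the unnormalized product of logspline estimators."""
    return np.exp(-0.5 * theta**2)

a, b, N_blocks, l = -3.0, 3.0, 50, 2
x = np.linspace(a, b, N_blocks * l + 1)            # same partition X as above
values = p_hat_star(x)

lam_tilde = simpson(values, x=x)                   # lambda_tilde = \int_a^b \tilde{p}^*(theta) dtheta
p_tilde = values / lam_tilde                       # normalized estimator \tilde{p} on the grid
```

The Simpson-rule identity above holds only for $l=2$ with uniform nodes; for other degrees one would integrate each Lagrange block with the corresponding Newton-Cotes weights.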
We will continue by following the same steps as in subsection \[AnalysisRenormConst\]. The idea is to introduce a functional that will scale the same as ${\mbox{\rm MISE}}({\hat{p}}\,,{\tilde{p}}\ | \ \underline{\Omega}^{M,{\textbf{N}}(n)})$.
\[MISEunderbar\] Suppose $M\geq 1$ and hypotheses , and hold. Given the sample subspace $\underline{\Omega}^{M,{\textbf{N}}(n)}$ we define the functional $$\underline{{\mbox{\rm MISE}}}\left({\hat{p}},{\tilde{p}}\ | \ \underline{\Omega}^{M,{\textbf{N}}(n)}\right)={\mathbb{E}}_{\underline{\Omega}^{M,{\textbf{N}}(n)}}\left[ \left(\frac{\tilde{\lambda}}{{\hat{\lambda}}(\omega)}\right)^{2}\int ({\hat{p}}(\theta;\omega)-{\tilde{p}}(\theta))^{2}\,d\theta \right]$$
\[MISEunderbarasym\] The functional $\underline{{\mbox{\rm MISE}}}$ is asymptotically equivalent to ${\mbox{\rm MISE}}$ on $\underline{\Omega}^{M,{\textbf{N}}(n)}$, in the sense that $$\lim_{\Delta x\to 0}\frac{\underline{{\mbox{\rm MISE}}}\left({\hat{p}},{\tilde{p}}\ | \ \underline{\Omega}^{M,{\textbf{N}}(n)}\right)}{{\mbox{\rm MISE}}\left({\hat{p}},{\tilde{p}}\ | \ \underline{\Omega}^{M,{\textbf{N}}(n)}\right)}=1$$
Notice that $\underline{{\mbox{\rm MISE}}}$ can be written as $$\begin{aligned}
\underline{{\mbox{\rm MISE}}}\left({\hat{p}},{\tilde{p}}\ | \ \underline{\Omega}^{M,{\textbf{N}}(n)}\right)&={\mathbb{E}}_{\underline{\Omega}^{M,{\textbf{N}}(n)}}\left[ \left(\frac{\tilde{\lambda}}{{\hat{\lambda}}}-1+1\right)^{2}\int ({\hat{p}}(\theta;\omega)-{\tilde{p}}(\theta))^{2}\,d\theta \right]\\
&={\mathbb{E}}_{\underline{\Omega}^{M,{\textbf{N}}(n)}}\left[\left( \lambda^{-2}\left(\frac{\lambda}{{\hat{\lambda}}}\right)^{2}\left(\tilde{\lambda}-{\hat{\lambda}}\right)^{2}+2\lambda^{-1}\frac{\lambda}{{\hat{\lambda}}}\left(\tilde{\lambda}-{\hat{\lambda}}\right)+1\right)\int ({\hat{p}}(\theta;\omega)-{\tilde{p}}(\theta))^{2}\,d\theta \right]\,.
\end{aligned}$$ Thus, by Lemmas \[lambdahaterror\] and \[lamtilerror\], where the former implies $$\begin{aligned}
\frac{\lambda}{{\hat{\lambda}}}&\leq \frac{1}{1-C\,M^{1-\beta}\|{\textbf{N}}(n)\|^{-\beta}}\,,
\end{aligned}$$ and for large enough $n$ for which $1-C\,M^{1-\beta}\|{\textbf{N}}(n)\|^{-\beta}>0$, we have $$\underline{{\mbox{\rm MISE}}}\left({\hat{p}},{\tilde{p}}\ | \ \underline{\Omega}^{M,{\textbf{N}}(n)}\right)=(1+\mathcal{E}(n)){\mbox{\rm MISE}}\left({\hat{p}},{\tilde{p}}\ | \ \underline{\Omega}^{M,{\textbf{N}}(n)}\right)$$ with $\mathcal{E}(n)=O(M^{l+1}(\Delta x)^{l+1})$. This then implies the result.
\[MISEphatptil\] Let $M\geq 1$. Assume the conditions - hold. Then $${\mbox{\rm MISE}}\left( {\hat{p}},{\tilde{p}}\ | \ \underline{\Omega}^{M,{\textbf{N}}(n)} \right)=O\left[ \Big( \|{\textbf{N}}(n)\|^{\beta/(j+1)} (\Delta x) M \Big)^{2(l+1)}\right].$$
We will do the work for $\underline{{\mbox{\rm MISE}}}$ and the result will follow from Proposition \[MISEunderbarasym\]. Notice that $\underline{{\mbox{\rm MISE}}}$ can be written as below. Also, let ${\mathbb{E}}_n(\cdot)={\mathbb{E}}(\cdot|\underline{\Omega}^{M,{\textbf{N}}(n)})$ $$\begin{aligned}
\underline{{\mbox{\rm MISE}}}\left( {\hat{p}},{\tilde{p}}\ | \ \underline{\Omega}^{M,{\textbf{N}}(n)} \right)&={\mathbb{E}}_n\left[ \left(\frac{\tilde{\lambda}}{{\hat{\lambda}}}\right)^{2}\int ({\hat{p}}(\theta;\omega)-{\tilde{p}}(\theta))^{2}\,d\theta \right]\\
&={\mathbb{E}}_n\int \left(\frac{\tilde{\lambda}}{{\hat{\lambda}}}{\hat{p}}-\frac{1}{{\hat{\lambda}}}{\tilde{p}}^{*}-{\hat{p}}+{\hat{p}}\right)^{2}\,d\theta\\
&\leq \frac{\lambda^{-1}}{1-C\,M^{1-\beta}\|{\textbf{N}}(n)\|^{-\beta}}{\mathbb{E}}_n\int \left((\tilde{\lambda}-{\hat{\lambda}})({\hat{p}}-p)+(\tilde{\lambda}-{\hat{\lambda}})p+({\hat{p}}^{*}-{\tilde{p}}^{*})\right)^{2}\,d\theta\\
&\leq \frac{\lambda^{-1}}{1-C\,M^{1-\beta}\|{\textbf{N}}(n)\|^{-\beta}}(J_1+J_2+J_3+J_4+J_5+J_6)
\end{aligned}$$ where $$\begin{aligned}
&J_1={\mathbb{E}}_n\int (\tilde{\lambda}-{\hat{\lambda}})^{2}({\hat{p}}-p)^{2}\,d\theta,&
&J_2={\mathbb{E}}_n\int (\tilde{\lambda}-{\hat{\lambda}})^{2}p^{2}\,d\theta,&\\
&J_3={\mathbb{E}}_n\int ({\hat{p}}^{*}-{\tilde{p}}^{*})^{2}\,d\theta,&
&J_4=2\,{\mathbb{E}}_n\int (\tilde{\lambda}-{\hat{\lambda}})^{2}({\hat{p}}-p)p\,d\theta,&\\
&J_5=2\,{\mathbb{E}}_n\int (\tilde{\lambda}-{\hat{\lambda}})({\hat{p}}-p)({\hat{p}}^{*}-{\tilde{p}}^{*})\,d\theta,&
&J_6=2\,{\mathbb{E}}_n\int (\tilde{\lambda}-{\hat{\lambda}})({\hat{p}}^{*}-{\tilde{p}}^{*})p\,d\theta.&
\end{aligned}$$ By hypotheses - and Lemmas \[MISEpphat\], \[MISEpptil\] and \[lamtilerror\], we obtain $$\begin{aligned}
\vert J_1\vert &\leq C_1\Big( \|{\textbf{N}}(n)\|^{\beta/(j+1)} (\Delta x) M \Big)^{2(l+1)}\cdot M^{2-2\beta}\|{\textbf{N}}(n)\|^{-2\beta}\\
\vert J_2\vert +\vert J_3\vert +\vert J_6\vert &\leq C_2\Big( \|{\textbf{N}}(n)\|^{\beta/(j+1)} (\Delta x) M \Big)^{2(l+1)}\\
\vert J_4\vert +\vert J_5\vert &\leq C_3\Big( \|{\textbf{N}}(n)\|^{\beta/(j+1)} (\Delta x) M \Big)^{2(l+1)}\cdot M^{1-\beta}\|{\textbf{N}}(n)\|^{-\beta}
\end{aligned}$$ which, for large $n$, imply the result.
\[MISEestthm\] Assume that hypotheses - hold. Let ${\tilde{p}}$ be the polynomial that interpolates ${\hat{p}}$ as defined in , given the partition $\mathcal{X}$. We then have the estimate $$\label{MISEest}
\begin{aligned}
{\mbox{\rm MISE}}(p\,,{\tilde{p}}\ | \ \underline{\Omega}^{M,{\textbf{N}}(n)}) &
\, =\, \mathbb{E}\int_{a}^{b}\big(p(\theta)-{\tilde{p}}(\theta)\big)^{2}d\theta \, \\
& \, \leq \, \mathcal{C}\left[ M^{2-2\beta}\|{\textbf{N}}(n)\|^{-2\beta}+\Bigg((\Delta x)\|{\textbf{N}}(n)\|^{\beta/(j+1)} M \Bigg)^{2(l+1)} \right]
\end{aligned}$$ where the constant $\mathcal{C}$ depends on the order $k$ of the B-splines, the degree $l$ of the interpolating polynomial, the densities $p_{1},\dots,p_{M}$ and the length of the interval $(a,b)$. Furthermore, if $\Delta x$ is chosen as a function of the vector of sample sizes ${\textbf{N}}(n)$, then ${\mbox{\rm MISE}}$ scales optimally with respect to ${\textbf{N}}(n)$, in the sense that $$\label{MISEestthm_b}
{\mbox{\rm MISE}}(p\,,{\tilde{p}}\ | \ \underline{\Omega}^{M,{\textbf{N}}(n)}) \quad \leq \, C \|{\textbf{N}}(n)\|^{-2\beta} \quad \text{when} \quad \Delta x = O\bigg( \|{\textbf{N}}(n)\|^{-\beta\left(\frac{1}{l+1}+\frac{1}{j+1}\right)}\bigg) \,.$$
Observe that $$\begin{aligned}
{\mbox{\rm MISE}}(p\,,{\tilde{p}}\ | \ \underline{\Omega}^{M,{\textbf{N}}(n)})
& \leq \mathbb{E}\int_{a}^{b}\big(p(\theta)-\hat{p}(\theta)\big)^{2}d\theta+\mathbb{E}\int_{a}^{b}\big(\hat{p}(\theta)-{\tilde{p}}(\theta)\big)^{2}d\theta
\\ & =: I_1 + I_2.
\end{aligned}$$ The estimate then follows from Theorem \[MISEpphat\] and Theorem \[MISEphatptil\]. Using that estimate we can ask the following question. Suppose that we choose $\Delta x$ to be a function of the number of samples so that $$\label{dxfunc}
c_{1}\|{\textbf{N}}(n)\|^{-\alpha} \leq \Delta x (n) \leq c_{2} \| {\textbf{N}}(n) \|^{-\alpha}$$ for some constants $c_{1},c_{2}$ and $\alpha$. Clearly, one would not like $\Delta x$ to be excessively small, in order to avoid the numerical difficulties caused by round-off error. On the other hand, one would like the error to converge to zero as fast as possible. Thus let us find the smallest rate $\alpha$ for which the asymptotic rate of convergence achieves its maximum. To this end we define the function $$R(\alpha) := -\lim_{\|{\textbf{N}}(n)\| \to \infty} \log_{\|{\textbf{N}}(n)\|} {\mbox{\rm MISE}}(p,{\tilde{p}}\ | \ \underline{\Omega}^{M,{\textbf{N}}(n)})$$ that describes the asymptotic rate of convergence of the mean integrated squared error. By we have $$R(\alpha) = \left\{
\begin{aligned}
& 2\beta,& \alpha \geq \beta\left(\frac{1}{l+1}+\frac{1}{j+1}\right) & \\
& \left(\alpha-\frac{\beta}{j+1}\right)2(l+1), & \alpha < \beta\left(\frac{1}{l+1}+\frac{1}{j+1}\right) &
\end{aligned}\right.$$ It is obvious that the smallest rate for which the function $R(\alpha)$ achieves its maximum value of $2\beta$ is given by $\alpha=\beta\left(\frac{1}{l+1}+\frac{1}{j+1}\right)$. This concludes the proof.
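For concreteness, the optimal exponent $\alpha$ can be computed directly from $\beta$, $l$ and $j$. The snippet below is a small illustrative helper (the function name `optimal_alpha` is ours, not part of the analysis above); the parameter values shown are the ones used in the numerical experiments of the next section.

```python
# Illustrative helper for the rate discussion above: the smallest exponent
# alpha in Delta x ~ ||N(n)||^{-alpha} attaining the optimal MISE rate
# ||N(n)||^{-2 beta}.
def optimal_alpha(beta, l, j):
    return beta * (1.0 / (l + 1) + 1.0 / (j + 1))

# With beta = 1/2 and l = j = 1 (the choices made in the experiments below),
# the optimal alpha is 1/2 and the corresponding MISE rate is ||N(n)||^{-1}.
print(optimal_alpha(0.5, 1, 1))  # 0.5
```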
Numerical Experiments {#section:num_exp}
=====================
Numerical experiment with normal subset posterior densities
-----------------------------------------------------------
### Description of experiment
This numerical experiment, as well as the following ones, is designed to investigate the relationship between the approximated value of ${\mbox{\rm MISE}}(p\,,{\tilde{p}}\ | \ \underline{\Omega}^{M,{\textbf{N}}(n)})$ and the bound given by . One iteration of the experiment generates $M=3$ subsets, each containing a predetermined number of MCMC samples drawn from $\mathcal{N}(2,1)$, so that each subset posterior ${\hat{p}}_{m}$, $m=1,2,3$, estimates a $\mathcal{N}(2,1)$ density. Then, for each iteration, the Lagrange polynomial ${\tilde{p}}$ is computed a hundred times by re-sampling in order to obtain an approximation to ${\mbox{\rm MISE}}$ and its standard deviation. For this specific example, we perform ten iterations, starting with $20,000$ samples and increasing that number by $10,000$ for each iteration. The parameters were chosen so that the optimal rate of convergence for ${\mbox{\rm MISE}}$ is attained: $\beta=1/2$ was chosen, the logspline density estimation used cubic B-splines (thus, order $k=4$), which implies $l=1$ in , and we chose $j=1$. This yields the rate $C\|{\textbf{N}}\|^{-1}$ as the upper bound for the convergence rate of ${\mbox{\rm MISE}}$.
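The structure of one such experiment can be sketched as follows. The code below is only a schematic re-creation of the re-sampling loop: a crude histogram estimator (`density_estimate`, a name of our choosing) stands in for the logspline/Lagrange estimator ${\tilde{p}}$ actually studied in the paper, so the numbers it produces illustrate the procedure rather than the bound itself.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import trapezoid

# Schematic version of the re-sampling loop that approximates MISE and its
# standard deviation.  A histogram estimator stands in for the logspline-based
# estimator p-tilde of the paper; only the structure of the experiment is
# reproduced here.
def density_estimate(draws, grid):
    hist, edges = np.histogram(draws, bins=100, range=(grid[0], grid[-1]),
                               density=True)
    idx = np.clip(np.searchsorted(edges, grid, side="right") - 1,
                  0, len(hist) - 1)
    return hist[idx]

def mise_approximation(n_samples, n_repeats=100, seed=0):
    rng = np.random.default_rng(seed)
    grid = np.linspace(-2.0, 6.0, 512)
    p_true = norm(loc=2.0, scale=1.0).pdf(grid)
    ise = []
    for _ in range(n_repeats):
        draws = rng.normal(2.0, 1.0, size=n_samples)
        est = density_estimate(draws, grid)
        ise.append(trapezoid((est - p_true) ** 2, grid))
    return np.mean(ise), np.std(ise)

# Ten iterations, 20,000 to 110,000 samples in steps of 10,000, as in the text.
for n in range(20_000, 120_000, 10_000):
    mean_ise, sd_ise = mise_approximation(n)
    print(n, mean_ise, sd_ise)
```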
### Numerical results
![The full data posterior (black line) is shown with the 3 subset posterior densities (red, blue, green) for one iteration of 110,000 samples.[]{data-label="Fig1_norm"}](normalplot1_alt.jpeg "fig:")\
![The full data posterior (black line) is shown with the combined subset posterior density (blue points) for one iteration of 110,000 samples.[]{data-label="Fig2_norm"}](normalplot2.jpeg "fig:")\
![The average ${\mbox{\rm MISE}}$ estimate is depicted for the ten experiments along with standard deviation bars (black) plotted on a log-log scale with a regression line added. The red line is the upper bound of as calculated for the different number of samples.[]{data-label="Fig3_norm_reg"}](normalplot3_log_reg.jpeg "fig:")\
Notice in Figure \[Fig3\_norm\_reg\] how the regression line and the theoretical error line appear parallel. This suggests that the rate obtained from is satisfied numerically.
Numerical experiment with gamma subset posterior densities
----------------------------------------------------------
### Description of experiment
This experiment mimics the previous one, except that the MCMC samples are now generated from a $Gamma(1,1)$ distribution instead of a normal distribution. The number of samples again increases from $20,000$ to $110,000$ in increments of $10,000$ per iteration. Furthermore, $M=5$ subsets are now created.
### Numerical results
![The full data posterior (black line) is shown with the 5 subset posterior densities (red, blue, green, purple, gray) for one iteration of 110,000 samples.[]{data-label="Fig1_gam"}](gammaplot1_alt.jpeg "fig:")\
![The full data posterior (black line) is shown with the combined subset posterior density (blue points) for one iteration of 110,000 samples.[]{data-label="Fig2_gam"}](gammaplot2.jpeg "fig:")\
![The averaged ${\mbox{\rm MISE}}$ is depicted for the ten experiments along with standard deviation bars (black) plotted on a log-log scale with a regression line added. The red line is the upper bound of as calculated for the different number of samples.[]{data-label="Fig3_gam_reg"}](gammaplot3_log_reg.jpeg "fig:")\
The result we obtain from Figure \[Fig3\_gam\_reg\] is similar to the previous example. The rate from the bound is again numerically satisfied.
Numerical experiment conducted on flights from/to New York in January 2018
--------------------------------------------------------------------------
### Description of experiment
In this series of experiments we employ data on US flights from or to the state of New York for the month of January 2018. The data were obtained from [@DOT]. We were specifically interested in the delayed arrival times, that is, flights that arrived 15 minutes or more after the scheduled time. There were a total of 12,100 such flights, which in turn were divided into 5 data subsets of 2,420 flights each. We assumed that the delayed arrival times follow a Gamma distribution with some shape parameter and rate parameter. In what follows, we do inference for the shape parameter, denoted by $\alpha$. Using the JAGS sampling package [@Plummer], we generated samples from the marginal full data posterior distribution and the subset posterior distributions for $\alpha$. The data were shuffled beforehand to ensure that the condition of independence between subsets is satisfied. Ten iterations were performed, starting with 20,000 samples and increasing that number by 10,000. In each iteration, the values were then re-sampled 100 times in order to obtain an approximation to MISE and its standard deviation. Similar to the first example in this section, the parameters were chosen so as to achieve the optimal convergence rate for MISE. Therefore, $\beta=1/2$, cubic B-splines were implemented, which implies $l=1$, and we chose $j=1$. These yield the rate of $C\|{\textbf{N}}\|^{-1}$ for MISE, as given in .
### Numerical results
![The full data posterior (black line) is shown with the 5 subset posterior densities (red, blue, green, purple, gray) for one iteration of 110,000 samples.[]{data-label="Fig1_air"}](airline_alpha_plot1.jpeg)
![The full data posterior (black line) is shown with the combined subset posterior density (blue points) for one iteration of 110,000 samples.[]{data-label="Fig2_air"}](airline_alpha_plot2.jpeg)
![The averaged ${\mbox{\rm MISE}}$ is depicted for the ten experiments along with standard deviation bars (black) plotted on a log-log scale with a regression line added. The red line is the upper bound of as calculated for the different number of samples.[]{data-label="Fig3_air_reg"}](airline_alpha_plot3.jpeg)
From Figure \[Fig3\_air\_reg\], the conclusion is similar to the previous examples with the simulated data. The regression line shows that the rate given by , with the choice of parameters as mentioned in the description, is again numerically satisfied.
Appendix {#section:appendix}
========
Here we provide all the relevant results related to B-splines and logspline density estimators based on the works of [@de; @Boor; @StoneKoo; @Stone89; @Stone90].
B-Splines
---------
In this section we will define the logspline family of densities and present an overview of how the logspline density estimator is chosen for the density $p$. The idea behind logspline density estimation of an unknown density $p$ is that the logarithm of $p$ is estimated by a spline function, a piecewise polynomial that interpolates the function to be estimated. Therefore, the family of estimators constructed for the unknown density consists of exponentials of splines, suitably normalized so that they are densities. To build up the estimation method, we start with the building blocks of splines themselves, the functions we call basis splines, or B-splines for short, whose linear combinations generate the set of splines of a given order.
So, the first question we will answer is how to construct B-splines. There are several ways to do this, some less intuitive than others. The approach we take is via **divided differences**, a recursive process used to calculate the coefficients of interpolating polynomials written in a specific form called the Newton form.
\[divdifdef\] The kth divided difference of a function g at the knots $t_{0},\dots,t_{k}$ is the leading coefficient (meaning the coefficient of $x^{k}$) of the interpolating polynomial q of order k+1 that agrees with g at those knots. We denote this number as $$[t_{0},\dots,t_{k}]g$$
Here we use the terminology found in de Boor [@de; @Boor], where a polynomial of order k+1 is a polynomial of degree less than or equal to k. It’s better to work with the “order” of a polynomial since all polynomials of a certain order form a vector space, whereas polynomials of a certain degree do not. The term “agree” in the definition means that for the sequence of knots $(t_{i})_{i=0}^{k}$, if $\zeta$ appears in the sequence m times, then for the interpolating polynomial we have $$q^{(i-1)}(\zeta)=g^{(i-1)}(\zeta), \qquad i=1,\dots,m$$ Since the interpolating polynomial depends only on the data points, the order in which the values of $t_{0},\dots,t_{k}$ appear in the notation in does not matter. Also, if all the knots are distinct, then the interpolating polynomial is unique.
At this point let’s write down some examples to see how the recursion algorithm pops up. If we want to interpolate a function g using only one knot, say $t_{0}$, then we will of course have the constant polynomial $q(x)=g(t_{0})$. Thus, since $g(t_{0})$ is the only coefficient, we have $$[t_{0}]g=g(t_{0})$$ Now suppose we have two knots, $t_{0},t_{1}$.\
If $t_{0}\neq t_{1}$, then q is the secant line defined by the two points $(t_{0},g(t_{0}))$ and $(t_{1},g(t_{1}))$. Thus, the interpolating polynomial will be given by $$q(x)=g(t_{0})+(x-t_{0})\frac{g(t_{1})-g(t_{0})}{t_{1}-t_{0}}$$ Therefore, $$[t_{0},t_{1}]g=\frac{g(t_{1})-g(t_{0})}{t_{1}-t_{0}}=\frac{[t_{1}]g-[t_{0}]g}{t_{1}-t_{0}}$$ To see what happens when $t_{0}=t_{1}$, we can take the limit $t_{1} \to t_{0}$ above and thus $[t_{0},t_{1}]g=g'(t_{0})$.
Continuing these calculations for more knots yields the following result, which is also illustrated numerically right after the lemma:
\[recuralgo\] Given a function g and a sequence of knots $(t_{i})_{i=0}^{k}$, the kth divided difference of g is given by
- $[t_{0},\dots,t_{k}]g={\displaystyle}\frac{g^{(k)}(t_{0})}{k!}$ when $t_{0}=\dots=t_{k}, g\in C^{k}$, therefore yielding the leading coefficient of the Taylor approximation of order k+1 to g.
- $[t_{0},\dots,t_{k}]g={\displaystyle}\frac{[t_{0},\dots,t_{r-1},t_{r+1},\dots,t_{k}]g-[t_{0},\dots,t_{s-1},t_{s+1},\dots,t_{k}]g}{t_{s}-t_{r}}$, where $t_{r}$ and $t_{s}$ are any two distinct knots in the sequence $(t_{i})_{i=0}^{k}$.
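For distinct knots, part (b) of the lemma translates directly into the familiar divided-difference table. The sketch below is a minimal implementation of that recursion (distinct knots only; the confluent case of part (a) would require derivative values of $g$). The function name `divided_difference` is ours, chosen for illustration.

```python
import numpy as np

# Divided-difference table for distinct knots, following Lemma [recuralgo](b).
def divided_difference(g, knots):
    vals = [g(t) for t in knots]                 # level 0: [t_i]g = g(t_i)
    for level in range(1, len(knots)):
        vals = [(vals[i + 1] - vals[i]) / (knots[i + level] - knots[i])
                for i in range(len(vals) - 1)]
    return vals[0]                               # [t_0, ..., t_k]g

# [t_0, t_1]g reduces to the usual difference quotient:
print(divided_difference(np.sin, [0.0, 0.5]))    # (sin(0.5) - sin(0)) / 0.5
print(divided_difference(np.exp, [0.0, 0.5, 1.0, 2.0]))
```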
Now that we have defined the kth divided difference of a function, we can easily state what B-splines are. B-splines arise as appropriately scaled divided differences of the positive part of a certain power function and it can be shown that B-splines form a basis of the linear space of splines of some order. Let’s start with the definition.
\[bsplinedef\] Let $t=(t_{i})_{i=0}^{N}$ be a nondecreasing sequence of knots. Let $1\leq k \leq N$. The j-th B-spline of order k, with $j\in \{0,1,\dots,N-k\}$, for the knot sequence $(t_{i})_{i=0}^{N}$ is denoted by $B_{j,k,t}$ and is defined by the rule $$B_{j,k,t}(x)=(t_{j+k}-t_{j})[t_{j},\dots,t_{j+k}](\cdot-x)_{+}^{k-1}$$ where $(\cdot)_{+}$ denotes the positive part of a function, i.e. $(f(x))_{+}=\max\{f(x),0\}$.
The “placeholder” notation in the above definition means that the kth divided difference is applied to $(t-x)_{+}^{k-1}$ regarded as a function of $t$, with $x$ held fixed. The resulting number of course varies as $x$ varies, giving rise to the function $B_{j,k,t}$. If either $k$ or $t$ can be inferred from context, we will usually drop them from the notation and write $B_{j}$ instead of $B_{j,k,t}$. A direct consequence of the above definition is a bound on the support of $B_{j,k,t}$.
\[Bsplinesupp\] Let $B_{j,k,t}$ be defined as in \[bsplinedef\]. Then the support of the function is contained in the interval $[t_{j},t_{j+k})$.
All we need to do is show that if $x\notin [t_{j},t_{j+k})$, then $B_{j,k,t}(x)=0$.\
Suppose first that $x\geq t_{j+k}$. Then we will have that $t_{i}-x\leq 0$ for $i=j,\dots,j+k$ which in turn implies $(t_{i}-x)_{+}=0$ and finally $[t_{j},\dots,t_{j+k}](\cdot-x)_{+}^{k-1}=0$.\
On the other hand, if $x<t_{j}$, then since $(t-x)_{+}^{k-1}$ as a function of $t$ is a polynomial of order $k$ and we have $k+1$ sites where it agrees with its interpolating polynomial, necessarily they are both the same. This implies $[t_{j},\dots,t_{j+k}](\cdot-x)_{+}^{k-1}=0$ since the coefficient of $t^{k}$ is zero.
Recurrence relation and various properties
------------------------------------------
Since we defined B-splines using divided differences, we can use them to state the **recurrence relation** for B-splines, which will be useful when we later prove various properties of these functions. We start by stating and proving the Leibniz formula, which will be needed in the proof of the recurrence relation.
\[Leibniz\] Suppose $f,g,h$ are functions such that $f=g\cdot h$, meaning $f(x)=g(x)h(x)$ for all $x$ and let $(t_{i})$ be a sequence of knots. Then we have the following formula $$[t_{j},\dots,t_{j+k}]f=\sum_{r=j}^{j+k}([t_{j},\dots,t_{r}]g)([t_{r},\dots,t_{j+k}]h), \quad \text{for some} \ j,k\in {\mathbb{N}}.$$
First of all, observe that the function $$\left(g(t_{j})+\sum_{r=j+1}^{j+k}(x-t_{j})\dots(x-t_{r-1})[t_{j},\dots,t_{r}]g\right)\cdot \left(h(t_{j+k})+\sum_{s=j}^{j+k-1}(x-t_{s+1})\dots(x-t_{j+k})[t_{s},\dots,t_{j+k}]h\right)$$ agrees with $f$ at the knots $t_{j},\dots,t_{j+k}$ since the first and second factor agree with $g$ and $h$ respectively at those values. Now, observe that if $r>s$ then the above product vanishes at all the knots since the term $(x-t_{i})$ for $i=j,\dots,j+k$ will appear in at least one of the two factors. Thus, the above agrees with $f$ at $t_{j},\dots,t_{j+k}$ when $r\leq s$. But then the product turns into a polynomial of order $k+1$ whose leading coefficient is $$\sum_{r=s}([t_{j},\dots,t_{r}]g)([t_{s},\dots,t_{j+k}]h)$$ and that of course must be equal to $$[t_{j},\dots,t_{j+k}]f$$
Now we can state and prove the recurrence relation for B-splines.
\[Bsplinerecdef\] Let $t=(t_{i})_{i=0}^{N}$ be a sequence of knots and let $1\leq k \leq N$. For $j\in \{0,1,\dots,N-k\}$ we can construct the $j$-th B-spline $B_{j,k}$ of order $k$ associated with the knots $t=(t_{i})_{i=0}^{N}$ as follows:
- First we have $B_{j,1}$ be the characteristic function on the interval $[t_{j},t_{j+1})$ $$B_{j,1}(x) = \left \{
\begin{aligned}
&1,& & x \in [t_{j},t_{j+1})&\\
&0,& & x \notin [t_{j},t_{j+1}) &
\end{aligned}
\right.$$
- The B-splines of order $k$ for $k>1$ on $[t_{j},t_{j+k})$ are given by $$B_{j,k}(x)=\frac{x-t_{j}}{t_{j+k-1}-t_{j}}B_{j,k-1}(x)+\frac{t_{j+k}-x}{t_{j+k}-t_{j+1}}B_{j+1,k-1}(x)$$
\(1) easily follows from the definition we gave for B-splines using divided differences in Definition \[bsplinedef\]. (2) can be proven using Lemma \[Leibniz\]. Since B-splines were defined using the function $(t-x)_{+}^{k-1}$ for fixed $x$, we apply the Leibniz formula for the kth divided difference to the product $$(t-x)_{+}^{k-1}=(t-x)(t-x)_{+}^{k-2}$$ This yields $$\label{Leibnizres}
[t_{j},\dots,t_{j+k}](\cdot-x)_{+}^{k-1}=(t_{j}-x)[t_{j},\dots,t_{j+k}](\cdot-x)_{+}^{k-2}+1\cdot[t_{j+1},\dots,t_{j+k}](\cdot-x)_{+}^{k-2}$$ since $[t_{j}](\cdot-x)=(t_{j}-x),\ [t_{j},t_{j+1}](\cdot-x)=1$ and $[t_{j},\dots,t_{r}](\cdot-x)=0$ for $r>j+1$. Now, from Lemma \[recuralgo\] (b), we have that $(t_{j}-x)[t_{j},\dots,t_{j+k}](\cdot-x)_{+}^{k-2}$ can be written as $$(t_{j}-x)[t_{j},\dots,t_{j+k}](\cdot-x)_{+}^{k-2}=\frac{t_{j}-x}{t_{j+k}-t_{j}}([t_{j+1},\dots,t_{j+k}]-[t_{j},\dots,t_{j+k-1}])$$ Thus, by replacing that term in the result we obtained by Leibniz, we get $$[t_{j},\dots,t_{j+k}](\cdot-x)_{+}^{k-1}=\frac{x-t_{j}}{t_{j+k}-t_{j}}[t_{j},\dots,t_{j+k-1}](\cdot-x)_{+}^{k-2}+\frac{t_{j+k}-x}{t_{j+k}-t_{j}}[t_{j+1},\dots,t_{j+k}](\cdot-x)_{+}^{k-2}$$ The result in (2) follows immediately once we multiply both sides by $(t_{j+k}-t_{j})$ and then multiply and divide the first term in the sum on the right hand side by $(t_{j+k-1}-t_{j})$ and then multiply and divide the second term by $(t_{j+k}-t_{j+1})$.
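The recurrence in Lemma \[Bsplinerecdef\] is also the standard way to evaluate B-splines numerically (the Cox–de Boor recursion). The sketch below implements it directly, with the usual convention that terms with a zero denominator (which can only occur at repeated knots) are dropped; the knot choice matches the example with seven simple knots and $k=3$ discussed further below, and the function name `bspline` is ours.

```python
import numpy as np

# Cox-de Boor evaluation of B_{j,k,t}(x), following Lemma [Bsplinerecdef];
# terms with a zero denominator (repeated knots) are taken to be zero.
def bspline(j, k, t, x):
    if k == 1:
        return 1.0 if t[j] <= x < t[j + 1] else 0.0
    left = right = 0.0
    if t[j + k - 1] > t[j]:
        left = (x - t[j]) / (t[j + k - 1] - t[j]) * bspline(j, k - 1, t, x)
    if t[j + k] > t[j + 1]:
        right = (t[j + k] - x) / (t[j + k] - t[j + 1]) * bspline(j + 1, k - 1, t, x)
    return left + right

t = np.arange(7.0)                                # simple knots t_0, ..., t_6,  k = 3
xs = np.linspace(2.0, 4.0, 201, endpoint=False)   # basic interval [t_2, t_4)
sums = [sum(bspline(j, 3, t, x) for j in range(4)) for x in xs]
print(max(abs(s - 1.0) for s in sums))            # partition of unity, up to rounding
```

The last line checks the partition-of-unity property on the basic interval, which is proved later in this section; with the repeated boundary knots introduced in Definition \[Bsplinespace\], the same recursion yields a partition of unity on all of $[a,b)$, the right endpoint being handled by the left-continuity convention of the text.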
From the recurrence relation we acquire information about B-splines that was not clear from the first definition using divided differences. $B_{j,1}$ is a characteristic function, hence piecewise constant. By Lemma \[Bsplinerecdef\] (b), since the coefficients of $B_{j,k-1}$ are linear functions of $x$, $B_{j,2}$ is a piecewise linear function on $[t_{j},t_{j+2})$. Inductively, $B_{j,3}$ is a piecewise parabolic function on $[t_{j},t_{j+3})$, $B_{j,4}$ is a piecewise polynomial of degree 3 on $[t_{j},t_{j+4})$, and so on. Below is a visual representation of B-splines showing how the graph changes as the order increases.
![image](Bsplineskorder.png)
Since we have now defined what a B-spline is as a function, the next step is to ask what set is generated by linear combinations of these functions. Since B-splines are themselves piecewise polynomials, this set is a subset of the set of piecewise polynomials with breaks at the knots $(t_{i})$. It can be shown, however, that it is exactly the set of piecewise polynomials with certain break and continuity conditions at the knots, and that this equality occurs on a smaller interval, which we call the basic interval and denote by $I_{k,t}$.
\[basicint\] Suppose $t=(t_{0},\dots,t_{N})$ is a nondecreasing sequence of knots. Then for the B-splines of order $k$, with $2k<N+2$, that arise from these knots, we define $I_{k,t}=[t_{k-1},t_{N-k+1}]$ and call it the **basic interval**.
In order for this definition to be correct, we need to extend the B-splines and have them be left continuous at the right endpoint of the basic interval since we are defining it as a closed interval.
The basic interval for the $N-k+1$ B-splines of order $k>1$ is defined in such a way that at least two of them are always supported on any subinterval of $I_{k,t}$, and later we will see that the B-splines form a partition of unity on the basic interval. For $k=1$, by construction the B-splines already form a partition of unity on $I_{1,t}=[t_{0},t_{N}]$.
For example, let $t=(t_{i})_{i=0}^{6}$ be distinct and $k=3$. Then there are 4 B-splines, $B_{j,3},\ j=0,1,2,3$, of order 3 that arise in this framework. Their supports are $[t_{0},t_{3}),\ [t_{1},t_{4}),\ [t_{2},t_{5}),\ [t_{3},t_{6})$ respectively. Clearly, on $[t_{0},t_{1})$ only $B_{0,3}$ is supported, and since $B_{0,3}$ is non-constant there, the sum ${\displaystyle}\sum_{j=0}^{3}B_{j,3}$, which equals $B_{0,3}$ on $[t_{0},t_{1})$, cannot be identically 1.
The partition of unity is stated and proved in the next lemma together with other properties of the B-splines. The recurrence relation makes the proofs fairly easy compared to using the divided difference definition of the B-splines.
Let $B_{j,k,t}$ be the function as given in Definition \[bsplinedef\] for the knot sequence $t=(t_{i})_{i=0}^{N}$. Then the following hold:
- $B_{j,k,t}(x)>0$ for $x\in(t_{j},t_{j+k})$.
- (Marsden’s Identity) For any $\alpha\in {\mathbb{R}}$, we have $(x-\alpha)^{k-1}=\sum_{j}\psi_{j,k}(\alpha)B_{j,k,t}(x)$, where $\psi_{j,k}(\alpha)=(t_{j+1}-\alpha)\dots(t_{j+k-1}-\alpha)$ and $\psi_{j,1}(\alpha)=1$.
- $\sum_{j}B_{j,k,t}=1$ on the basic interval $I_{k,t}$.
\(a) This is a simple induction. For $k=1$ the hypothesis holds since the B-splines are just characteristic functions on $[t_{j},t_{j+1})$ and thus strictly positive in the interior.\
For $k=2$, by the recurrence relation, $B_{j,2,t}$ is a linear combination of $B_{j,1},\ B_{j+1,1}$ with coefficients the linear functions $\frac{x-t_{j}}{t_{j+1}-t_{j}},\ \frac{t_{j+2}-x}{t_{j+2}-t_{j+1}}$, and this combination is positive on $(t_{j},t_{j+2})$.\
Assuming the hypothesis holds for $k=r$, we can show it is true for $k=r+1$ by using the same argument as in the previous case.
\(b) Let $\omega_{j,k}(x)=\frac{x-t_{j}}{t_{j+k-1}-t_{j}}$. Thus, $\frac{t_{j+k}-x}{t_{j+k}-t_{j+1}}=1-\omega_{j+1,k}(x)$. This way we can write the recurrence relation as $$B_{j,k}(x)=\omega_{j,k}(x)B_{j,k-1}(x)+(1-\omega_{j+1,k}(x))B_{j+1,k-1}$$ Using this we can write $\sum_{j}\psi_{j,k}(\alpha)B_{j,k,t}(x)$ as $$\begin{aligned}
\sum_{j}\psi_{j,k}(\alpha)B_{j,k,t}(x)=&\sum_{j}[\omega_{j,k}(x)\psi_{j,k}(\alpha)+(1-\omega_{j,k}(x))\psi_{j-1,k}(\alpha)]B_{j,k-1,t}(x) \\
=&\sum_{j}\psi_{j,k-1}(\alpha)[\omega_{j,k}(x)(t_{j+k-1}-\alpha)+(1-\omega_{j,k}(x))(t_{j}-\alpha)]B_{j,k-1,t}(x) \\
=&\sum_{j}\psi_{j,k-1}(\alpha)(x-\alpha)B_{j,k-1,t}(x)
\end{aligned}$$ since $\omega_{j,k}(x)f(t_{j+k-1})+(1-\omega_{j,k}(x))f(t_{j})$ is the unique straight line that intersects $f$ at $x=t_{j}$ and $x=t_{j+k-1}$. Thus, $$\omega_{j,k}(x)(t_{j+k-1}-\alpha)+(1-\omega_{j,k}(x))(t_{j}-\alpha)=x-\alpha$$
Therefore, by induction we have $$\begin{aligned}
\sum_{j}\psi_{j,k}(\alpha)B_{j,k,t}(x)&=\sum_{j}\psi_{j,1}(\alpha)(x-\alpha)^{k-1}B_{j,1,t}(x) \\
&=(x-\alpha)^{k-1}\sum_{j}\psi_{j,1}(\alpha)B_{j,1,t}(x) \\
&=(x-\alpha)^{k-1}
\end{aligned}$$ since $\psi_{j,1}(\alpha)=1$ and $B_{j,1,t}$ are just characteristic functions.
\(c) To prove the partition of unity, we start with Marsden’s Identity and divide both sides by $(k-1)!$ and differentiate $\nu -1$ times with respect to $\alpha$ for some positive integer $\nu \leq k-1$. We then have $$\frac{(x-\alpha)^{k-\nu}}{(k-\nu)!}=\sum_{j}\frac{(-1)^{\nu-1}}{(k-1)!}\frac{d^{\nu-1}\psi_{j,k}(\alpha)}{d\alpha^{\nu-1}}B_{j,k,t}(x)$$ Now, for some polynomial $q$ of order $k$, we can use the Taylor expansion of $q$ $$q=\sum_{\nu=1}^{k}\frac{(x-\alpha)^{k-\nu}}{(k-\nu)!}\frac{d^{k-\nu}q(\alpha)}{d\alpha^{k-\nu}}$$ Using this we see that $$q=\sum_{j}\lambda_{j,k}[q]B_{j,k,t} \quad \text{where} \quad
\lambda_{j,k}[q]=\sum_{\nu=1}^{k}\frac{(-1)^{\nu-1}}{(k-1)!}\frac{d^{\nu-1}\psi_{j,k}(\alpha)}{d\alpha^{\nu-1}}\frac{d^{k-\nu}q(\alpha)}{d\alpha^{k-\nu}}$$ which holds only on the basic interval. Now, to show that the B-splines are a partition of unity, we just use this identity for $q=1$.
Marsden’s Identity says something very important: all polynomials of order $k$ are contained in the set generated by the B-splines $B_{j,k}$, which is also what makes the step in the proof of (c) viable. Furthermore, we can replace $(x-\alpha)$ in the identity by $(x-\alpha)_{+}$, which shows that piecewise polynomials are also contained in the same set.
\[CurryScho\] Another consequence of Marsden’s Identity is the Curry-Schoenberg theorem. We do not explicitly state the theorem, as we do not require it; rather, we record a simple consequence of it for B-splines of order $k$ given a sequence of knots $(t_{i})_{i=0}^{N}$, which can be summarized as $$\text{number of continuity conditions at} \ t_{i}+ \text{multiplicity of} \ t_{i}=k$$ Therefore, at a simple knot $t_{i}$, any B-spline of order $k$ is continuous and also has $k-2$ continuous derivatives. On the other hand, if $t_{i}$ has multiplicity $k$, a $k$-th order B-spline may be discontinuous there.
Below there is a figure which shows the importance of the basic interval as the interval where we have partition of unity.
![image](Bsplines.png)
When the knots $t_{i}$ are distinct, the sum of the B-splines belongs to $C_{0}\big((t_{0},t_{N})\big)$. However, the sum of the B-splines on the basic interval $I_{k,t}$ is equal to 1. To make sure that the sum equals 1 on the whole interval $(t_{0},t_{N})$, the assumption that the knots are distinct has to be dropped. It is clear that we have to take $t_{0}=\dots=t_{k-1}$ and $t_{N-k+1}=\dots=t_{N}$.
\[Bsplinespace\] Let $(t_{i})_{i=0}^{N}$ be a sequence of knots such that $t_{0}=\dots=t_{k-1}$ and $t_{N-k+1}=\dots=t_{N}$, where $1\leq k\leq N$. Let $B_{j,k,t}$ be the B-splines as defined in \[bsplinedef\] with knot sequence $t=(t_{i})_{i=0}^{N}$. The set generated by the sequence $\{B_{j,k,t}:\text{all} \ j\}$, denoted by ${\mathcal{S}}_{k,t}$, is the set of splines of order $k$ with knot sequence $t$. In symbols we have $${\mathcal{S}}_{k,t}=\left\{ \sum_{j}a_{j}B_{j,k,t}:a_{j}\in {\mathbb{R}}, \ \text{all} \ j \right\}$$
\[Bsplinedense\] Fix an interval $[a,b]$. Let $T_{N}=(t_{i})_{i=0}^{N}$ be a sequence as in definition \[Bsplinespace\] with $t_{0}=a$ and $t_{N}=b$, where $N\in {\mathbb{N}}$. The choice in definition \[Bsplinespace\] implies that $$\bigcup_{N\in {\mathbb{N}}}S_{k,T_{N}} \quad \text{is dense in} \quad C([a,b])$$
Derivatives of B-spline functions
---------------------------------
In the analysis of MISE, derivatives of spline functions factor in. Since splines are linear combinations of B-splines, we only need to investigate the result of differentiating a single B-spline on the interior of its support. The derivative of a $k$-th order B-spline is directly associated with B-splines of order $k-1$. To see this we use the recurrence relation, which leads us to the following theorem:
\[Bsplinederiv\] Let $B_{j,k,t}$ be the function as defined in \[bsplinedef\]. The support of $B_{j,k,t}$ is the interval $[t_j,t_{j+k})$. Then the following equation holds on the open interval $(t_j,t_{j+k})$ $$\frac{d}{d \theta}B_{j,k,t}(\theta)=
\begin{cases}
0, \ &k=1\\
(k-1)\left( \dfrac{B_{j,k-1,t}(\theta)}{t_{j+k-1}-t_j}-\dfrac{B_{j+1,k-1,t}(\theta)}{t_{j+k}-t_{j+1}} \right), \ &k>1
\end{cases}$$
The proof is done by induction on $k$. For $k=1$ it is straightforward since $B_{j,1,t}$ is a constant on $(t_{j},t_{j+1})$ and for $k>1$ we use the recurrence relation described in lemma \[Bsplinerecdef\].
Using the above formula we can easily obtain bounds for higher derivatives of B-splines. First of all, by construction of the space ${\mathcal{S}}_{k,t}$, the B-splines we will be working with form a partition of unity on $[t_{0},t_{N}]$ and since they are strictly positive on the interior of their supports, we have that each B-spline is bounded by 1 for all $\theta$. $$B_{j,k,t}(\theta)\leq 1, \quad \forall \theta\in {\mathbb{R}}$$ Furthermore, by induction we can prove the following lemma:
\[Bsplinederbound\] Let $t=(t_{i})_{i=0}^{N}$ be a sequence of knots as in definition \[Bsplinespace\] and $B_{j,k,t}$ be the function as defined in \[bsplinedef\]. Let $h_{N}={\displaystyle}\min_{k\leq i\leq N-k+1 }(t_{i}-t_{i-1})$ and $\alpha$ be a positive integer such that $\alpha<k-1$. Then, on the open interval $(t_{j},t_{j+k})$ we have $$\sup_{\theta \in (t_{j},t_{j+k})}\left|\frac{d^{\alpha}}{d\theta^{\alpha}}B_{j,k,t}(\theta)\right|\leq \dfrac{2^{\alpha}}{h_{N}^{\alpha}}\dfrac{(k-1)!}{(k-\alpha-1)!}, \text{for any } j$$
We fix $k$ and we do induction on $\alpha$. Let’s start with $\alpha=1$ $$\begin{aligned}
\left|\frac{d}{d \theta}B_{j,k,t}(\theta)\right|&=\left|(k-1)\left( \dfrac{B_{j,k-1,t}(\theta)}{t_{j+k-1}-t_j}-\dfrac{B_{j+1,k-1,t}(\theta)}{t_{j+k}-t_{j+1}} \right)\right|
\leq \frac{2}{h_{N}}\frac{(k-1)!}{(k-2)!}.
\end{aligned}$$ Thus the inequality holds for $\alpha=1$.
Now we assume it holds for $\alpha=n$ and we will show it holds for $\alpha=n+1$. $$\begin{aligned}
\left|\frac{d^{n+1}}{d\theta^{n+1}}B_{j,k,t}(\theta)\right|&=\left| \frac{d^{n}}{d\theta^{n}}(k-1)\left( \dfrac{B_{j,k-1,t}(\theta)}{t_{j+k-1}-t_j}-\dfrac{B_{j+1,k-1,t}(\theta)}{t_{j+k}-t_{j+1}} \right) \right|
\leq \frac{2(k-1)}{h_{N}}\,\dfrac{2^{n}}{h_{N}^{n}}\dfrac{(k-2)!}{[k-(n+1)-1]!}= \dfrac{2^{n+1}}{h_{N}^{n+1}}\frac{(k-1)!}{[k-(n+1)-1]!}.
\end{aligned}$$
Here the inequality follows by applying the induction hypothesis to the order-$(k-1)$ B-splines $B_{j,k-1,t}$ and $B_{j+1,k-1,t}$. This concludes the proof.
\[diffclosed\] Considering Remark \[CurryScho\], the bound in Lemma \[Bsplinederbound\] can be extended to hold on the closed interval $[t_{j},t_{j+k}]$ assuming the knots $t_{j},\dots,t_{j+k}$ are simple. Also, it is clear that we need to utilize at least parabolic B-splines in order to have a bound on a continuous derivative.
Logspline Density Estimation
----------------------------
In this part we present the method for constructing logspline density estimators using B-splines. Let $p$ be a continuous probability density function supported on an interval $[a,b]$. Suppose $p$ is unknown and we would like to construct density estimators for it. The methodology is as follows.
\[LogsplineEst\] Let $T_{N}=(t_{i})_{i=0}^{N}$, $N\in {\mathbb{N}}$, be a sequence of knots such that $t_{0}=\dots=t_{k-1}=a$ and $t_{N-k+1}=\dots=t_{N}=b$, where $1\leq k \leq N$, $k$ fixed. Thus, the set of splines $S_{k,T_{N}}$ of order $k$ generated by the B-splines $B_{j,k,T_{N}}$ can be obtained. We suppress the parameters $k, T_{N}$ and just write $B_{j}$ instead of $B_{j,k,T_{N}}$. Define the spline function $$\label{Bsplinesum}
B(\theta;y) = {\displaystyle}\sum_{j=0}^{L}y_{j}B_{j}(\theta)\,, \quad y=(y_{0},\dots,y_{L}) \in {\mathbb{R}}^{L+1} \quad \text{with} \ L:=N-k.$$ and for each $y$ we set the probability density function $$\label{logmodel1}
\begin{aligned}
f(\theta;y)&=\exp\Big({\displaystyle}\sum_{j=0}^{L}y_{j}B_{j}(\theta)-c(y)\Big)=\exp\Big(B(\theta; y)-c(y)\Big)\,, \\[3pt]
\text{where} \quad c(y)&=\log\left(\int_{a}^{b}\exp\Big(\sum_{j=0}^{L} y_j B_j(\theta)\Big) d\theta \right)<\infty\,.
\end{aligned}$$
The family of exponential densities $\{f(\theta;y): \ y\in {\mathbb{R}}^{L+1}\}$ is not identifiable since if $\beta$ is any constant, then $c((y_{0}+\beta,\dots,y_{L}+\beta))=c(y)+\beta$ and thus $$f(\theta;(y_{0}+\beta,\dots,y_{L}+\beta))=f(\theta;y)$$ To make the family identifiable we restrict the vectors $y$ to the set $$Y_{0}=\left\{ y\in {\mathbb{R}}^{L+1} : \sum_{i=0}^{L}y_{i}=0 \right\}.$$
$Y_{0} $ depends only on the number of knots and the order of the B-splines and not the number of samples.
We define the logspline model as the family of estimators $$\mathcal{L}=\big\{f(\theta;y) \ \text{given by \eqref{logmodel1}}: \ y\in Y_{0}\big\}.$$ For any $f\in \mathcal{L}$ $$\log{(f)}=\sum_{j=0}^{L}y_{j}B_{j}(\theta)-c(y)\in S_{k,T_{N}}.$$
Next, let us pick a set of independent, identically distributed random variables $$\Theta_n= \big( \theta_{1},\theta_{2},...,\theta_{n} \big) \in {\mathbb{R}}^n, \ n\in {\mathbb{N}}$$ where each $\theta_i$ is drawn from a distribution that has density $p(\theta)$.
We next define the log-likelihood function $l_n:{\mathbb{R}}^{L+1+n} \to {\mathbb{R}}$ corresponding to the logspline model by $$\label{logl1}
\begin{aligned}
l_n(y) &= l_n(y;\theta_1,\theta_2,\dots,\theta_n) = l_n(y;\Theta_n) \\
& =\sum_{i=1}^{n}\log(f(\theta_{i};y))=\sum_{i=1}^{n} \bigg(\sum_{j=0}^{L}y_{j}B_{j}(\theta_i)\bigg) -nc(y)\,, \quad y \in Y_0
\end{aligned}$$ and the maximizer of the log-likelihood $l_{n}(y)$ by $${\hat{y}}_n= {\hat{y}}_n(\theta_1,\dots,\theta_n)= {\displaystyle}\arg \max_{y\in Y_{0}}l_{n}(y)$$ whenever this random variable exists; it will be shown to exist on a subset of the sample space whose probability tends to 1. The density $f( \, \cdot \, ;{\hat{y}}_{n})$ is called the *logspline density estimate* of $p$.
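A hedged sketch of how such an estimate can be computed in practice is given below. The B-spline basis, knot layout, quadrature grid and optimizer are illustrative choices (and the function name `fit_logspline` is ours), not the implementation used elsewhere in the paper; the coefficient vector is projected onto $Y_{0}$ by subtracting its mean, and $c(y)$ is computed by numerical integration.

```python
import numpy as np
from scipy.interpolate import BSpline
from scipy.optimize import minimize
from scipy.integrate import trapezoid

# Illustrative logspline fit: maximize l_n(y) over the zero-sum set Y_0.
# Knots follow Definition [LogsplineEst]: boundary knots repeated k times.
def fit_logspline(samples, a, b, n_interior=10, order=4):
    deg = order - 1
    interior = np.linspace(a, b, n_interior + 2)[1:-1]
    t = np.concatenate([np.full(order, a), interior, np.full(order, b)])
    n_basis = len(t) - order                     # L + 1 basis functions
    grid = np.linspace(a, b, 2001)               # quadrature grid for c(y)

    def design(x):                               # matrix of values B_j(x)
        eye = np.eye(n_basis)
        return np.column_stack([BSpline(t, eye[j], deg)(x)
                                for j in range(n_basis)])

    Bx, Bg = design(np.asarray(samples)), design(grid)

    def neg_loglik(y):
        y = y - y.mean()                         # project onto Y_0
        z = Bg @ y
        c = z.max() + np.log(trapezoid(np.exp(z - z.max()), grid))  # c(y)
        return -(np.sum(Bx @ y) - len(samples) * c)

    y_hat = minimize(neg_loglik, np.zeros(n_basis), method="L-BFGS-B").x
    y_hat -= y_hat.mean()
    dens = np.exp(Bg @ y_hat)
    return grid, dens / trapezoid(dens, grid)    # f(theta; y_hat) on the grid

theta = np.random.default_rng(0).normal(2.0, 1.0, 5000).clip(-2.0, 6.0)
grid, p_hat = fit_logspline(theta, -2.0, 6.0)
```

In practice one would use a dedicated logspline routine; the point of the sketch is only to make the roles of $Y_{0}$, $c(y)$ and $l_{n}(y)$ concrete.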
We define the expected log-likelihood function $\lambda_{n}(y)$ by $$\label{explog1}
\lambda_{n}(y)={\mathbb{E}}[l(y;\theta_1,\dots,\theta_{n})]=n\left(-c(y)+\int_{a}^{b}\bigg(\sum_{j=0}^{L}y_{j}B_{j}(\theta)\bigg)p(\theta) \, d\theta \right) <\infty\,, \quad y \in Y_{0}.$$ It follows by a convexity argument that the expected log-likelihood function has a unique maximizing value $$\label{estdef}
{\bar{y}}={\displaystyle}\arg \max_{y\in Y_{0}}\lambda_{n}(y)={\displaystyle}\arg \max_{y\in Y_{0}}\frac{\lambda_{n}(y)}{n}$$ which is independent of $n$ but depends on the knots.
Note that the function $\lambda_{n}(y)$ is bounded above and goes to $-\infty$ as $|y| \to \infty$ within $Y_0$, and therefore, due to Jensen’s Inequality, the constant $\bar{y}$ is finite; see Stone [@Stone90]. The estimator ${\hat{y}}(\theta_1,\dots,\theta_n)$, however, need not exist in general. This motivates us to define the set $$\label{omegan}
\Omega_n = \bigg\{ \omega \in \Omega: {\hat{y}}={\hat{y}}(\theta_1,\dots,\theta_n) \in {\mathbb{R}}^{L+1} \;\; \text{exists} \bigg\}.$$ In what follows we will show that ${\mathbb{P}}(\Omega_n) \to 1$ as $n \to \infty$. We also note that due to convexity of $l_n(y)$ and $\lambda_n(y)$ the estimators ${\hat{y}}$ and ${\bar{y}}$ are unique whenever they exist.
We define the logspline estimator ${\hat{p}}$ of $p$ on the space $\Omega_n$ by $$\label{phatdef}
{\hat{p}}: {\mathbb{R}}\times \Omega_n \to {\mathbb{R}}\quad \text{defined by} \quad {\hat{p}}(\theta,\omega)=f(\theta; {\hat{y}}(\theta_1,\dots,\theta_n)), \ \omega \in \Omega_n
{\bar{p}}(\theta):=f(\theta,{\bar{y}})\,.$$
In order for the maximum likelihood estimates to be reliable, we require that the modeling error tend to 0 as $n\to \infty$, as described in hypothesis .
Notions of distance from the set of splines $S_{k,t}$
-----------------------------------------------------
It is a well-known fact that continuous functions can be approximated by polynomials. Having defined the set of splines $S_{k,t}$ in Definition \[Bsplinespace\], and having stated in Remark \[Bsplinedense\] that $\bigcup_{N\in {\mathbb{N}}}S_{k,T_{N}}$ is dense in the space of continuous functions, a natural question arises at this point:
Given an arbitrary continuous function $g$ on $[a,b]$, an integer $k\geq 1$ and a set of knots $T_{N}=(t_{i})_{i=0}^{N}$ as in Remark \[Bsplinedense\], how close is $g$ to the set $S_{k,T_{N}}$ of splines of order $k$?
Let’s state this question in a slightly different way. What we would like to do is find a bound for the sup-norm distance between $g\in C[a,b]$ and $S_{k,T_{N}}$, where this distance is denoted by $dist(g,S_{k,T_{N}})$ and is defined as $$\dist(g,S_{k,T_{N}})=\inf_{s\in S_{k,T_{N}}}\|g-s\|_{\infty}, \quad g\in C[a,b].$$ The answer to our question is given by Jackson’s Theorem found in de Boor [@de; @Boor]. To state it we first need the following definition.
\[modcont\] The modulus of continuity $\omega(g;h)$ of some function $g\in C[a,b]$ for some positive number $h$ is defined as $$\omega(g;h)=\max\{|g(\theta_1)-g(\theta_2)|: \ \theta_1, \theta_2\in [a,b], \ |\theta_1-\theta_2|\leq h\}.$$
The bound given by Jackson’s Theorem contains the modulus of continuity of the function whose sup-norm distance we want to estimate from the set of splines. The theorem is stated below.
\[Jackson\] Let $T_{N}=(t_{i})_{i=0}^{N}$, $N\in {\mathbb{N}}$, be a sequence of knots such that $t_{0}=\dots=t_{k-1}=a$ and $b=t_{N-k+1}=\dots=t_{N}$, where $1\leq k\leq N$. Let $S_{k,T_{N}}$ be the set of splines as in definition \[Bsplinespace\] for the knot sequence $T_{N}$. For each $j\in \{0,\dots,k-1\}$, there exists $C=C(k,j)$ such that for $g\in C^{j}[a,b]$ $$dist(g,S_{k,T_{N}})\leq C \ h^{j} \ \omega\left( \frac{d^{j}g}{d\theta^{j}};|t| \right) \quad \text{where} \quad h=\max_{i}|t_{i+1}-t_{i}|.$$ In particular, from the Mean Value Theorem it follows $$\label{Jacksonbound}
dist(g,S_{k,T_{N}})\leq C \ h^{j+1} \ \left\| \frac{d^{j+1}g}{d\theta^{j+1}} \right\|_{\infty}$$ in the case that $g\in C^{j+1}[a,b]$.
Please note that the mesh size $h$ enters into the bound in , which dictates the placement of the knots.
Jackson’s Theorem tells us how well a continuous function can be approximated by elements of the space of splines. However, in this paper we are interested in estimates for probability densities, since the focus is on logspline density estimates. We therefore state results specifically for densities. The following can be found in Stone [@Stone89].
Suppose that $p$ is a continuous probability density supported on some interval $[a,b]$, similar to the set-up when we defined the logspline density estimation method. Define the family $\mathcal{F}_{p}$ of densities such that $$\label{densfamily}
\mathcal{F}_{p}=\left\{ p_{\alpha}: \ p_{\alpha}(x)=\frac{(p(x))^{\alpha}}{\int (p(y))^{\alpha}\ dy}, \ 0\leq \alpha \leq 1 \right\}.$$ It is easy to see that for $\alpha\in [0,1]$, $p_{\alpha}$ is a probability density on $[a,b]$. An interesting consequence of this family is the following.
\[equifamily\] We define the family of functions $$\mathcal{F}_{p}^{log}=\{ \log{(u)}: \ u\in \mathcal{F}_{p} \}.$$ Then, $\mathcal{F}_{p}^{log}$ defines a family of functions that is equicontinuous on the set $\{ \theta: \ p(\theta)>0 \}$.
The proof is simple enough. Pick $\epsilon>0$. There exists $\delta>0$ such that $|\log{(p(x))}-\log{(p(y))}|<\epsilon$ whenever $|x-y|<\delta$. Pick any $\alpha\in [0,1)$.
If $\alpha=0$ then $p_{0}$ is just a constant and thus $|\log{(p_{0}(x))}-\log{(p_{0}(y))}|=0<\epsilon$.
If $0<\alpha<1$, then $|\log{(p_{\alpha}(x))}-\log{(p_{\alpha}(y))}|=|\alpha\log{(p(x))}-\alpha\log{(p(y))}|<\alpha \ \epsilon<\epsilon$.
It is practical to work with $p(x)>0$ on the set $[a,b]$ and this is what we assume until the end of the manuscript. In this case, $\log{(p)}\in C[a,b]$.
We will be using the notation $\bar{h}=\max_{i}|t_{i+1}-t_{i}|$ and $\underline{h}=\min_{i}|t_{i+1}-t_{i}|$, and $\gamma(T_{N})=\bar{h}/\underline{h}$.
We can apply the logspline estimation method to $p$. Let ${\bar{p}}$ be defined as in , the density estimate given by maximizing the expected log-likelihood. We then have the following lemma:
\[pbarsupnormbound\] Suppose $p$ is an unknown continuous density function supported on $[a,b]$ and ${\bar{p}}$ is as in . Then there exists constant $M'=M'(\mathcal{F}_{p},k,\gamma(T_{N}))$ that depends on the family $\mathcal{F}_p$, order $k$ and global mesh ratio $\gamma(T_{N})$ of $S_{k,T_{N}}$ such that $$\|\log{(p)}-\log{({\bar{p}})}\|_{\infty}\leq M' \ \dist(\log({p}),S_{k,T_{N}})$$ and therefore $$\|p-{\bar{p}}\|_{\infty}\leq \big(\exp\{M' \ \dist(\log({p}),S_{k,T_{N}})\}-1\big)\|p\|_{\infty}.$$ Moreover, if $\log({p})\in C^{j+1}([a,b])$ for some $j\in \{0,\dots,k-1\}$ then by Jackson’s Theorem we obtain $$\begin{aligned}
\|\log{(p)}-\log{({\bar{p}})}\|_{\infty}&\leq M' \ C(k,j) \ \bar{h}^{j+1} \ \left\| \frac{d^{j+1}\log({p})}{d\theta^{j+1}} \right\|_{\infty} \\
\|p-{\bar{p}}\|_{\infty}&\leq \left(\exp\left\{M' \ C(k,j) \ \bar{h}^{j+1} \ \left\| \frac{d^{j+1}\log({p})}{d\theta^{j+1}} \right\|_{\infty}\right\}-1\right)\|p\|_{\infty}.
\end{aligned}$$
Please note that the constant $M'$ does not depend on the dimension of $S_{k,T_{N}}$. For all practical purposes, we will be using uniformly placed knots, thus suppressing the dependence on $\gamma(T_{N})$, which will be equal to the constant 1.
Now we will present certain error bounds required to calculate a bound for . Assume $p,\hat{p}$ and ${\bar{p}}$ as in the previous section. Also, assume that $n$ is the number of random samples drawn from $p$.
We will state a series of definitions and theorems that encompass the results from Lemma 5, Lemma 6, Lemma 7, and Lemma 8 in the work of Stone[@Stone90]\[pp.728-729\].
Let $n \geq 1$ and $b>0$. Let $y \in Y_{0}$. Let $l_n$ and $\lambda_{n}$ be defined by and , respectively. We define $$\label{boubdset}
\begin{aligned}
A_{n,b}(y) & =\Bigg\{\omega\in \Omega: \left|l(y;\Theta_n(\omega))-l({\bar{y}};\Theta_n(\omega))-(\lambda_{n}(y)-\lambda_{n}({\bar{y}}))\right| \\
&\qquad \qquad \qquad <
nb \bigg( \int | \log(f(\theta;y)) - \log(f(\theta; {\bar{y}})) |^2 d \, \theta \bigg) ^{1 /2}\Bigg\}\,.
\end{aligned}$$ where $f$ is defined in as a function in the logspline family.
Given $n \geq 1$ and $0<\epsilon$ we define $E_{\epsilon,n}$ to be the subset of ${\cal{F}}=\{ f(\cdot \ ;y) : y\in Y_{0} \}$ such that $$E_{\epsilon,n}=\left\{ f(\cdot \ ;y) :y\in Y_{0} \ \text{and} \ \bigg( \int | \log(f(\theta;y)) - \log(f(\theta; {\bar{y}})) |^2 d \, \theta \bigg) ^{1/2} \leq n^{\epsilon}\sqrt{\frac{L+1}{n}} \right\}\,.$$
For each $y_1,y_2 \in Y_0$ and $\omega\in \Omega$ we have $$\left|l(y_1;\Theta_n(\omega))-l(y_2;\Theta_n(\omega))-(\lambda_{n}(y_1)-\lambda_{n}(y_2))\right| \leq
2n\| \log{f(\cdot \ ;y_1)}-\log{f(\cdot \ ;y_2)} \|_{\infty}\,.$$
Let $n \geq 1$. Given $\epsilon>0$ and $\delta>0$, there exists an integer $N=N(n)>0$ and sets $E_j \subset {\cal{F}}$, $j=1,\dots,N$, satisfying $$\sup_{f_1,f_2 \in E_j} \|\log(f_1)-\log(f_2)\|_{\infty} \leq \delta n^{2 \epsilon - 1}(L+1)$$ such that $E_{\epsilon,n} \; \subset \; \bigcup_{j=1}^{N}E_j$.
Combining the above lemmas leads to the following theorem, which is a result outlined in Lemmas 5 and 8 of Stone [@Stone90].
\[Aboundthm\] Given $D>0$ and $0<\epsilon< \frac{1}{2}$, let $b_{n}=n^{\epsilon}\sqrt{\dfrac{L(n)+1}{n}}$ for $n \geq 1$, and take $\beta=\epsilon$ in . There exists $N=N(D)$ such that for all $n>N$ $$\label{AcOmegan}
A_{n,b_n}(y) \subset \Omega_n \quad \text{for each $y \in Y_0$}$$ and thus $$\label{Acest}
{\mathbb{P}}(\Omega_{n}^{c})\leq {\mathbb{P}}\big(A^c_{n,b_{n}}(y)\big)\leq 2e^{{-n^{2\epsilon}(L+1)\delta(D)}} \,.$$
From we can see that, as the number of samples goes to infinity, $${\mathbb{P}}(\Omega_{n})\to 1 \ \text{as} \ n\to \infty.$$
The bound presented in Theorem \[Aboundthm\] is a consequence of Hoeffding’s inequality, which states that for any $t>0$ $${\mathbb{P}}\bigg( \Big| \frac{1}{n} \sum_{i=1}^n X_i - {\mathbb{E}}X_1 \Big| \geq t \bigg) \leq 2 \exp\bigg(
-\frac{2 n^2 t^2}{\sum_{i=1}^n (b_i-a_i)^2} \bigg)$$ where $X_1,\dots,X_n$ are independent, identically distributed random variables with ${\mathbb{P}}(X_i \in [a_i,b_i])=1$ for each $i$. To get the bound one needs to choose $$t = b \bigg( \int | \log(f(\theta;y)) - \log(f(\theta; {\bar{y}})) |^2 \, d\theta \bigg)^{\frac{1}{2}}.$$
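As a quick sanity check of the inequality itself (not of Theorem \[Aboundthm\]), the sketch below compares the empirical tail probability of a sample mean of bounded i.i.d. variables with the Hoeffding bound; all numbers are illustrative.

```python
import numpy as np

# Hoeffding's inequality for X_i ~ Uniform[0,1] (so a_i = 0, b_i = 1):
# P(|mean - 1/2| >= t) <= 2 exp(-2 n t^2).
rng = np.random.default_rng(1)
n, t, reps = 200, 0.1, 50_000
means = rng.uniform(0.0, 1.0, size=(reps, n)).mean(axis=1)
empirical = np.mean(np.abs(means - 0.5) >= t)
bound = 2.0 * np.exp(-2.0 * n * t ** 2)
print(empirical, bound)   # the empirical frequency sits below the bound
```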
Now that we have defined the set where ${\hat{y}}$ exists and showed that the probability of its complement vanishes as $n\to \infty$ with a specific exponential rate, we will now state certain rates of convergence that only apply on $\Omega_{n}$. The following theorem contains results of Theorem 2 and Lemma 12 of Stone[@Stone90].
\[estonOmegan\] There exist constants $M_1$, $M_2$, $M_3$ and $M_4$ such that for all $\omega\in \Omega_{n}$ $$\begin{aligned}
& |\hat{y}(\theta_1(\omega),\dots,\theta_n(\omega))-{\bar{y}}| \leq \dfrac{M_{1}(L+1)}{\sqrt{n}}\\
& \| \hat{p}(\cdot,\omega)-{\bar{p}}(\cdot)\|_{2} \leq M_{3}\sqrt{\dfrac{L+1}{n}}\\
& \|\log({\hat{p}}(\cdot,\omega))-\log({\bar{p}}(\cdot))\|_{\infty} \leq \dfrac{M_{4}(L+1)}{\sqrt{n}}.
\end{aligned}$$
Lagrange interpolation
----------------------
The following two theorems are well-known facts which we cite from [@Atkinson p.132, p.134].
\[approx\] Let $f:[a,b]\rightarrow \mathbb{R}$. Given distinct points $a=x_{0}<x_{1}<...<x_{l}=b$ and $l+1$ ordinates $y_{i}=f(x_{i})$, $i=0,\dots,l$ there exists an interpolating polynomial $q(x)$ of degree at most $l$ such that $f(x_{i})=q(x_{i})$, $i=0,\dots,l$. This polynomial $q(x)$ is unique among the set of all polynomials of degree at most $l$. Moreover, $q(x)$ is called the Lagrange interpolating polynomial of $f$ and can be written in the explicit form $$\label{lagrp}
q(x)=\sum_{i=0}^{l}y_{i}l_{i}(x) \quad \text{with} \quad l_{i}(x) ={\displaystyle}\prod_{j\neq
i}\left(\frac{x-x_{j}}{x_{i}-x_{j}}\right)\,,\;i=0,1,\dots,l.$$
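The Lagrange form can be evaluated directly from the data; the sketch below does exactly that for made-up nodes and ordinates (the helper name `lagrange_eval` is ours, not a library routine).

```python
import numpy as np

# Direct evaluation of the Lagrange form (lagrp).
def lagrange_eval(x, nodes, ordinates):
    total = 0.0
    for i, (xi, yi) in enumerate(zip(nodes, ordinates)):
        li = np.prod([(x - xj) / (xi - xj)
                      for j, xj in enumerate(nodes) if j != i])
        total += yi * li
    return total

nodes = np.linspace(0.0, 1.0, 4)          # l = 3, equally spaced
ordinates = np.exp(nodes)                 # interpolate f = exp
print(lagrange_eval(0.37, nodes, ordinates), np.exp(0.37))
```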
\[interpthm\] Suppose that $f:[a,b]\rightarrow \mathbb{R}$ has $l+1$ continuous derivatives on $(a,b)$. Let $a=x_{0}<x_{1}<...<x_{l}=b$ and $y_{i}=f(x_{i})$, $i=0,\dots,l$. Let $q(x)$ be the Lagrange interpolating polynomial of $f$ given by formula . Then for every $x \in [a,b]$ there exists $\xi \in (a,b)$ such that $$\label{interrror}
f(x)-q(x)= \dfrac{\prod_{i=0}^{l}(x-x_{i})}{(l+1)!}f^{(l+1)}(\xi).$$
We next prove an elementary lemma that provides the estimate of the interpolation error when information on the derivatives of $f$ is available. This lemma is used later in Theorem \[MISEestthm\] to compute the mean integrated squared error.
\[estinterrlmm\] Let $f(x)$, $q(x)$, and $(x_i,y_i)$, $i=0,\dots,l$, with $l \geq 1$, be as in Theorem \[interpthm\]. Suppose that $$\sup_{x \in [a,b]}|f^{(l+1)}(x)| \leq C$$ for some constant $C\geq 0$ and $x_{i+1}-x_{i}=\dfrac{b-a}{l} =: \Delta x$ for each $i=0,\dots, l-1$. Then $$\label{estinterr}
\max_{x \in [a,b]}|f(x)-q(x)|\leq C \dfrac{(\Delta x)^{l+1}}{4(l+1)}.$$
Let $x \in [a,b]$. Then $x \in [x_j,x_{j+1}]$ for some $j \in \{0,\dots,l-1\}$. Observe that $$|(x-x_j)(x-x_{j+1})| \leq \frac{1}{4}(\Delta x)^2,$$ while $|x-x_{j+m}| \leq (\Delta x)\,(|m|+1)$ for $m \in \{-j,-j+1,\dots,-1\}$ and $|x-x_{j+m}| \leq (\Delta x)\, m$ for $m \in \{2,\dots, l-j\}$. From this it follows that $$\prod_{i=0}^{l}|x-x_i| \leq \frac{(\Delta x)^{(l+1)}}{4}\, (j+1)!\, (l-j)! \leq \frac{(\Delta x)^{(l+1)}l!}{4} \, ,$$ where the last inequality holds because $(j+1)!\,(l-j)!=(l+1)!\big/\binom{l+1}{j+1}\leq (l+1)!/(l+1)=l!$. Then Theorem \[interpthm\] together with the above estimate implies .
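A quick numerical illustration of the bound in Lemma \[estinterrlmm\]: interpolating $f=\exp$ on $[0,1]$ with $l=3$ equally spaced nodes, so that $C=\sup_{[0,1]}|f^{(4)}|=e$. The helper `scipy.interpolate.lagrange` is used only for convenience; the specific numbers are illustrative.

```python
import numpy as np
from scipy.interpolate import lagrange

# Check of the bound in Lemma [estinterrlmm] for f = exp on [0,1], l = 3:
# max |f - q| <= C (dx)^{l+1} / (4 (l+1)) with C = sup |f''''| = e.
l = 3
nodes = np.linspace(0.0, 1.0, l + 1)
q = lagrange(nodes, np.exp(nodes))        # numpy.poly1d, callable
xs = np.linspace(0.0, 1.0, 10001)
err = np.max(np.abs(np.exp(xs) - q(xs)))
dx = 1.0 / l
bound = np.e * dx ** (l + 1) / (4 * (l + 1))
print(err, bound, err <= bound)           # err ~ 1.3e-3, bound ~ 2.1e-3
```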
[99]{}
C. de Boor, *A Practical Guide to Splines (Revised Edition)*, Springer, New York, 2001.
C. J. Stone and C.-Y. Koo, *Logspline Density Estimation*, in Contemporary Mathematics, Volume 59, 1986.
C. J. Stone, *Uniform Error Bounds Involving Logspline Models*, in Probability, Statistics and Mathematics: Papers in honor of Samuel Karlin, 335-355, 1989.
C. J. Stone, *Large-sample Inference for Log-spline Models*, in Annals of Statistics, Vol. 18, No. 2, 717-741, 1990.
W. Neiswanger, C. Wang and E. P. Xing, *Asymptotically Exact, Embarrassingly Parallel MCMC*, Proceedings of the Thirtieth Conference on Uncertainty in Artificial Intelligence. 2014; pp. 623-632.
A. Miroshnikov, Z. Wei and E. Conlon, *Parallel Markov Chain Monte Carlo for Non-Gaussian Posterior Distributions*, Stat. Accepted (2015).
K. E. Atkinson, *An Introduction to Numerical Analysis, 2nd Edition*, John Wiley & Sons, 1989.
J. Langford, A. J. Smola and M. Zinkevich, *Slow learners are fast*, in: Y. Bengio, D. Schuurmans, J. D. Lafferty, C. K. I. Williams and A. Culotta (eds.), Advances in Neural Information Processing Systems 22 (NIPS 2009), New York: Curran Associates, Inc.
D. Newman, A. Asuncion, P. Smyth and M. Welling, *Distributed algorithms for topic models*. J Machine Learn Res (2009), 10, 1801-1828.
A. Smola and S. Narayanamurthy, *An architecture for parallel topic models*. Proceedings of the VLDB Endowment (2010), 3, 1-2, 703-710.
M. Plummer, *JAGS: A Program for Analysis of Bayesian Graphical Models using Gibbs Sampling*, Proceedings of the 3rd International Workshop on Distributed Statistical Computing, March 20-22, Vienna, Austria (2003).
United States Department of Transportation, Bureau of Transportation Statistics, URL <https://www.transtats.bts.gov/>
---
abstract: 'For $n\geq 4$ we show that generic closed Riemannian $n$–manifolds have no nontrivial totally geodesic submanifolds, answering a question of Spivak. An immediate consequence is a severe restriction on the isometry group of a generic Riemannian metric. Both results are widely believed to be true, but we are not aware of any proofs in the literature.'
address:
- |
Tommy Murphy, Dept. of Mathematics, California State University Fullerton\
Fullerton CA 92831
- 'Fred Wilhelm, Dept. of Mathematics, University of California Riverside, Riverside, Ca 92521. '
author:
- Thomas Murphy
- Frederick Wilhelm
date: 'March 2017.'
title: Random Manifolds have no Totally Geodesic Submanifolds
---
[^1]
Schoen-Simon showed that every Riemannian manifold admits an embedded minimal hypersurface ([@SchSim], cf. also [@Pitts]). Intuition suggests that the analogous result about totally geodesic submanifolds is false. In fact, Spivak writes that it
> *seems rather clear that if one takes a Riemannian manifold* $\left( N,\left\langle \cdot ,\cdot \right\rangle \right) $* ‘at random’, then it will not have any totally geodesic submanifolds of dimension $>1$. But I must admit that I don’t know of any specific example of such a manifold.* ([@spivak], p. 39)
The existence of specific examples was established by Tsukada in [@tsukada], who found some left-invariant metrics on $3$–dimensional Lie groups without totally geodesic surfaces. In the present paper, we prove that Spivak’s intuition about generic metrics is correct for compact Riemannian $n$–manifolds with $n\geq 4$.
\[main thm\]Let $M$ be a compact, smooth manifold of dimension $\geq 4$. For any finite $q\geq 2,$ the set of Riemannian metrics on $M$ with no nontrivial immersed totally geodesic submanifolds contains a set that is open and dense in the $C^{q}$–topology.
Put another way: in a generic Riemannian $n$–manifold with $n\neq 3,$ any totally geodesic submanifold is either a geodesic or the whole manifold. We emphasize that this statement applies to all immersed submanifolds—there is no requirement that the submanifolds be closed or complete.
In [@Eb], Ebin showed most Riemannian manifolds have no isometries other than the identity. Theorem \[main thm\] yields a simple, alternative proof of this for most group actions.
Let $M$ be a compact, smooth manifold of dimension $\geq 4.$ Let $G$ act smoothly and effectively on $M$ so that either:
1. A subgroup of $G$ has a fixed point set of dimension $\geq 2,$ or
2. $G$ has a subgroup $H$ whose fixed point set is $0$ or $1$ dimensional, and $H$ does not act freely and linearly on a sphere.
Then for any finite $q\geq 2,$ the set of Riemannian metrics on $M$ that are not $G$–invariant contains a set that is open and dense in the $C^{q}$–topology.
To see how this follows from Theorem \[main thm\], suppose that $G$ acts isometrically and effectively on a Riemannian manifold that has no nontrivial immersed totally geodesic submanifolds. Then the fixed point sets of $G$ and all of its subgroups have dimension $\leq 1.$ If a subgroup $H$ has a one dimensional fixed point set, then since no subgroup of $H$ can have a larger fixed point set, $H$ acts freely on any unit normal sphere to its fixed point set. If no subgroup of $G$ has a one dimensional fixed point set, but some subgroup $H$ has a zero dimensional fixed point set, then differentiating $H$ produces a free action on the unit tangent sphere at any fixed point of $H.$ In particular, if $G$ has a subgroup, $H,$ with a one dimensional fixed point set, then $H$ is either discrete or isomorphic to $S^{1},$ $S^{3},$ or to a $\mathbb{Z}_{2}$–extension of $S^{1},$ and if $H$ is discrete, then it is the fundamental group of a complete manifold of constant curvature $1.$ $H$ also satisfies these constraints if it has a $0$–dimensional fixed point set and does not contain a subgroup with a $1$–dimensional fixed point set.
It seems rather easy to construct a deformation that kills the totally geodesic property for a fixed submanifold or a fixed compact family of submanifolds (see, e.g., [@bryant]). Although there are compactness theorems for submanifolds with constrained geometry in, e.g., [@GuijW], the space of all submanifolds of a compact Riemannian manifold is not compact. For example, via the Nash isometric embedding theorem, all Riemannian manifolds of any fixed dimension $k$ embed isometrically into a fixed flat $n$–torus if $n\gg k.$
To circumvent this difficulty we propose a new concept called *partially geodesic*. It is defined in terms of the following invariant of self adjoint linear maps.
Let $\Phi :V\longrightarrow V$ be a self adjoint linear map of an inner product space $V$. For a subspace $W$ of $V,$ we set $$\mathcal{I}_{\Phi }\left( W\right) \equiv \max_{\left\{ \left. w\in W\text{ }\right\vert \text{ }\left\vert w\right\vert =1\right\} }\left\vert \Phi
\left( w\right) ^{W^{\perp }}\right\vert , \label{I measu}$$where $\Phi \left( w\right) ^{W^{\perp }}$ is the component of $\Phi \left(
w\right) $ that is perpendicular to $W.$
Let $V=T_{p}M$ be a tangent space to a Riemannian manifold $\left(
M,g\right) .$ For $v\in W\subset T_{p}M,$ the Jacobi operator $R_{v}=R(\cdot
,v)v:T_{p}M\longrightarrow T_{p}M$ is self adjoint with respect to $g$, and if $$\mathcal{I}_{R_{v}}\left( W\right) \neq 0$$for some $v\in W,$ then $W$ is not tangent to any totally geodesic submanifold. Indeed, if $N\subset M$ were totally geodesic with $T_{p}N=W,$ then its second fundamental form would vanish, and the Codazzi equation would force $R\left( w,v\right) v$ to be tangent to $W$ for all $w,v\in W,$ so that $\mathcal{I}_{R_{v}}\left( W\right) =0$ for every $v\in W.$ This motivates the following concept.
For $l\in \left\{ 2,3,\ldots ,n-1\right\} ,$ an $l$–plane $P$ tangent to a Riemannian $n$–manifold $M$ is called partially geodesic if and only if, for all $v\in P,$ $$\mathcal{I}_{R_{v}}\left( P\right) =0.$$
Theorem \[main thm\] is a consequence of
\[partiall geod thm\]Let $M$ be a compact, smooth manifold of dimension $\geq 4$. For any finite $q\geq 2,$ the set of Riemannian metrics on $M$ with no partially geodesic $l$–planes is open and dense in the $C^{q}$–topology.
Since $q\geq 2$, the curvature tensor is continuous in the $C^{q}$–topology. Combined with the compactness of the Grassmannians of $l$–planes, it follows that the set of metrics with no partially geodesic $l$–planes is open.
Since the $C^{2}$–topology is finer than the $C^{0}$ and $C^{1}$–topologies, it follows from Theorem \[partiall geod thm\] that the set of metrics with no partially geodesic $l$–planes is dense in the $C^{0}$ and $C^{1}$–topologies; however, as the curvature tensor is not continuous in these topologies, it seems likely that the openness assertion fails.
The balance of the paper is therefore devoted to proving the density assertion of Theorem \[partiall geod thm\]. To do this we use reverse induction on $l$ via the following statement.
$\mathbf{l}^{th}$–**Partially Geodesic Assertion.** *Given* $l\in \left\{ 2,3,\ldots ,n-1\right\} ,$* a finite* $q\geq 2,$ *and* $\xi >0$, *there is a Riemannian metric* $\tilde{g}$ *on* $M$ *that has no partially geodesic* $k$*–planes for all* $k\in
\left\{ l,l+1,\ldots ,n-1\right\} $ *and satisfies* $$\left\vert \tilde{g}-g\right\vert _{C^{q}}<\xi .$$
The rest of the paper is devoted to proving this assertion. To do so, we exploit a principle given in the following lemma.
\[uber stru lemma\]Let $\left\{ g_{s}\right\} _{s\geq 0}$ be a smooth family of Riemannian metrics on $M.$ Let $R^{s}$ be the curvature tensor of $g_{s}.$ Let $\mathcal{P}_{0}$ be the set of partially geodesic $l$–planes for $g_{0},$ and suppose that for all $k\in \left\{ l+1,\ldots ,n-1\right\}
, $ $g_{0}$ has no partially geodesic $k$–planes.
Suppose further that there are $c,s_{0}>0$ and a neighborhood $\mathcal{U}_{0}$ of $\mathcal{P}_{0}$ so that for every $P\in \mathcal{U}_{0}$, every $s\in \left( 0,s_{0}\right) ,$ and some $g_{0}$–unit $v\in P,$ $$\mathcal{I}_{R_{v}^{s}}\left( P\right) >cs. \label{unif pos}$$
Then for all sufficiently small $s,$ and all $k\in \left\{ l,\ldots
,n-1\right\} $, $\left( M,g_{s}\right) $ has no partially geodesic $k$–planes.
We write $\mathcal{G}_{k}\left( M\right) $ for the Grassmannian of $k$–planes tangent to $M.$ Since each $\mathcal{G}_{k}\left( M\right) $ is compact, there is a $\delta >0$ so that for all $k\in \left\{ l+1,\ldots
,n-1\right\} $ and all $P\in \mathcal{G}_{k}\left( M\right) $ there is a unit $v\in P$ so that $$\mathcal{I}_{R_{v}^{0}}\left( P\right) >\delta .$$
Similarly $\mathcal{G}_{l}\left( M\right) \setminus \mathcal{U}_{0}$ is compact. Thus there is a (possibly different) $\delta >0$ so that for all $P\in \mathcal{G}_{l}\left( M\right) \setminus \mathcal{U}_{0}$ there is a unit $v\in P$ so that $$\mathcal{I}_{R_{v}^{0}}\left( P\right) >\delta .$$By combining the previous two displays with Inequality (\[unif pos\]) and a continuity argument, it follows that for all sufficiently small $s,$ $\left( M,g_{s}\right) $ has no partially geodesic $l$–planes.
In Section \[not and conve\], we establish notations and conventions. In Section \[local constr\], we prove Lemma \[local constr main lemma\], which implies that the $l^{th}$–Partially Geodesic Assertion holds locally, in a sense that is quantifiable. This allows us, in Section \[global constr\], to piece together various local deformations and complete the proof of Theorem \[partiall geod thm\]. We do not use Lemma \[uber stru lemma\] explicitly, but the reader will notice that a similar principle is used in our global argument in Section \[global constr\].
For a quick overview of the proof, imagine that $v$ and $T$ are tangent to a partially geodesic plane $P$ and $n\perp P.$ The strategy is to change $\left\langle T,n\right\rangle $ by a function $f$ that has a relatively large $2^{nd}$ derivative in the $v$–direction. This has the effect of giving $R\left( T,v\right) v$ a component in the $n$–direction. In particular, $P$ is no longer partially geodesic. Since this is a local deformation, $f$ has compact support and necessarily has inflection points. To deal with this, we simultaneously change two components of the metric tensor using two functions whose inflection points occur at different places. Since this construction requires the presence of two distinct orthonormal triples, it only works in dimensions $\geq 4.$ Our sense is that a modification of our ideas might also yield a proof of Theorems \[main thm\] and \[partiall geod thm\] in dimension 3. In fact, Bryant has outlined a local proof in [@bryant].
As mentioned above, Theorem \[main thm\] is widely believed to be true. In [@berger], Berger wrote (without proof) that a generic Riemannian manifold does not admit any such submanifold.
Hermann states in [@hermann] that Theorem \[main thm\] should be true but that there is little research in this direction.
In [@SchSim], Schoen-Simon showed that every Riemannian $n$–manifold admits an embedded minimal hypersurface. If $n\geq 8,$ the Schoen-Simon construction can lead to minimal hypersurfaces with singularities. By contrast, Theorem \[main thm\] rules out the possibility of a generic metric having any totally geodesic submanifold, complete or otherwise. In particular, generic Riemannian manifolds, of dimension $\geq 4,$ have no totally geodesic submanifolds with singularities.
Theorem \[main thm\] asserts that the set of metrics with no totally geodesic submanifolds has nonempty interior. On the other hand, Theorem \[partiall geod thm\] says that the set of metrics with no partially geodesic submanifolds is an actual open set in the $C^{q}$–topology. It is not clear to us whether the set of metrics with no totally geodesic submanifolds is open. As mentioned above, one difficulty is that the space of isometrically embedded $k$–manifolds in an $n$–manifold is not compact.
**Acknowledgement:** *We are grateful to referees for so thoroughly reading the manuscript, to Jim Kelliher, Catherine Searle and the referees for valuable suggestions, and to Paula Bergen for copyediting the manuscript.*
Notations and Conventions\[not and conve\]
==========================================
Throughout, $(M,g)$ will be a smooth, connected compact Riemannian manifold of dimension $n\geq 4.$ We will denote the Levi-Civita connection, curvature tensor, and Christoffel symbols by $\nabla $, $R$, and $\Gamma ,$ respectively. We adopt the sign convention that $R_{xyyx}$ is the sectional curvature of a plane spanned by orthonormal $x,y\in T_{p}M$. Thus the Jacobi operator is $R_{v}=R(\cdot ,v)v:T_{p}M\longrightarrow T_{p}M.$ For a nearby metric $\tilde{g}$, the corresponding objects will be denoted $\tilde{\nabla}
$, $\tilde{R}$, and $\tilde{\Gamma},$ respectively.
Given local coordinates $\left\{ x_{i}\right\} _{i=1}^{n}$, define $\partial
_{i}$ to be the partial derivative in the direction $\frac{\partial }{\partial x_{i}}$. At times the notation $\partial _{x_{i}}$ will also be used for the same object. We let $\mathcal{G}_{l}\left( M\right) $ denote the Grassmannian of $l$-planes in $M$, and $\pi :\mathcal{G}_{l}\left(
M\right) \longrightarrow M$ the projection of $\mathcal{G}_{l}\left(
M\right) $ to $M$. We fix a Riemannian metric on $\mathcal{G}_{l}\left(
M\right) $ so that $\pi :\mathcal{G}_{l}\left( M\right) \longrightarrow
\left( M,g\right) $ is a Riemannian submersion with totally geodesic fibers that are isometric to the Grassmannian of $l$-planes in $\mathbb{R}^{n}.$ For a metric space $X$, $A\subset $ $X$, and $r>0,$ we let $$B\left( A,r\right) \equiv \left\{ \left. x\in X\text{ }\right\vert \text{
\textrm{dist}}\left( x,A\right) <r\right\} .$$
For some $l\in \left\{ 2,\ldots ,n-1\right\} $ we let $\mathcal{P}_{0}$ be the set of partially geodesic $l$–planes for $g$.
The Local Construction\[local constr\]
=======================================
In this section, we prove Lemma \[local constr main lemma\], which can be viewed as a local version of the $l^{th}$–Partially Geodesic Assertion. In Section \[global constr\], we exploit the fact that $\mathcal{P}_{0}$ is compact and apply Lemma \[local constr main lemma\] successively to each element of a finite open cover $\left\{ O_{i}\right\} _{i=1}^{G}$ of $\mathcal{P}_{0}$. This will produce a finite sequence of metrics $g_{1},g_{2},\ldots
,g_{G}$ where, for example, $g_{2}$ is obtained by applying Lemma \[local constr main lemma\] to $g_{1}.$ The idea is that Lemma \[local constr main lemma\] kills the partially geodesic property on $O_{k}$ while simultaneously preserving it on $\cup _{i=1}^{k-1}O_{i}.$ In particular, the set of partially geodesic $l$–planes for $g_{k}$ is contained in $\cup
_{i=k+1}^{G}O_{i}.$
Because of the successive nature of our construction, in Lemma \[local constr main lemma\] we construct a deformation, not of $g,$ but rather of an abstract metric, $\hat{g},$ that is $C^{q}$–close to $g.$
\[local constr main lemma\] Given $K,\eta >0$, $P\in $ $\mathcal{P}_{0},$ and sufficiently small $\varepsilon _{0},\rho >0,$ there is a $\xi >0$ so that if $$\left\vert g-\hat{g}\right\vert _{C^{q}}<\xi ,$$then there is a $C^{\infty }$–family of metrics $\left\{ g_{s}\right\}
_{s\in \left[ 0,s_{0}\right] }$ so that the following hold.
1. For all $s,$ $g_{s}=\hat{g}$ on $M\setminus B\left( \pi \left(
P\right) ,\rho +\eta \right) ,$ and $g_{0}=\hat{g}.$
2. Let $\sigma \left( P\right) $ be the section of $\mathcal{G}_{l}\left(
B\left( \pi \left( P\right) ,\rho \right) \right) $ determined by $P$ via normal coordinates at $\pi \left( P\right) $ with respect to $g.$ For all$$\check{P}\in \pi ^{-1}\left( B\left( \pi \left( P\right) ,\rho \right)
\right) \cap B\left( \mathfrak{\sigma }\left( P\right) ,\rho \right) ,
\label{effected set}$$there is a $v\in \check{P}$ so that$$\left\vert \mathcal{I}_{R_{v}^{s}}\left( \check{P}\right) -\mathcal{I}_{R_{v}^{\hat{g}}}\left( \check{P}\right) \right\vert >Ks.$$Here $R^{s}$ and $R^{\hat{g}}$ are the curvature tensors of $g_{s}$ and $\hat{g},$ respectively.
3. For all $\check{P}\in \mathcal{G}_{l}\left( M\right) $ and all $v\in
\check{P},$ $$\left\vert \mathcal{I}_{R_{v}^{s}}\left( \check{P}\right) -\mathcal{I}_{R_{v}^{\hat{g}}}\left( \check{P}\right) \right\vert \leq 2Ks.$$
4. For all $\check{P}\in \mathcal{G}_{l}\left( M\right) \setminus \left\{
\pi ^{-1}\left( B\left( \pi \left( P\right) ,\rho +\eta \right) \right) \cap
B\left( \mathfrak{\sigma }\left( P\right) ,\rho +\eta \right) \right\} $ and $w\in \check{P},$ $$\left\vert \mathcal{I}_{R_{w}^{s}}\left( \check{P}\right) -\mathcal{I}_{R_{w}^{\hat{g}}}\left( \check{P}\right) \right\vert \leq \varepsilon _{0}s.$$
We will not need Parts 3 and 4 to prove Theorem \[main thm\], but have included them since they are obtained relatively easily and seem to be of independent interest.
The proof of Lemma \[local constr main lemma\] occupies the rest of this section and starts with some preliminary results.
\[constr of f on M lemma\]Given $K,\varepsilon ,\eta >0$, and $P\in $ $\mathcal{P}_{0},$ there are coordinate neighborhoods $N$ and $G$ of $\pi
\left( P\right) $ and $C^{\infty }$ functions $f_{1},f_{2}:M\longrightarrow
\mathbb{R}$ with the following properties.
1. $$\mathrm{dist}\left( N,M\setminus G\right) <\eta .$$
2. On $N,$ the second partial derivatives in the first coordinate direction satisfy $$\max \left\{ \left\vert \partial _{1}\partial _{1}f_{1}\right\vert
,\left\vert \partial _{1}\partial _{1}f_{2}\right\vert \right\} >2K.$$
3. In general, $$\max \left\{ \left\vert \partial _{1}\partial _{1}f_{1}\right\vert
,\left\vert \partial _{1}\partial _{1}f_{2}\right\vert \right\} \leq 4K.$$
4. For $j\in \left\{ 1,2,\ldots ,n\right\} ,$ $k\in \left\{ 2,\ldots
,n\right\} ,$ and $i\in \left\{ 1,2\right\} ,$ $$\left\vert \partial _{j}\partial _{k}f_{i}\right\vert <\varepsilon$$and$$\left\vert f_{i}\right\vert _{C^{1}}<\varepsilon \text{.}$$
5. On $M\setminus G,$ $f_{1}=f_{2}=0.$
This follows by composing the coordinate chart of $G$ with the functions on Euclidean space given by the next lemma.
\[constrution of f lemma\]Let $\pi _{1}:\mathbb{R}^{n}\longrightarrow
\mathbb{R}$ be orthogonal projection onto the first factor. Let $\mathcal{C}$ be a compact subset of $\mathbb{R}^{n}$ with $\pi _{1}\left( \mathcal{C}\right) =\left[ a,b\right] ,$ for $a,b\in \mathbb{R}.$ Given $K,\varepsilon
>0$ and a compact set $\mathcal{\tilde{C}}$ with $\mathcal{C}\subset \mathrm{int}\left( \mathcal{\tilde{C}}\right) $, there are $C^{\infty }$ functions $f_{1},f_{2}:\mathbb{R}^{n}\longrightarrow \mathbb{R}$ with the following properties.
1. On $\mathcal{C},$$$\max \left\{ \left\vert \partial _{1}\partial _{1}f_{1}\right\vert
,\left\vert \partial _{1}\partial _{1}f_{2}\right\vert \right\} >2K.$$
2. In general, $$\max \left\{ \left\vert \partial _{1}\partial _{1}f_{1}\right\vert
,\left\vert \partial _{1}\partial _{1}f_{2}\right\vert \right\} \leq 4K.$$
3. For $j\in \left\{ 1,2,\ldots ,n\right\} ,$ $k\in \left\{ 2,\ldots
,n\right\} ,$ and $i\in \left\{ 1,2\right\} ,$ $$\left\vert \partial _{j}\partial _{k}f_{i}\right\vert <\varepsilon$$and$$\left\vert f_{i}\right\vert _{C^{1}}<\varepsilon \text{.}$$
4. On $\mathbb{R}^{n}\setminus \mathcal{\tilde{C}},$ $f_{1}=f_{2}=0.$
To prove Lemma \[constrution of f lemma\] we will use the following single variable calculus result.
\[jim lemma\]Given any $K>1$ and $\varepsilon >0,$ there are $C^{\infty
} $ functions $h_{1},h_{2}:\mathbb{R}\longrightarrow \mathbb{R}$ so that for all $t$ $$\begin{aligned}
\frac{201}{100}K &<&\max_{i\in \left\{ 1,2\right\} }\left\{ \left\vert
h_{i}^{\prime \prime }\left( t\right) \right\vert \right\} <\frac{399}{100}K\text{ and\label{2nd derrrv}} \\
\left\vert h_{i}\right\vert _{C^{1}} &<&\varepsilon . \label{lots
wiggle}\end{aligned}$$
Let $\eta =\frac{\varepsilon }{4\cdot K},$ and for $\delta >0$ set $$h_{\delta ,1}\left( t\right) =\left( 4-\delta \right) K\eta ^{2}\sin \left(
\frac{t}{\eta }\right) \text{ and }h_{\delta ,2}\left( t\right) =\left(
4-\delta \right) K\eta ^{2}\cos \left( \frac{t}{\eta }\right) .$$Then $$\begin{aligned}
h_{\delta ,1}^{\prime }\left( t\right) &=&\left( 4-\delta \right) K\eta
\cos \left( \frac{t}{\eta }\right) =\frac{\left( 4-\delta \right) }{4}\varepsilon \cos \left( \frac{t}{\eta }\right) \text{ and } \\
h_{\delta ,2}^{\prime }\left( t\right) &=&-\left( 4-\delta \right) K\eta
\sin \left( \frac{t}{\eta }\right) =-\frac{\left( 4-\delta \right) }{4}\varepsilon \sin \left( \frac{t}{\eta }\right) ,\end{aligned}$$so for all $\delta \in \left( 0,4\right) ,$ $$\left\vert h_{\delta ,1}\right\vert _{C^{1}}<\varepsilon \text{ and }\left\vert h_{\delta ,2}\right\vert _{C^{1}}<\varepsilon .$$
Also $$h_{\delta ,1}^{\prime \prime }\left( t\right) =-\left( 4-\delta \right)
K\sin \left( \frac{t}{\eta }\right) \text{ and }h_{\delta ,2}^{\prime \prime
}\left( t\right) =-\left( 4-\delta \right) K\cos \left( \frac{t}{\eta }\right) .$$Since for all $t,$$$\frac{1}{\sqrt{2}}\leq \max \left\{ \left\vert \sin \left( t\right)
\right\vert ,\left\vert \cos \left( t\right) \right\vert \right\} \leq 1,$$ $$\frac{\left( 4-\delta \right) }{\sqrt{2}}K\leq \max_{i\in \left\{
1,2\right\} }\left\{ \left\vert h_{\delta ,i}^{\prime \prime }\left(
t\right) \right\vert \right\} \leq \left( 4-\delta \right) K,$$and (\[2nd derrrv\]) holds with $h_{1}=h_{\delta ,1}$ and $h_{2}=h_{\delta
,2},$ provided $$\sqrt{2}\frac{201}{100}<\left( 4-\delta \right) <\frac{399}{100}.$$
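As a quick numerical sanity check (our sketch, with the illustrative values $K=2,$ $\varepsilon =1/10$ and $4-\delta =3.2$), the functions $h_{\delta ,1},h_{\delta ,2}$ indeed have small $C^{1}$ norm while the larger of the two second derivatives stays pinned between $\frac{201}{100}K$ and $\frac{399}{100}K$ at every sampled point:

```python
import numpy as np

K, eps = 2.0, 0.1
eta = eps / (4 * K)
c = 3.2                                   # plays the role of (4 - delta)
assert np.sqrt(2) * 2.01 < c < 3.99       # the constraint derived above

t = np.linspace(-50 * eta, 50 * eta, 200001)
h1, h2   = c * K * eta**2 * np.sin(t / eta),  c * K * eta**2 * np.cos(t / eta)
d1, d2   = c * K * eta * np.cos(t / eta),    -c * K * eta * np.sin(t / eta)
dd1, dd2 = -c * K * np.sin(t / eta),         -c * K * np.cos(t / eta)

# C^1 norms stay below eps
print(max(np.abs(h1).max(), np.abs(d1).max(), np.abs(h2).max(), np.abs(d2).max()) < eps)
# the pointwise maximum of the second derivatives stays in (2.01 K, 3.99 K)
print(np.all(np.maximum(np.abs(dd1), np.abs(dd2)) > 2.01 * K))
print(np.all(np.maximum(np.abs(dd1), np.abs(dd2)) < 3.99 * K))
```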
Let $\chi :\mathbb{R}^{n}\longrightarrow \left[ 0,1\right] $ be $C^{\infty }$ and satisfy $$\begin{aligned}
\chi |_{\mathcal{C}} &\equiv &1, \notag \\
\chi |_{\mathbb{R}^{n}\setminus \mathcal{\tilde{C}}} &\equiv &0.
\label{dfn
chi}\end{aligned}$$Let $M>1$ satisfy $$\left\vert \chi \right\vert _{C^{2}}\leq M. \label{C2 norm of chi}$$
For $\tilde{\varepsilon}\in \left( 0,\varepsilon \right) ,$ we use Lemma \[jim lemma\] to choose $C^{\infty }$–functions $h_{1},h_{2}:\mathbb{R}\longrightarrow \mathbb{R}$ that satisfy $$\left\vert h_{i}\right\vert _{C^{1}}<\frac{\tilde{\varepsilon}}{2M}
\label{C1 norm}$$and $$\frac{201}{100}K<\max_{i\in \left\{ 1,2\right\} }\left\{ \left\vert
h_{1}^{\prime \prime }\left( t\right) \right\vert ,\left\vert h_{2}^{\prime
\prime }\left( t\right) \right\vert \right\} <\frac{399}{100}K.
\label{h1 2nd deriv}$$
For $i=1,2,$ we set $$f_{i}\left( p\right) =\chi \left( p\right) \cdot \left( h_{i}\circ \pi
_{1}\right) \left( p\right) . \label{dfn of f}$$Then $$\partial _{k}f_{i}\left( p\right) =\partial _{k}\chi \left( p\right) \cdot
\left( h_{i}\circ \pi _{1}\right) \left( p\right) +\chi \left( p\right)
\cdot \partial _{k}\left( h_{i}\circ \pi _{1}\right) \left( p\right)$$and $$\begin{aligned}
\partial _{j}\partial _{k}f_{i}\left( p\right) &=&\partial _{j}\partial
_{k}\chi \left( p\right) \cdot \left( h_{i}\circ \pi _{1}\right) \left(
p\right) +\partial _{j}\chi \left( p\right) \cdot \partial _{k}\left(
h_{i}\circ \pi _{1}\right) \left( p\right) \\
&&+\partial _{k}\chi \left( p\right) \cdot \partial _{j}\left( h_{i}\circ
\pi _{1}\right) \left( p\right) +\chi \left( p\right) \cdot \partial
_{j}\partial _{k}\left( h_{i}\circ \pi _{1}\right) \left( p\right) .\end{aligned}$$Combining this with (\[dfn chi\]), (\[C2 norm of chi\]), (\[C1 norm\]), and (\[h1 2nd deriv\]) we see that Properties 1, 2, and 3 hold, provided $\tilde{\varepsilon}$ is sufficiently small. Property 4 follows from (\[dfn chi\]) and (\[dfn of f\]).
Let $P\in \mathcal{P}_{0}$ be as in Lemma \[local constr main lemma\] and have foot point $p.$ If $l=2,$ let $\left\{ v,T,n_{3},n_{4}\right\} $ be an ordered orthonormal quadruplet at $p$ with$$\left\{ v,T\right\} \in P\text{ and }\left\{ n_{3},n_{4}\right\} \text{
normal to }P. \label{surf choice}$$If $l=n-1,$ let $\left\{ v,n,T_{3},T_{4}\right\} $ be an ordered $\hat{g}$–orthonormal quadruplet at $p$ with $$\left\{ v,T_{3},T_{4}\right\} \in P\text{ and }n\text{ normal to }P.
\label{hyp surf choice}$$If $l\in \left\{ 3,\ldots ,n-2\right\} ,$ let $\left\{ E_{i}\right\}
_{i=1}^{4}$ be an ordered $\hat{g}$–orthonormal quadruplet at $p$ that satisfies either $\left( \ref{surf choice}\right) $ or $\left( \ref{hyp surf
choice}\right) $. In either case, extend the ordered quadruplet to a coordinate frame $\left\{ E_{i}\right\} _{i=1}^{n}$.
Choose $g_{s}$ so that with respect to the ordered frame $\left\{
E_{i}\right\} _{i=1}^{n},$ the matrix of $g_{s}-\hat{g}$ is $0$ except for the upper $\left( 4\times 4\right) $–block which is $$\left\{ g_{s}-\hat{g}\right\} _{l,m}=\left(
\begin{array}{llll}
0 & 0 & 0 & 0 \\
0 & 0 & sf_{1} & sf_{2} \\
0 & sf_{1} & 0 & 0 \\
0 & sf_{2} & 0 & 0\end{array}\right) , \label{dfn ofg tilde}$$where to construct $f_{1}$ and $f_{2}$ we apply Lemma \[constr of f on M lemma\] with $N=B\left( \pi \left( P\right) ,\rho \right) .$
To simplify notation, we write $\tilde{g}$ for $g_{s}$ and use $\widetilde{}$ for objects associated to $\tilde{g}.$ Recall (see, e.g., [@Pet]) that with respect to $\left\{ E_{i}\right\} _{i=1}^{n},$ the Christoffel symbols are $$\tilde{\Gamma}_{ij,k}\equiv \tilde{g}\left( \tilde{\nabla}_{E_{i}}E_{j},E_{k}\right)$$and $$\tilde{R}_{ijkl}=\partial _{i}\tilde{\Gamma}_{jk,l}-\partial _{j}\tilde{\Gamma}_{ik,l}+\tilde{g}^{\sigma \tau }\left( \tilde{\Gamma}_{ik,\sigma }\tilde{\Gamma}_{jl,\tau }-\tilde{\Gamma}_{jk,\sigma }\tilde{\Gamma}_{il,\tau
}\right) , \label{curv via christ eqn}$$where $\tilde{g}^{\sigma \tau }$ are the coefficients of the inverse $\left(
\left\{ \tilde{g}\right\} _{\sigma \tau }\right) ^{-1}$ of $\left\{ \tilde{g}\right\} _{\sigma \tau },$ and the Einstein summation convention is being used. Combining Lemma \[constr of f on M lemma\] with the definition of $\tilde{g}$ gives us
\[g inv prop\]The coefficients $\hat{g}^{l,m}$ and $\tilde{g}^{l,m}$ of the inverses of $\left\{ \hat{g}\right\} _{l,m}$ and $\left\{ \tilde{g}\right\} _{l,m}$ satisfy$$\left\vert \hat{g}^{l,m}-\tilde{g}^{l,m}\right\vert <O\left( \varepsilon
s\right) .$$
Using Equation (\[dfn ofg tilde\]) and Lemma \[constr of f on M lemma\], we will show
\[Gamma control prop\]Writing $\left( \tilde{\Gamma}-\hat{\Gamma}\right)
_{jk,l}$ for $\tilde{\Gamma}_{jk,l}-\hat{\Gamma}_{jk,l}$ we have $$\left\vert \left( \tilde{\Gamma}-\hat{\Gamma}\right) _{jk,l}\right\vert
<O\left( \varepsilon s\right) . \label{Gamma diff inequal}$$
Let $i,j,k,l$ be arbitrary elements of $\left\{ 1,2,\ldots ,n\right\} .$ Then all expressions $$\partial _{i}\left( \tilde{\Gamma}-\hat{\Gamma}\right) _{jk,l}$$are $\leq O\left( \varepsilon s\right) $ except for$$\begin{array}{ccccc}
\partial _{v}\left( \tilde{\Gamma}-\hat{\Gamma}\right) _{2v,3} & = &
\partial _{v}\left( \tilde{\Gamma}-\hat{\Gamma}\right) _{v3,2} & = &
-\partial _{v}\left( \tilde{\Gamma}-\hat{\Gamma}\right) _{23,v} \\
\shortparallel & & \shortparallel & & \shortparallel \\
\partial _{v}\left( \tilde{\Gamma}-\hat{\Gamma}\right) _{v2,3} & = &
\partial _{v}\left( \tilde{\Gamma}-\hat{\Gamma}\right) _{3v,2} & = &
-\partial _{v}\left( \tilde{\Gamma}-\hat{\Gamma}\right) _{32,v}\end{array}
\label{big curv}$$and$$\begin{array}{ccccc}
\partial _{v}\left( \tilde{\Gamma}-\hat{\Gamma}\right) _{2v,4} & = &
\partial _{v}\left( \tilde{\Gamma}-\hat{\Gamma}\right) _{v4,2} & = &
-\partial _{v}\left( \tilde{\Gamma}-\hat{\Gamma}\right) _{24,v} \\
\shortparallel & & \shortparallel & & \shortparallel \\
\partial _{v}\left( \tilde{\Gamma}-\hat{\Gamma}\right) _{v2,4} & = &
\partial _{v}\left( \tilde{\Gamma}-\hat{\Gamma}\right) _{4v,2} & = &
-\partial _{v}\left( \tilde{\Gamma}-\hat{\Gamma}\right) _{42,v},\end{array}
\label{other big curv}$$where we write $v$ for the first element of our frame to emphasize its special role. The expressions in (\[big curv\]) and (\[other big curv\]) are $\leq 2Ks$ everywhere, and on $N,$$$\max \left\{ \left( \ref{big curv}\right) ,\left( \ref{other big curv}\right) \right\} \geq Ks.$$
Inequality (\[Gamma diff inequal\]) follows from the fact that $\left\vert
sf_{i}\right\vert _{C^{1}}<\varepsilon s.$
To prove the remainder note $$\begin{aligned}
\partial _{i}\tilde{\Gamma}_{jk,l} &=&\partial _{E_{i}}\tilde{g}\left(
\tilde{\nabla}_{E_{j}}E_{k},E_{l}\right) \\
&=&\frac{1}{2}\partial _{E_{i}}\left[ \partial _{E_{k}}\tilde{g}\left(
E_{j},E_{l}\right) +\partial _{E_{j}}\tilde{g}\left( E_{l},E_{k}\right)
-\partial _{E_{l}}\tilde{g}\left( E_{k},E_{j}\right) \right] \\
&=&\frac{1}{2}\partial _{E_{i}}\left[ \partial _{E_{k}}\left( \tilde{g}-\hat{g}\right) \left( E_{j},E_{l}\right) +\partial _{E_{j}}\left( \tilde{g}-\hat{g}\right) \left( E_{l},E_{k}\right) -\partial _{E_{l}}\left( \tilde{g}-\hat{g}\right) \left( E_{k},E_{j}\right) \right] \\
&&+\frac{1}{2}\partial _{E_{i}}\left[ \partial _{E_{k}}\hat{g}\left(
E_{j},E_{l}\right) +\partial _{E_{j}}\hat{g}\left( E_{l},E_{k}\right)
-\partial _{E_{l}}\hat{g}\left( E_{k},E_{j}\right) \right] .\end{aligned}$$Combining this with Lemma \[constr of f on M lemma\] and the definition of $\tilde{g}$ gives us $$\begin{aligned}
\partial _{i}\tilde{\Gamma}_{jk,l} &=&\partial _{E_{i}}\hat{g}\left( \nabla
_{E_{j}}E_{k},E_{l}\right) +O\left( \varepsilon s\right) \\
&=&\partial _{i}\hat{\Gamma}_{jk,l}+O\left( \varepsilon s\right) ,\end{aligned}$$unless the indices correspond to the situation in (\[big curv\]) or (\[other big curv\]). In the former case,$$\begin{aligned}
\partial _{v}\left( \tilde{\Gamma}-\hat{\Gamma}\right) _{2v,3} &=&\frac{1}{2}\partial _{E_{v}}\left[ \partial _{E_{v}}\left( \tilde{g}-\hat{g}\right)
\left( E_{2},E_{3}\right) +\partial _{E_{2}}\left( \tilde{g}-\hat{g}\right)
\left( E_{3},E_{v}\right) -\partial _{E_{3}}\left( \tilde{g}-\hat{g}\right)
\left( E_{v},E_{2}\right) \right] \\
&=&\frac{s}{2}\partial _{E_{v}}\partial _{E_{v}}\left( f_{1}\right) .\end{aligned}$$
In the case of (\[other big curv\]) we have $$\begin{aligned}
\partial _{v}\left( \tilde{\Gamma}-\hat{\Gamma}\right) _{2v,4} &=&\frac{1}{2}\partial _{E_{v}}\left[ \partial _{E_{v}}\left( \tilde{g}-\hat{g}\right)
\left( E_{2},E_{4}\right) +\partial _{E_{2}}\left( \tilde{g}-\hat{g}\right)
\left( E_{4},E_{v}\right) -\partial _{E_{4}}\left( \tilde{g}-\hat{g}\right)
\left( E_{v},E_{2}\right) \right] \\
&=&\frac{s}{2}\partial _{E_{v}}\partial _{E_{v}}\left( f_{2}\right) .\end{aligned}$$
The result follows by combining the previous three displays with Lemma \[constr of f on M lemma\].
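The computation above can be reproduced symbolically in the simplest setting. The sketch below (ours) takes a flat background metric, so the $O\left( \varepsilon s\right) $ terms vanish identically, and perturbs only the $\left( 2,3\right) $– and $\left( 2,4\right) $–entries by $sf_{1}$ and $sf_{2}$ as in (\[dfn ofg tilde\]); the $v$–derivatives of the corresponding Christoffel symbols of the first kind are then exactly $\frac{s}{2}\partial _{v}\partial _{v}f_{i}.$

```python
import sympy as sp

x = sp.symbols('x1:5')                      # x1 plays the role of the v-direction
s = sp.symbols('s')
f1, f2 = sp.Function('f1')(*x), sp.Function('f2')(*x)

# flat background plus the block perturbation of (dfn ofg tilde)
g = sp.eye(4) + s * sp.Matrix([[0, 0, 0, 0],
                               [0, 0, f1, f2],
                               [0, f1, 0, 0],
                               [0, f2, 0, 0]])

def Gamma(j, k, l):                         # Gamma_{jk,l} = g(nabla_{E_j} E_k, E_l)
    return sp.Rational(1, 2) * (sp.diff(g[j, l], x[k]) + sp.diff(g[l, k], x[j])
                                - sp.diff(g[k, j], x[l]))

# indices are 0-based: v = 0, and the frame vectors "2", "3", "4" are 1, 2, 3 here
print(sp.simplify(sp.diff(Gamma(1, 0, 2), x[0]) - s/2 * sp.diff(f1, x[0], 2)))  # 0
print(sp.simplify(sp.diff(Gamma(1, 0, 3), x[0]) - s/2 * sp.diff(f2, x[0], 2)))  # 0
```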
Combining Propositions \[g inv prop\] and \[Gamma control prop\] with Equation (\[curv via christ eqn\]) we see that $$\left\vert \left( \tilde{R}-\hat{R}\right) _{ijkl}\right\vert \leq O\left(
\varepsilon s\right) ,$$except if the quadruple corresponds, up to a symmetry of the curvature tensor, to either $\left( \tilde{R}-\hat{R}\right) _{2vv3}$ or $\left(
\tilde{R}-\hat{R}\right) _{2vv4},$ in which case we have $$\begin{aligned}
\left( \tilde{R}-\hat{R}\right) _{2vv3} &=&\frac{s}{2}\partial _{v}\partial _{v}\left( f_{1}\right) +O\left( \varepsilon s\right) \text{ and} \\
\left( \tilde{R}-\hat{R}\right) _{2vv4} &=&\frac{s}{2}\partial _{v}\partial _{v}\left( f_{2}\right) +O\left( \varepsilon s\right) .\end{aligned}$$
Lemma \[local constr main lemma\] follows from the previous two equations and our choices of $f_{1}$ and $f_{2},$ provided $\varepsilon $ is sufficiently small.
The Global Construction\[global constr\]
========================================
In this section, we prove the $l^{th}$–Partially Geodesic Assertion and hence Theorems \[main thm\] and \[partiall geod thm\]. The proof is by reverse induction, starting with the case when $l=n-1.$ The strategy is to apply Lemma \[local constr main lemma\] successively to the elements of an open cover of $\mathcal{P}_{0}.$ When $l=n-1,$ this is all that is needed. Otherwise, as in the proof of Lemma \[uber stru lemma\], we note that for each $k\in \left\{ l+1,\ldots ,n-1\right\} $, $\mathcal{G}_{k}\left( M\right) $ is compact. By our induction hypothesis, there is a $\delta >0$ so that for all $P\in \mathcal{G}_{k}\left( M\right) $ there is a unit $v\in P$ with $$\mathcal{I}_{R_{v}^{0}}\left( P\right) >\delta .$$Thus all sufficiently small deformations of $g$ have no partially geodesic $k$–planes for all $k\in \left\{ l+1,\ldots ,n-1\right\} .$ In particular, the $l^{th}$–Partially Geodesic Assertion follows from the
**Modified** $\mathbf{l}^{th}$–**Partially Geodesic Assertion.** * Given* $l\in \left\{ 2,3,\ldots ,n-1\right\} ,$* a finite* $q\geq 2,$ *and* $\xi >0$, *there is a Riemannian metric* $\tilde{g}$ *on* $M$ *with* $$\left\vert \tilde{g}-g\right\vert _{C^{q}}<\xi$$*that has no partially geodesic* $l$*–planes.*
Given $K>0,$ we combine Lemma \[local constr main lemma\] with the compactness of $\mathcal{P}_{0}$ to see that there is a finite open cover $\left\{ \pi ^{-1}\left( B\left( \pi \left( P_{i}\right) ,\rho _{i}\right)
\right) \cap B\left( \mathfrak{\sigma }\left( P_{i}\right) ,\rho _{i}\right)
\right\} _{i=1}^{G}$ of $\mathcal{P}_{0}$ whose elements are as in (\[effected set\]). In particular, for each $i\in \left\{ 2,3,\ldots ,G\right\}
,$ there is a $\xi _{i}>0$ so that if $$\left\vert g-\hat{g}\right\vert _{C^{q}}<\xi _{i},$$then the conclusion of Lemma \[local constr main lemma\] holds on $\pi
^{-1}\left( B\left( \pi \left( P_{i}\right) ,\rho _{i}\right) \right) \cap
B\left( \mathfrak{\sigma }\left( P_{i}\right) ,\rho _{i}\right) .$ Set $$\xi =\min\nolimits_{i}\left\{ \xi _{i}\right\} .$$Since $\mathcal{G}_{l}\left( M\right) \setminus \left\{
\bigcup\limits_{i=1}^{G}\pi ^{-1}\left( B\left( \pi \left( P_{i}\right)
,\rho _{i}\right) \right) \cap B\left( \mathfrak{\sigma }\left( P_{i}\right)
,\rho _{i}\right) \right\} $ is compact, there is a $\delta >0$ so that for all $$P\in \mathcal{G}_{l}\left( M\right) \setminus \left\{
\bigcup\limits_{i=1}^{G}\pi ^{-1}\left( B\left( \pi \left( P_{i}\right)
,\rho _{i}\right) \right) \cap B\left( \mathfrak{\sigma }\left( P_{i}\right)
,\rho _{i}\right) \right\} ,$$there is a $v\in P$ so that$$\left\vert \mathcal{I}_{R_{v}^{g}}\left( P\right) \right\vert >\delta .
\label{lemma D all over again}$$
We will successively apply Lemma \[local constr main lemma\] to the $\pi
^{-1}\left( B\left( \pi \left( P_{i}\right) ,\rho _{i}\right) \right) \cap
B\left( \mathfrak{\sigma }\left( P_{i}\right) ,\rho _{i}\right) $ and get a sequence of metrics $g_{1},$ $g_{2},$ $\ldots ,g_{G}.$ To obtain $g_{1},$ we apply Lemma \[local constr main lemma\] with $g=\hat{g}$, $P=P_{1},$ and $\rho =\rho _{1}.$ This yields a deformation $g_{s}$ of $g.$ Let $\mathcal{P}_{s}$ be the set of partially geodesic $l$–planes for $g_{s}.$ It follows from Part 2 of Lemma \[local constr main lemma\] that for all sufficiently small $s,$ $$\mathcal{P}_{s}\cap \pi ^{-1}\left( B\left( \pi \left(
P_{1}\right) ,\rho _{1}\right) \right) \cap B\left( \mathfrak{\sigma }\left(
P_{1}\right) ,\rho _{1}\right) =\emptyset . \label{Lemma 21 first app}$$By combining (\[lemma D all over again\]) and (\[Lemma 21 first app\]), we see that for sufficiently small $s,$ $$\mathcal{P}_{s}\subset \bigcup\limits_{i=2}^{G}\pi ^{-1}\left( B\left( \pi
\left( P_{i}\right) ,\rho _{i}\right) \right) \cap B\left( \mathfrak{\sigma }\left( P_{i}\right) ,\rho _{i}\right) .$$
Moreover, by further restricting $s,$ we can ensure that $g_{s}$ is close enough to $g$ in the $C^{q}$–topology so that$$\left\vert g_{s}-g\right\vert _{C^{q}}<\xi .\text{ }$$
We let $g_{1}=g_{s}$ for some $s$ as above. Assume, by induction, that for some $k\in \left\{ 1,\ldots ,G-1\right\} ,$ we have constructed a metric $g_{k}$ so that the following hold:
$\left( \mathbf{Hypothesis}\text{ }\mathbf{1}_{k}\right) $ If $\mathcal{P}_{g_{k}}$ is the set of $l$–dimensional partially geodesic subspaces for $g_{k},$ then $$\mathcal{P}_{g_{k}}\subset \bigcup\limits_{i=k+1}^{G}\pi ^{-1}\left( B\left(
\pi \left( P_{i}\right) ,\rho _{i}\right) \right) \cap B\left( \mathfrak{\sigma }\left( P_{i}\right) ,\rho _{i}\right) .$$
$\left( \mathbf{Hypothesis}\text{ }\mathbf{2}_{k}\right) $$$\left\vert g_{k}-g\right\vert _{C^{q}}<\xi .$$
It follows from Hypothesis $1_{k}$ that there is $\delta >0$ so that for all $$P\in \mathcal{G}_{l}\left( M\right) \setminus \left\{
\bigcup\limits_{i=k+1}^{G}\pi ^{-1}\left( B\left( \pi \left( P_{i}\right)
,\rho _{i}\right) \right) \cap B\left( \mathfrak{\sigma }\left( P_{i}\right)
,\rho _{i}\right) \right\} ,$$there is a $v\in P$ so that $$\left\vert \mathcal{I}_{R_{v}^{g_{k}}}\left( P\right) \right\vert >\delta .
\label{old dist}$$Since $\left\vert g_{k}-g\right\vert _{C^{q}}<\xi ,$ we can apply Lemma \[local constr main lemma\] with $\hat{g}=g_{k},$ $P=P_{k+1},$ and $\rho
=\rho _{k+1}.$ This yields a deformation $g_{s}$ of $g_{k}$ so that for all sufficiently small $s>0,$
$$\left\vert g_{s}-g\right\vert _{C^{q}}<\xi .$$
In other words, Hypothesis $2_{k+1}$ holds.
To establish Hypothesis $1_{k+1},$ combine Part 2 of Lemma \[local constr main lemma\] with (\[old dist\]) to see that $g_{s}$ has no partially geodesic $l$–dimensional subspaces in$$\mathcal{G}_{l}\left( M\right) \setminus \left\{
\bigcup\limits_{i=k+2}^{G}\pi ^{-1}\left( B\left( \pi \left( P_{i}\right)
,\rho _{i}\right) \right) \cap B\left( \mathfrak{\sigma }\left( P_{i}\right)
,\rho _{i}\right) \right\} ,$$provided $s$ is positive and sufficiently small. So Hypothesis $1_{k+1}$ also holds. After the $G^{th}$ step, Hypothesis $1_{G}$ says that $g_{G}$ has no partially geodesic $l$–planes, and Hypothesis $2_{G}$ gives $\left\vert g_{G}-g\right\vert _{C^{q}}<\xi ,$ so $\tilde{g}=g_{G}$ establishes the Modified $l^{th}$–Partially Geodesic Assertion.
[99]{} M. Berger, *A panoramic view of Riemannian geometry*, Berlin: Springer Verlag, 2007.
R. Bryant, *http://mathoverflow.net/questions/209618/existence-of-totally-geodesic-hypersurfaces*.
D. Ebin, *On the space of Riemannian metrics*, Bull. Amer. Math. Soc. **74** (1968) 1001–1003.
L. Guijarro and F. Wilhelm, *Restrictions on submanifolds via focal radius bounds,* preprint, https://arxiv.org/pdf/1606.04121v2.pdf
M. Hirsch, *Differential Topology*, Graduate Texts in Mathematics, Springer-Verlag, 1994.
R. Hermann, *Differential Geometry and the Calculus of Variations*. Elsevier Science, 2000.
P. Petersen, *Riemannian Geometry*, GTM Vol. 171, $2^{nd}$ Ed., New York: Springer Verlag, 2006.
J. Pitts, *Existence and regularity of minimal surfaces on Riemannian manifolds,* Mathematical Notes **27**, Princeton University Press, Princeton, (1981).
R. Schoen and L. Simon, *Regularity of stable minimal hypersurfaces*, Comm. Pure Appl. Math. **34** (1981), 741–797.
M. Spivak, *Differential Geometry*, Vol. III, Publish or Perish, 1975.
K. Tsukada, *Totally geodesic submanifolds of Riemannian manifolds and curvature-invariant subspaces*, Kodai Math. J. **19**, No. 3 (1996), 395–437.
[^1]: This work was supported by a grant from the Simons Foundation (\#358068, Frederick Wilhelm)
---
abstract: 'This work extends recent investigations by Bergeron et al \[see 2012 [*J. Phys. A: Math. Theo.*]{} [**45**]{} 244028\] on new SUSYQM coherent states for Pöschl-Teller potentials. It mainly addresses explicit computations of eigenfunctions and spectrum associated to the higher order hierarchic supersymmetric Hamiltonian. Analysis of relevant properties and normal and anti-normal forms is performed and discussed. Coherent states of the hierarchic first order differential operator $A_{m,\nu,\beta}$ of the Pöschl-Teller Hamiltonian ${\bf H_{\nu,\beta}^{(m)}}$ and their characteristics are studied.'
address: 'International Chair of Mathematical Physics and Applications (ICMPA-UNESCO Chair), University of Abomey-Calavi, 072 B. P.: 50 Cotonou, Republic of Benin'
author:
- 'Mahouton Norbert Hounkonnou, Sama Arjika and Ezinvi Baloïtcha'
title: 'Higher order SUSY-QM for Pöschl-Teller potentials: coherent states and operator properties '
---
ICMPA-MPA/027/2012
Introduction
============
The search for exactly solvable models remains at the core of current research interest in quantum mechanics. A reference list of exactly solvable one-dimensional problems (harmonic oscillator, Coulomb, Morse, Pöschl-Teller potentials, etc.) obtained by an algebraic procedure, namely by differential operator factorization methods [@infeld], can be found in [@berg] and references therein. This technique, introduced long ago by Schrödinger [@infeld], was analyzed in depth by Infeld and Hull [@ih51], who made an exhaustive classification of factorizable potentials. It was reproduced rather recently in the supersymmetric quantum mechanics (SUSY QM) approach [@cooper] initiated by Witten [@wit] and was immediately applied to the hydrogen potential [@fe84]. This approach gave many new exactly solvable potentials which were obtained as superpartners of known exactly solvable models. Later on, Witten noticed the possibility of arranging the Schrödinger Hamiltonians into isospectral pairs called [*supersymmetric partners*]{} [@wit]. The resulting supersymmetric quantum mechanics revived the study of exactly solvable Hamiltonians [@uh83].
SUSY QM is also used for the description of hidden symmetries of various atomic and nuclear physical systems [@gend]. Besides, it provides a theoretical laboratory for the investigation of algebraic and dynamical problems in supersymmetric field theory. The simplified setting of SUSY helps to analyze the difficult problem of dynamical SUSY breaking in full detail and to examine the validity of the Witten index criterion [@wit].
The main result of the present work concerns the explicit analytical expressions of the eigenfunctions and spectra associated to the first and second order supersymmetric Hamiltonians with Pöschl-Teller potentials. The related higher order Hamiltonian coherent states (CS) are also constructed and discussed, thus completing recent investigations in [@berg] on the same model.
This paper is organized as follows. In Section 2, we recall known results and give an explicit characterization of the hierarchic Hamiltonians of the Pöschl-Teller Hamiltonian ${\bf H_{\nu,\beta}}$. Particular cases of eigenvalues, eigenfunctions, super-potentials and super-partner potentials are computed. In Section 3, relevant operator forms (normal and anti-normal), as well as interesting operator properties and mean-values, are discussed. In Section 4, we study the CS related to the first order differential operator $A_{m,\nu,\beta}$ of the $m$-th order hierarchic Pöschl-Teller Hamiltonian ${\bf H_{\nu,\beta}^{(m)}}$ and their main mathematical properties, i.e. the orthogonality, the normalizability, the continuity in the label and the resolution of the identity. We end with some concluding remarks in Section 5.
The Pöschl-Teller Hamiltonian and SUSY-QM formalism
===================================================
In this section, we first briefly recall the Pöschl-Teller Hamiltonian model presented in [@berg]. Then, we solve the associated time independent Schrödinger equation with explicit calculation of the wavefunction normalization constant. Finally, using the formalism of the higher order hierarchic supersymmetric factorisation method, we derive and discuss the main results on the hierarchy of the Pöschl-Teller Hamiltonian.
The model
---------
The physical system is described by the Hamiltonian [@berg]: $$\begin{aligned}
\label{ham}
{\bf H_{\nu,\beta}}\phi:=\Big[-\frac{\hbar^2}{2M}\frac{d^2}{dx^2}+V_{\varepsilon_0,\nu,\beta}(x)\Big]
\phi\quad\mbox{for}\quad \phi\in\mathcal{D}_{\bf H_{\nu,\beta}}\end{aligned}$$ in a suitable Hilbert space $\mathcal{H}=L^2([0,L],dx)$ endowed with the inner product defined by $$(u,v)=\int_{0}^{L}dx\,\bar{u}(x)v(x),\quad u,v\in\mathcal{H},\; [0,L]\subset\mathbb{R}$$ where $\bar{u}$ denotes the complex conjugate of $u.$ $M$ is the particle mass and $\mathcal{D}_{\bf H_{\nu,\beta}}$ is the domain of definition of ${\bf H_{\nu,\beta}}$.
$$\begin{aligned}
\label{hamk}
V_{\varepsilon_0,\nu,\beta}(x)=\varepsilon_0
\Big(\frac{\nu(\nu+1)}{\sin^2\frac{\pi x}{L}}-2\beta\cot\frac{\pi x}{L}\Big)\end{aligned}$$
is the Pöschl-Teller potential; $\varepsilon_0$ is some energy scale, $\nu$ and $\beta$ are dimensionless parameters. The one-dimensional second-order operator ${\bf H_{\nu,\beta}}$ has singularities at the end points $x=0$ and $x=L$, permitting us to choose $\varepsilon_0\geq 0$ and $\nu\geq 0$. Further, since the symmetry $x\rightarrow L-x$ corresponds to the parameter change $\beta\rightarrow -\beta,$ we can choose $\beta\geq 0.$ As assumed in [@berg], we consider the energy scale $\varepsilon_0$ as the zero point energy of the infinite well, i.e. $\varepsilon_0=\hbar^2\pi^2/(2ML^2)$, so that the only free parameters of the problem remain $\nu$ and $\beta$, which will always be assumed to be positive. The case $\beta=0$ corresponds to the symmetric repulsive potentials investigated in [@gazeau], while the case $\beta\neq 0$ leads to the Coulomb potential in the limit $L\rightarrow \infty.$
Let us define the operator ${\bf \mathcal{H}_{\nu,\beta}}$ with the action $-\frac{\hbar^2}{2M}\phi''(x)+\varepsilon_0
\Big(\frac{\nu(\nu+1)}{\sin^2\frac{\pi x}{L}}-2\beta\cot\frac{\pi x}{L}\Big)\phi$ with the domain being the set of smooth functions with compact support, $\mathcal{C}_0^\infty(0,L)$. The Pöschl-Teller potential is in the limit point case at both ends $x=0$ and $x=L$, if the parameter $\nu\geq 1/2$, and in the limit circle case at both ends if $0\leq \nu< 1/2.$ Therefore, the operator ${\bf \mathcal{H}_{\nu,\beta}}$ is essentially self-adjoint in the former case. The closure of ${\bf \mathcal{H}_{\nu,\beta}}$ is $\overline{\bf \mathcal{H}_{\nu,\beta}}={\bf H_{\nu,\beta}}$, i.e. $\mathcal{D}_{\overline{\bf \mathcal{H}_{\nu,\beta}}}=\mathcal{D}_{\bf H_{\nu,\beta}}$, and its domain coincides with the maximal one, i.e. $$\mathcal{D}_{{\bf H_{\nu,\beta}}}=\Big\{\phi\in ac^2(0,L),\; \Big[-\frac{\hbar^2}{2M}\phi''+\varepsilon_0
\Big(\frac{\nu(\nu+1)}{\sin^2\frac{\pi x}{L}}-2\beta\cot\frac{\pi x}{L}\Big)\phi\Big]\in\mathcal{H}\Big\},$$ where $ac^2(0,L)$ denotes the absolutely continuous functions with absolutely continuous derivatives. As mentioned in [@berg], a function of this domain satisfies Dirichlet boundary conditions, and in the range of $\nu$ considered, the deficiency indices of ${\bf \mathcal{H}_{\nu,\beta}}$ are $(2,2)$, indicating that this operator is no longer essentially self-adjoint but instead has a two-parameter family of self-adjoint extensions. As in [@berg], we will restrict only to the extension described by Dirichlet boundary conditions, i.e. $$\mathcal{D}_{{\bf H_{\nu,\beta}}}=\Big\{\phi\in ac^2(0,L) \mid \phi(0)= \phi(L)=0,\; \Big[-\frac{\hbar^2}{2M}\phi''+\varepsilon_0
\Big(\frac{\nu(\nu+1)}{\sin^2\frac{\pi x}{L}}-2\beta\cot\frac{\pi x}{L}\Big)\phi\Big]\in\mathcal{H}\Big\},$$ $\mathcal{D}_{{\bf H_{\nu,\beta}}}$ is dense in $\mathcal{H}$ since $H^{2,2}(0, L)\subset\mathcal{C}_0^\infty(0,L)\subset\mathcal{D}_{{\bf H_{\nu,\beta}}} $ and ${\bf H_{\nu,\beta}}$ is self-adjoint [@tesh], where $H^{m,n}(0, L)$ is the Sobolev space of index $(m,n)$ [@sob]. Later on, we use the dense domain: $$\mathcal{D}_{\bf H_{\nu,\beta}}=\Big\{\phi\in AC^2(0,L), \,\varepsilon_0
\Big(\frac{\nu(\nu+1)}{\sin^2\frac{\pi x}{L}}-2\beta\cot\frac{\pi x}{L}\Big)\phi\in\mathcal{H}\Big\},$$ where $AC_{loc}^2(]0,L[)$ is given by $$\begin{aligned}
AC_{loc}^2([0,L])& =&\Big\{\phi\in AC([\alpha,\beta]), \forall \;[\alpha,\beta]\subset \;[0,L],\; [\alpha,\beta] \mbox{ compact }\Big\},\cr
AC[\alpha,\beta]&=&\Big\{\phi\in C[\alpha,\beta], \phi(x)=\phi(\alpha)+\int_{\alpha}^x dt\, g(t), g\in L^1([\alpha,\beta])\Big\}.\end{aligned}$$
Eigenvalues and eigenfunctions
------------------------------
The eigenvalues $E_{n}^{(\nu,\beta)}$ and eigenfunctions $\phi_{n}^{(\nu,\beta)}$ solving the Sturm-Liouville differential equation (\[ham\]), i.e. ${\bf H_{\nu,\beta}}\phi_{n}^{(\nu,\beta)}=E_{n}^{(\nu,\beta)}\phi_{n}^{(\nu,\beta)},$ are given by [@berg] $$\begin{aligned}
\label{eigen1}
E_{n}^{(\nu,\beta)}&=&\varepsilon_0\Big((n+\nu+1)^2-\frac{\beta^2}{(n+\nu+1)^2}\Big)\\
\label{eigen2}
\phi_{n}^{(\nu,\beta)}(x)&=&K_n^{(\nu,\beta)}\sin^{\nu+n+1}\frac{\pi x}{L}
\exp \Big(-\frac{\beta \pi x}{L(\nu+n+1)}\Big)P_n^{(a_n,\bar{a}_n)}\Big(i\cot \frac{\pi x}{L}\Big)\end{aligned}$$ where $n\in\mathbb{N}$, $a_n=-(n+\nu+1)+i\frac{\beta}{n+\nu+1}$, $P_n^{(\lambda,\eta)}(z)$ are the Jacobi polynomials [@ASK] and $K_n^{(\nu,\beta)}$ is a normalization constant given by: $$\begin{aligned}
\label{prpos}
K_n^{(\nu,\beta)}
=2^{n+\nu+1}L^{-\frac{1}{2}}
\mathcal{T}(n;\nu,\beta)
\mathcal{O}^{-\frac{1}{2}}(n;\nu,\beta)\exp\Big(\frac{\beta\pi}{2(n+\nu+1)}\Big),\end{aligned}$$ where $$\begin{aligned}
\label{rem}
\fl
\mathcal{O}(n;\nu,\beta)&=&\sum_{k=0}^n \frac{(-n,-2\nu-n-1)_k}{(-\nu-n-\frac{i\beta}{\nu+n+1})_kk!\Gamma(n+\nu+2-k+\frac{i\beta}{\nu+n+1})}\cr
&\times&\sum_{s=0}^n \frac{(-n,-2\nu-n-1)_s\Gamma(2n+2\nu-s-k+3)}{ (-\nu-n+\frac{i\beta}{\nu+n+1})_s s! \Gamma(n+\nu+2-s-\frac{i\beta}{\nu+n+1})}\end{aligned}$$ and $$\begin{aligned}
\label{remo}
\mathcal{T}(n;\nu,\beta)=n!\Big|\Big(-n-\nu+\frac{i\beta}{n+\nu+1}\Big)_n\Big|^{-1}.\end{aligned}$$ For details on the $K_n^{(\nu,\beta)},$ see Appendix A.
For $n=0$, one can retrieve $$\begin{aligned}
\label{eigen2e}
\phi_0^{(\nu,\beta)}(x)=\frac{2^{\nu+1}\Big|\Gamma\Big(\nu+2+\frac{i\beta}{\nu+1}\Big)\Big|}{\sqrt{L\Gamma(2\nu+3)}}
\sin^{\nu+1}\frac{\pi x}{L}
\exp \Big(\frac{\beta \pi }{\nu+1}\Big[\frac{1}{2}-\frac{x}{L}\Big]\Big).\end{aligned}$$
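As an independent cross-check of (\[eigen1\]) (our own numerical sketch; the values of $\nu,$ $\beta$ and the units $\hbar=L=1,$ $M=1/2$ are arbitrary illustrative choices), one can diagonalize a finite-difference discretization of ${\bf H_{\nu,\beta}}$ with Dirichlet boundary conditions and compare the lowest levels with the analytic spectrum.

```python
import numpy as np

hbar, M, L = 1.0, 0.5, 1.0                 # illustrative units, so epsilon_0 = pi^2
eps0 = hbar**2 * np.pi**2 / (2.0 * M * L**2)
nu, beta = 1.3, 0.7                        # illustrative parameter values

N = 2000                                   # interior grid points (Dirichlet b.c.)
x = np.linspace(0.0, L, N + 2)[1:-1]
h = x[1] - x[0]
V = eps0 * (nu*(nu + 1)/np.sin(np.pi*x/L)**2 - 2.0*beta/np.tan(np.pi*x/L))

# second-order finite-difference Hamiltonian  -hbar^2/(2M) d^2/dx^2 + V(x)
kin = hbar**2 / (2.0 * M * h**2)
H = (np.diag(2.0*kin + V)
     + np.diag(-kin*np.ones(N - 1), 1) + np.diag(-kin*np.ones(N - 1), -1))

E_num = np.linalg.eigvalsh(H)[:4]
E_exact = np.array([eps0*((n + nu + 1)**2 - beta**2/(n + nu + 1)**2) for n in range(4)])
print(E_num)
print(E_exact)
print(np.abs(E_num/E_exact - 1.0))         # relative deviations
```

The last line prints the relative deviations of the four lowest levels, which should shrink further as the grid is refined.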
Factorisation method and hierarchy of the Pöschl-Teller Hamiltonian: main results
----------------------------------------------------------------------------------
Let us use the factorization method [@cooper; @david2; @hounk; @ih51] to find the hierarchy of Pöschl-Teller Hamiltonian. We assume the ground state eigenfunction $\phi_0^{(\nu,\beta)}$ and eigenvalue $E_0^{(\nu,\beta)}$ are known. Then we can define the differential operators $A_{\nu,\beta},\; A_{\nu,\beta}^\dag $ factorizing the Pöschl-Teller Hamiltonian ${\bf H_{\nu,\beta}}$ (\[ham\]), and the associated superpotential $W_{\nu,\beta}$ as follows: $$\begin{aligned}
\label{fact}
{\bf H_{\nu,\beta}}:=\frac{1}{2M}A_{\nu,\beta}^\dag A_{\nu,\beta}+E_0^{(\nu,\beta)},\end{aligned}$$ where the differential operators $A_{\nu,\beta}$ and $A_{\nu,\beta}^\dag$ are defined by $$\begin{aligned}
\label{facto}
A_{\nu,\beta}:=\hbar\frac{d}{dx}+W_{\nu,\beta}(x),\quad A_{\nu,\beta}^\dag :=-\hbar\frac{d}{dx}+W_{\nu,\beta}(x),\end{aligned}$$ acting in the domains $$\begin{aligned}
\label{maine}
\fl
\mathcal{D}_{A_{\nu,\beta}}=\{\phi\in ac(0,L)| \;(\hbar \phi'+W_{\nu,\beta}\phi)\in\mathcal{H}\},\\
\fl
\mathcal{D}_{A_{\nu,\beta}^\dag}=\{\phi\in ac(0,L)|\;\exists \;\tilde{\phi}\in \mathcal{H}:
[\hbar\psi(x)\phi(x)]_0^L=0,\;\langle A_{\nu,\beta} \psi,\phi \rangle= \langle
\psi,\tilde{\phi }\rangle,\;\forall\;\psi\in\mathcal{D}_{A_{\nu,\beta}}\},\nonumber\end{aligned}$$ where $A_{\nu,\beta}^\dag \phi=\tilde{\phi}$. The operator $A_{\nu,\beta}^\dag$ is the adjoint of $A_{\nu,\beta}.$ Besides, considering their common restriction $$\begin{aligned}
\label{maine}
\mathcal{D}_{A}=\{\phi\in AC(0,L)| \;W_{\nu,\beta}\phi\in\mathcal{H}\},\end{aligned}$$ we have $\overline{A_{\nu,\beta}\upharpoonright\mathcal{D}_{A}}=A_{\nu,\beta}$ and $\overline{A_{\nu,\beta}^\dag\upharpoonright\mathcal{D}_{A}}=A_{\nu,\beta}^\dag$. For more details on the role of these operators, see [@berg]. The super-potential $W_{\nu,\beta}$ is given by $$\begin{aligned}
\label{factoz}
W_{\nu,\beta}(x):=-\hbar\frac{[\phi_0^{(\nu,\beta)}(x)]'}{\phi_0^{(\nu,\beta)}(x)}=-\frac{\pi\hbar}{L}\Big( (\nu+1)\cot\frac{\pi x}{L}-\frac{\beta}{\nu+1}\Big),\end{aligned}$$ where $\phi_0^{(\nu,\beta)}(x)$ is defined by (\[eigen2e\]). To derive the $m$-th order hierarchic supersymmetric potential, we proceed as follows:
$\bullet$ Permute the operators $A_{\nu,\beta}^\dag$ and $A_{\nu,\beta} $ to get the superpartner Hamiltonian $
{\bf H_{\nu,\beta}^{(1)}}$ of ${\bf H_{\nu,\beta}:= H_{\nu,\beta}^{(0)}}$: $$\begin{aligned}
\label{facte}
{\bf H^{(1)}_{\nu,\beta}}:=\frac{1}{2M}A_{\nu,\beta} A_{\nu,\beta} ^\dag+E_0^{(\nu,\beta)}=-\frac{\hbar^2}{2M}\frac{d^2}{dx^2}+V_{1,\nu,\beta}(x),\end{aligned}$$ where the partner-potential $V_{1,\nu,\beta}$ of $V_{\nu,\beta}$ is defined by the relation $$\begin{aligned}
\label{poten}
V_{1,\nu,\beta}(x)&:=&\frac{1}{2M}\Big(W^2_{\nu,\beta}(x)+\hbar W'_{\nu,\beta}(x)\Big)+E_0^{(\nu,\beta)},\end{aligned}$$ where $W_{\nu,\beta}$ is given by (\[factoz\]) and $E_0^{(\nu,\beta)}$ by (\[eigen1\]). In the equation $$\begin{aligned}
\label{eqnn}
{\bf H_{\nu,\beta}^{(1)}}\phi_n^{(1,\nu,\beta)}=E_n^{(1,\nu,\beta)}\phi_n^{(1,\nu,\beta)},\end{aligned}$$ the eigenfunction $\phi_n^{(1,\nu,\beta)}$ and the eigenvalue $E_n^{(1,\nu,\beta)}$ of ${\bf H_{\nu,\beta}^{(1)}}$ are related to those of ${\bf H_{\nu,\beta}}$, i.e. $E_n^{(1,\nu,\beta)}:=E_{n+1}^{(\nu,\beta)}$ and $\phi_n^{(1,\nu,\beta)}(x)\propto A_{\nu,\beta}\phi_{n+1}^{(\nu,\beta)}(x)$.
Since we know $E_0^{(1,\nu,\beta)}$ and $\phi_0^{(1,\nu,\beta)}(x)$, the Hamiltonian ${\bf H_{\nu,\beta}^{(1)}}$ can be re-factorized to give $$\begin{aligned}
\label{eqnrn}
{\bf H_{\nu,\beta}^{(1)}}:=\frac{1}{2M} A_{1,\nu,\beta} ^\dag A_{1,\nu,\beta} +E_0^{(1,\nu,\beta)},\end{aligned}$$ where $$\begin{aligned}
\label{factii}
A_{1,\nu,\beta}:=\hbar\frac{d}{dx}+W_{1,\nu,\beta}(x),\quad A_{1,\nu,\beta}^\dag :=-\hbar\frac{d}{dx}+W_{1,\nu,\beta}(x),\end{aligned}$$ with $$\begin{aligned}
\label{factii}
W_{1,\nu,\beta}(x):=-\hbar\frac{[\phi_0^{(1,\nu,\beta)}(x)]'}{\phi_0^{(1,\nu,\beta)}(x)}.\end{aligned}$$
$\bullet$ Permute now the operators $A_{1,\nu,\beta}$ and $A_{1,\nu,\beta}^\dag$ to build the second order hierarchic Hamiltonian ${\bf H_{\nu,\beta}^{(2)}},$ i.e. a superpartner Hamiltonian of ${\bf H_{\nu,\beta}^{(1)}}$: $$\begin{aligned}
\label{ceqnrn}
{\bf H_{\nu,\beta}^{(2)}}:=\frac{1}{2M} A_{1,\nu,\beta}A_{1,\nu,\beta} ^\dag +E_0^{(1,\nu,\beta)}
=-\frac{\hbar^2}{2M}\frac{d^2}{dx^2}+V_{2,\nu,\beta}(x),\end{aligned}$$ with $$\begin{aligned}
V_{2,\nu,\beta}(x)&:=&\frac{1}{2M}\Big(W^2_{1,\nu,\beta}(x)+\hbar W'_{1,\nu,\beta}(x)\Big)+E_0^{(1,\nu,\beta)},\end{aligned}$$ where $W_{1,\nu,\beta}$ is defined in (\[factii\]) and $E_0^{(1,\nu,\beta)}$ in (\[eigen1\]). Start now from the following equation $$\begin{aligned}
{\bf H_{\nu,\beta}^{(2)}}\phi_n^{(2,\nu,\beta)}=E_n^{(2,\nu,\beta)}\phi_n^{(2,\nu,\beta)}.\end{aligned}$$ The eigenvalue $E_n^{(2,\nu,\beta)}$ and the eigenfunction $\phi_n^{(2,\nu,\beta)}$ of $\;{\bf H_{\nu,\beta}^{(2)}}$ are related to those of ${\bf H_{\nu,\beta}^{(1)}}$, i.e. $E_n^{(2,\nu,\beta)}:=E_{n+1}^{(1,\nu,\beta)}$ and $\phi_n^{(2,\nu,\beta)}(x)\propto A_{1,\nu,\beta}\phi_{n+1}^{(1,\nu,\beta)}(x)$.
From known $E_0^{(2,\nu,\beta)}$ and $\phi_0^{(2,\nu,\beta)}$, we can re-factorize the Hamiltonian ${\bf H_{\nu,\beta}^{(2)}}$: $$\begin{aligned}
\label{cqnrn}
{\bf H_{\nu,\beta}^{(2)}}:=\frac{1}{2M} A_{2,\nu,\beta}^\dag A_{2,\nu,\beta} +E_0^{(2,\nu,\beta)}
=-\frac{\hbar^2}{2M}\frac{d^2}{dx^2}+V_{2,\nu,\beta}(x),\end{aligned}$$ where the operators $A_{2,\nu,\beta}$ and $A_{2,\nu,\beta}^\dag$ are given, respectively, by $$\begin{aligned}
A_{2,\nu,\beta}:=\hbar\frac{d}{dx}+W_{2,\nu,\beta}(x),\, A_{2,\nu,\beta}^\dag :=-\hbar\frac{d}{dx}+W_{2,\nu,\beta}(x),\end{aligned}$$ with the superpotential $$\begin{aligned}
\label{are}
W_{2,\nu,\beta}(x)=-\hbar\frac{[\phi_0^{(2,\nu,\beta)}(x)]'}{\phi_0^{(2,\nu,\beta)}(x)},\end{aligned}$$ and the partner potential $V_{2,\nu,\beta}$ $$\begin{aligned}
V_{2,\nu,\beta}(x)&:=&\frac{1}{2M}\Big(W^2_{2,\nu,\beta}(x)-\hbar W'_{2,\nu,\beta}(x)\Big)+E_0^{(2,\nu,\beta)},\end{aligned}$$ where $W_{2,\nu,\beta}$ is defined in (\[are\]) and $E_0^{(2,\nu,\beta)}$ in (\[eigen1\]).
$\bullet$ So, we have shown that one can determine the superpartner Hamiltonian ${\bf H_{\nu,\beta}^{(1)}}$ of ${\bf H_{\nu,\beta}},$ re-factorize ${\bf H_{\nu,\beta}^{(1)}}$ in order to determine its superpartner ${\bf H_{\nu,\beta}^{(2)}},$ then re-factorize ${\bf H_{\nu,\beta}^{(2)}}$ to determine its superpartner ${\bf H_{\nu,\beta}^{(3)}},$ and so on. Each Hamiltonian has eigenfunctions and eigenvalues. Thus, if the first Hamiltonian $
{\bf H_{\nu,\beta}}$ has $r$ eigenfunctions $\phi_n^{(\nu,\beta)}$ related to the eigenvalues $E_n^{(\nu,\beta)},$ $0\leq n \leq (r-1),$ then one can always generate a hierarchy of $(r-1)$ Hamiltonians ${\bf H_{\nu,\beta}^{(2)}}, {\bf H_{\nu,\beta}^{(3)}},\ldots, {\bf H_{\nu,\beta}^{(r)}}$ such that ${\bf H_{\nu,\beta}^{(m)}}$ has the same eigenvalues as ${\bf H_{\nu,\beta}}$, except for the first $(m-1)$ eigenvalues of ${\bf H_{\nu,\beta}}.$ In fact, for $m=2,3,4,\ldots,r$, we define the Hamiltonian in its factorized form as follows: $$\begin{aligned}
\label{wcqnrn}
{\bf H_{\nu,\beta}^{(m)}}:=\frac{1}{2M} A_{m,\nu,\beta}^\dag A_{m,\nu,\beta} +E_0^{(m,\nu,\beta)}
=-\frac{\hbar^2}{2M}\frac{d^2}{dx^2}+V_{m,\nu,\beta}(x),\end{aligned}$$ while its super-partner Hamiltonian ${\bf H_{\nu,\beta}^{(m+1)}}$ is given by $$\begin{aligned}
\label{superp}
{\bf H_{\nu,\beta}^{(m+1)}}:&=&\frac{1}{2M} A_{m,\nu,\beta} A_{m,\nu,\beta}^\dag +E_0^{(m,\nu,\beta)}=-\frac{\hbar^2}{2M}\frac{d^2}{dx^2}+V_{m+1,\nu,\beta}(x),\end{aligned}$$ where the operators $A_{m,\nu,\beta}$ and $A_{m,\nu,\beta}^\dag$ are defined by $$\begin{aligned}
\label{toutt}
A_{m,\nu,\beta}:=\hbar\frac{d}{dx}+W_{m,\nu,\beta}(x),\quad A_{m,\nu,\beta}^\dag: =-\hbar\frac{d}{dx}+W_{m,\nu,\beta}(x),\end{aligned}$$ which do not commute with the Hamiltonians ${\bf H_{\nu,\beta}^{(m)}}$ and ${\bf H_{\nu,\beta}^{(m+1)}},$ but satisfy the intertwining relations $$\begin{aligned}
\label{comm}
{\bf H_{\nu,\beta}^{(m)}}A_{m,\nu,\beta}^\dag=A_{m,\nu,\beta}^\dag{\bf H_{\nu,\beta}^{(m+1)}},\quad
{\bf H_{\nu,\beta}^{(m+1)}}A_{m,\nu,\beta}=A_{m,\nu,\beta}{\bf H_{\nu,\beta}^{(m)}}.\end{aligned}$$ The super-potential $ W_{m,\nu,\beta}$ is given by definition by the relation: $$\begin{aligned}
W_{m,\nu,\beta}(x):=-\hbar\frac{[\phi_0^{(m,\nu,\beta)}(x)]'}{\phi_0^{(m,\nu,\beta)}(x)},\end{aligned}$$ while the potential $V_{m,\nu,\beta}$ and its superpartner potential $V_{m+1,\nu,\beta}$ are defined by $$\begin{aligned}
V_{m,\nu,\beta}(x)&:=&\frac{1}{2M}\Big(W^2_{m,\nu,\beta}(x)-\hbar W'_{m,\nu,\beta}(x)\Big)+E_0^{(m,\nu,\beta)},\cr
V_{m+1,\nu,\beta}(x)&:=&\frac{1}{2M}\Big(W^2_{m,\nu,\beta}(x)+\hbar W'_{m,\nu,\beta}(x)\Big)+E_0^{(m,\nu,\beta)}.\end{aligned}$$ The energy spectrum $E_n^{(m+1,\nu,\beta)}$ and the eigenfunction $\phi_{n}^{(m+1,\nu,\beta)}$ of the super-partner Hamiltonian ${\bf H_{\nu,\beta}^{(m+1)}}$ are related to those of ${\bf H_{\nu,\beta}^{(m)}}$, i. e. $E_n^{(m+1,\nu,\beta)}:=E_{n+1}^{(m,\nu,\beta)}$ and $\phi_{n}^{(m+1,\nu,\beta)}(x)\propto A_{m,\nu,\beta}\phi_{n+1}^{(m,\nu,\beta)}(x)$ as formulated below.
\[ppp\] The energy spectrum $E_n^{(m+1,\nu,\beta)}$ and eigenfunction $\phi_{n}^{(m+1,\nu,\beta)}$ that solve the time-independent Schrödinger equation for the $(m+1)-$order hierarchic superpartner Hamiltonian $\;{\bf H_{\nu,\beta}^{(m+1)}}$, i.e. $\;{\bf H_{\nu,\beta}^{(m+1)}}\phi_{n}^{(m+1,\nu,\beta)}=E_n^{(m+1,\nu,\beta)}\phi_{n}^{(m+1,\nu,\beta)},$ are given, respectively, by: $$\begin{aligned}
E_n^{(m+1,\nu,\beta)}&=&\varepsilon_0\Big((n+m+\nu+2)^2-\frac{\beta^2}{(n+m+\nu+2)^2}\Big),\end{aligned}$$ $$\begin{aligned}
\label{tote}
\phi_{n}^{(m+1,\nu,\beta)}(x)=\frac{A_{m,\nu,\beta}A_{m-1,\nu,\beta}\ldots A_{1,\nu,\beta}A_{\nu,\beta}\phi_{n+m+1}^{(\nu,\beta)}(x)}{\sqrt{(2M)^{m+1}\prod_{k=0}^{m}\Big(E_{n+m+1}^{( \nu,\beta)}-E_k^{( \nu,\beta)}\Big)}}.\end{aligned}$$
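The level matching underlying this proposition, $E_n^{(m+1,\nu,\beta)}=E_{n+m+1}^{(\nu,\beta)}$, can be confirmed symbolically; the following one-line sympy sketch (ours) makes the identification explicit.

```python
import sympy as sp

n, m, nu, beta, eps0 = sp.symbols('n m nu beta epsilon_0', positive=True)

E = lambda k: eps0*((k + nu + 1)**2 - beta**2/(k + nu + 1)**2)        # eq. (eigen1)
E_super = eps0*((n + m + nu + 2)**2 - beta**2/(n + m + nu + 2)**2)    # proposition above

print(sp.simplify(E_super - E(n + m + 1)))                            # prints 0
```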
As a matter of explicit computation, for the particular value of $m=0,$ we get
1. the energy spectrum $$\begin{aligned}
E_n^{(1,\nu,\beta)}&=&\varepsilon_0\Big((n+\nu+2)^2-\frac{\beta^2}{(n+\nu+2)^2}\Big),\end{aligned}$$
2. the eigenfunction $$\begin{aligned}
\fl
\phi_{n}^{(1,\nu,\beta)}(x)=\frac{2^{n+\nu+2}e^{\frac{\beta\pi}{2(n+\nu+2)}}
\mathcal{T}(n+1;\nu,\beta)
}{\sqrt{2ML(n+1)\mathcal{O}(n+1;\nu,\beta)\Delta_{n+1}^0 E_0^{(\nu,\beta)}}}\Bigg[\Bigg[\frac{2M(n+1)^2\overline{\Delta_{n+1}^0 E_0^{(\nu,\beta)}}}{n+2\nu+3}\Bigg]^{1/2}\cr
\times\cos \Big(\frac{\pi x}{L}
-\alpha_{\nu,\beta}(n)\Big)P_{n+1}^{(a_{n+1},\bar{a}_{n+1})}\Big(i\cot \frac{\pi x}{L}\Big)+\frac{i\pi\hbar(n+2\nu+2)}{2L\sin\frac{\pi x}{L}}\cr
\times P_{n}^{(a_{n+1}+1,\bar{a}_{n+1}+1)}\Big(i\cot \frac{\pi x}{L}\Big)\Bigg]
\sin^{\nu+n+1}\frac{\pi x}{L}\exp \Big(-\frac{\beta \pi x}{L(\nu+n+2)}\Big).\end{aligned}$$
3. the superpotential $$\begin{aligned}
W_{1,\nu,\beta}(x)&=&-\frac{\hbar\pi}{L}
\Big((\nu+2)\cot\frac{\pi x}{L}-\frac{\beta}{\nu+2}\Big),\end{aligned}$$
4. the potential $$\begin{aligned}
V_{1,\nu,\beta}(x)&=&\varepsilon_0
\Big(\frac{(\nu+1)(\nu+2)}{\sin^2\frac{\pi x}{L}}-2\beta\cot\frac{\pi x}{L}\Big),\end{aligned}$$ where $$\begin{aligned}
\fl
\alpha_{\nu,\beta}(n)=\arctan\Bigg(\frac{\beta}{(\nu+1)(\nu+n+2)}\Bigg),\qquad \overline{\Delta_{n+1}^0 E_0^{(\nu,\beta)}}:=\frac{E_{n+1}^{(\nu,\beta)}-E_0^{(\nu,\beta)}}{n+1}.\end{aligned}$$
It is worth noticing that the potentials $V_{\varepsilon_0,\nu,\beta}$ and $V_{m+1,\nu,\beta}$ are related in a simpler way, i.e $$\begin{aligned}
V_{m+1, \nu,\beta}(x)=V_{\varepsilon_0, \nu,\beta}(x)-
\frac{\hbar^2(m+1)(2\nu+m+2)}{2M}
\frac{d^2}{dx^2 }\ln\Big(\sin\frac{\pi x}{L}\Big).\end{aligned}$$
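The explicit formulas above can be cross-checked symbolically. The sketch below (ours) verifies, with the superpotential (\[factoz\]): (i) the factorization (\[fact\]), (ii) the $m=0$ superpartner potential of item 4, and (iii) the relation just displayed; the explicit form of $V_{m+1,\nu,\beta}$ used in (iii) is our assumption, obtained from the $m=0$ case by the shift $\nu(\nu+1)\rightarrow(\nu+m+1)(\nu+m+2)$.

```python
import sympy as sp

x, L, hbar, M, nu, beta, m = sp.symbols('x L hbar M nu beta m', positive=True)
eps0 = hbar**2*sp.pi**2/(2*M*L**2)
u = sp.pi*x/L
cotu = sp.cos(u)/sp.sin(u)

V  = eps0*(nu*(nu + 1)/sp.sin(u)**2 - 2*beta*cotu)                    # eq. (hamk)
E0 = eps0*((nu + 1)**2 - beta**2/(nu + 1)**2)                         # eq. (eigen1), n = 0
W  = -sp.pi*hbar/L*((nu + 1)*cotu - beta/(nu + 1))                    # eq. (factoz)

# (i) the factorization: (W^2 - hbar W')/(2M) + E_0 reproduces the potential
print(sp.simplify((W**2 - hbar*sp.diff(W, x))/(2*M) + E0 - V))        # 0

# (ii) the m = 0 superpartner: (W^2 + hbar W')/(2M) + E_0 gives V_{1,nu,beta}
V1 = eps0*((nu + 1)*(nu + 2)/sp.sin(u)**2 - 2*beta*cotu)
print(sp.simplify((W**2 + hbar*sp.diff(W, x))/(2*M) + E0 - V1))       # 0

# (iii) the displayed relation, assuming V_{m+1} has the shifted centrifugal term
Vm1 = eps0*((nu + m + 1)*(nu + m + 2)/sp.sin(u)**2 - 2*beta*cotu)
log_term = -hbar**2*(m + 1)*(2*nu + m + 2)/(2*M)*sp.diff(sp.log(sp.sin(u)), x, 2)
print(sp.simplify(V + log_term - Vm1))                                # 0
```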
Relevant operator properties
============================
This section is devoted to the investigation of relevant properties of the operators ${\bf H_{\nu,\beta}^{(m+1)}}$ and ${\bf H_{\nu,\beta}}.$
\[propae3\] For the operators $A_{m,\nu,\beta}$ and $A_{m,\nu,\beta}^\dag$, there is a pair of $(m+1)-$ order hierarchic operators intertwining ${\bf H_{\nu,\beta}}$ and ${\bf H_{\nu,\beta}^{(m+1)}}$, namely $$\begin{aligned}
\label{46}
{\bf H_{\nu,\beta}}B_m^\dag =B_m^\dag {\bf H_{\nu,\beta}^{(m+1)}},\quad B_m{\bf H_{\nu,\beta}} ={\bf H_{\nu,\beta}^{(m+1)}}B_m,\end{aligned}$$ where $$\begin{aligned}
\label{seu}
B_m:=A_{m,\nu,\beta}\ldots A_{1,\nu,\beta}A_{\nu,\beta},\;\; B_m^\dag:=A_{\nu,\beta}^\dag A_{1,\nu,\beta}^\dag\ldots A_{m,\nu,\beta}^\dag. \end{aligned}$$
[**Proof.**]{} Multiplying the intertwining relation (\[comm\]) on the left by the operator $A_{m-1,\nu,\beta}^\dag$, we have $$\begin{aligned}
A_{m-1,\nu,\beta}^\dag{\bf H_{\nu,\beta}^{(m)}}A_{m,\nu,\beta}^\dag =A_{m-1,\nu,\beta}^\dag A_{m,\nu,\beta}^\dag {\bf H_{\nu,\beta}^{(m+1)}}\end{aligned}$$ which is equivalent to $$\begin{aligned}
\label{proor}
{\bf H_{\nu,\beta}^{(m-1)}}A_{m-1,\nu,\beta}^\dag A_{m,\nu,\beta}^\dag =A_{m-1,\nu,\beta}^\dag A_{m,\nu,\beta}^\dag {\bf H_{\nu,\beta}^{(m+1)}}.\end{aligned}$$ By continuing the process until the order $m-1$, we have $$\begin{aligned}
\label{proorf}
{\bf H_{\nu,\beta}^{(1)}}A_{1,\nu,\beta}^\dag\ldots A_{m,\nu,\beta}^\dag = A_{1,\nu,\beta}^\dag\ldots A_{m,\nu,\beta}^\dag {\bf H_{\nu,\beta}^{(m+1)}}.\end{aligned}$$ Multiplying equation (\[proorf\]) on the left by the operator $A_{\nu,\beta}^\dag$, we have $$\begin{aligned}
\label{proorfe}
A_{\nu,\beta}^\dag{\bf H_{\nu,\beta}^{(1)}}A_{1,\nu,\beta}^\dag\ldots A_{m,\nu,\beta}^\dag = A_{\nu,\beta}^\dag A_{1,\nu,\beta}^\dag\ldots A_{m,\nu,\beta}^\dag {\bf H_{\nu,\beta}^{(m+1)}}\end{aligned}$$ which is equivalent to ${\bf H_{\nu,\beta}}B_m^\dag = B_m^\dag {\bf H_{\nu,\beta}^{(m+1)}}$. Similarly we get $B_m {\bf H_{\nu,\beta}} = {\bf H_{\nu,\beta}^{(m+1)}}B_m$.$\square$
\[ttttt\] For any positive integers $n, m,$ the following result holds: $$\begin{aligned}
\label{5dp1ed}
B_nB_m^\dag&=&\left\{\begin{array}{ll}(2M)^{m+1}{_{m+1}\Lambda}_{n,\nu,\beta}\prod_{k=0}^{m}\Big({\bf H_{\nu,\beta}^{(m+1)}}-E_k^{(\nu,\beta)}\Big) &n > m\\\\
(2M)^{n+1}\prod_{k=0}^{n}\Big({\bf H_{\nu,\beta}^{(n+1)}}-E_k^{(\nu,\beta)}\Big)
{^{n+1}\Theta}_{m,\nu,\beta} &n < m. \end{array}
\right.\end{aligned}$$ In particular, if $n = m$, we have $$\begin{aligned}
\label{5dp}
B_mB_m^\dag&=&(2M)^{m+1}\prod_{k=0}^{m}\Big({\bf H_{\nu,\beta}^{(m+1)}}-E_k^{(\nu,\beta)}\Big),\\
\label{5dpp}
B_m^\dag B_m&=&(2M)^{m+1}\prod_{k=0}^{m}\Big({\bf H_{\nu,\beta}}-E_k^{(\nu,\beta)}\Big),\end{aligned}$$ where the operators $ {_{m+1}\Lambda}_{n,\nu,\beta}$ and ${^{n+1}\Theta}_{m,\nu,\beta}$ are given by $$\begin{aligned}
\label{5dedp}
\fl
{_{m+1}\Lambda}_{n,\nu,\beta}:=A_{n,\nu,\beta}A_{n-1,\nu,\beta}\ldots A_{m+1,\nu,\beta},\;\;
{^{n+1}\Theta}_{m,\nu,\beta}:=A_{n+1,\nu,\beta}^\dag A_{n+2,\nu,\beta}^\dag\ldots A_{m,\nu,\beta}^\dag.\end{aligned}$$
[**Proof.**]{} From (\[seu\]), we have $$\begin{aligned}
B_nB_m^\dag&=&A_{n,\nu,\beta} A_{n-1,\nu,\beta}\ldots (A_{\nu,\beta}A_{\nu,\beta}^\dag) A_{1,\nu,\beta}^\dag\ldots A_{m,\nu,\beta}^\dag\cr
&=&2MA_{n,\nu,\beta} \ldots A_{1,\nu,\beta}({\bf H_{\nu,\beta}^{(1)}}-E_0^{(\nu,\beta)})A_{1,\nu,\beta}^\dag\ldots A_{m,\nu,\beta}^\dag\cr
&=&2MA_{n,\nu,\beta} \ldots A_{1,\nu,\beta}A_{1,\nu,\beta}^\dag({\bf H_{\nu,\beta}^{(2)}}-E_0^{(\nu,\beta)})A_{2,\nu,\beta}^\dag\ldots A_{m,\nu,\beta}^\dag\cr
&\vdots&\cr
&=&(2M)^{m+1}{_{m+1}\Lambda}_{n,\nu,\beta}\prod_{k=0}^{m}\Big({\bf H_{\nu,\beta}^{(m+1)}}-E_k^{(\nu,\beta)}\Big)\end{aligned}$$ if $n>m$, $$\begin{aligned}
B_nB_m^\dag&=&A_{n,\nu,\beta} A_{n-1,\nu,\beta}\ldots (A_{\nu,\beta}A_{\nu,\beta}^\dag) A_{1,\nu,\beta}^\dag\ldots A_{m,\nu,\beta}^\dag\cr
&=&2MA_{n,\nu,\beta} \ldots A_{1,\nu,\beta}({\bf H_{\nu,\beta}^{(1)}}-E_0^{(\nu,\beta)})A_{1,\nu,\beta}^\dag\ldots A_{m,\nu,\beta}^\dag\cr
&=&2MA_{n,\nu,\beta} \ldots A_{1,\nu,\beta}A_{1,\nu,\beta}^\dag({\bf H_{\nu,\beta}^{(2)}}-E_0^{(\nu,\beta)})A_{2,\nu,\beta}^\dag\ldots A_{m,\nu,\beta}^\dag\cr
&\vdots&\cr
&=&(2M)^{n+1}\prod_{k=0}^{n}\Big({\bf H_{\nu,\beta}^{(n+1)}}-E_k^{(\nu,\beta)}\Big)
{^{n+1}\Theta}_{m,\nu,\beta}\end{aligned}$$ if $n<m$.\
For $n=m$, the proof is immediate. $\square$
\[colo\] The operators ${_{m+1}\Lambda}_{n,\nu,\beta}$ and ${^{n+1}\Theta}_{m,\nu,\beta}$ satisfy the following identities: $$\begin{aligned}
\label{4etttp7}
{_{m+1}\Lambda}_{n,\nu,\beta}\,{_{m+1}\Lambda}_{n,\nu,\beta}^\dag&=&
(2M)^{n-m}\prod_{k=m+1}^{n}\Big({\bf H_{\nu,\beta}^{(n+1)}}-E_{k}^{(\nu,\beta)}\Big),\\
{_{m+1}\Lambda}_{n,\nu,\beta}^\dag\,{_{m+1}\Lambda}_{n,\nu,\beta}&=&
(2M)^{n-m}\prod_{k=m+1}^{n}\Big({\bf H_{\nu,\beta}^{(m+1)}}-E_{k}^{(\nu,\beta)}\Big),\\
\label{4ettt7z}
{^{n+1}\Theta}_{m,\nu,\beta} {^{n+1}\Theta}_{m,\nu,\beta}^\dag&=&
(2M)^{m-n}\prod_{k=n+1}^{m}\Big({\bf H_{\nu,\beta}^{(n+1)}}-E_{k}^{(\nu,\beta)}\Big),\\
{^{n+1}\Theta}_{m,\nu,\beta}^\dag {^{n+1}\Theta}_{m,\nu,\beta}&=&
(2M)^{m-n}\prod_{k=n+1}^{m}\Big({\bf H_{\nu,\beta}^{(m+1)}}-E_{k}^{(\nu,\beta)}\Big).\end{aligned}$$
[**Proof.**]{} The proof follows directly from (\[comm\]) and (\[5dedp\]). $\square$
Now, considering the supercharges $$Q:=
\left(\begin{array}{cc}0 & 0\\ B_m& 0\end{array}\right),\quad Q^\dag:=\left(\begin{array}{cc}0 & B_m^\dag\\
0 & 0\end{array}\right)$$ and the SUSY Hamiltonian ${\bf H_{\nu,\beta}^{ss}}$ given by $$\begin{aligned}
\label{susyh}
\fl
{\bf H_{\nu,\beta}^{ss}}:=
(2M)^{m+1}\Bigg[\begin{array}{cc}\prod_{k=0}^{m}({\bf H_{\nu,\beta}}-E_k^{(\nu,\beta)}) & 0\\
0 &\prod_{k=0}^{m}\Big({\bf H_{\nu,\beta}}-E_k^{(\nu,\beta)}+\frac{\varepsilon_0(m+1)(2\nu+m+2)}{\sin^2\frac{\pi x}{L}}\Big)\end{array}\Bigg],\end{aligned}$$ we readily check, like in [@david2; @hounk; @wit], that
$$\begin{aligned}
\label{algera}
{\bf H_{\nu,\beta}^{ss}}=\left\{Q,Q^\dag\right\}:=QQ^\dag+Q^\dag Q,\quad [{\bf H_{\nu,\beta}^{ss}},Q]:={\bf H_{\nu,\beta}^{ss}}Q-Q{\bf H_{\nu,\beta}^{ss}}=0\cr
[{\bf H_{\nu,\beta}^{ss}},Q^\dag]:={\bf H_{\nu,\beta}^{ss}}Q^\dag-Q^\dag{\bf H_{\nu,\beta}^{ss}}=0.\end{aligned}$$
In terms of the Hermitian supercharges $$Q_1:=\frac{1}{2}\left(\begin{array}{cc}0 & B_m^\dag \\
B_m&0 \end{array}\right) \quad\mbox{and}\quad Q_2:=\frac{1}{2i}\left(\begin{array}{cc}0 & B_m^\dag \\
-B_m&0 \end{array}\right)$$ the superalgebra (\[algera\]) takes the form $$\begin{aligned}
\label{49}
[Q_i, {\bf H_{\nu,\beta}^{ss}}]=0,\quad \{Q_i, Q_j\}:=Q_iQ_j+Q_jQ_i=
\delta_{ij}{\bf H_{\nu,\beta}^{ss}}, \quad i,j=1,2.\end{aligned}$$
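As a quick consistency check of the superalgebra (\[algera\]), note that for any operator $B$ the supercharges defined above satisfy $\{Q,Q^\dag\}=\mathrm{diag}(B^\dag B,\,B B^\dag)$, which commutes with both $Q$ and $Q^\dag$. The following sketch verifies these block-matrix identities numerically, with a randomly generated finite matrix standing in for $B_m$ (an illustrative assumption only; in the text $B_m$ is the $(m+1)$-order differential operator of (\[seu\])).

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
# Random complex matrix standing in for B_m (illustration only).
B = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))

Z = np.zeros((d, d), dtype=complex)
Q = np.block([[Z, Z], [B, Z]])        # supercharge Q
Qd = Q.conj().T                        # its adjoint Q^dagger

# Q is nilpotent and {Q, Q^dagger} is block diagonal: diag(B^dagger B, B B^dagger).
H_ss = Q @ Qd + Qd @ Q
assert np.allclose(Q @ Q, 0)
assert np.allclose(H_ss, np.block([[B.conj().T @ B, Z], [Z, B @ B.conj().T]]))

# The SUSY Hamiltonian commutes with both supercharges.
assert np.allclose(H_ss @ Q - Q @ H_ss, 0)
assert np.allclose(H_ss @ Qd - Qd @ H_ss, 0)

# The Hermitian combinations Q1, Q2 anticommute with each other.
Q1 = (Q + Qd) / 2
Q2 = (Q - Qd) / 2j
assert np.allclose(Q1 @ Q2 + Q2 @ Q1, 0)
```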
\[prop4o\] The actions of $B_m^\dag$ and $B_m$ on the normalized eigenfunctions $\phi_n^{(m+1,\nu,\beta)}$ and $\phi_n^{(\nu,\beta)}$ of $\;{\bf H_{\nu,\beta}^{(m+1)}}$ and $\;{\bf H_{\nu,\beta}},$ associated to the eigenvalues $E_{n}^{(m+1,\nu,\beta)}$ and $E_n^{(\nu,\beta)},$ are given by $$\begin{aligned}
\label{48}
\fl
B_m^\dag \phi_n^{(m+1,\nu,\beta)}(x)
&=2^{n+m+\nu+2}(\hbar\pi L^{-1})^{m+1}\mathcal{T}(n+m+1;\nu,\beta)\Big[L\mathcal{O}(n+m+1;\nu,\beta)\Big]^{-\frac{1}{2}}\cr
&\times
\exp\Big[\frac{\beta\pi}{n+m+\nu+2}\Big(\frac{1}{2}-\frac{x}{L}\Big)\Big]\mathcal{M}(n,m;\nu,\beta)\sin^{n+m+\nu+2}\Big(\frac{\pi x}{L}\Big)\cr
&\times P_{n+m+1}^{(a_{n+m+1},\bar{a}_{n+m+1})}\Big(i\cot \frac{\pi x}{L}\Big),\end{aligned}$$ and $$\begin{aligned}
\label{cr}
B_m\phi_{n+m+1}^{(\nu,\beta)}(x)= (\pi\hbar L^{-1})^{m+1}\mathcal{M}(n,m;\nu,\beta)
\phi_n^{(m+1,\nu,\beta)}(x),\end{aligned}$$ respectively, where $\mathcal{M}(n,m;\nu,\beta)$ is expressed by $$\begin{aligned}
\fl
\mathcal{M}^2(n,m;\nu,\beta)
=\prod_{k=0}^{m}\frac{(n+m-k+1)}{(n+m+2\nu+k+3)^{-1}}
\Big(1+\frac{\beta^2}{[(k+\nu+1)(n+m+\nu+2)]^2}\Big).\end{aligned}$$
[**Proof.**]{} The proof is immediate by using (\[tote\]) and (\[5dpp\]). $\square$
\[prosp4o\] Let $|\phi_n^{(\nu,\beta)}\rangle$ and $|\phi_n^{(m+1, \nu,\beta)}\rangle$ be two states in the Hilbert space $\mathcal{H}$. The mean values of the operators $B_mB_m^\dag$ and $B_m^\dag B_m$ are given by $$\begin{aligned}
\label{4d8}
\langle B_mB_m^\dag \rangle_{\phi_n^{(m+1,\nu,\beta)}}&=&\Big[(\pi\hbar L^{-1})^{m+1}\mathcal{M}(n,m;\nu,\beta)\Big]^2,\\
\label{poty}
\langle B_m^\dag B_m \rangle_{\phi_n^{(\nu,\beta)}}&=&\Big[(\pi\hbar L^{-1})^{m+1}\mathcal{M}(n-m-1,m;\nu,\beta)\Big]^2,\end{aligned}$$ where $\langle A_{\nu,\beta} \rangle_{\phi_n^{(\nu,\beta)}}:=\int_0^L dx\;\overline{\phi_{n}^{(\nu,\beta)}(x)}A_{\nu,\beta}\phi_{n}^{(\nu,\beta)}(x)$.
[**Proof.**]{} The proof uses Proposition \[propae3\]. $\square$
\[col2\] The operators ${_{m+1}\Lambda}_{n,\nu,\beta}$ and ${^{n+1}\Theta}_{m,\nu,\beta}$ satisfy the following identities: $$\begin{aligned}
\label{deux}
\fl
\langle{_{m+1}\Lambda}_{n,\nu,\beta}\,{_{m+1}\Lambda}_{n,\nu,\beta}^\dag\rangle_{\phi_n^{(n+1,\nu,\beta)}}=\Big[(\hbar\pi L^{-1})^{n-m}\Big]^2\mathcal{N}(n,n;\nu,\beta)\mathcal{N}^{-1}(n,m;\nu,\beta),\\
\label{lave}
\fl
\langle{_{m+1}\Lambda}_{n,\nu,\beta}^\dag{_{m+1}\Lambda}_{n,\nu,\beta}\rangle_{\phi_n^{(m+1,\nu,\beta)}}=\Big[(\hbar\pi L^{-1})^{n-m}\mathcal{M}(m,n;\nu,\beta)\mathcal{M}^{-1}(n,m;\nu,\beta)\Big]^2,\\
\label{des}
\fl
\langle {^{n+1}\Theta}_{m,\nu,\beta} {^{n+1}\Theta}_{m,\nu,\beta}^\dag\rangle_{\phi_n^{(n+1,\nu,\beta)}}=\Big[(\hbar\pi L^{-1})^{m-n}\Big]^2\mathcal{N}^{-1}(n,n;\nu,\beta)\mathcal{N}(n,m;\nu,\beta),\\
\label{oux}
\fl
\langle {^{n+1}\Theta}_{m,\nu,\beta}^\dag{^{n+1}\Theta}_{m,\nu,\beta}\rangle_{\phi_n^{(m+1,\nu,\beta)}}=\Big[(\hbar\pi L^{-1})^{m-n}\mathcal{M}(n,m;\nu,\beta)\mathcal{M}^{-1}(m,n;\nu,\beta)\Big]^2,\end{aligned}$$ where $\mathcal{N}(n,m;\nu,\beta)$ is given by $$\begin{aligned}
\fl
\mathcal{N}(n,m;\nu,\beta)
=\prod_{k=0}^{m}\frac{(2n-k+1)}{(2n+2\nu+k+3)^{-1}}
\Big(1+\frac{\beta^2}{[(k+\nu+1)(2n+\nu+2)]^2}\Big)\end{aligned}$$ and $\mathcal{N}(n,n;\nu,\beta)=\mathcal{M}^2(n,n;\nu,\beta).$
Remark that the equations (\[des\]) and (\[oux\]) can be obtained by replacing $\mathcal{N}$ and $n,m$ by $\mathcal{N}^{-1}$ and $m,n$ in (\[deux\]) and (\[lave\]), respectively.
Coherent states
================
Let $|\zeta_z^{[m,\nu,\beta]}\rangle, z\in\mathbb{C}$, be the eigenstates of the operator $A_{m,\nu,\beta}$ associated with the eigenvalue $z$. Then, $$\begin{aligned}
\label{toto}
|\zeta_z^{[m,\nu,\beta]}\rangle=\mathcal{R}
\exp \Big(\frac{zx}{\hbar}\Big)
\phi_0^{(m,\nu,\beta)}(x),\quad \forall \,x\,\in[0,L],\end{aligned}$$ where $\mathcal{R}$ is the normalization constant. In order to determine $\mathcal{R}$, let us consider the set $\mathcal{K}=\Big\{(q,p)|q\in[0,L], p\in\mathbb{R}\Big\}$ which corresponds to the classical phase space of the Pöschl-Teller problem. We re-express the operator ${\bf A}_{m,\nu,\beta}$ in terms of ${\bf Q}$ and ${\bf P}$, i.e. ${\bf A}_{m,\nu,\beta}=W_{m,\nu,\beta}({\bf Q})+i{\bf P}, $ where the actions of ${\bf Q}$ and ${\bf P}$ on a function $\phi$ are given by ${\bf Q}:\phi(x)\rightarrow x\phi(x)$ and ${\bf P}:\phi(x)\rightarrow -i\hbar\phi'(x)$ on $\mathcal{D}_{A}$. Later on, we change the variable $z$ as $z=W_{m,\nu,\beta}(q)+ip$ [@berg; @gazeau], i.e. $|\zeta_{W_{m,\nu,\beta}(q)+ip}^{[m,\nu,\beta]}\rangle=|\eta_{q,p}^{[m,\nu,\beta]}\rangle.$ Then, equation (\[toto\]) becomes $$\begin{aligned}
\label{totoer}
|\eta_{q,p}^{[m,\nu,\beta]}\rangle=\mathcal{R}_m^{(\nu,\beta)}(q)
\exp \Big(\frac{(W_{m,\nu,\beta}(q)+ip)}{\hbar}x\Big)
\phi_0^{(m,\nu,\beta)}(x), \, \forall \,x\in[0,L],\end{aligned}$$ where $\phi_0^{(m,\nu,\beta)}$ is given in (\[tote\]). The normalization constant $\mathcal{R}_m^{(\nu,\beta)}(q)$ is given by $$\begin{aligned}
\label{prposttt}
\mathcal{R}_m^{(\nu,\beta)}(q)
=\exp\Big(-\frac{LW_{m,\nu,\beta}(q)}{2\hbar}\Big)\widetilde{\mathcal{O}}_m(q;L;\nu,\beta)
,\end{aligned}$$ where $\widetilde{\mathcal{O}}_m(q;L;\nu,\beta)$ is provided by the expression $$\begin{aligned}
\label{wide}
\fl
\widetilde{\mathcal{O}}_m^2(q;L;\nu,\beta)&=&
\sum_{k=0}^m \frac{(-m,-m-2\nu-1)_k}{(-m-\nu-\frac{i\beta}{\nu+m+1})_kk!\Gamma(m+\nu+2-k+\frac{i\beta}{\nu+m+1})}\cr
&\times&\sum_{s=0}^m\frac{(-m,-m-2\nu-1)_s\Gamma(2m+2\nu-s-k+3)}{ (-m-\nu+\frac{i\beta}{\nu+m+1})_s s! \Gamma(m+\nu+2-s-\frac{i\beta}{\nu+m+1})}\cr
&\times&\Bigg[\sum_{k=0}^m \frac{(-m,-2\nu-m-1)_k}{(-m-\nu-\frac{i\beta}{\nu+m+1})_k\Gamma(m+\nu+2-k+i(\nu+m+1)\cot\frac{\pi q}{L}) k! }\cr
&\times&\sum_{s=0}^m\frac{(-m,-2\nu-m-1)_s\Gamma(2m+2\nu-s-k+3)}{(-m-\nu+\frac{i\beta}{\nu+m+1})_s\Gamma(m+\nu+2-s-i(\nu+m+1)\cot\frac{\pi q}{L})s!}\Bigg]^{-1}.\end{aligned}$$ For computational details, see Appendix C. In the limit $m\rightarrow 0$, the coherent states (\[totoer\]), (\[prposttt\]) reduce to those obtained by Bergeron et al [@berg].
The scalar product of two coherent states $|\eta_{q,p}^{[m,\nu,\beta]}\rangle$ and $|\eta_{q',p'}^{[m,\nu',\beta']}\rangle$ satisfies $$\begin{aligned}
\label{prposqtt}
\fl
\langle\eta_{q',p'}^{[m,\nu',\beta']}|\eta_{q,p}^{[m,\nu,\beta]}\rangle=Le^{\frac{L\alpha}{2\hbar}}\mathcal{R}_m^{(\nu',\beta')}(q')K_m^{(\nu',\beta')}\mathcal{R}_m^{(\nu,\beta)}(q)K_m^{(\nu,\beta)}\widetilde{\mathcal{T}}(m;\nu,\nu',\beta,\beta',\frac{ L\alpha}{2\pi\hbar}),\end{aligned}$$ where $\alpha=W_{m,\nu,\beta}(q)+W_{m,\nu',\beta'}(q')+i(p-p')$ and $$\begin{aligned}
\label{aaaa}
\fl
\widetilde{\mathcal{T}}(m;\nu,\nu',\beta,\beta',\frac{ L\alpha}{2\pi\hbar})&=&\Big(-m-\nu-i\frac{\beta}{m+\nu+1},-m-\nu'+i\frac{\beta'}{m+\nu'+1}\Big)_m\cr
&\times&\sum_{k=0}^m \frac{2^{-2m-\nu-\nu'-2}(-m,-m-\nu-\nu'-1)_k}{(-\nu-m-i\frac{\beta}{\nu+m+1})_k\Gamma(m+\frac{\nu+\nu'}{2}+2-k-i\frac{L\alpha}{2\pi\hbar}) k! m! }\cr
&\times&\sum_{s=0}^m\frac{(-m,-m-\nu-\nu'-1)_s\Gamma(2m+\nu+\nu'+3-k-s)}{(-m-\nu'+i\frac{\beta'}{\nu'+m+1})_s\Gamma(m+\frac{\nu+\nu'}{2}+2-s+i\frac{L\alpha}{2\pi\hbar}) s!m!}.\end{aligned}$$
\[labels\] The coherent states defined in (\[totoer\])
1. are normalized $$\begin{aligned}
\label{qqqq}
\langle\eta_{q,p}^{[m,\nu,\beta]}|\eta_{q,p}^{[m,\nu,\beta]}\rangle=1,\end{aligned}$$
2. are not orthogonal to each other, i.e. $$\begin{aligned}
\label{ppsqtt}
\langle\eta_{q',p'}^{[m,\nu',\beta']}|\eta_{q,p}^{[m,\nu,\beta]}\rangle\neq \delta(q-q')\delta(p-p'),\end{aligned}$$
3. are continuous in $q,p$,
4. resolve the identity, i.e. $$\begin{aligned}
\label{uint}
\int_{\mathcal{K}}\frac{dq\,dp}{2\pi\hbar}|\eta_{q,p}^{[m,\nu,\beta]}\rangle\langle\eta_{q,p}^{[m,\nu,\beta]}|={\bf 1}.\end{aligned}$$
[**Proof.**]{}
$\bullet$ Non orthogonality: From (\[prposqtt\]) one can see that $$\begin{aligned}
\label{prpttttt}
\langle\eta_{q',p'}^{[m,\nu',\beta']}|\eta_{q,p}^{[m,\nu,\beta]}\rangle\neq0,\end{aligned}$$ which signifies that the CS are not orthogonal.
$\bullet$ Normalization: in the limit when the parameters $\nu'\rightarrow \nu,\beta'\rightarrow\beta,q'\rightarrow q$ and $p'\rightarrow p$, the quantity $$\begin{aligned}
Le^{\frac{L\alpha}{2\hbar}}\widetilde{\mathcal{T}}\Big(m;\nu,\nu',\beta,\beta',\frac{ L\alpha}{2\pi\hbar}\Big)\rightarrow\frac{1}{(\mathcal{R}_m^{(\nu,\beta)}(q)K_m^{(\nu,\beta)})^2} \mbox{ and }
\langle\eta_{q,p}^{[m,\nu,\beta]}|\eta_{q,p}^{[m,\nu,\beta]}\rangle=1,\end{aligned}$$ i.e., the CS are normalized.
$\bullet$ Continuity in $q,p$ $$\begin{aligned}
\label{contu}
||(|\eta_{q',p'}^{[m,\nu,\beta]}\rangle-|\eta_{q,p}^{[m,\nu,\beta]}\rangle)||^2=2\Big(1-\mathcal{R}e\langle\eta_{q',p'}^{[m,\nu,\beta]}|\eta_{q,p}^{[m,\nu,\beta]}\rangle\Big).\end{aligned}$$ So, $|||\eta_{q',p'}^{[m,\nu,\beta]}\rangle-|\eta_{q,p}^{[m,\nu,\beta]}\rangle||^2\rightarrow 0$ as $|q'-q|, |p'-p|\rightarrow 0$, since $\langle\eta_{q',p'}^{[m,\nu,\beta]}|\eta_{q,p}^{[m,\nu,\beta]}\rangle
\rightarrow 1$ as $|q'-q|, |p'-p|\rightarrow 0$.
$\bullet$ Resolution of the identity\
Here we proceed as in [@berg] to show that $$\begin{aligned}
\label{ui}
\fl
\int_\mathbb{R}\frac{4^{\nu+m}\Gamma(m+\nu+1-k+i(m+\nu+1)\cot\pi q)\Gamma(m+\nu+1-s-i(m+\nu+1)\cot\pi q)}{\pi^2\Gamma(2m+2\nu-k-s+2)}\cr
\times\exp\Big((\nu+m+1)\cot\pi q (1-2x)\Big)dq\cr
=\frac{1}{\sin^{2\nu+2m+2}(\pi x)\Big(\frac{1+i\cot(\pi x)}{2}\Big)^k\Big(\frac{1-i\cot(\pi x)}{2}\Big)^s},\;\forall \,x \in\, ]0, 1[,\;\forall \,\nu > -1.\end{aligned}$$ Let $\phi\in\mathcal{H}$ and let $h_{q,p}$ be the function in $ L^2(\mathbb{R},dx)$ defined by $$\begin{aligned}
h_{q,p}(x):=\left\{\begin{array}{l}\phi(x)\exp\Big(\frac{W_{m,\nu,\beta}(q)+ip}{\hbar}x\Big)
\phi_0^{(m,\nu,\beta)}(x),\quad \mbox{ if } x\in[0,L]\\
0\qquad\quad \qquad\quad\quad\quad\qquad\qquad\quad\qquad\mbox{ otherwise }.\end{array}\right.\end{aligned}$$ One can see that the scalar product $\langle\eta_{q,p}^{[m,\nu,\beta]}|\phi\rangle$ given by $$\begin{aligned}
\label{parpan}
\langle\eta_{q,p}^{[m,\nu,\beta]}|\phi\rangle=\mathcal{R}_m^{(\nu,\beta)}(q)\int_{0}^{L}dx\,e^{-i\frac{p}{\hbar}x}\phi(x)
\exp \Big(\frac{W_{m,\nu,\beta}(q)x}{\hbar}\Big)
\overline{\phi_0^{(m,\nu,\beta)}(x)}\end{aligned}$$ is the Fourier transform of $h_{q,p},$ i.e. $\langle\eta_{q,p}^{[m,\nu,\beta]}|\phi\rangle=\mathcal{R}_m^{(\nu,\beta)}(q)\hat{h}_{q,p}(p/\hbar).$ Since the function $h_{q,p}\in\,L^1(\mathbb{R},dx)\cap L^2(\mathbb{R},dx)$, by using the Plancherel-Parseval Theorem (PPT) we have $$\begin{aligned}
\label{rrparpan}
\int_{\mathbb{R}}\frac{dp}{2\pi\hbar}|\langle\eta_{q,p}^{[m,\nu,\beta]}|\phi\rangle|^2=(\mathcal{R}_m^{(\nu,\beta)}(q))^2\int_{0}^L\frac{dx}{4\pi^2}|h_{q,p}(x)|^2.\end{aligned}$$ The Fubini theorem yields $$\begin{aligned}
\label{rrttparpan}
\fl
\int_{\mathcal{K}}\frac{dqdp}{2\pi\hbar}|\langle\eta_{q,p}^{[m,\nu,\beta]}|\phi\rangle|^2
=\int_0^Ldx \int_0^L\frac{dq}{4\pi^2}(\mathcal{R}_m^{(\nu,\beta)}(q))^2|\phi(x)|^2e^{\frac{2W_{m,\nu,\beta}(q)x}{\hbar}}\overline{\phi_0^{(m,\nu,\beta)}(x)}\phi_0^{(m,\nu,\beta)}(x)\cr\end{aligned}$$ After using the inverse Fourier transform (see Appendix D), the above equation yields $$\begin{aligned}
\int_{\mathcal{K}}\frac{dqdp}{2\pi\hbar}|\langle\eta_{q,p}^{[m,\nu,\beta]}|\phi\rangle|^2=\int_0^Ldx |\phi(x)|^2.\end{aligned}$$ By using the polarization identity on the interval $[0, L]$, i.e., $\int_{0}^Ldx|\psi(x)|^2=\int_{0}^Ldx\langle\psi|x\rangle\langle x|\psi\rangle$, we get the resolution of the identity. $\square$
Conclusion
==========
In this paper, we have determined a family of normalized eigenfunctions of the hierarchic Hamiltonians associated with the Pöschl-Teller Hamiltonian ${\bf H_{\nu,\beta}}$. New operators with relevant properties have been introduced and their mean values computed. A new hierarchic family of CS has been constructed and discussed. In the limit $m\rightarrow 0$, the constructed CS reduce to the CS investigated by Bergeron et al [@berg].
Acknowledgements {#acknowledgements .unnumbered}
================
This work is partially supported by the Abdus Salam International Centre for Theoretical Physics (ICTP, Trieste, Italy) through the Office of External Activities (OEA) - . The ICMPA is also in partnership with the Daniel Iagolnitzer Foundation (DIF), France.
Appendix A. The normalization constant of the eigenvector $|\phi_n^{(\nu,\beta)}\rangle$ {#appendix-a.-the-normalization-constant-of-the-eigenvector-phi_nnubetarangle .unnumbered}
========================================================================================
By using the property of the eigenstates, we have $$\begin{aligned}
\label{eqnart}
\delta_{n,m}&=:&(\phi_{n}^{(\nu,\beta)},\phi_{m}^{(\nu,\beta)})\cr
&=&\overline{K_n^{(\nu,\beta)}}K_m^{(\nu,\beta)}\int_{0}^L dx \sin^{2\nu+n+m+2}\frac{\pi x}{L}\;
e^{-\frac{\beta \pi x}{L}\Big(\frac{1}{\nu+n+1}+\frac{1}{\nu+m+1}\Big)}\cr
&\times& \overline{P_n^{(a_n,\bar{a}_n)}\Big(i\cot \frac{\pi x}{L}\Big)}P_m^{(a_m,\bar{a}_m)}\Big(i\cot \frac{\pi x}{L}\Big)\cr
&=&\overline{K_n^{(\nu,\beta)}}K_m^{(\nu,\beta)}
\frac{(\bar{a}_n+1)_n(a_m+1)_n}{n!m!}\sum_{k=0}^m \frac{(-m,a_m+\bar{a}_m+m+1)_k}{(\bar{a}_m+1)_kk!}\cr
& \times& \sum_{s=0}^n \frac{(-n,a_n+\bar{a}_n+n+1)_s}{(a_n+1)_ss!}\times{\bf \mathcal{J}}
, \end{aligned}$$ where $$\begin{aligned}
\fl
{\bf \mathcal{J}} =2^{-k-s}\int_{0}^L dx\Big[ \sin^{2\nu+m+n+2}\frac{\pi x}{L}
e^{-\frac{\beta \pi x}{L}\Big(\frac{1}{\nu+n+1}+\frac{1}{\nu+m+1}\Big)}\Big(1+i\cot \frac{\pi x}{L}\Big)^k
\Big(1-i\cot \frac{\pi x}{L}\Big)^s\Big]\end{aligned}$$ In [@berg] it is shown that $$\begin{aligned}
\int_{0}^1 dx \sin^{2\delta+2}(\pi x)e^{zx}=\frac{\Gamma(2\delta+3)e^{z/2}}{4^{\delta+1}\Gamma(\delta+2+i\frac{z}{2\pi})\Gamma(\delta+2-i\frac{z}{2\pi})}, \;\;\delta> -\frac{3}{2}.\end{aligned}$$ Therefore, $$\begin{aligned}
\fl
\frac{\delta_{n,m}}{\overline{K_n^{(\nu,\beta)}}K_m^{(\nu,\beta)}}
&=&L\frac{(-\nu-m-\frac{i\beta}{\nu+m+1})_m(-\nu-n+\frac{i\beta}{\nu+n+1})_n}{n!m!\exp\Big\{-\frac{\beta \pi }{2}\Big(\frac{1}{\nu+n+1}+\frac{1}{\nu+m+1}\Big)\Big\}2^{2\nu+n+m+2}}\cr
&\times&\sum_{k=0}^m \frac{(-m,-2\nu-m-1)_k}{(-\nu-m-\frac{i\beta}{\nu+m+1})_kk!\Gamma(\frac{n+m}{2}+\nu+2-k+\frac{i\beta}{\nu+n+1})}\cr
&\times&\sum_{s=0}^n \Bigg\{\frac{(-n,-2\nu-n-1)_s\Gamma(n+m+2\nu-s-k+3)}{ \Big(-\nu-n+i\frac{\beta }{2}\Big(\frac{1}{\nu+n+1}+\frac{1}{\nu+m+1}\Big)\Big)_s s! }\cr
&\times&\frac{1}{\Gamma\Big(\frac{n+m}{2}+\nu+2-s-i\frac{\beta }{2}\Big(\frac{1}{\nu+n+1}+\frac{1}{\nu+m+1}\Big)\Big)}\Bigg\}.\end{aligned}$$ The proof is achieved by taking $n=m.$
Appendix B. Computation of $\phi_{n}^{(1,\nu,\beta)}$ {#appendix-b.-computation-of-phi_n1nubeta .unnumbered}
======================================================
From (\[eigen1\]) we have $$\begin{aligned}
\label{ddxdf}
\fl
\frac{d}{dx}\phi_{n+1}^{(\nu,\beta)}(x)
&=&\frac{\pi K_{n+1}^{\nu,\beta} }{L}\Bigg[\Big((\nu+n+2)\cos \frac{\pi x}{L}-\frac{\beta }{\nu+n+2}\sin \frac{\pi x}{L}\Big)\cr
&\times&P_{n+1}^{(a_{n+1},\bar{a}_{n+1})}\Big(i\cot \frac{\pi x}{L}\Big)+\frac{i(n+2\nu+2)}{2}
\sin^{-1}\frac{\pi x}{L}\\
&\times&P_{n}^{(a_{n+1}+1,\bar{a}_{n+1}+1)}\Big(i\cot \frac{\pi x}{L}\Big)\Bigg]\sin^{\nu+n+1}\frac{\pi x}{L}\exp \Big(-\frac{\beta \pi x}{L(n+\nu+2)}\Big)\nonumber\end{aligned}$$ and $$\begin{aligned}
\fl
W_{\nu,\beta}\phi_{n+1}^{(\nu,\beta)}(x)
&=&\frac{\hbar \pi K_{n+1}^{\nu,\beta}}{L}\Bigg[\frac{\beta}{\nu+1}
\sin \frac{\pi x}{L}-(\nu+1)\cos \frac{\pi x}{L}\Bigg]\cr
&\times&\sin^{\nu+n+1}\frac{\pi x}{L}\exp \Big[-\frac{\beta \pi x}{L(\nu+n+2)}\Big]
P_{n+1}^{(a_{n+1},\bar{a}_{n+1})}
\Big(i\cot \frac{\pi x}{L}\Big).\end{aligned}$$ From the latter expression and (\[ddxdf\]), we have $$\begin{aligned}
\fl
A_{\nu,\beta}\phi_{n+1}^{(\nu,\beta)}(x)&=&\frac{\pi \hbar K_{n+1}^{\nu,\beta}}{L}\Bigg[(n+1)\Big[\cos \frac{\pi x}{L}+\frac{\beta\sin \frac{\pi x}{L} }{(\nu+1)(\nu+n+2)}\Big]\cr
&\times&
P_{n+1}^{(a_{n+1},\bar{a}_{n+1})}\Big(i\cot \frac{\pi x}{L}\Big)
+\frac{i(n+2\nu+2)}{2}\sin^{-1}\frac{\pi x}{L}
\cr
&\times&P_{n}^{(a_{n+1}+1,\bar{a}_{n+1}+1)}
\Big(i\cot \frac{\pi x}{L}\Big)
\Bigg]\sin^{\nu+n+1}\frac{\pi x}{L}
\exp \Big(-\frac{\beta \pi x}{L(n+\nu+2)}\Big).\end{aligned}$$ Let us determine $\cos \frac{\pi x}{L}+\frac{\beta }{(\nu+1)(\nu+n+2)}
\sin \frac{\pi x}{L}$: $$\begin{aligned}
\label{init}
&&\cos \frac{\pi x}{L}+\frac{\beta }{(\nu+1)(\nu+n+2)}
\sin \frac{\pi x}{L}\cr
&=&\sqrt{1+\frac{\beta^2}{[(\nu+1)(\nu+n+2)]^2}}\Bigg[
\frac{1}{\sqrt{1+\frac{\beta^2}{[(\nu+1)(\nu+n+2)]^2}}}\cos \frac{\pi x}{L}\cr
&+&\frac{\beta}{(\nu+1)(\nu+n+2)\sqrt{1+\frac{\beta^2}{[(\nu+1)(\nu+n+2)]^2}}}\sin \frac{\pi x}{L}\Bigg]\cr
&=&\sqrt{1+\frac{\beta^2}{[(\nu+1)(\nu+n+2)]^2}}\Big(\cos \frac{\pi x}{L}\cos \alpha_{\nu,\beta}(n)+\sin \frac{\pi x}{L}\sin\alpha_{\nu,\beta}(n)\Big)\cr
&=&\sqrt{1+\frac{\beta^2}{[(\nu+1)(\nu+n+2)]^2}}\cos \Big(\frac{\pi x}{L}-\alpha_{\nu,\beta}(n)\Big)\cr
&=&\sqrt{\frac{\overline{\Delta_{n+1}^0 E_{0}^{(\nu,\beta)}}}{\varepsilon_0(2\nu+n+3)}}
\cos \Big(\frac{\pi x}{L}-\alpha_{\nu,\beta}(n)\Big).\end{aligned}$$ Finally, $$\begin{aligned}
\fl
A_{\nu,\beta}\phi_{n+1}^{(\nu,\beta)}(x)
&=&K_{n+1}^{\nu,\beta}\Bigg[\sqrt{\frac{2M(n+1)^2\overline{\Delta_{n+1}^0 E_{0}^{(\nu,\beta)}}}{2\nu+n+3}}
\cos \Big(\frac{\pi x}{L}
-\alpha_{\nu,\beta}(n)\Big)\cr
&\times&P_{n+1}^{(a_{n+1},\bar{a}_{n+1})}\Big(i\cot \frac{\pi x}{L}\Big)
+\frac{i\pi\hbar(n+2\nu+2)}{2L\sin\frac{\pi x}{L}}P_{n}^{(a_{n+1}+1,\bar{a}_{n+1}+1)}\Big(i\cot \frac{\pi x}{L}\Big)\Bigg]\cr
&\times&\sin^{\nu+n+1}\frac{\pi x}{L}\exp \Big(-\frac{\beta \pi x}{L(\nu+n+2)}\Big),\end{aligned}$$ with $$\begin{aligned}
\fl
\cos^2 \alpha_{\nu,\beta}(n)+\sin^2 \alpha_{\nu,\beta}(n)&=&\frac{1}{1+\frac{\beta^2}{[(\nu+1)(\nu+n+2)]^2}}+\frac{\beta^2}{\beta^2+[(\nu+1)(\nu+n+2)]^2}\cr
&=&1,\end{aligned}$$ and the constant $K_{n+1}^{\nu,\beta}$ defined in (\[prpos\]).
Appendix C. Computation of the normalization constant of CS {#appendix-c.-computation-of-the-normalization-constant-of-cs .unnumbered}
===========================================================
By using the definition $$\begin{aligned}
\label{eqnarter}
&1=:(|\eta_{q,p}^{[m,\nu,\beta]}\rangle,|\eta_{q,p}^{[m,\nu,\beta]}\rangle)\cr
&=(\mathcal{R}_m^{(\nu,\beta)})^2\int_{0}^L dx \exp\Big(\frac{2W_{m,\nu,\beta}(q)x}{\hbar}\Big)\overline{\phi_0^{(m,\nu,\beta)}(x) }\phi_0^{(m,\nu,\beta)}(x) \cr
&=(\mathcal{R}_m^{(\nu,\beta)}K_m^{(\nu,\beta)})^2\int_{0}^L dx \sin^{2\nu+2m+2}\frac{\pi x}{L}
\exp \Big(\frac{2W_{m,\nu,0}(q)x}{\hbar}\Big)\cr
&\times\overline{P_m^{(a_m,\bar{a}_m)}\Big(i\cot \frac{\pi x}{L}\Big)}
P_m^{(a_m,\bar{a}_m)}\Big(i\cot \frac{\pi x}{L}\Big)\cr
&=(\mathcal{R}_m^{(\nu,\beta)}K_m^{(\nu,\beta)})^2\frac{(\bar{a}_m+1)_m(a_m+1)_m}{(m!)^2}\sum_{k=0}^m \frac{(-m,a_m+\bar{a}_m+m+1)_k}{(\bar{a}_m+1)_kk!}\cr
& \times \sum_{s=0}^m \frac{(-m,a_m+\bar{a}_m+m+1)_s}{(a_m+1)_ss!}\times{\bf \mathcal{S}}
, \end{aligned}$$ with $$\begin{aligned}
{\bf \mathcal{S}} =2^{-k-s}\int_{0}^L dx\Bigg\{\sin^{2\nu+2m+2}\frac{\pi x}{L}
\exp \Big(-\frac{2\pi (\nu+m+1)x}{L}\cot\frac{\pi q}{L}\Big)\cr
\Big(1+i\cot \frac{\pi x}{L}\Big)^k
\Big(1-i\cot \frac{\pi x}{L}\Big)^s\Bigg\}\cr
=2^{-k-s}Le^{\frac{i(k-s)\pi}{2}}\int_{0}^1 dx \sin^{2\delta+2}(\pi x)e^{tx}.\end{aligned}$$ In [@berg] it is shown that $$\begin{aligned}
\int_{0}^1 dx \sin^{2\delta+2}(\pi x)e^{tx}=\frac{\Gamma(2\delta+3)e^{t/2}}{4^{\delta+1}\Gamma(\delta+2+i\frac{t}{2\pi})\Gamma(\delta+2-i\frac{t}{2\pi})}, \;\;\delta> -\frac{3}{2}.\end{aligned}$$ Then the relation (\[eqnarter\]) becomes $$\begin{aligned}
\fl
\frac{1}{(\mathcal{R}_m^{(\nu,\beta)})^2}
=&L(K_m^{(\nu,\beta)})^2(2^{\nu+m+1}m!)^{-2}e^{-\pi(\nu+m+1)\cot\frac{\pi q}{L}}\Big|\Big(-\nu-m+i\frac{\beta}{\nu+m+1}\Big)_m\Big|^2\cr
&\sum_{k=0}^m \frac{(-m,-2\nu-m-1)_k}{(-m-\nu-\frac{i\beta}{\nu+m+1})_k\Gamma(m+\nu+2-k+i(\nu+m+1)\cot\frac{\pi q}{L}) k! }\cr
&\sum_{s=0}^m\frac{(-m,-2\nu-m-1)_s\Gamma(2m+2\nu-s-k+3)}{(-m-\nu+\frac{i\beta}{\nu+m+1})_s\Gamma(m+\nu+2-s-i(\nu+m+1)\cot\frac{\pi q}{L})s!},\end{aligned}$$ where $K_m^{(\nu,\beta)}$ is given in (\[prpos\]).
Appendix D. Integral involved in the resolution of the identity {#appendix-d.-integral-involved-in-the-resolution-of-the-identity .unnumbered}
===============================================================
Here, proceeding in a similar way as in [@berg], we use the well-known Fourier transform ([@grad] p 520) $$\begin{aligned}
\fl
\forall\,t\in\mathbb{R}, \forall\;\delta>-1, \quad \int_\mathbb{R}
\frac{e^{-itx}}{2\pi\cosh^{2\delta+2}(x)}dx=\frac{4^\delta\Gamma(\delta+1-i\frac{t}{2})\Gamma(\delta+1+i\frac{t}{2})}{\pi\Gamma(2\delta+2)}.\end{aligned}$$ The inverse Fourier transform yields $$\begin{aligned}
\fl
\forall\,x\in\mathbb{R}, \forall\;\delta>-1, \quad
\int_\mathbb{R}\frac{4^\delta\Gamma(\delta+1-i\frac{t}{2})\Gamma(\delta+1+i\frac{t}{2})}{\pi\Gamma(2\delta+2)}e^{itx}dt=\frac{1}{\cosh^{2\delta+2}(x)}.\end{aligned}$$ Since the analytic continuation is unique, the above equality can be extended to $x\in\mathbb{C}$ with $\mathcal{I}m(x)\in\,]-\pi/2,\pi/2[.$ By taking $u=ix,t\rightarrow \frac{t}{\pi}$ and $ u=\pi x-\frac{\pi}{2},\;\delta=m+\nu-\frac{k}{2}-\frac{s}{2},\;t=-2\pi(\nu+m+1)\cot\pi q-i\pi(k-s)$, we arrive at $$\begin{aligned}
\fl
\int_\mathbb{R}\frac{\Gamma(m+\nu+1-k+i(m+\nu+1)\cot\pi q)\Gamma(m+\nu+1-s-i(m+\nu+1)\cot\pi q)}{\pi^2\Gamma(2m+2\nu-k-s+2)}\cr
\times \exp\Big((\nu+m+1)\cot\pi q(1-2x)\Big)dq\cr
=\frac{4^{-\nu-m}}{\sin^{2\nu+2m+2}(\pi x)\Big(\frac{1+i\cot(\pi x)}{2}\Big)^k\Big(\frac{1-i\cot(\pi x)}{2}\Big)^s},\; m+\nu-\frac{k}{2}-\frac{s}{2} >-1.\end{aligned}$$
References {#references .unnumbered}
==========
[9]{}
Bergeron H, Siegl P and Youssouf A 2012 [New SUSYQM coherent states for Pöschl-Teller potentials: a detailed mathematical analysis]{} [*J. Phys. A: Math. Theor.*]{} [**45**]{} 244028
Bergeron H, Gazeau J P and Youssouf A 2010 [Semi-classical behavior of Pöschl-Teller coherent states]{} [*Europhys. Lett.*]{} [**92**]{} 60003
Cooper F, Khare A and Sukhatme U 2001 [*Supersymmetry in Quantum Mechanics*]{} (Singapore: World Scientific)
Fernández C D J and Fernández-García N 2005 [Higher-order supersymmetric quantum mechanics]{} AIP Conf. Proc. [**744**]{} 236-273
Fernández D J 1984 [New hydrogen-like potentials]{} [Lett. Math. Phys.]{} [**8**]{} 337-343
Gendenstein L E and Krive I V 1985 Sov. Phys. Usp. [**28**]{} 645; reprinted in [*Dynamical Groups and Spectrum Generating Algebras*]{} eds A Bohm, A O Barut and Y Ne’eman (Singapore: World Scientific, 1988)
Gradshteyn I S and Ryzhik I M 1980 [*Tables of Integrales, Series and Products* ]{} (New York: Academic)
Hounkonnou M N, Sodoga K and Azatassou E S 2004 [Factorization of Sturm-Liouville operators: solvable potentials and underlying algebraic structure]{} [J. Phys. A: Math. Gen]{} [**38**]{} 371-390
Infeld L and Hull T E 1951 [The Factorization Method]{}, [ Rev. Mod. Phys.]{} [**23**]{} 21
Koekoek R and Swarttouw R F 1998 [*The Askey-scheme of orthogonal polynomials and its $q-$analogue*]{} Report 98-17, [TU Delft]{}.
Reed M and Simon B 1975 [*Methods of Modern Mathematical Physics II: Fourier Analysis, Self-Adjointness*]{} (New York: Academic)
Dautray R and Lions J-L [*Mathematical Analysis and Numerical Methods for Science and Technology*]{} vol 2 (Berlin: Springer-Verlag)
Schrödinger E 1940 [A method of determining quantum-mechanical eigenvalues and eigenfunctions]{}, [*Proc. Roy. Irish Acad. Sect. A*]{} [**46**]{} , 9 - 16\
Schrödinger E 1940 [Further studies on solving eigenvalue problems by factorization]{}, [*Proc. Roy. Irish Acad. Sect.- A*]{} [**46**]{} , 183-206\
Schrödinger E 1941 [ The factorization of the hypergeometric equation]{}, [*Proc. Roy. Irish Acad. Sect.- A*]{} [**47**]{} , 53-54
Teschl G 1999 [Schrödinger operators course]{} [Online at http://www.mat.univie.ac.at/~gerald]{}
Urrutia L F, Hernández E 1983 [Long-Range Behavior of Nuclear Forces as a Manifestation of Supersymmetry in Nature]{} [*Phys. Rev. Lett.*]{} [ **51**]{} 755
Witten E 1981 [ Dynamical breaking of supersymmetry,]{} [*Nuclear Phys. B*]{} [**185**]{}, 513 - 554
---
abstract: 'In this paper, an analysis of the undetected error probability of ensembles of $m \times n$ binary matrices is presented. The ensemble called the [*Bernoulli ensemble*]{}, whose members are regarded as matrices generated from an i.i.d. Bernoulli source, is mainly considered here. The main contributions of this work are (i) derivation of the error exponent of the average undetected error probability and (ii) closed form expressions for the variance of the undetected error probability. It is shown that the behavior of the exponent for a sparse ensemble is somewhat different from that for a dense ensemble. Furthermore, as a byproduct of the proof of the variance formula, a simple covariance formula for the weight distribution is derived.'
author:
- 'Tadashi Wadayama$^\dagger$[^1]'
title: |
On Undetected Error Probability\
of Binary Matrix Ensembles
---
Introduction
============
[*Random coding*]{} is an extremely powerful technique to show the existence of a code satisfying certain properties. It has been used for proving the direct part (achievability) of many types of coding theorems. Recently, the idea of random coding has also come to be regarded as important from a practical point of view. An LDPC (Low-density parity-check) code can be constructed by choosing a parity check matrix from an ensemble of sparse matrices. Thus, there is a growing interest in randomly generated codes.
One of the main difficulties associated with the use of randomly generated codes is that of evaluating the properties or performance of such codes. For example, it is difficult to evaluate the minimum distance, weight distribution, ML decoding performance, etc. of these codes. To overcome this problem, we can take a [*probabilistic approach*]{}. In such an approach, we consider an ensemble of parity check matrices: i.e., a probability is assigned to each matrix in the ensemble. A property of a matrix (e.g., minimum distance, weight distribution) can then be regarded as a random variable. It is natural to consider statistics of the random variable such as mean, variance, higher moments and covariance. In some cases, we can show that a property is strongly concentrated around its expectation. Such a concentration result justifies the use of the probabilistic approach.
Recent advances in the analysis of the average weight distributions of LDPC codes, such as those described by Litsyn and Shevelev [@LS02][@LS03], Burshtein and Miller [@MV04], Richardson and Urbanke [@modern], show that the probabilistic approach is a useful technique for investigating typical properties of codes and matrices, which are not easy to obtain. Furthermore, the second moment analysis of the weight distribution of LDPC codes [@BB05][@VR05] can be utilized to prove concentration results for weight distributions.
The evaluation of the error detection probability of a given code (or given parity check matrix) is a classical problem in coding theory [@klove2], [@klove], and some results on this topic have been derived from the viewpoint of a probabilistic approach. For example, for a linear code ensemble the inequality $P_U<2^{-m}$ has long been known, where $P_U$ is the undetected error probability and $m$ is the number of rows of a parity check matrix. Since the undetected error probability can be expressed as a linear combination of the weight distribution of a code, there is a natural connection between the expectation of the weight distribution and the expectation of the undetected error probability.
In this paper, an analysis of the undetected error probability of ensembles of binary matrices of size $m \times n$ is presented. An error detection scheme is a crucial part of a feedback error correction scheme such as ARQ (Automatic Repeat reQuest). Detailed knowledge of the error detection performance of a matrix ensemble would be useful for assessing the performance of a feedback error correction scheme.
Average undetected error probability
====================================
Notation
--------
For a given $m \times n (m,n \ge 1)$ binary parity check matrix $H$, let $C(H)$ be the binary linear code of length $n$ defined by $H$, namely, $
C(H) {\stackrel{\triangle}{=}}\{{ \mbox{\boldmath$x$} } \in F_2^n: H { \mbox{\boldmath$x$} }^t = 0^m\}
$ where $F_2$ is the Galois field with two elements $\{0,1\}$ (the addition over $F_2$ is denoted by $\oplus$). The notation $0^m$ denotes the zero vector of length $m$. In this paper, a boldface letter, such as ${ \mbox{\boldmath$x$} }$ for example, denotes a binary row vector.
Throughout the paper, a binary symmetric channel (BSC) with crossover probability $\epsilon$ ($0 < \epsilon < 1/2$) is assumed. We assume the conventional scenario for error detection: A transmitter sends a codeword ${ \mbox{\boldmath$x$} } \in C(H)$ to a receiver via a BSC with crossover probability $\epsilon$. The receiver obtains a received word ${ \mbox{\boldmath$y$} } = { \mbox{\boldmath$x$} } \oplus { \mbox{\boldmath$e$} }$, where ${ \mbox{\boldmath$e$} }$ denotes an error vector. The receiver firstly computes the syndrome ${ \mbox{\boldmath$s$} } = H { \mbox{\boldmath$y$} }^t$ and then checks whether ${ \mbox{\boldmath$s$} } = 0^m$ holds or not.
An undetected error event occurs when $H { \mbox{\boldmath$e$} }^t = 0^m$ and ${ \mbox{\boldmath$e$} } \ne 0^n$. This means that an error vector ${ \mbox{\boldmath$e$} } \in C(H)$, ${ \mbox{\boldmath$e$} } \ne 0^n$, causes an undetected error event. Thus, the undetected error probability $P_U(H)$ can be expressed as $$P_U(H)
= \sum_{{ \mbox{\boldmath$e$} } \in C(H), { \mbox{\boldmath$e$} } \ne 0^n} \epsilon^{w({ \mbox{\boldmath$e$} })} (1-\epsilon)^{n - w({ \mbox{\boldmath$e$} })},$$ where $w({ \mbox{\boldmath$x$} })$ denotes the Hamming weight of vector ${ \mbox{\boldmath$x$} }$. The above equation can be rewritten as $$P_U(H) = \sum_{w = 1}^n A_w(H) \epsilon^w (1-\epsilon)^{n - w},$$ where $A_w(H)$ is defined by $$A_w(H) {\stackrel{\triangle}{=}}\sum_{{ \mbox{\boldmath$x$} }\in Z^{(n,w)}} I[H { \mbox{\boldmath$x$} }^t = 0^m].$$ The set $\{A_w(H) \}_{w=0}^n$ is usually called the [ *weight distribution*]{} of $C(H)$. The notation $Z^{(n,w)}$ denotes the set of $n$-tuples with weight $w$. The notation $I[condition]$ is the indicator function such that $I[condition] = 1$ if $condition$ is true; otherwise, it evaluates to 0.
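For small parameters, the quantities just introduced can be computed directly from their definitions by enumerating all $2^n$ binary vectors. The following sketch (in Python, purely for illustration; the helper names are not part of the paper) computes the weight distribution $\{A_w(H)\}$ and $P_U(H)$ of a small parity check matrix by brute force.

```python
import itertools
import numpy as np

def weight_distribution(H):
    """A_w(H), w = 0..n, obtained by enumerating all x in F_2^n with Hx^t = 0."""
    m, n = H.shape
    A = np.zeros(n + 1, dtype=int)
    for x in itertools.product((0, 1), repeat=n):
        x = np.array(x)
        if not np.any(H.dot(x) % 2):        # syndrome is the all-zero vector
            A[x.sum()] += 1
    return A

def undetected_error_probability(H, eps):
    """P_U(H) = sum_{w >= 1} A_w(H) eps^w (1 - eps)^(n - w)."""
    n = H.shape[1]
    A = weight_distribution(H)
    return sum(A[w] * eps**w * (1 - eps)**(n - w) for w in range(1, n + 1))

# Example: a small parity check matrix (chosen arbitrarily for illustration).
H = np.array([[1, 1, 0, 1],
              [0, 1, 1, 1]])
print(weight_distribution(H))               # includes A_0 = 1 (the zero codeword)
print(undetected_error_probability(H, 0.1))
```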
Suppose that ${{\cal G}}$ is a set of binary $m\times n$ matrices $(m, n \ge 1)$. Note that ${{\cal G}}$ may contain several copies of the same matrix; such copies should be treated as distinct members of the ensemble. A probability $P(H)$ is associated with each matrix $H$ in ${{\cal G}}$. Thus, ${{\cal G}}$ can be considered as an [*ensemble*]{} of binary matrices. Let $f(H)$ be a real-valued function which depends on $H \in {{\cal G}}$. The expectation of $f(H)$ with respect to the ensemble ${{\cal G}}$ is defined by $$E_{{{\cal G}}}[f(H)] {\stackrel{\triangle}{=}}\sum_{H \in {{\cal G}}} P(H) f(H).$$ The average weight distribution of a given ensemble ${{\cal G}}$ is given by $
E_{{{\cal G}}}[A_w(H)].
$ This quantity is very useful for analyzing the performance of binary linear codes, including analysis of the undetected error probability.
Bernoulli ensemble
------------------
In this paper, we will focus on a parameterized ensemble ${{\cal B}}_{m,n,k}$ which is called the [*Bernoulli ensemble*]{} because the Bernoulli ensemble is amenable to ensemble analysis. The Bernoulli ensemble ${{\cal B}}_{m,n,k}$ contains all the binary $m \times n$ matrices ($m,n \ge 1$), whose elements are regarded as i.i.d. binary random variables such that an element takes the value 1 with probability $p {\stackrel{\triangle}{=}}k/n$. The parameter $k (0 < k \le n/2)$ is a positive real number which represents the average number of ones for each row. In other words, a matrix $H \in {{\cal B}}_{m,n,k}$ can be considered as an output from the Bernoulli source such that symbol 1 occurs with probability $p$.
From the above definition, it is clear that a matrix $H \in {{\cal B}}_{m,n,k}$ is associated with the probability $$P(H) = p^{\bar w(H)} (1-p)^{m n - \bar w(H)} ,$$ where $\bar w(H)$ is the number of ones in $H$ (i.e., Hamming weight of $H$). The average weight distribution of the Bernoulli ensemble is given by $$E_{{{\cal B}}_{m,n,k}}[A_w(H) ] = \left(\frac{1+z^w}{2} \right)^m {n \choose w}$$ for $w \in [0,n]$ where $z {\stackrel{\triangle}{=}}1- 2p$. The notation $[a,b]$ denotes the set of consecutive integers from $a$ to $b$. The average weight distribution of this ensemble was first discussed by Litsyn and Shevelev [@LS02].
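For very small $m$ and $n$, the above formula for $E_{{{\cal B}}_{m,n,k}}[A_w(H)]$ can be checked by enumerating all $2^{mn}$ matrices together with their Bernoulli probabilities. The sketch below (illustrative only; it is exponential in $mn$ and therefore usable only for toy sizes) performs this check.

```python
import itertools
import numpy as np
from math import comb

def average_weight_distribution_exact(m, n, k):
    """E[A_w] over the Bernoulli ensemble, by enumerating all 2^(mn) matrices."""
    p = k / n
    E = np.zeros(n + 1)
    for bits in itertools.product((0, 1), repeat=m * n):
        H = np.array(bits).reshape(m, n)
        prob = p**H.sum() * (1 - p)**(m * n - H.sum())
        for x in itertools.product((0, 1), repeat=n):
            x = np.array(x)
            if not np.any(H.dot(x) % 2):
                E[x.sum()] += prob
    return E

m, n, k = 2, 3, 1.0
z = 1 - 2 * k / n
formula = [((1 + z**w) / 2)**m * comb(n, w) for w in range(n + 1)]
assert np.allclose(average_weight_distribution_exact(m, n, k), formula)
```

The check reflects the fact that, for a fixed weight-$w$ vector, each row of $H$ satisfies the parity check independently with probability $(1+z^w)/2$.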
If $k$ is a constant (i.e., not a function of $n$), this ensemble can be considered as an ensemble of sparse matrices. In the special case where $k = n/2$, equal probability $1/2^{mn}$ is assigned to every matrix in the Bernoulli ensemble. As a simplified notation, we will denote $
{{\cal R}}_{m,n} {\stackrel{\triangle}{=}}{{\cal B}}_{m,n,n/2},
$ where ${{\cal R}}_{m,n}$ is called the [*random ensemble*]{}. Since a typical instance of ${{\cal R}}_{m,n}$ contains $\Theta(m n)$ ones, the ensemble can be regarded as an ensemble of dense matrices.
Average undetected error probability of an ensemble
---------------------------------------------------
For a given $m \times n$ matrix $H$, the evaluation of the undetected error probability $P_U(H)$ is in general computationally difficult because we need to know the weight distribution of $C(H)$ for such evaluation. On the other hand, in some cases, we can evaluate the average of $P_U(H)$ for a given ensemble. Such an average probability is useful for the estimation of the undetected error probability of a matrix which belongs to the ensemble.
Taking the ensemble average of the undetected error probability over a given ensemble ${{\cal G}}$, we have $$\begin{aligned}
\nonumber
E_{{\cal G}}[P_U(H)] &=& E_{{\cal G}}\left[\sum_{w = 1}^n A_w(H) \epsilon^w (1-\epsilon)^{n - w} \right] \\ \label{avew}
&=& \sum_{w = 1}^n E_{{\cal G}}[A_w(H)] \epsilon^w (1-\epsilon)^{n - w} .\end{aligned}$$ In the above equations, $H$ can be regarded as a random variable. From this equation, it is evident that the average of $P_U(H)$ can be evaluated if we know the average weight distribution of the ensemble. For example, in the case of the random ensemble ${{\cal R}}_{m,n}$, the average undetected error probability has a simple closed form.
\[averandom\] The average undetected error probability of the random ensemble ${{\cal R}}_{m,n}$ is given by $$E_{{{\cal R}}_{m,n}}[P_U(H)] =2^{-m} (1-(1-\epsilon)^n).$$ (Proof) By using (\[avew\]), we have $$\begin{aligned}
\nonumber
E_{{{\cal R}}_{m,n}}[P_U(H)] &=& \sum_{w = 1}^n E_{{{\cal R}}_{m,n}}[A_w(H)] \epsilon^w (1-\epsilon)^{n - w} \\ \nonumber
&=& \sum_{w = 1}^n 2^{-m} {n \choose w} \epsilon^w (1-\epsilon)^{n - w} \\
&=& 2^{-m} (1-(1-\epsilon)^n).\end{aligned}$$ The second equality is based on the well known result [@Gal63]: $$E_{{{\cal R}}_{m,n}}[A_w(H)] = 2^{-m} {n \choose w}.$$ The last equality is due to the binomial theorem.
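The proposition can also be confirmed numerically for toy parameters by averaging $P_U(H)$ over all $2^{mn}$ equiprobable matrices; a self-contained sketch (illustration only, with hypothetical helper names) is given below.

```python
import itertools
import numpy as np

def P_U(H, eps):
    """P_U(H) by enumerating all nonzero x with Hx^t = 0 (small n only)."""
    m, n = H.shape
    total = 0.0
    for x in itertools.product((0, 1), repeat=n):
        x = np.array(x)
        if x.any() and not np.any(H.dot(x) % 2):
            total += eps**x.sum() * (1 - eps)**(n - x.sum())
    return total

m, n, eps = 2, 4, 0.1
avg = np.mean([P_U(np.array(bits).reshape(m, n), eps)
               for bits in itertools.product((0, 1), repeat=m * n)])
assert np.isclose(avg, 2.0**(-m) * (1 - (1 - eps)**n))
```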
Error exponent of undetected error probability
----------------------------------------------
For a given sequence of $(1-R)n \times n$ matrix ensembles $(n=1,2,3,\ldots,)$, the average undetected error probability is usually an exponentially decreasing function of $n$, where $R$ is a real number satisfying $0 < R < 1$ (called the [*design rate*]{}). Thus, the exponent of the undetected error probability is of prime importance in understanding the asymptotic behavior of the undetected error probability.
### Definition of error exponent
Let $\{ {{{\cal G}}_n} \}_{n>0} $ be a series of ensembles such that ${{{\cal G}}_n}$ consists of $(1-R)n \times n$ binary matrices. In order to see the asymptotic behavior of the undetected error probability of this sequence of ensembles, it is reasonable to define the error exponent of undetected error probability in the following way:
The asymptotic error exponent of the average undetected error probability for a series of ensembles $\{ {{{\cal G}}_n} \}_{n>0} $ is defined by $$T_{{{\cal G}}_n} {\stackrel{\triangle}{=}}\lim_{n \rightarrow \infty }\frac 1 n\log_2 E_{{{\cal G}}_n}[ P_U]$$ if the limit exists.
Henceforth we will not explicitly express the dependence of $P_U$ on $H$, writing instead $P_U$ to denote $P_U(H)$ in all cases where there is no fear of confusion.
The following example describes the exponent of the random ensemble.
Consider the series of the random ensembles $\{{{\cal R}}_{(1-R)n,n} \}_{n>0}$. It is easy to evaluate $T_{{{\cal R}}_{(1-R)n,n}} $: $$\begin{aligned}
\nonumber
T_{{{\cal R}}_{(1-R)n,n}} &=& \lim_{n \rightarrow \infty} \frac 1 n \log_2 E_{{{\cal R}}_{(1-R)n,n}}[P_U] \\ \nonumber
&=& \lim_{n \rightarrow \infty} \frac 1 n \log_2 2^{-(1-R)n} (1-(1-\epsilon)^n) \\ \label{1mR}
&=& -(1-R).\end{aligned}$$ This equality implies that the average undetected error probability of the sequence of random ensembles behaves like $$\label{exponential}
E_{{{\cal R}}_{(1-R)n,n}}[P_U] \simeq 2^{-n(1-R)}$$ if $n$ is sufficiently large. Note that the exponent $-(1-R)$ is independent of the crossover probability $\epsilon$.
### Error exponent and asymptotic growth rate
The [*asymptotic growth rate*]{} of the average weight distribution (for simplicity henceforth abbreviated as the asymptotic growth rate), which is the basis of the derivation of the error exponent, is defined as follows.
Suppose that a series of ensembles $\{ {{{\cal G}}_n} \}_{n>0} $ is given. If $$\lim_{n \rightarrow \infty}\frac 1 n \log_2 E_{{{\cal G}}_n}[ A_{\ell n}]$$ exists for $0 \le \ell \le 1$, then we define the [*asymptotic growth rate*]{} $f(\ell)$ by $$f(\ell) {\stackrel{\triangle}{=}}\lim_{n \rightarrow \infty}\frac 1 n \log_2 E_{{{\cal G}}_n}[ A_{\ell n}].$$ The parameter $\ell$ is called the [*normalized weight*]{}.
From this definition, it is clear that $$E_{{{\cal G}}_n}[ A_{\ell n}] = 2^{n(f(\ell) + o(1))},$$ where the notation $o(1)$ denotes terms which converge to 0 in the limit as $n$ goes to infinity. The asymptotic growth rate of some ensembles of binary matrices can be found in [@LS02][@LS03][@MV04].
The next theorem gives the error exponent of the undetected error probability for a series of ensembles $\{ {{{\cal G}}_n} \}_{n>0}$.
\[th1\] The error exponent of $\{ {{{\cal G}}_n} \}_{n>0} $ is given by $$T_{{{\cal G}}_n}
=\sup_{0 < \ell \le 1 } [f(\ell) + \ell \log_2\epsilon +(1 - \ell)\log_2(1 - \epsilon) ],$$ where $f(\ell)$ is the asymptotic growth rate of $\{ {{{\cal G}}_n} \}_{n>0}$.\
(Proof) Based on the definition of asymptotic growth rate, we can rewrite $T_{{{\cal G}}_n} $ in the form $$\begin{aligned}
\nonumber
T_{{{\cal G}}_n}
\hspace{-3mm}&=&
\hspace{-3mm}
\lim_{n \rightarrow \infty }\frac 1 n\log_2 E_{{{\cal G}}_n}[ P_U] \\ \nonumber
&=&\hspace{-3mm}
\lim_{n \rightarrow \infty }\frac 1 n\log_2\sum_{w=1}^n E_{{{\cal G}}_n}[ A_{w}] \epsilon^{w} (1 - \epsilon)^{n - w} \\ \nonumber
&=&\hspace{-3mm}
\lim_{n \rightarrow \infty }\frac 1 n\log_2\sum_{w=1}^n
2^{n(f(\frac{w}{n}) + K(\epsilon, n ,w)+ o(1)) } ,\end{aligned}$$ where $K(\epsilon, n ,w)$ is defined by $$K(\epsilon, n ,w) {\stackrel{\triangle}{=}}\frac{w}{n}\log_2\epsilon + \left(1 - \frac{w}{n} \right)\log_2(1 - \epsilon).$$ Using a conventional technique for bounding summation, we have the following upper bound on $T_{{{\cal G}}_n}$: $$\begin{aligned}
\nonumber
T_{{{\cal G}}_n}\hspace{-3mm}
&=&\hspace{-3mm}
\lim_{n \rightarrow \infty }\frac 1 n\log_2\sum_{w=1}^n
2^{n(f(\frac{w}{n}) + K(\epsilon, n ,w)+ o(1)) } \\ \nonumber
&\le&\hspace{-3mm}
\lim_{n \rightarrow \infty }\frac 1 n\log_2 n \max_{w=1}^n
2^{n(f(\frac{w}{n}) + K(\epsilon, n ,w)+ o(1)) } \\ \nonumber
&=&\hspace{-3mm}
\lim_{n \rightarrow \infty } \max_{w=1}^n \frac 1 n\log_2
2^{n(f(\frac{w}{n}) + K(\epsilon, n ,w)+ o(1)) } \\ \nonumber
&=&\hspace{-3mm}
\lim_{n \rightarrow \infty } \max_{w=1}^n
\left[f\left(\frac{w}{n}\right) + K(\epsilon, n ,w) + o(1) \right] \\ \label{supeq}
&=&\hspace{-3mm}
\sup_{0 < \ell \le 1 } \left[f(\ell) + \ell \log_2\epsilon +(1 - \ell)\log_2(1 - \epsilon) \right].\end{aligned}$$ We can also show that $T_{{{\cal G}}_n}$ is greater than or equal to the right-hand side of the above inequality (\[supeq\]) in a similar manner. This means that the right-hand side of the inequality is asymptotically tight.
The next example discusses the case of the random ensemble.
Let us again consider the series of the random ensembles given by $\{{{\cal R}}_{(1-R)n,n} \}_{n > 0}$. These ensembles have the asymptotic growth rate $f(\ell) = h(\ell)-(1-R)$, where the function $h(x)$ is the binary entropy function defined by $$h(x) {\stackrel{\triangle}{=}}-x \log_2 x -(1-x) \log_2 (1-x).$$ In this case, by using Theorem \[th1\], we have $$\label{trmn}
T_{{{\cal R}}_{(1-R)n,n}} \hspace{-3mm}=
\sup_{0 < \ell \le 1 } [h(\ell)-(1-R) + \ell \log_2\epsilon +(1 - \ell)\log_2(1 - \epsilon) ].$$ Let $$D_{\ell,\epsilon} {\stackrel{\triangle}{=}}\ell \log_2 \left(\frac \ell \epsilon \right) + (1 - \ell) \log_2 \left(\frac{1-\ell}{1-\epsilon} \right).$$ By using $D_{\ell,\epsilon} $, we can rewrite (\[trmn\]) as $$T_{{{\cal R}}_{(1-R)n,n}} =
\sup_{0 < \ell \le 1 } [-(1-R) - D_{\ell,\epsilon} ].$$ Since $D_{\ell,\epsilon}$ can be considered as the Kullback-Leibler divergence between two probability distributions $(\epsilon, 1- \epsilon)$ and $(\ell, 1- \ell)$, $D_{\ell,\epsilon}$ is always non-negative and $D_{\ell,\epsilon} = 0$ holds if and only if $\ell = \epsilon$. Thus, we obtain $$\sup_{0 < \ell \le 1 } [-(1-R) - D_{\ell,\epsilon} ] = - (1-R),$$ which is identical to the exponent obtained in expression (\[1mR\]).
Let $g_\epsilon^{(rnd)}(\ell) {\stackrel{\triangle}{=}}h(\ell)-(1-R) + \ell \log_2\epsilon +(1 - \ell)\log_2(1 - \epsilon)$. Figure \[fig-random\] displays the behavior of $g^{(rnd)}_{\epsilon}(\ell)$ when $R = 0.5$. This figure confirms the result that the maximum ($\sup_{0 < \ell \le 1} g_\epsilon^{(rnd)}(\ell)= -0.5$) is attained at $\ell = \epsilon$.
![The curves of $g_\epsilon(\ell)$ for random ensembles with $R=0.5$.[]{data-label="fig-random"}](fig-random.eps "fig:")\
[The curves of $g^{(rnd)}_\epsilon(\ell)$ corresponding to the parameters $\epsilon = 0.1, 0.2, 0.4$ are presented from left to right. As a reference, the line $-(1-R)=-0.5$ is also included in the figure.]{}
Error exponent of the Bernoulli ensemble with constant $k$
----------------------------------------------------------
The asymptotic growth rate of the Bernoulli ensemble ${{\cal B}}_{m,n,k}$ with a constant $k$ and design rate $R$ is given by $$f(\ell) = h(\ell)+ (1-R) \log_2 \left(\frac{1+e^{-2k \ell}}{2} \right).$$ This formula is presented in [@LS02]. The error exponent of this ensemble shows a different behavior from that for random ensembles.
Consider the Bernoulli ensemble with parameters $R = 0.5$ and $k = 20$. Let $$\begin{aligned}
\nonumber
g^{(spm)}_\epsilon(\ell) &{\stackrel{\triangle}{=}}& h(\ell)+ (1-R) \log_2 \left(\frac{1+e^{-2k \ell}}{2} \right) \\
&+& \ell \log_2\epsilon +(1 - \ell)\log_2(1 - \epsilon).\end{aligned}$$
Figure \[fig-sparse1\] includes the curves of $g^{(spm)}_\epsilon(\ell)$ for $\epsilon = 0.1,0.2, 0.4$. In contrast to $g^{(rnd)}_\epsilon(\ell)$ of a random ensemble, we can see that $g^{(spm)}_\epsilon(\ell)$ is not a concave function. The shape of the curve of $g^{(spm)}_\epsilon(\ell)$ depends on the crossover probability $\epsilon$. For large $\epsilon$, $g^{(spm)}_\epsilon(\ell)$ takes its largest value around $\ell = \epsilon$. On the other hand, for small $\epsilon$, $g^{(spm)}_\epsilon(\ell)$ attains its supremum near $\ell = 0$.
Figure \[fig-sparse2\] presents the error exponent of Bernoulli ensembles with parameters $R = 0.3, 0.5, 0.7, 0.9$ and $k = 20$. As an example, consider the exponent for $R=0.5$. In the regime where $\epsilon$ is smaller than (around) 0.3, the error exponent is a monotonically decreasing function of $\epsilon$.
![The curves of $g^{(spm)}_\epsilon(\ell)$ for Bernoulli ensembles.[]{data-label="fig-sparse1"}](fig-sparse1.eps "fig:")\
[The curves of $g^{(spm)}_\epsilon(\ell)$ corresponding to the parameters $\epsilon = 0.1, 0.2, 0.4$ are presented. The parameters $R = 0.5, k = 20$ are assumed. As a reference, the line $-(1-R)=-0.5$ is also included in the figure.]{}
![Error exponent of Bernoulli ensemble.[]{data-label="fig-sparse2"}](fig-sparse2.eps "fig:")\
[The curves of $T_{{{\cal B}}_{m,n,k}}$ corresponding to the parameters $R = 0.3, 0.5, 0.7, 0.9$ and $k=20$ are presented.]{}
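The exponents plotted in Figs. \[fig-random\]–\[fig-sparse2\] can be reproduced by evaluating $g^{(rnd)}_\epsilon(\ell)$ and $g^{(spm)}_\epsilon(\ell)$ on a fine grid of normalized weights and taking the maximum, as in the following sketch (a numerical approximation of the supremum in Theorem \[th1\], for illustration only; the function names are not part of the paper).

```python
import numpy as np

def h2(x):
    """Binary entropy function in bits."""
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

def g_rnd(l, eps, R):
    return h2(l) - (1 - R) + l * np.log2(eps) + (1 - l) * np.log2(1 - eps)

def g_spm(l, eps, R, k):
    return (h2(l) + (1 - R) * np.log2((1 + np.exp(-2 * k * l)) / 2)
            + l * np.log2(eps) + (1 - l) * np.log2(1 - eps))

R, k = 0.5, 20.0
ls = np.linspace(1e-6, 1 - 1e-6, 200000)   # grid over the normalized weight
for eps in (0.1, 0.2, 0.4):
    T_rnd = g_rnd(ls, eps, R).max()         # should be close to -(1 - R)
    T_spm = g_spm(ls, eps, R, k).max()
    print(eps, T_rnd, T_spm)
```

For $\epsilon=0.4$ both maxima are close to $-(1-R)=-0.5$, while for $\epsilon=0.1$ the maximum of $g^{(spm)}_\epsilon(\ell)$ is attained near $\ell=0$ and lies well above $-(1-R)$, in agreement with the figures.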
These examples suggest that a sparse ensemble has weaker error detection performance than a dense ensemble (such as the random ensemble) in terms of the error exponent. However, if the crossover probability is sufficiently large, the difference between the exponents of sparse and dense ensembles is negligible. For example, the exponent of the Bernoulli ensemble in Fig. \[fig-sparse2\] is almost equal to that of the random ensemble when $\epsilon$ is larger than (around) 0.3.
The above properties of the error exponents of the Bernoulli ensembles can be explained with reference to their average weight distributions (or asymptotic growth rate). Figure \[fig-agr\] displays the asymptotic growth rates of a random ensemble and a Bernoulli ensemble.
![Asymptotic growth rate of a random ensemble and a Bernoulli ensemble.[]{data-label="fig-agr"}](fig-agr.eps "fig:")\
The weight of typical error vectors is very close to $\epsilon n$ when $n$ is sufficiently large. For a large value of $\epsilon$, such as $\epsilon = 0.4$, the average weight distribution around $w = 0.4 n$, namely $E_{{\cal G}}[A_{0.4 n}]$, dominates the undetected error probability. In such a range, the difference between the average weight distributions of the random and the Bernoulli ensembles is small. On the other hand, if the crossover probability is small, the low-weight part of the weight distribution becomes the most influential quantity. The difference between the average weight distributions at small weights results in a difference in the error exponent.
Note that the time complexity of the error detection operation (multiplication of the received vector by a parity check matrix) is $O(n^2)$ for a typical instance of a random ensemble, and $O(n)$ for a typical instance of a Bernoulli ensemble with constant $k$. A sparse matrix thus offers almost the same error detection performance as a dense matrix, with linear time complexity, if $\epsilon$ is sufficiently large.
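The complexity remark can be illustrated with a sparse syndrome computation: the cost of evaluating $H{\mbox{\boldmath$y$}}^t$ is proportional to the number of ones in $H$, which is about $km$ for a Bernoulli ensemble with constant $k$. The sketch below uses SciPy's sparse matrices; the construction only approximates a draw from ${{\cal B}}_{m,n,k}$ (it fixes the number of nonzeros rather than sampling i.i.d. entries) and is meant purely as an illustration.

```python
import numpy as np
from scipy.sparse import random as sparse_random

n = 10000
m = n // 2
k = 5                                     # average number of ones per row

# A sparse parity check matrix approximating a member of the Bernoulli ensemble.
H_sparse = sparse_random(m, n, density=k / n, format="csr",
                         data_rvs=lambda s: np.ones(s))

y = np.random.randint(0, 2, size=n)       # received word

# Syndrome computation over F_2; cost is proportional to the number of ones in H.
syndrome = H_sparse.dot(y) % 2
error_detected = np.any(syndrome)
print(error_detected)
```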
Variance of undetected error probability
========================================
In the previous section, we have seen that the average weight distribution plays an important role in the derivation of average undetected error probability. Similarly, we need to examine the [*covariance of weight distribution*]{} in order to analyze the variance of undetected error probability.
Covariance formula
------------------
The covariance between two real-valued functions $f(\cdot), g(\cdot)$ defined on an ensemble ${{\cal G}}$ is given by $${\rm Cov}_{{{\cal G}}} [f,g] {\stackrel{\triangle}{=}}E_{{{\cal G}}} [ f g ] - E_{{{\cal G}}} [ f ] E_{{{\cal G}}} [ g].$$
The next theorem, which gives the covariance of the weight distribution for the Bernoulli ensemble, forms the basis of the derivation of the variance of the undetected error probability for this ensemble.
\[covsparse\] The covariance of the weight distribution for the Bernoulli ensemble ${{\cal B}}_{m,n,k}$ is given by $$\begin{aligned}
\nonumber
&&\hspace{-15mm}{\rm Cov}_{{{\cal B}}_{m,n,k}}(A_{w_1}, A_{w_2}) \\ \nonumber
&{\stackrel{\triangle}{=}}&
\left(\frac{1+z^{w_1}}{2} \right)^m \left(\frac{1+z^{w_2}}{2} \right)^m \\ \nonumber
&\times& \hspace{-10mm}\sum_{v= \max\{0,w_1+w_2 - n\}}^{w_1}\hspace{-2mm}
{n \choose w_1} {w_1 \choose v} {n - w_1 \choose w_2 - v} \\ \label{convformula}
&\times& \left(\left(1+ \frac{z^{w_1+w_2-2v} - z^{w_1+w_2}}{(1+z^{w_1})(1+z^{w_2})}\right)^m -1\right)\end{aligned}$$ for $1 \le w_1 \le w_2 \le n$ and $$\label{commutative}
{\rm Cov}_{{{\cal B}}_{m,n,k}}(A_{w_1}, A_{w_2}) = {\rm Cov}_{{{\cal B}}_{m,n,k}}(A_{w_2}, A_{w_1})$$ for $1 \le w_2 < w_1 \le n$ where $z = 1 - 2 p$ and $p = k/n$.\
(Proof) See Appendix.
When $k = n/2$, ${{\cal B}}_{m,n,k}$ becomes the random ensemble ${{\cal R}}_{m,n}$. We discuss this case here.
We first assume that $1 \le w_1 \le w_2\le n$. Let $p = 1/2$ (i.e., $k = n/2$). In such a case, we have $z = 1 - 2p = 0$. Define $L$ by $$L {\stackrel{\triangle}{=}}\left(1+ \frac{z^{w_1+w_2-2v} - z^{w_1+w_2}}{(1+z^{w_1})(1+z^{w_2})}\right).$$ The variable $L$ takes the following values: $$L =
\left\{
\begin{array}{ll}
1, & w_1 < w_2\\
1, & w_1 = w_2, \ v < w_1\\
2, & w_1 = w_2, \ v = w_1.
\end{array}
\right.$$ Substituting $z = 0$ into equation (\[convformula\]) and using the identity (\[commutative\]), we get $$\begin{aligned}
\nonumber
&&\hspace{-15mm}{\rm Cov}_{{{\cal R}}_{m,n}}(A_{w_1},A_{w_2}) \\ \label{covrandom}
&=& \left\{
\begin{array}{ll}
0, & 1 \le w_1 \ne w_2\le n\\
2^{-2m}{n \choose w_1} (2^m - 1), & 1 \le w_1 =w_2\le n.
\end{array}
\right.\end{aligned}$$ Another proof of this formula is presented in [@acr].
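Equation (\[covrandom\]) can be verified exhaustively for toy parameters: enumerate all $2^{mn}$ matrices of the random ensemble, tabulate their weight distributions, and compare the resulting population covariances with the closed form. A self-contained sketch (illustration only, with hypothetical helper names) follows.

```python
import itertools
import numpy as np
from math import comb

def weight_dist(H):
    """A_w(H) for w = 0..n by exhaustive enumeration (small n only)."""
    m, n = H.shape
    A = np.zeros(n + 1)
    for x in itertools.product((0, 1), repeat=n):
        x = np.array(x)
        if not np.any(H.dot(x) % 2):
            A[x.sum()] += 1
    return A

m, n = 2, 3
A = np.array([weight_dist(np.array(bits).reshape(m, n))
              for bits in itertools.product((0, 1), repeat=m * n)])
C = np.cov(A, rowvar=False, bias=True)      # equiprobable matrices, population covariance
for w1 in range(1, n + 1):
    for w2 in range(1, n + 1):
        expected = 0.0 if w1 != w2 else 2.0**(-2 * m) * comb(n, w1) * (2**m - 1)
        assert np.isclose(C[w1, w2], expected)
```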
Variance of undetected error probability
----------------------------------------
The variance of the undetected error probability is a straightforward consequence of Theorem \[covsparse\].
\[spvarthreom\] The variance of the undetected error probability of the Bernoulli ensemble, $\sigma_{{{\cal B}}_{m,n,k}}^2$ is given by $$\begin{aligned}
\nonumber
\sigma^2_{{{\cal B}}_{m,n,k}} &=&
\sum_{w_1 = 1}^n \sum_{w_2 = 1}^n {\rm Cov}_{{{\cal B}}_{m,n,k}}(A_{w_1}, A_{w_2}) \\
&\times& \epsilon^{w_1 + w_2} (1-\epsilon)^{2n - w_1 - w_2} .\end{aligned}$$ (Proof) The variance of the undetected error probability $P_U$ is given by $$\begin{aligned}
\nonumber
\sigma^2_{{{\cal B}}_{m,n,k}} &=& E_{{{\cal B}}_{m,n,k}}[(P_U - \mu)^2] \\
&=& E_{{{\cal B}}_{m,n,k}}[P_U^2] - E_{{{\cal B}}_{m,n,k}}[P_U]^2, \end{aligned}$$ where $\mu {\stackrel{\triangle}{=}}E_{{{\cal B}}_{m,n,k}}[P_U]$. We first consider the second moment of the undetected error probability: $$\begin{aligned}
\nonumber
&&\hspace{-1cm} E_{{{\cal B}}_{m,n,k}}[ P_U^2] \\ \nonumber
&=& \hspace{-3mm}E_{{{\cal B}}_{m,n,k}}\left[ \left(\sum_{w = 1}^n A_w \epsilon^w (1-\epsilon)^{n - w} \right)^2 \right]
\\ \nonumber
&=& \hspace{-3mm}E_{{{\cal B}}_{m,n,k}}\left[ \sum_{w_1 = 1}^n \sum_{w_2 = 1}^n A_{w_1} A_{w_2} \epsilon^{w_1+w_2}
(1-\epsilon)^{2n - w_1-w_2} \right] \\
&=& \hspace{-4mm}\sum_{w_1 = 1}^n \sum_{w_2 = 1}^n \hspace{-1mm} E_{{{\cal B}}_{m,n,k}}\left[A_{w_1} A_{w_2}\right] \epsilon^{w_1+w_2}
(1-\epsilon)^{2n - w_1-w_2}\hspace{-1mm}.\end{aligned}$$ The squared average undetected error probability can be expressed as $$\begin{aligned}
\nonumber
E_{{{\cal B}}_{m,n,k}}[ P_U]^2&=& \hspace{-3mm}E_{{{\cal B}}_{m,n,k}}\left[ \left(\sum_{w = 1}^n A_w \epsilon^w (1-\epsilon)^{n - w} \right) \right]^2
\\ \nonumber
&=& \hspace{-4mm}\sum_{w_1 = 1}^n \sum_{w_2 = 1}^n \hspace{-1mm}
E_{{{\cal B}}_{m,n,k}}\left[A_{w_1}\right] E_{{{\cal B}}_{m,n,k}}\left[A_{w_2} \right] \\
&\times& \epsilon^{w_1+w_2} (1-\epsilon)^{2n - w_1-w_2}\hspace{-1mm}.\end{aligned}$$ Combining these equalities and the covariance of the weight distribution, the variance of undetected error probability $\sigma^2_{{{\cal B}}_{m,n,k}} $ is obtained.
The covariance of the weight distribution for a given ensemble ${{\cal B}}_{m,n,k}$ is useful not only for the evaluation of the variance of $P_U$ but also in a more general setting. Let $X$ be a random variable represented by $$X = \sum_{w =0}^n \alpha(w) A_w,$$ where $\alpha(w)$ is a real-valued function of $w$. The covariance of the weight distribution is required more generally for the evaluation of the variance of $X$, which is given by $$\sigma^2_X = \sum_{w_1=0}^n \sum_{w_2=0}^n {\rm Cov}_{{{\cal B}}_{m,n,k}}(A_{w_1}, A_{w_2}) \alpha(w_1)\alpha(w_2).$$ A specialized version (the case where $X = P_U$) of this equation has been derived in the previous corollary.
Let us consider the Bernoulli ensemble with $m = 1, n = 2$ and $k = 1/2 (p = 1/4)$. Table \[r12\] displays the weight distributions and undetected error probabilities for the 4 matrices in ${{\cal B}}_{1,2,1/2}$.
$H$ $C(H)$ $A_1(H)$ $A_2(H)$ $P_U(H)$
------- ------------------- ---------- ---------- --------------------------
(0,0) $\{00,01,10,11\}$ 2 1 $2\epsilon - \epsilon^2$
(0,1) $\{00,10 \}$ 1 0 $\epsilon - \epsilon^2$
(1,0) $\{00,01 \}$ 1 0 $\epsilon - \epsilon^2$
(1,1) $\{00,11 \}$ 0 1 $\epsilon^2$
: Weight distributions and undetected error probabilities
\[r12\]
From the definition of a Bernoulli ensemble, the following probability is assigned to each matrix: $
P((0,0)) = 9/16, P((0,1)) = 3/16, P((1,0)) = 3/16, P((1,1)) = 1/16.
$ Combining the undetected error probabilities presented in Table \[r12\] and the above probability assignment, we immediately have the first and second moments: $$\begin{aligned}
E_{{{\cal B}}_{1,2,1/2}}[P_U] &=& \frac 3 2 \epsilon - \frac 7 8 \epsilon^2 \\
E_{{{\cal B}}_{1,2,1/2}}[P_U^2] &=& \frac {21}{8} \epsilon^2 - 3 \epsilon^3 + \epsilon^4.\end{aligned}$$ From these moments, the variance can be derived: $$\begin{aligned}
\nonumber
\sigma^2_{{{\cal B}}_{1,2,1/2}}
&=&E_{{{\cal B}}_{1,2,1/2}}[P_U^2] -E_{{{\cal B}}_{1,2,1/2}}[P_U] ^2 \\ \label{var12}
&=& \frac 3 8 \epsilon^2 - \frac 3 8 \epsilon^3 +\frac{15}{64} \epsilon^4.\end{aligned}$$
We can also consider another route to derive the variance by using Corollary \[spvarthreom\]. The covariances of ${{\cal B}}_{1,2,1/2}$ are given by $$\begin{aligned}
{\rm Cov}_{{{\cal B}}_{1,2,1/2}}(1,1) &=& 3/8 \\
{\rm Cov}_{{{\cal B}}_{1,2,1/2}}(1,2) &=& {\rm Cov}_{{{\cal B}}_{1,2,1/2}}(2,1) = 3/16 \\
{\rm Cov}_{{{\cal B}}_{1,2,1/2}}(2,2) &=& 15/64.\end{aligned}$$ From Corollary \[spvarthreom\], we obtain the variance $$\begin{aligned}
\nonumber
\sigma^2_{{{\cal B}}_{1,2,1/2}}
&=& \sum_{w_1=1}^2 \sum_{w_2=1}^2 {\rm Cov}_{{{\cal B}}_{m,n,k}}(A_{w_1}, A_{w_2}) \\ \nonumber
&\times& \epsilon^{w_1+w_2} (1- \epsilon)^{4 - w_1-w_2} \\ \nonumber
&=& (3/8)\epsilon^2(1-\epsilon)^2 + (3/16)\epsilon^3(1-\epsilon) \\ \nonumber
&+& (3/16)\epsilon^3(1-\epsilon) + (15/64)\epsilon^4 \\ \nonumber
&=&\frac 3 8 \epsilon^2 - \frac 3 8 \epsilon^3 +\frac{15}{64} \epsilon^4,\end{aligned}$$ which is identical to expression (\[var12\]).
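The example can also be checked mechanically. The following Python sketch (our own, for illustration) enumerates the four matrices of ${{\cal B}}_{1,2,1/2}$, weights $P_U(H)$ by the matrix probabilities given above, and compares the resulting variance with expression (\[var12\]); exact rational arithmetic is used.

```python
from itertools import product
from fractions import Fraction

def check_variance(eps):
    """Brute-force check of the B_{1,2,1/2} example: enumerate the four
    1x2 parity check matrices, weight their P_U by the Bernoulli (p = 1/4)
    matrix probabilities, and compare the variance with Eq. (var12)."""
    p = Fraction(1, 4)                      # p = k/n = (1/2)/2
    e_pu = e_pu2 = Fraction(0)
    for h in product([0, 1], repeat=2):
        # probability of drawing this matrix under the Bernoulli ensemble
        prob = (p if h[0] else 1 - p) * (p if h[1] else 1 - p)
        # undetected error probability of H: sum over nonzero codewords
        pu = Fraction(0)
        for x in product([0, 1], repeat=2):
            w = sum(x)
            if w > 0 and (h[0] * x[0] + h[1] * x[1]) % 2 == 0:
                pu += eps ** w * (1 - eps) ** (2 - w)
        e_pu += prob * pu
        e_pu2 += prob * pu ** 2
    var = e_pu2 - e_pu ** 2
    closed = (Fraction(3, 8) * eps**2 - Fraction(3, 8) * eps**3
              + Fraction(15, 64) * eps**4)
    return var, closed

print(check_variance(Fraction(1, 10)))      # both values should coincide
```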
In the case of $k = n/2$ (i.e. the case of a random ensemble), we can derive a closed form expression for the variance.
\[rvartheorem\] For the random ensemble ${{\cal R}}_{m,n}$, the variance of the undetected error probability $P_U$ is given by $$\sigma^2_{{{\cal R}}_{m,n}} = (1-2^{-m})2^{-m}\left((\epsilon^2 + (1-\epsilon)^2)^n- (1-\epsilon)^{2n} \right).$$ (Proof) The variance of undetected error probability $\sigma^2_{{{\cal R}}_{m,n}} $ can be obtained in the following way: $$\begin{aligned}
\nonumber
&& \hspace{-1cm} \sigma^2_{{{\cal R}}_{m,n}} \\ \nonumber
&=& \hspace{-3mm}
E_{{{\cal R}}_{m,n}}[ P_U^2]-E_{{{\cal R}}_{m,n}}[ P_U]^2 \\ \nonumber
&=& \hspace{-3mm}\sum_{w_1 = 1}^n \sum_{w_2 = 1}^n {\rm Cov}_{{{\cal R}}_{m,n}}\left[A_{w_1}, A_{w_2}\right] \epsilon^{w_1+w_2}
(1-\epsilon)^{2n - w_1-w_2} \\ \nonumber
&=& \hspace{-3mm}\sum_{w=1}^n(1-2^{-m})2^{-m}{n \choose w} \epsilon^{2w} (1-\epsilon)^{2n - 2w} .\end{aligned}$$ The second equality is due to Corollary \[spvarthreom\]. The last equality is due to Eq. (\[covrandom\]). We can further simplify the expression using the binomial theorem: $$\begin{aligned}
\nonumber
\sigma^2_{{{\cal R}}_{m,n}}
&=& (1-2^{-m})2^{-m} \sum_{w=0}^n{n \choose w}(\epsilon^2)^{w} ((1-\epsilon)^2)^{n - w}\\ \nonumber
&-& (1-2^{-m})2^{-m}(1-\epsilon)^{2n}\\ \nonumber
&=& (1-2^{-m})2^{-m} \\
&\times& \left((\epsilon^2 + (1-\epsilon)^2)^n- (1-\epsilon)^{2n} \right).\end{aligned}$$ The last equality is the claim of the theorem.
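The closed form can be checked against a brute-force enumeration for small parameters. A sketch (our own, for illustration only) that enumerates all $2^{mn}$ matrices of a small random ensemble and compares the empirical variance of $P_U$ with the theorem:

```python
from itertools import product

def random_ensemble_variance_check(m=2, n=3, eps=0.1):
    """Compare the closed-form variance of P_U for the random ensemble
    R_{m,n} (all m x n binary matrices equiprobable) with a brute-force
    enumeration over the 2^(mn) matrices."""
    rows = list(product([0, 1], repeat=n))
    matrices = list(product(rows, repeat=m))
    e_pu = e_pu2 = 0.0
    for H in matrices:
        pu = 0.0
        for x in product([0, 1], repeat=n):
            w = sum(x)
            if w == 0:
                continue
            # x is an undetectable error pattern iff H x^t = 0 (mod 2)
            if all(sum(hi * xi for hi, xi in zip(row, x)) % 2 == 0 for row in H):
                pu += eps ** w * (1 - eps) ** (n - w)
        e_pu += pu / len(matrices)
        e_pu2 += pu ** 2 / len(matrices)
    brute = e_pu2 - e_pu ** 2
    closed = (1 - 2 ** -m) * 2 ** -m * (
        (eps ** 2 + (1 - eps) ** 2) ** n - (1 - eps) ** (2 * n))
    return brute, closed

print(random_ensemble_variance_check())   # the two values should agree
```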
The next example facilitates an understanding of how the average and the variance of $P_U$ behave.
We consider the random ensemble with $m = 20, n = 40$, and the Bernoulli ensemble with $m = 20, n=40, k=5$ (labeled “Sparse” in Fig. \[fig-curve1\]). Figure \[fig-curve1\] depicts the average undetected error probabilities of the two ensembles. It can be observed that the average undetected error probability of the random ensemble monotonically decreases as $\epsilon$ decreases. In contrast, the curve for the Bernoulli ensemble has a peak around $\epsilon \simeq 0.025$.
![Average undetected error probabilities.[]{data-label="fig-curve1"}](fig-curve1.eps "fig:")\
[Random ensemble: $m=20, n=40$, Sparse matrix ensemble: $m=20,n=40, k=5$.]{}
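The curves in Fig. \[fig-curve1\] can be reproduced from the average weight distribution $E_{{{\cal B}}_{m,n,k}}[A_w] = {n \choose w}\left((1+z^w)/2\right)^m$ quoted later in the appendix (with $z = 1-2k/n$); the random ensemble corresponds to $k = n/2$, i.e., $z = 0$. A minimal sketch (variable names are ours):

```python
import numpy as np
from math import comb

def average_pu(n, m, k, eps):
    """Average undetected error probability E[P_U] for the Bernoulli
    ensemble B_{m,n,k}, using E[A_w] = C(n,w)((1+z^w)/2)^m, z = 1 - 2k/n.
    Setting k = n/2 reduces to the random ensemble."""
    z = 1.0 - 2.0 * k / n
    return sum(comb(n, w) * ((1 + z ** w) / 2) ** m
               * eps ** w * (1 - eps) ** (n - w)
               for w in range(1, n + 1))

eps = np.linspace(1e-4, 0.5, 200)
random_curve = [average_pu(40, 20, 20, e) for e in eps]   # k = n/2
sparse_curve = [average_pu(40, 20, 5, e) for e in eps]    # k = 5
```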
Figure \[fig-curve2\] shows the variance of $P_U$ for the above two ensembles. The two curves have a similar shape, but the variance of the sparse ensemble is always larger than that of the random ensemble.
![Variance of undetected error probability.[]{data-label="fig-curve2"}](fig-curve2.eps "fig:")\
[Random ensemble: $m=20, n=40$, Sparse matrix ensemble: $m=20,n=40, k=5$.]{}
Asymptotic behavior
-------------------
We here discuss the asymptotic behavior of the covariance of the weight distribution and the variance of $P_U$ for the Bernoulli ensemble. The following corollary explains the asymptotic behavior of the covariance of the weight distribution.
\[asymptcov\] Let the asymptotic growth rate of the covariance of the weight distribution of the Bernoulli ensemble be $T(\ell_1, \ell_2)$ defined by $$T(\ell_1, \ell_2) {\stackrel{\triangle}{=}}\lim_{n \rightarrow \infty} \frac 1 n \log_2{\rm Cov}_{{{\cal B}}_{(1-R)n ,n,k}}(A_{\ell_1 n}, A_{\ell_2 n})$$ for $0 < \ell_1, \ell_2 \le 1$ and $0 < R \le 1$. The asymptotic growth rate is given by $$T(\ell_1, \ell_2) = \sup_{\max\{0, \ell_1+\ell_2 - 1\} \le \nu \le \ell_1}Q(\nu)$$ for $0 < \ell_1 \le \ell_2 \le 1$ and $$T(\ell_1, \ell_2) = T(\ell_2, \ell_1)$$ for $0 < \ell_2 < \ell_1 \le 1$ where $Q(\nu)$ is defined by $$\begin{aligned}
\nonumber
Q(\nu) &{\stackrel{\triangle}{=}}&-2(1-R) + h(\ell_1) \\
&+& \hspace{-2mm} h\left(\frac{\nu}{\ell_1} \right)+ h\left(\frac{\ell_2 - \nu}{1-\ell_1} \right) + \hspace{-2mm}
\sup_{0 < \mu \le 1-R} \alpha(\mu,\nu).\end{aligned}$$ The function $\alpha(\mu,\nu)$ is defined by $$\begin{aligned}
\nonumber
&&\hspace{-10mm}\alpha(\mu,\nu) \\ \nonumber
&{\stackrel{\triangle}{=}}& h\left(\frac{\mu}{1-R} \right)
+ \mu \log_2\left(e^{-2k(\ell_1+\ell_2-2\nu)}-e^{-2k(\ell_1+\ell_2)}\right) \\
&+& (1-R-\mu)\log_2\left((1+e^{-2k\ell_1})(1+e^{-2k\ell_2}) \right).\end{aligned}$$ (Proof) We here rewrite the covariance formula (\[convformula\]) into asymptotic form. By using the Binomial theorem, we have $$\begin{aligned}
\nonumber
&&\hspace{-15mm}\left(1+ \frac{z^{w_1+w_2-2v} - z^{w_1+w_2}}{(1+z^{w_1})(1+z^{w_2})}\right)^m - 1\\
&=&
\sum_{i=1}^m {m \choose i}\left(\frac{z^{w_1+w_2-2v} - z^{w_1+w_2}}{(1+z^{w_1})(1+z^{w_2})} \right)^i.\end{aligned}$$ By using this identity, the covariance in (\[convformula\]) can be rewritten in the following form: $$\begin{aligned}
\nonumber
&&\hspace{-6mm}{\rm Cov}_{{{\cal B}}_{m,n,k}}(A_{w_1}, A_{w_2}) \\ \nonumber
&=& 2^{-2m} \sum_{v= \max\{0,w_1+w_2 - n\}}^{w_1}\hspace{-2mm}
{n \choose w_1} {w_1 \choose v} {n - w_1 \choose w_2 - v} \Theta,\end{aligned}$$ where $\Theta$ is defined by $$\begin{aligned}
\nonumber
\Theta &{\stackrel{\triangle}{=}}&
\sum_{i=1}^m {m \choose i} \left(z^{w_1+w_2-2v}-z^{w_1+w_2} \right)^i \\
&\times& \left((1+z^{w_1})(1+z^{w_2}) \right)^{m-i}.\end{aligned}$$
Letting $w_1 = \ell_1 n, w_2 = \ell_2 n, v = \nu n, m = (1-R)n$, we have $$\lim_{n \rightarrow \infty}\frac{1}{n} \log_2 2^{-2m} = -2(1-R)$$ and $$\begin{aligned}
\nonumber
&& \hspace{-16mm }
\lim_{n \rightarrow \infty}\frac{1}{n} \log_2 {n \choose w_1} {w_1 \choose v} {n - w_1 \choose w_2 - v} \\
&=& h(\ell_1) + h\left(\frac{\nu}{\ell_1} \right) + h\left(\frac{\ell_2 - \nu}{1-\ell_1} \right).\end{aligned}$$ If $k$ is a constant and $0 \le \ell \le 1$, then, making use of the identity [@LS02] $$\begin{aligned}
\nonumber
\lim_{n \rightarrow \infty}\left(1 - 2 \left(\frac{k}{n} \right) \right)^{\ell n}
&=& \lim_{n \rightarrow \infty}z^{\ell n} \\
&=& e^{-2 k \ell}\end{aligned}$$ we get $$\begin{aligned}
\lim_{n \rightarrow \infty}\frac{1}{n} \log_2 \Theta = \sup_{0 < \mu \le 1-R} \alpha(\mu,\nu).\end{aligned}$$ Combining these asymptotic expressions, the claim of the corollary is derived.
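The limit identity used in the proof, $\lim_{n\rightarrow\infty}(1-2k/n)^{\ell n} = e^{-2k\ell}$, can be illustrated numerically with a few lines (a sketch; the chosen values of $k$ and $\ell$ are arbitrary):

```python
import math

def limit_check(k=5.0, ell=0.3):
    """Numerical illustration of lim_{n->inf} (1 - 2k/n)^(ell*n) = e^(-2*k*ell),
    the identity used above when taking n -> infinity with k fixed."""
    target = math.exp(-2.0 * k * ell)
    for n in (10**2, 10**3, 10**4, 10**5):
        z = 1.0 - 2.0 * k / n
        print(n, z ** (ell * n), target)   # the two columns converge

limit_check()
```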
The following corollary gives the asymptotic growth rate of the variance of the undetected error probability.
The asymptotic growth rate of the variance of the undetected error probability is given by $$\lim_{n \rightarrow \infty} \frac 1 n \log_2 \sigma^2_{{{\cal B}}_{(1-R)n,n,k}}
=\sup_{0 < \ell_1 \le 1}\sup_{0 < \ell_2 \le 1} S(\ell_1,\ell_2),$$ where $S(\ell_1,\ell_2)$ is given by $$\begin{aligned}
\nonumber
S(\ell_1,\ell_2)
&{\stackrel{\triangle}{=}}& (\ell_1 + \ell_2) \log_2 \epsilon +(2-\ell_1-\ell_2) \log_2(1-\epsilon) \\
&+& T(\ell_1, \ell_2).\end{aligned}$$ (Proof) It is evident that $$\begin{aligned}
\nonumber
&& \lim_{n \rightarrow \infty}\frac{1}{n} \log_2
\left(\epsilon^{\ell_1 n + \ell_2 n} (1-\epsilon)^{2n - \ell_1 n - \ell_2 n} \right) \\
&=& (\ell_1 + \ell_2) \log_2 \epsilon +(2-\ell_1-\ell_2) \log_2(1-\epsilon)\end{aligned}$$ holds. Combining this identity and Corollaries \[spvarthreom\] and \[asymptcov\], we immediately have the claim of the corollary.
Appendix
========
### Preparation of the proof
The second moment of the weight distribution for a given ensemble ${{\cal G}}$ is given by $$\begin{aligned}
\nonumber
&&\hspace{-1cm}E_{{{\cal G}}}\left[A_{w_1}A_{w_2} \right] \\ \nonumber
&=&
E_{{{\cal G}}}\left[
\sum_{{ \mbox{\boldmath$x$} } \in Z^{(n,w_1)}} \sum_{{ \mbox{\boldmath$y$} } \in Z^{(n,w_2)}}
I[H { \mbox{\boldmath$x$} }^t = 0^m] I[H { \mbox{\boldmath$y$} }^t = 0^m] \right].\end{aligned}$$ for $0 < w_1, w_2 \le n$. Since $$I[H { \mbox{\boldmath$x$} }^t = 0^m] I[H { \mbox{\boldmath$y$} }^t = 0^m] = I[H { \mbox{\boldmath$x$} }^t = 0^m,H { \mbox{\boldmath$y$} }^t = 0^m],$$ we have $$\begin{aligned}
\nonumber
&&\hspace{-1cm}E_{{{\cal G}}}\left[A_{w_1}A_{w_2} \right] \\ \nonumber
&=&\hspace{-3mm}
E_{{{\cal G}}}\left[
\sum_{{ \mbox{\boldmath$x$} } \in Z^{(n,w_1)}} \sum_{{ \mbox{\boldmath$y$} } \in Z^{(n,w_2)}}
I[H { \mbox{\boldmath$x$} }^t = 0^m,H { \mbox{\boldmath$y$} }^t = 0^m] \right] \\ \label{secmom}
&=& \hspace{-7mm}
\sum_{{ \mbox{\boldmath$x$} } \in Z^{(n,w_1)}} \sum_{{ \mbox{\boldmath$y$} } \in Z^{(n,w_2)}}
E_{{{\cal G}}}\left[
I[H { \mbox{\boldmath$x$} }^t = 0^m,H { \mbox{\boldmath$y$} }^t = 0^m] \right].\end{aligned}$$
We here encounter the problem of evaluating the probability that both $H { \mbox{\boldmath$x$} }^t = 0^m$ and $H { \mbox{\boldmath$y$} }^t = 0^m$ occur. In preparation for solving this problem, we introduce some notation:
For a given pair $({ \mbox{\boldmath$x$} }, { \mbox{\boldmath$y$} }) \in Z^{(n,w_1)} \times Z^{(n,w_2)}$, the index sets $I_1,I_2,I_3,I_4$ are defined as follows: $$\begin{aligned}
I_1 &{\stackrel{\triangle}{=}}& \{k \in [1,n]: x_k = 1, y_k = 0 \} \\
I_2 &{\stackrel{\triangle}{=}}& \{k \in [1,n]: x_k = 1, y_k = 1 \} \\
I_3 &{\stackrel{\triangle}{=}}& \{k \in [1,n]: x_k = 0, y_k = 1 \} \\
I_4 &{\stackrel{\triangle}{=}}& \{k \in [1,n]: x_k = 0, y_k = 0 \},\end{aligned}$$ where ${ \mbox{\boldmath$x$} }=(x_1,x_2,\ldots,x_n)$ and ${ \mbox{\boldmath$y$} }=(y_1,y_2,\ldots,y_n).$ These regions are illustrated in Fig.\[fig-regions\]. The size of each index set is denoted by $i _k = \# I_k (k = 1,2,3,4)$. Let ${ \mbox{\boldmath$h$} }=(h_1,h_2,\ldots,h_n)$ be a binary $n$-tuple. The partial weight of ${ \mbox{\boldmath$h$} }$ corresponding to an index set $I_k(k=1,2,3,4)$ is denoted by $w_k({ \mbox{\boldmath$h$} })$, namely $$w_k({ \mbox{\boldmath$h$} }) = \# \{j \in I_k: h_j = 1\}.$$
![The 4 regions $I_1,I_2,I_3,I_4$.[]{data-label="fig-regions"}](regions.eps "fig:")\
Since the index sets are mutually exclusive, the equation $i_1+i_2+i_3+i_4 = n$ holds and $i_2$ can take an integer value in the following range: $$\max\{w_1+w_2-n,0 \} \le i_2\le \min\{w_1,w_2 \}.$$ The size of each index set can be expressed as $i_1 = w_1 - i_2$, $i_3 = w_2 - i_2$, $i_4= n - (w_1+w_2 - i_2)$.
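For concreteness, a small Python helper (ours, with 0-based indices) that computes the index sets and the partial weights defined above:

```python
def index_sets_and_partial_weights(x, y, h):
    """Compute the index sets I_1..I_4 for a pair (x, y) and the partial
    weights w_1(h)..w_4(h) of a row h, as defined above (0-based indices)."""
    n = len(x)
    I = {1: [], 2: [], 3: [], 4: []}
    for j in range(n):
        if   x[j] == 1 and y[j] == 0: I[1].append(j)
        elif x[j] == 1 and y[j] == 1: I[2].append(j)
        elif x[j] == 0 and y[j] == 1: I[3].append(j)
        else:                         I[4].append(j)
    partial = {k: sum(h[j] for j in I[k]) for k in I}
    return I, partial

# e.g. x = (1,1,0,0), y = (0,1,1,0) gives i_1 = i_2 = i_3 = i_4 = 1
```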
Proof of Lemma \[covsparse\] (Covariance of the Bernoulli ensemble)
-------------------------------------------------------------------
Let ${ \mbox{\boldmath$x$} } \in Z^{(n,w_1)}$ and ${ \mbox{\boldmath$y$} } \in Z^{(n,w_2)}$ be binary vectors satisfying $w_1 \le w_2$. In this proof, we first prove the following equality: $$\begin{aligned}
\nonumber
&&\hspace{-10mm} E_{{{{\cal B}}}_{n,m,k}} [I[H { \mbox{\boldmath$x$} }^t = 0, H { \mbox{\boldmath$y$} }^t = 0] ] \\ \label{xyprob}
&=&\left(\frac{1+z^{w_1} + z^{w_2} + z^{w_1 + w_2 - 2 v} }{4} \right)^m\end{aligned}$$ where $v = \#({{\rm Supp}}({ \mbox{\boldmath$x$} }) \cap {{\rm Supp}}({ \mbox{\boldmath$y$} }))$, $z = 1 - 2 p$ and $p = k/n$. The support set ${{\rm Supp}}({ \mbox{\boldmath$v$} })$ is defined by $${{\rm Supp}}({ \mbox{\boldmath$v$} }) {\stackrel{\triangle}{=}}\{i \in [1,n]: v_i \ne 0 \},$$ where ${ \mbox{\boldmath$v$} } = (v_1,v_2,\ldots,v_n)$.
We need to consider the following three cases: Case (i): $0 < i_2 < w_1$ (i.e., the intersection of ${{\rm Supp}}({ \mbox{\boldmath$x$} })$ and ${{\rm Supp}}({ \mbox{\boldmath$y$} })$ is not empty but ${{\rm Supp}}({ \mbox{\boldmath$y$} })$ does not include ${{\rm Supp}}({ \mbox{\boldmath$x$} })$), Case (ii): $i_2 = 0$ (i.e., the intersection of ${{\rm Supp}}({ \mbox{\boldmath$x$} })$ and ${{\rm Supp}}({ \mbox{\boldmath$y$} })$ is empty), Case (iii): $i_2 = w_1$ (i.e., ${{\rm Supp}}({ \mbox{\boldmath$y$} })$ includes ${{\rm Supp}}({ \mbox{\boldmath$x$} })$).
We first study Case (i). Suppose that a binary $n$-tuple ${ \mbox{\boldmath$h$} }$ is generated from a Bernoulli source with $Pr[h_i = 1] = p (i \in [1,n])$. Recall that $p$ is defined by $p = k/n$. In this case, ${ \mbox{\boldmath$h$} } { \mbox{\boldmath$x$} }^t = 0, { \mbox{\boldmath$h$} } { \mbox{\boldmath$y$} }^t = 0$ holds if and only if $w_{i}({ \mbox{\boldmath$h$} })\ \mbox{is even}$ for $i=1,2,3$ or $w_{i}({ \mbox{\boldmath$h$} })\ \mbox{is odd}$ for $i=1,2,3$.
It is well known that a binary vector $(t_1,t_2,\ldots, t_u)$ generated from a Bernoulli source has even weight with probability $(1+ (1-2 q)^u)/2$, where $q$ is the probability that $t_i (i \in [1,u])$ takes 1 [@Gal63]. The probability that $(t_1,t_2,\ldots, t_u)$ has an odd weight is given by $(1- (1-2 q)^u)/2$. For example, the probability that $w_{1}({ \mbox{\boldmath$h$} })$ becomes even is $(1+z^{w_1})/2$ where $z = 1 - 2p$.
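This even-weight probability is easy to confirm by enumeration. A small sketch (the function name is ours):

```python
from itertools import product

def even_weight_probability(u, q):
    """Exact probability that a length-u Bernoulli(q) vector has even weight,
    computed by enumeration; it should equal (1 + (1 - 2q)**u) / 2."""
    total = 0.0
    for t in product([0, 1], repeat=u):
        prob = 1.0
        for ti in t:
            prob *= q if ti else (1 - q)
        if sum(t) % 2 == 0:
            total += prob
    return total, (1 + (1 - 2 * q) ** u) / 2

print(even_weight_probability(5, 0.25))   # the two values should coincide
```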
Based on the above argument, we can write the probability $Pr[{ \mbox{\boldmath$h$} } { \mbox{\boldmath$x$} }^t = 0, { \mbox{\boldmath$h$} } { \mbox{\boldmath$y$} }^t = 0]$ as a function of $z$: $$\begin{aligned}
\nonumber
&&\hspace{-10mm} Pr[{ \mbox{\boldmath$h$} } { \mbox{\boldmath$x$} }^t = 0, { \mbox{\boldmath$h$} } { \mbox{\boldmath$y$} }^t = 0] \\ \nonumber
\hspace{-3mm}
&=&\hspace{-3mm} \frac{(1+z^{i_1})(1+z^{i_2})(1+z^{i_3})+(1 -z^{i_1})(1-z^{i_2})(1-z^{i_3}) }{8} \\
&=&\hspace{-3mm}\frac{1+z^{w_1} + z^{w_2} + z^{w_1 + w_2 - 2 v} }{4}.\end{aligned}$$ where $v {\stackrel{\triangle}{=}}i_2$.
We next consider Case (ii). For this case, $v=i_2$ is assumed to be zero. In this case, ${ \mbox{\boldmath$h$} } { \mbox{\boldmath$x$} }^t = 0, { \mbox{\boldmath$h$} } { \mbox{\boldmath$y$} }^t = 0$ holds if and only if both $w_1({ \mbox{\boldmath$h$} })$ and $w_3({ \mbox{\boldmath$h$} })$ are even. The probability that ${ \mbox{\boldmath$h$} }$ satisfies ${ \mbox{\boldmath$h$} } { \mbox{\boldmath$x$} }^t = 0$ and $ { \mbox{\boldmath$h$} } { \mbox{\boldmath$y$} }^t = 0$ under the condition $i_2 = 0$ is given by $$\begin{aligned}
\nonumber
&&\hspace{-28mm} Pr[{ \mbox{\boldmath$h$} } { \mbox{\boldmath$x$} }^t = 0, { \mbox{\boldmath$h$} } { \mbox{\boldmath$y$} }^t = 0] \\ \nonumber
&=&\left(\frac{1+z^{i_1}}{2}\right) \left(\frac{1 +z^{i_3}}{2} \right) \\ \nonumber
&=&\left(\frac{1+z^{w_1}}{2}\right) \left(\frac{1 +z^{w_2}}{2} \right) \\
&=&\frac{1+z^{w_1} + z^{w_2} + z^{w_1 + w_2 - 2 v} }{4}.\end{aligned}$$
Finally we consider Case (iii). Assume the case $v = i_2 = w_1, { \mbox{\boldmath$x$} } \ne { \mbox{\boldmath$y$} }$. In this case, ${ \mbox{\boldmath$h$} } { \mbox{\boldmath$x$} }^t = 0, { \mbox{\boldmath$h$} } { \mbox{\boldmath$y$} }^t = 0$ holds if and only if both $w_2({ \mbox{\boldmath$h$} })$ and $w_3({ \mbox{\boldmath$h$} })$ are even. The probability $Pr[{ \mbox{\boldmath$h$} } { \mbox{\boldmath$x$} }^t = 0, { \mbox{\boldmath$h$} } { \mbox{\boldmath$y$} }^t = 0]$ under the condition $v = w_1, { \mbox{\boldmath$x$} } \ne { \mbox{\boldmath$y$} }$ is thus given by $$\begin{aligned}
\nonumber
&&\hspace{-22mm}Pr[{ \mbox{\boldmath$h$} } { \mbox{\boldmath$x$} }^t = 0, { \mbox{\boldmath$h$} } { \mbox{\boldmath$y$} }^t = 0] \\ \nonumber
&=& \left(\frac{1+z^{i_2}}{2}\right) \left(\frac{1 +z^{i_3}}{2} \right) \\ \nonumber
&=& \frac{1+z^{w_1}+z^{w_2} + z^{w_2 - w_1} }{4} \\
&=& \frac{1+z^{w_1}+z^{w_2} + z^{w_1 + w_2 - 2 v} }{4}. \end{aligned}$$ We next consider the case ${ \mbox{\boldmath$x$} } = { \mbox{\boldmath$y$} }$. For this case, we also have $$\begin{aligned}
\nonumber
&&\hspace{-22mm}Pr[{ \mbox{\boldmath$h$} } { \mbox{\boldmath$x$} }^t = 0, { \mbox{\boldmath$h$} } { \mbox{\boldmath$y$} }^t = 0] \\ \nonumber
&=& \frac{1+z^{w_1}}{2} \\
&=&\frac{1+z^{w_1} + z^{w_2} + z^{w_1 + w_2 - 2 v} }{4}.\end{aligned}$$
In summary, in all cases (Cases (i), (ii), and (iii)), $$Pr[{ \mbox{\boldmath$h$} } { \mbox{\boldmath$x$} }^t = 0, { \mbox{\boldmath$h$} } { \mbox{\boldmath$y$} }^t = 0] = \frac{1+z^{w_1} + z^{w_2} + z^{w_1 + w_2 - 2 v} }{4}$$ holds. Since the rows of parity check matrices in ${{{\cal B}}}_{n,m,k}$ are chosen independently, we obtain Eq. (\[xyprob\]) in the following way: $$\begin{aligned}
\nonumber
&& \hspace{-10mm} E_{{{{\cal B}}}_{n,m,k}} [I[H { \mbox{\boldmath$x$} }^t = 0, H { \mbox{\boldmath$y$} }^t = 0] ] \\ \nonumber
&=& Pr[H { \mbox{\boldmath$x$} }^t = 0, H { \mbox{\boldmath$y$} }^t = 0 ] \\ \nonumber
&=&Pr[{ \mbox{\boldmath$h$} } { \mbox{\boldmath$x$} }^t = 0, { \mbox{\boldmath$h$} } { \mbox{\boldmath$y$} }^t = 0]^m \\
&=&\left(\frac{1+z^{w_1} + z^{w_2} + z^{w_1 + w_2 - 2 v} }{4} \right)^m.\end{aligned}$$
Combining (\[secmom\]) and (\[xyprob\]), we have $$\begin{aligned}
\nonumber
&&\hspace{-1cm}E_{{{{\cal B}}}_{n,m,k}}\left[A_{w_1}A_{w_2} \right] \\ \nonumber
&=& \hspace{-7mm}
\sum_{{ \mbox{\boldmath$x$} } \in Z^{(n,w_1)}} \sum_{{ \mbox{\boldmath$y$} } \in Z^{(n,w_2)}}
E_{{{{\cal B}}}_{n,m,k}}\left[
I[H { \mbox{\boldmath$x$} }^t = 0^m,H { \mbox{\boldmath$y$} }^t = 0^m] \right] \\ \nonumber
&=& \hspace{-7mm}
\sum_{{ \mbox{\boldmath$x$} } \in Z^{(n,w_1)}} \sum_{{ \mbox{\boldmath$y$} } \in Z^{(n,w_2)}}
\left(\frac{1+z^{w_1} + z^{w_2} + z^{w_1 + w_2 - 2 v} }{4} \right)^m \\ \nonumber
&=& \hspace{-2mm}\sum_{v= \max\{0,w_1+w_2 - n\}}^{w_1} {n \choose w_1} {w_1 \choose v} {n - w_1 \choose w_2 - v} \\ \label{2nd}
&\times& \left(\frac{1+z^{w_1} + z^{w_2} + z^{w_1 + w_2 - 2 v} }{4} \right)^m.\end{aligned}$$ Since $$E_{{{{\cal B}}}_{n,m,k}}\left[A_{w} \right] = {n \choose w} \left(\frac{1+z^w}{2} \right)^m$$ holds [@LS02], we thus have $$\begin{aligned}
\nonumber
&&\hspace{-12mm} E_{{{{\cal B}}}_{n,m,k}}\left[A_{w_1} \right] E_{{{{\cal B}}}_{n,m,k}}\left[A_{w_2} \right] \\ \nonumber
&=& {n \choose w_1}{n \choose w_2} \left(\frac{1+z^{w_1}}{2} \right)^m \left(\frac{1+z^{w_2}}{2} \right)^m \\ \nonumber
&= &\sum_{v= \max\{0,w_1+w_2 - n\}}^{w_1} {n \choose w_1} {w_1 \choose v} {n - w_1 \choose w_2 - v} \\ \label{exp2}
&\times& \left(\frac{1+z^{w_1} + z^{w_2} + z^{w_1 + w_2 } }{4} \right)^m. \end{aligned}$$ The last equality is due to the following combinatorial identity: $$\sum_{v= \max\{0,w_1+w_2 - n\}}^{w_1} {n \choose w_1} {w_1 \choose v} {n - w_1 \choose w_2 - v}
= {n \choose w_1} {n \choose w_2}.$$ We are ready to derive the covariance of weight distributions for the case $w_1 \le w_2$. Substituting (\[2nd\]) and (\[exp2\]) into $$\begin{aligned}
\nonumber
&&\hspace{-10mm}{\rm Cov}_{{{\cal B}}_{m,n,k}}(A_{w_1}, A_{w_2}) \\ \nonumber
&=&E_{{{{\cal B}}}_{n,m,k}}\left[A_{w_1}A_{w_2} \right] -
E_{{{{\cal B}}}_{n,m,k}}\left[A_{w_1} \right] E_{{{{\cal B}}}_{n,m,k}}\left[A_{w_2} \right],\end{aligned}$$ we obtain (\[convformula\]) in the claim of the lemma. Since covariance is symmetric, ${\rm Cov}_{{{\cal B}}_{m,n,k}}(A_{w_1}, A_{w_2}) = {\rm Cov}_{{{\cal B}}_{m,n,k}}(A_{w_2}, A_{w_1})$, so the case $w_1 > w_2$ follows immediately.
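As a sanity check, the covariance formula can be evaluated numerically and compared with the values $3/8$, $3/16$, and $15/64$ obtained for ${{\cal B}}_{1,2,1/2}$ in the earlier example. A sketch using exact rational arithmetic (the function name is ours):

```python
from fractions import Fraction
from math import comb

def cov_bernoulli(m, n, k, w1, w2):
    """Covariance Cov(A_w1, A_w2) for the Bernoulli ensemble B_{m,n,k},
    evaluated as the difference of (2nd) and (exp2); arguments are swapped
    when w1 > w2, using the symmetry of the covariance."""
    if w1 > w2:
        w1, w2 = w2, w1
    z = 1 - 2 * Fraction(k) / n          # z = 1 - 2p with p = k/n
    cov = Fraction(0)
    for v in range(max(0, w1 + w2 - n), w1 + 1):
        coeff = comb(n, w1) * comb(w1, v) * comb(n - w1, w2 - v)
        joint = ((1 + z**w1 + z**w2 + z**(w1 + w2 - 2 * v)) / 4) ** m
        indep = ((1 + z**w1 + z**w2 + z**(w1 + w2)) / 4) ** m
        cov += coeff * (joint - indep)
    return cov

# B_{1,2,1/2}: should reproduce 3/8, 3/16 and 15/64 from the example above
print(cov_bernoulli(1, 2, Fraction(1, 2), 1, 1),
      cov_bernoulli(1, 2, Fraction(1, 2), 1, 2),
      cov_bernoulli(1, 2, Fraction(1, 2), 2, 2))
```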
Acknowledgment {#acknowledgment .unnumbered}
==============
This work was partly supported by the Ministry of Education, Science, Sports and Culture, Japan, Grant-in-Aid for Scientific Research on Priority Areas (Deepening and Expansion of Statistical Informatics) 180790091.
[99]{}
R. G. Gallager, [*“Low Density Parity Check Codes”*]{}. Cambridge, MA: MIT Press, 1963.
T. Klove, [*“Codes for Error Detection”*]{}, World Scientific, 2007.
T. Klove and V. Korzhik, [*“Error Detecting Codes: General Theory and Their Application in Feedback Communication Systems”*]{}, Kluwer Academic, 1995.
S. Litsyn and V. Shevelev, “On ensembles of low-density parity-check codes: asymptotic distance distributions,” [*IEEE Trans. Inform. Theory*]{}, vol.48, pp.887–908, Apr. 2002.
S. Litsyn and V. Shevelev, “Distance distributions in ensembles of irregular low-density parity-check codes,” [*IEEE Trans. Inform. Theory*]{}, vol.49, pp.3140–3159, Nov. 2003.
D. Burshtein and G. Miller, “Asymptotic enumeration methods for analyzing LDPC codes,” [*IEEE Trans. Inform. Theory*]{}, vol.50, pp.1115–1131, June 2004.
O. Barak and D. Burshtein, “Lower bounds on the spectrum and error rate of LDPC code ensembles,” in Proceedings of the International Symposium on Information Theory, 2005.
V. Rathi, “On the asymptotic weight distribution of regular LDPC ensembles,” in Proceedings of the International Symposium on Information Theory, 2005.
T. Richardson and R. Urbanke, “Modern Coding Theory,” online: http://lthcwww.epfl.ch/
T. Wadayama, “Asymptotic concentration behaviors of linear combinations of weight distributions on random linear code ensemble,” ArXiv, arXiv:0803.1025v1 (2008).
[^1]: $\dagger$Nagoya Institute of Technology, email:wadayama@nitech.ac.jp. A part of this work was presented at ITA workshop in UCSD, Feb. 2007.
---
abstract: |
In this work, we present a method for unsupervised domain adaptation. Many adversarial learning methods train domain classifier networks to distinguish the features as either a source or target and train a feature generator network to mimic the discriminator. Two problems exist with these methods. First, the domain classifier only tries to distinguish the features as a source or target and thus does not consider task-specific decision boundaries between classes. Therefore, a trained generator can generate ambiguous features near class boundaries. Second, these methods aim to completely match the feature distributions between different domains, which is difficult because of each domain’s characteristics.
To solve these problems, we introduce a new approach that attempts to align distributions of source and target by utilizing the task-specific decision boundaries. We propose to maximize the discrepancy between two classifiers’ outputs to detect target samples that are far from the support of the source. A feature generator learns to generate target features near the support to minimize the discrepancy. Our method outperforms other methods on several datasets of image classification and semantic segmentation. The codes are available at <https://github.com/mil-tokyo/MCD_DA>
author:
- |
Kuniaki Saito[^1^]{}, Kohei Watanabe[^1^]{}, Yoshitaka Ushiku[^1^]{}, and Tatsuya Harada[^1,2^]{}\
[[^1^]{}The University of Tokyo]{}, [[^2^]{}RIKEN]{}\
[`{k-saito,watanabe,ushiku,harada}@mi.t.u-tokyo.ac.jp`]{}\
bibliography:
- 'egbib.bib'
title: Maximum Classifier Discrepancy for Unsupervised Domain Adaptation
---
Introduction
============
![(Best viewed in color.) Comparison of previous and the proposed distribution matching methods. [**Left**]{}: Previous methods try to match different distributions by mimicking the domain classifier. They do not consider the decision boundary. [**Right**]{}: Our proposed method attempts to detect target samples outside the support of the source distribution using task-specific classifiers.[]{data-label="fig:intro"}](fig1.pdf){width="\hsize"}
The classification accuracy of images has improved substantially with the advent of deep convolutional neural networks (CNN) which utilize numerous labeled samples [@krizhevsky2012imagenet]. However, collecting numerous labeled samples in various domains is expensive and time-consuming.
Domain adaptation (DA) tackles this problem by transferring knowledge from a label-rich domain (i.e., source domain) to a label-scarce domain (i.e., target domain). DA aims to train a classifier using source samples that generalize well to the target domain. However, each domain’s samples have different characteristics, which makes the problem difficult to solve. Consider neural networks trained on labeled source images collected from the Web. Although such neural networks perform well on the source images, correctly recognizing target images collected from a real camera is difficult for them. This is because the target images can have different characteristics from the source images, such as change of light, noise, and angle in which the image is captured. Furthermore, regarding unsupervised DA (UDA), we have access to labeled source samples and only unlabeled target samples. We must construct a model that works well on target samples despite the absence of their labels during training. UDA is the most challenging situation, and we propose a method for UDA in this study.
Many UDA algorithms, particularly those for training neural networks, attempt to match the distribution of the source features with that of the target without considering the category of the samples [@ganin2016domain; @sun2016deep; @bousmalis2016domain; @tzeng2014deep]. In particular, domain classifier-based adaptation algorithms have been applied to many tasks [@ganin2016domain; @bousmalis2016domain]. These methods utilize two players to align distributions in an adversarial manner: a domain classifier (i.e., a discriminator) and a feature generator. Source and target samples are input to the same feature generator. Features from the feature generator are shared by the discriminator and a task-specific classifier. The discriminator is trained to predict the domain labels of the features generated by the generator, whereas the generator is trained to fool it. The generator aims to match the distributions of the source and target because matched distributions fool the discriminator. These methods assume that such target features are classified correctly by the task-specific classifier because they are aligned with the source samples.
However, this method should fail to extract discriminative features because it does not consider the relationship between target samples and the task-specific decision boundary when aligning distributions. As shown in the left side of Fig. \[fig:intro\], the generator can generate ambiguous features near the boundary because it simply tries to make the two distributions similar.
To overcome both problems, we propose to align distributions of features from source and target domain by using the classifier’s output for the target samples.
We introduce a new adversarial learning method that utilizes two types of players: task-specific classifiers and a feature generator. [*Task-specific classifiers*]{} denote the classifiers trained for each task, such as object classification or semantic segmentation. The two classifiers take features from the generator, try to classify source samples correctly and, simultaneously, are trained to detect target samples that are far from the support of the source. Samples existing far from the support do not have discriminative features because they are not clearly categorized into any class. Thus, our method utilizes the task-specific classifiers as a discriminator. The generator tries to fool the classifiers; in other words, it is trained to generate target features near the support while considering the classifiers’ output for target samples. Our method therefore allows the generator to generate discriminative features for target samples because it considers the relationship between the decision boundary and target samples. This training is achieved in an adversarial manner. In addition, please note that we do not use domain labels in our method.
We evaluate our method on image recognition and semantic segmentation. In many settings, our method outperforms other methods by a large margin. The contributions of our paper are summarized as follows:
- We propose a novel adversarial training method for domain adaptation that tries to align the distribution of a target domain by considering task-specific decision boundaries.
- We confirm the behavior of our method through a toy problem.
- We extensively evaluate our method on various tasks: digit classification, object classification, and semantic segmentation.
Related Work
============
Training a CNN for DA can be realized through various strategies. Ghifary et al. proposed using an autoencoder for the target domain to obtain domain-invariant features [@ghifary2016deep]. Sener et al. proposed using clustering techniques and pseudo-labels to obtain discriminative features [@sener2016learning]. Taigman et al. proposed cross-domain image translation methods [@taigman2016unsupervised]. Matching distributions of the middle features in CNN is considered to be effective in realizing an accurate adaptation. To this end, numerous methods have been proposed [@ganin2016domain; @sun2016deep; @bousmalis2016domain; @purushotham2017variational; @tzeng2014deep; @sun2015return].
![image](overview_new.pdf){width="0.85\hsize"}
The representative method of distribution matching involves training a domain classifier using the middle features and generating the features that deceive the domain classifier [@ganin2016domain]. This method utilizes the techniques used in generative adversarial networks [@GAN]. The domain classifier is trained to predict the domain of each input, and the category classifier is trained to predict the task-specific category labels. Feature extraction layers are shared by the two classifiers. The layers are trained to correctly predict the label of source samples as well as to deceive the domain classifier. Thus, the distributions of the middle features of the target and source samples are made similar. Some methods utilize maximum mean discrepancy (MMD) [@long2016unsupervised; @long2015learning], which can be applied to measure the divergence in high-dimensional space between different domains. This approach can train the CNN to simultaneously minimize both the divergence and category loss for the source domain. These methods are based on the theory proposed by [@ben2007analysis], which states that the error on the target domain is bounded by the divergence of the distributions. To our understanding, these distribution aligning methods using GAN or MMD do not consider the relationship between target samples and decision boundaries. To tackle these problems, we propose a novel approach using task-specific classifiers as a discriminator.
Consensus regularization is a technique used in multi-source domain adaptation and multi-view learning, in which multiple classifiers are trained to maximize the consensus of their outputs [@luo2008transfer]. In our method, we address a training step that minimizes the consensus of two classifiers, which is totally different from consensus regularization. Consensus regularization utilizes samples of multi-source domains to construct different classifiers as in [@luo2008transfer]. In order to construct different classifiers, it relies on the different characteristics of samples in different source domains. By contrast, our method can construct different classifiers from only one source domain.
Method
======
In this section, we present the detail of our proposed method. First, we give the overall idea of our method in Section \[mtd:overall\]. Second, we explain about the loss function we used in experiments in Section \[mtd:loss\]. Finally, we explain the entire training procedure of our method in Section \[mtd:steps\].
Overall Idea {#mtd:overall}
------------
We have access to a labeled source image $\mathbf{x_{s}}$ and a corresponding label $y_{s}$ drawn from a set of labeled source images {$X_{s},Y_{s}$}, as well as an unlabeled target image $\mathbf{x_{t}}$ drawn from unlabeled target images $X_{t}$. We train a feature generator network $G$, which takes inputs $\mathbf{x_{s}}$ or $\mathbf{x_{t}}$, and classifier networks $F_1$ and $F_2$, which take features from $G$. $F_1$ and $F_2$ classify them into $K$ classes, that is, they output a $K$-dimensional vector of logits. We obtain class probabilities by applying the softmax function for the vector. We use the notation $p_1(\mathbf{y}|\mathbf{x})$, $p_2(\mathbf{y}|\mathbf{x})$ to denote the $K$-dimensional probabilistic outputs for input $\mathbf{x}$ obtained by $F_1$ and $F_2$ respectively.
The goal of our method is to align source and target features by utilizing the task-specific classifiers as a discriminator in order to consider the relationship between class boundaries and target samples. For this objective, we have to detect target samples far from the support of the source. The question is how to detect target samples far from the support. These target samples are likely to be misclassified by the classifier learned from source samples because they are near the class boundaries. Then, in order to detect these target samples, we propose to utilize the disagreement of the two classifiers on the prediction for target samples. Consider two classifiers ($F_1$ and $F_2$) that have different characteristics in the leftmost side of Fig. \[fig:propose\]. We assume that the two classifiers can classify source samples correctly. This assumption is realistic because we have access to labeled source samples in the setting of UDA. In addition, please note that $F_1$ and $F_2$ are initialized differently to obtain different classifiers from the beginning of training. Here, we have the key intuition that target samples outside the support of the source are likely to be classified differently by the two distinct classifiers. This region is denoted by black lines in the leftmost side of Fig. \[fig:propose\] ([*Discrepancy Region*]{}). Conversely, if we can measure the disagreement between the two classifiers and train the generator to minimize the disagreement, the generator will avoid generating target features outside the support of the source. Here, we consider measuring the difference for a target sample using the following equation, $d(p_1(\mathbf{y}|\mathbf{x_t}),p_2(\mathbf{y}|\mathbf{x_t}))$ where $d$ denotes the function measuring divergence between two probabilistic outputs. This term indicates how the two classifiers disagree on their predictions and, hereafter, we call the term as [*discrepancy*]{}. Our goal is to obtain a feature generator that can minimize the discrepancy on target samples.
In order to effectively detect target samples outside the support of the source, we propose to train the discriminators ($F_1$ and $F_2$) to maximize the discrepancy given target features ([*Maximize Discrepancy*]{} in Fig. \[fig:propose\]). Without this operation, the two classifiers can become very similar and cannot detect target samples outside the support of the source. We then train the generator to fool the discriminator, that is, to minimize the discrepancy ([*Minimize Discrepancy*]{} in Fig. \[fig:propose\]). This operation encourages the target samples to be generated inside the support of the source. These adversarial learning steps are repeated in our method. Our goal is to obtain features in which the support of the target is included in that of the source ([*Obtained Distributions*]{} in Fig. \[fig:propose\]). We present the loss function used to measure the discrepancy in the next section and then detail the training procedure.
Discrepancy Loss {#mtd:loss}
----------------
In this study, we utilize the absolute value of the difference between the two classifiers’ probabilistic outputs as the discrepancy loss: $$d(p_1,p_2) = \frac{1}{K}\sum_{k=1}^{K}|{p_1}_{k}-{p_2}_{k}|,$$ where ${p_1}_{k}$ and ${p_2}_{k}$ denote the probability outputs of ${p_1}$ and ${p_2}$ for class $k$, respectively. The choice of the L1-distance is based on Theorem \[th:th\_1\]. Additionally, we experimentally found that the L2-distance does not work well.
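A minimal PyTorch sketch of this discrepancy loss is shown below; it assumes the classifiers output logits, and the released code may differ in details.

```python
import torch
import torch.nn.functional as F

def discrepancy(logits1, logits2):
    """L1 discrepancy between the two classifiers' probabilistic outputs,
    d(p1, p2) = mean_k |p1_k - p2_k|, averaged over the mini-batch."""
    p1 = F.softmax(logits1, dim=1)
    p2 = F.softmax(logits2, dim=1)
    return torch.mean(torch.abs(p1 - p2))
```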
Training Steps {#mtd:steps}
--------------
To sum up the previous discussion in Section \[mtd:overall\], we need to train two classifiers, which take inputs from the generator and maximize $d(p_1(\mathbf{y}|\mathbf{x_t}),p_2(\mathbf{y}|\mathbf{x_t}))$, and the generator which tries to mimic the classifiers. Both the classifiers and generator must classify source samples correctly. We will show the manner in which to achieve this. We solve this problem in three steps.
![Adversarial training steps of our method. We separate the network into two modules: generator (${\it G}$) and classifiers (${\it F_{1}, F_{2}}$). The classifiers learn to maximize the discrepancy [**Step B**]{} on the target samples, and the generator learns to minimize the discrepancy [**Step C**]{}. Please note that we employ a training [**Step A**]{} to ensure the discriminative features for source samples.[]{data-label="fig:model"}](training_steps.pdf){width="0.9\hsize"}
**Step A** First, we train both classifiers and generator to classify the source samples correctly. In order to make classifiers and generator obtain task-specific discriminative features, this step is crucial. We train the networks to minimize softmax cross entropy. The objective is as follows: $${\mathop{\rm min}\limits}_{G,F_1,F_2} \mathcal{L}(X_{s},Y_{s}).$$ $$\mathcal{L}(X_{s},Y_{s}) = -{\mathbb{E}_{(\mathbf{x_{s}},y_{s})\sim(X_{s},Y_{s})}}\sum_{k=1}^{K}{{\mbox{1}\hspace{-0.25em}\mbox{l}}_{[k=y_{s}]}}\log p({\mathbf y}|{\mathbf x_{s}})
\label{eq:crossentropy}$$ **Step B** In this step, we train the classifiers ($F_{1}, F_{2}$) as a discriminator for a fixed generator ($G$). By training the classifiers to increase the discrepancy, they can detect the target samples outside the support of the source. This step corresponds to [**Step B**]{} in Fig. \[fig:model\]. We add a classification loss on the source samples. Without this loss, we experimentally found that our algorithm’s performance drops significantly. We use the same number of source and target samples to update the model. The objective is as follows: $${\mathop{\rm min}\limits}_{F_1,F_2} \mathcal{L}(X_{s},Y_{s}) - \mathcal{L}_{\rm adv}(X_{t}). \\$$ $$\mathcal{L}_{\rm adv}(X_{t}) = {\mathbb{E}_{\mathbf{x_{t}}\sim X_{t}}}[d(p_1(\mathbf{y}|\mathbf{x_t}),p_2(\mathbf{y}|\mathbf{x_t}))]
\label{eq:sensitivity}$$ **Step C** We train the generator to minimize the discrepancy for fixed classifiers. This step corresponds to [**Step C**]{} in Fig. \[fig:model\]. The number $n$ indicates the number of times we repeat this step for the same mini-batch. This number is a hyper-parameter of our method and controls the trade-off between the generator and the classifiers. The objective is as follows: $${\mathop{\rm min}\limits}_{G} \mathcal{L}_{\rm adv}(X_{t}).\\$$ These three steps are repeated in our method. In our understanding, the order of the three steps is not important. Instead, our major concern is to train the classifiers and generator in an adversarial manner under the condition that they can classify source samples correctly.
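For illustration, a sketch of one training iteration is given below. It reuses the `discrepancy` function sketched above; the optimizer objects, mini-batch handling, and variable names are our assumptions and not the released implementation.

```python
import torch
import torch.nn.functional as F

def train_step(G, F1, F2, opt_g, opt_f, xs, ys, xt, n_repeat=4):
    """One adversarial iteration of Steps A, B and C (a sketch; optimizers
    and mini-batches xs, ys, xt are assumed to be prepared outside)."""
    # Step A: train G, F1, F2 to classify the labeled source batch correctly
    opt_g.zero_grad(); opt_f.zero_grad()
    feat = G(xs)
    loss_s = F.cross_entropy(F1(feat), ys) + F.cross_entropy(F2(feat), ys)
    loss_s.backward()
    opt_g.step(); opt_f.step()

    # Step B: train F1, F2 to maximize the discrepancy on target (G fixed)
    opt_f.zero_grad()
    feat_s = G(xs).detach()
    feat_t = G(xt).detach()
    loss_s = F.cross_entropy(F1(feat_s), ys) + F.cross_entropy(F2(feat_s), ys)
    loss_adv = discrepancy(F1(feat_t), F2(feat_t))
    (loss_s - loss_adv).backward()
    opt_f.step()

    # Step C: train G to minimize the discrepancy (F1, F2 fixed), n times
    for _ in range(n_repeat):
        opt_g.zero_grad()
        feat_t = G(xt)
        loss_adv = discrepancy(F1(feat_t), F2(feat_t))
        loss_adv.backward()
        opt_g.step()
```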
Theoretical Insight
-------------------
Since our method is motivated by the theory proposed by Ben-David et al. [@ben2010theory], we want to show the relationship between our method and the theory in this section.
Ben-David et al. [@ben2010theory] proposed the theory that bounds the expected error on the target samples, $R_{\mathcal{T}}(h)$, by using three terms: (i) expected error on the source domain, $R_{\mathcal{S}}(h)$; (ii) $\mathcal{H} \Delta \mathcal{H}$-distance ($d_{{{\mathcal{H}\Delta\mathcal{H}}}}(\mathcal{S},\mathcal{T})$), which is measured as the discrepancy between two classifiers; and (iii) the shared error of the ideal joint hypothesis, $\lambda$. $\mathcal{S}$ and $\mathcal{T}$ denote the source and target domains, respectively. Another theory [@ben2007analysis] bounds the error on the target domain, which introduced $\mathcal{H}$-distance ($d_{{\mathcal{H}}}(\mathcal{S},\mathcal{T})$) for domain divergence. The two theories and their relationships can be explained as follows.
\[th:th\_1\] Let $H$ be the hypothesis class. Given two domains $\mathcal{S}$ and $\mathcal{T}$, we have $$\begin{aligned}
\begin{split}
\forall h \in H, R_{\mathcal{T}}(h) &\leq R_{\mathcal{S}}(h) +\frac{1}{2}{d_{\mathcal{H} \Delta \mathcal{H}}(\mathcal{S},\mathcal{T})}+\lambda \\
&\leq R_{\mathcal{S}}(h) +\frac{1}{2}{d_{\mathcal{H}}(\mathcal{S},\mathcal{T})}+\lambda \\
\end{split}
\label{eq:main}
\end{aligned}$$ where $$\begin{aligned}
\begin{split}
d_{{{\mathcal{H}\Delta\mathcal{H}}}}(\mathcal{S},\mathcal{T}) &= 2\sup_{(h,h{'})\in \mathcal{H}^{2}} \left| \underset{{\bf x}\sim \mathcal{S}}{\mathbf{E}} {\rm I}\bigl[h({\bf x}) \neq h^{'}({\bf x}) \bigr]- \underset{{\bf x}\sim \mathcal{T}}{\mathbf{E}} {\rm I}\bigl[h({\bf x}) \neq h^{'}({\bf x}) \bigr]\right| \\
d_{{\mathcal{H}}}(\mathcal{S},\mathcal{T}) &= 2\sup_{h\in \mathcal{H}} \left| \underset{{\bf x}\sim \mathcal{S}}{\mathbf{E}} {\rm I} \bigl[h({\bf x}) \neq 1 \bigr] - \underset{{\bf x}\sim \mathcal{T}}{\mathbf{E}} {\rm I} \bigl[h({\bf x}) \neq 1 \bigr]\right|, \\
\lambda&=\min \left[R_{\mathcal{S}}(h)+R_{\mathcal{T}}(h)\right]\\
\end{split}
\end{aligned}$$ Here, $R_{\mathcal{T}}(h)$ is the error of hypothesis $h$ on the target domain, and $R_{\mathcal{S}}(h)$ is the corresponding error on the source domain. ${\rm I}[a]$ is the indicator function, which is 1 if predicate a is true and 0 otherwise. \[th:thm1\]
$\mathcal{H}$-distance is shown to be empirically measured by the error of the domain classifier, which is trained to discriminate the domain of features. $\lambda$ is a constant—the shared error of the ideal joint hypothesis—which is considered sufficiently low to achieve an accurate adaptation. Earlier studies [@ganin2016domain; @sun2016deep; @bousmalis2016domain; @purushotham2017variational; @tzeng2014deep] attempted to measure and minimize $\mathcal{H}$-distance in order to realize the adaptation. As this inequality suggests, $\mathcal{H}$-distance upper-bounds the $\mathcal{H} \Delta \mathcal{H}$-distance. We will show the relationship between our method and $\mathcal{H} \Delta \mathcal{H}$-distance.
Regarding $d_{{{\mathcal{H}\Delta\mathcal{H}}}}(\mathcal{S},\mathcal{T})$, if we consider that $h$ and $h{'}$ can classify source samples correctly, the term $\scalebox{0.9}{$\displaystyle \underset{{\bf x}\sim \mathcal{S}}{\mathbf{E}} {\rm I}\bigl[h({\bf x}) \neq h^{'}({\bf x}) \bigr]$}$ is assumed to be very low. $h$ and $h{'}$ should agree on their predictions on source samples. Thus, $d_{{{\mathcal{H}\Delta\mathcal{H}}}}(\mathcal{S},\mathcal{T})$ is approximately calculated as $\scalebox{0.9}{$\displaystyle \sup_{(h,h{'})\in \mathcal{H}^{2}}\underset{{\bf x}\sim \mathcal{T}}{\mathbf{E}} {\rm I}\bigl[h({\bf x}) \neq h^{'}({\bf x}) \bigr]$}$, which denotes the supremum of the expected disagreement of two classifiers’ predictions on target samples.
We assume that $h$ and $h^{'}$ share the feature extraction part. Then, we decompose the hypothesis $h$ into $G$ and $F_1$, and $h^{'}$ into $G$ and $F_2$. $G$, $F_1$ and $F_2$ correspond to the network in our method. If we substitute these notations into the $\scalebox{0.9}{$\displaystyle \sup_{(h,h{'})\in \mathcal{H}^{2}}\underset{{\bf x}\sim \mathcal{T}}{\mathbf{E}} {\rm I}\bigl[h({\bf x}) \neq h^{'}({\bf x}) \bigr]$}$ and for fixed $G$, the term will become $$\scalebox{0.95}{$\displaystyle \sup_{F_1,F_2}\underset{{\bf x}\sim \mathcal{T}}{\mathbf{E}} {\rm I}\left[F_{1}\circ G({\bf x}) \neq F_{2}\circ G({\bf x}) \right]$}\label{eq:eq1}.$$ Furthermore, if we replace $\sup$ with $\max$ and minimize the term with respect to $G$, we obtain $$\scalebox{0.95}{$\displaystyle {\mathop{\rm min}\limits}_{G}{\mathop{\rm max}\limits}_{F_1,F_2} \underset{{\bf x}\sim \mathcal{T}}{\mathbf{E}} {\rm I}\left[F_{1}\circ G({\bf x}) \neq F_{2}\circ G({\bf x}) \right]$}\label{eq:eq2}.$$ This equation is very similar to the mini-max problem we solve in our method, in which classifiers are trained to maximize their discrepancy on target samples and generator tries to minimize it. Although we must train all networks to minimize the classification loss on source samples, we can see the connection to the theory proposed by [@ben2010theory].
Experiments on Classification
=============================
First, we observed the behavior of our model on toy problem. Then, we performed an extensive evaluation of the proposed methods on the following datasets: digits, traffic signs, and object classification.
[Figure (toy problem): comparison of three decision boundaries; referenced below as Fig. \[fig:rot30\_1\], \[fig:rot30\_2\], and \[fig:rot30\_3\].]
[ ![image](gtsrb_noad_vis.pdf){width="0.48\hsize" height="0.48\hsize"}\[fig:gtsr\_noad\]]{}
[ ![image](gtsrb_ad_vis.pdf){width="0.48\hsize" height="0.48\hsize"}\[fig:gtsr\_ad\]]{}
Experiments on Toy Datasets
---------------------------
In the first experiment, we observed the behavior of the proposed method on the [*inter twinning moons*]{} 2D problem, in which we used [*scikit-learn*]{} [@pedregosa2011scikit] to generate the source samples and obtained the target samples by rotating them. The goal of the experiment was to observe the learned classifiers’ boundary. For the source samples, we generated a lower moon and an upper moon, labeled 0 and 1, respectively. Target samples were generated by rotating the distribution of the source samples. We generated 300 source and target samples per class as the training samples. In this experiment, we compared the decision boundary obtained from our method with those obtained from the model trained only on source samples and from the model trained only to increase the discrepancy. In order to train the second baseline, we simply skipped Step C in Section \[mtd:steps\] during training. We tested the method on 1000 target samples and visualized the learned decision boundary with source and target samples. Other details, including the network architecture used in this experiment, are provided in our supplementary material.
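A sketch of the data generation is shown below (the noise level and random seeds are our assumptions; the paper's exact settings may differ):

```python
import numpy as np
from sklearn.datasets import make_moons

def make_toy_domains(n_per_class=300, angle_deg=30.0, noise=0.1, seed=0):
    """Source: two inter-twinning moons; target: freshly drawn moons rotated
    by angle_deg degrees (a sketch; noise and seeds are our assumptions)."""
    xs, ys = make_moons(n_samples=2 * n_per_class, noise=noise, random_state=seed)
    xt_raw, _ = make_moons(n_samples=2 * n_per_class, noise=noise,
                           random_state=seed + 1)
    theta = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    xt = xt_raw @ rot.T            # unlabeled target samples
    return xs, ys, xt

xs, ys, xt = make_toy_domains()
```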
As we expected, when we trained the two classifiers to increase the discrepancy on the target samples, the two classifiers largely disagreed on their predictions for target samples (Fig. \[fig:rot30\_2\]). This is clear when compared to the source only model (Fig. \[fig:rot30\_1\]), in which the two classifiers were trained on the source samples without adaptation and their boundaries are nearly the same. Our proposed method then attempted to generate target features that reduce the discrepancy, so we could expect the two classifiers to become similar. Fig. \[fig:rot30\_3\] confirms this expectation: the decision boundaries are drawn considering the target samples, the two classifiers output nearly the same prediction for target samples, and they classified most target samples correctly.
Experiments on Digits Datasets
------------------------------
In this experiment, we evaluate the adaptation of the model on three scenarios. The example datasets are presented in the supplementary material.
We assessed four types of adaptation scenarios by using the digits datasets, namely MNIST [@lecun1998gradient], Street View House Numbers (SVHN) [@netzer2011reading], and USPS [@hull1994database]. We further evaluated our method on the traffic sign datasets, Synthetic Traffic Signs (SYN SIGNS) [@moiseev2013evaluation] and the German Traffic Signs Recognition Benchmark [@stallkamp2011german] (GTSRB). In this experiment, we employed the CNN architecture used in [@ganin2014unsupervised] and [@bousmalis2016unsupervised]. We added batch normalization to each layer in these models. We used Adam [@kingma2014adam] to optimize our model and set the learning rate as $2.0\times10^{-4}$ in all experiments. We set the batch size to 128 in all experiments. The hyper-parameter peculiar to our method was $n$, which denotes the number of times we update the feature generator to mimic classifiers. We varied the value of $n$ from $2$ to $4$ in our experiment and observed the sensitivity to the hyper-parameter. We followed the protocol of unsupervised domain adaptation and did not use validation samples to tune hyper-parameters. The other details are provided in our supplementary material due to a limit of space.
**SVHN$\rightarrow$MNIST**
SVHN [@netzer2011reading] and MNIST [@lecun1998gradient] have distinct properties because SVHN datasets contain images with a colored background, multiple digits, and extremely blurred digits, meaning that the domain divergence is very large between these datasets.
**SYN SIGNS$\rightarrow$GTSRB** In this experiment, we evaluated the adaptation from synthesized traffic signs datasets (SYN SIGNS dataset [@ganin2014unsupervised]) to real-world signs datasets (GTSRB dataset [@stallkamp2011german]). These datasets contain 43 types of classes.
**MNIST$\leftrightarrow$USPS** We also evaluated our method on the MNIST [@lecun1998gradient] and USPS [@hull1994database] datasets to compare our method with other methods. We followed the two different protocols provided by ADDA [@tzeng2017adversarial] and PixelDA [@bousmalis2016unsupervised].
**Results** Table \[tb:exp\_digit\] lists the accuracies for the target samples, and Fig. \[fig:svhn\_graph\] and \[fig:synth\_graph\] show the relationship between the discrepancy loss and accuracy during training. For the [*source only*]{} model, we used the same network architecture as used in our method. Details are provided in the supplementary material. We extensively compared our method with distribution matching-based methods as shown in Table \[tb:exp\_digit\]. The proposed method outperformed these methods in all settings. The performance improved as we increased the value of $n$. Although other methods such as ATDA [@saito2017asymmetric] performed better than our method in some situations, that method utilized a few labeled target samples to decide hyper-parameters for each dataset. The performance of our method would likewise improve if we could choose the best hyper-parameters for each dataset. As Fig. \[fig:svhn\_graph\] and \[fig:synth\_graph\] show, as the discrepancy loss diminishes, the accuracy improves, confirming that minimizing the discrepancy for target samples can result in accurate adaptation.
We visualized learned features as shown in Fig. \[fig:gtsr\_noad\] and \[fig:gtsr\_ad\]. Our method did not match the distributions of source and target completely as shown in Fig. \[fig:gtsr\_ad\]. However, the target samples seemed to be aligned with each class of source samples. Although the target samples did not separate well in the non-adapted situation, they did separate clearly as do source samples in the adapted situation.
![image](example_new.pdf){width="0.9\linewidth"}
Experiments on VisDA Classification Dataset
-------------------------------------------
We further evaluated our method on an object classification setting. The VisDA dataset [@peng2017visda] was used in this experiment, which evaluates adaptation from synthetic-object to real-object images. To date, this dataset represents the largest for cross-domain object classification, with over 280K images across 12 categories in the combined training, validation, and testing domains. The source images were generated by rendering 3D models of the same object categories as in the real data from different angles and under different lighting conditions; they amount to 152,397 synthetic images. The validation images were collected from MSCOCO [@lin2014microsoft] and amount to 55,388 in total. In our experiment, we considered the images of the validation split as the target domain and trained models in the unsupervised domain adaptation setting. We evaluated the performance of a ResNet101 [@he2016deep] model pre-trained on ImageNet [@deng2009imagenet]. The final fully-connected layer was removed and all layers were updated with the same learning rate because this dataset has abundant source and target samples. We regarded the pre-trained model as the generator network and used three-layered fully-connected networks as the classification networks. The batch size was set to 32 and we used SGD with learning rate $1.0\times 10^{-3}$ to optimize the model. We report the accuracy after 10 epochs. The training details for the baseline methods are given in our supplementary material owing to the space limit.
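A sketch of this model setup is shown below; the 1000-unit hidden layers follow the supplementary detail given later, while the use of Batch Normalization and ReLU inside the classifier is our assumption.

```python
import torch.nn as nn
from torchvision import models

def build_visda_model(num_classes=12, hidden=1000):
    """ResNet101 backbone (ImageNet weights) as the feature generator and a
    three-layer fully-connected classifier (a sketch of the setup above)."""
    backbone = models.resnet101(pretrained=True)
    feat_dim = backbone.fc.in_features          # 2048-dimensional features
    backbone.fc = nn.Identity()                 # drop the final FC layer
    classifier = nn.Sequential(
        nn.Linear(feat_dim, hidden), nn.BatchNorm1d(hidden), nn.ReLU(),
        nn.Linear(hidden, hidden), nn.BatchNorm1d(hidden), nn.ReLU(),
        nn.Linear(hidden, num_classes),
    )
    return backbone, classifier
```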
**Results** Our method achieved a much better accuracy than other distribution matching based methods (Table \[tb:visda\]). In addition, our method performed better than the source only model in all classes, whereas MMD and DANN performed worse than the source only model in some classes such as car and plant. We can clearly see the effectiveness of our method in this regard. In this experiment, as the value of $n$ increased, the performance improved. We attribute this to the large domain difference between synthetic objects and real images; the generator had to be updated many times to align such distributions.
Experiments on Semantic Segmentation
====================================
We further applied our method to semantic segmentation. Considering a huge annotation cost for semantic segmentation datasets, adaptation between different domains is an important problem in semantic segmentation.
**Implementation Detail** We used the publicly available synthetic dataset GTA5 [@richter2016playing] or Synthia [@ros2016synthia] as the source domain dataset and the real dataset Cityscapes [@cordts2016cityscapes] as the target domain dataset. Following previous work [@hoffman2016fcns; @Zhang_2017_ICCV], the Cityscapes validation set was used as our test set, and the Cityscapes train set was used as our training set. During training, we randomly sampled a single image per iteration (setting the batch size to 1 because of the GPU memory limit) from the source dataset (with its labels) and from the target dataset (without labels).
We applied our method to VGG-16 [@simonyan2014very] based FCN-8s [@long2015fully] and DRN-D-105 [@Yu2017] to evaluate our method. The details of models, including their architecture and other hyper-parameters, are described in the supplementary material.
We used Momentum SGD to optimize our model and set the momentum rate to 0.9 and the learning rate to $1.0 \times 10^{-3}$ in all experiments. The image size was resized to $1024 \times 512$. Here, we report the output of $F_1$ after 50,000 iterations.
**Results** Table \[tb:gta2city\], Table \[tb:synthia2city\], and Fig. \[fig:vis\_gta2city\] show the quantitative and qualitative results, respectively. These results illustrate that even with the large domain difference between synthetic and real images, our method is capable of improving the performance. Comparing with the mIoU of the model trained only on source samples, we can see the clear effectiveness of our adaptation method. Also, compared to the score of DANN, our method shows clearly better performance.
Conclusion
==========
In this paper, we proposed a new approach for UDA, which utilizes task-specific classifiers to align distributions. We propose to utilize task-specific classifiers as discriminators that try to detect target samples that are far from the support of the source. A feature generator learns to generate target features near the support to fool the classifiers. Since the generator uses feedback from task-specific classifiers, it will avoid generating target features near class boundaries. We extensively evaluated our method on image classification and semantic segmentation datasets. In almost all experiments, our method outperformed state-of-the-art methods. We provide the results obtained when applying the gradient reversal layer [@ganin2014unsupervised], which enables updating the parameters of the model in one step, in the supplementary material.
Acknowledgements
================
The work was partially supported by CREST, JST, and was partially funded by the ImPACT Program of the Council for Science, Technology, and Innovation (Cabinet Office, Government of Japan).
We would like to show supplementary information for our main paper. First, we introduce the detail of the experiments. Finally, we show some additional results of our method.
Toy Dataset Experiment {#toy-dataset-experiment .unnumbered}
======================
We show the detail of experiments on toy dataset in main paper.
Detail on experimental setting {#detail-on-experimental-setting .unnumbered}
------------------------------
The details of the experiment on the toy dataset are shown in this section. When generating target samples, we set the rotation angle to 30 degrees in the experiments of our main paper. We used Adam with learning rate $2.0\times10^{-4}$ to optimize the model. The batch size was set to 200. For the feature generator, we used a 3-layered fully-connected network with 15 neurons in the hidden layers, in which ReLU is used as the activation function. For the classifiers, we used 3-layered fully-connected networks with 15 neurons in the hidden layers and 2 neurons in the output layer. The decision boundary shown in the main paper is obtained when we rotate the source samples by 30 degrees to generate the target samples. We set $n$ to $3$ in this experiment.
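A sketch of these toy networks is given below (the input and feature dimensions are our assumptions):

```python
import torch.nn as nn

def make_toy_networks(in_dim=2, hidden=15, feat_dim=15, num_classes=2):
    """Toy-problem networks as described above: a 3-layer MLP generator with
    15 hidden units and two 3-layer MLP classifiers with 2 outputs."""
    G = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                      nn.Linear(hidden, hidden), nn.ReLU(),
                      nn.Linear(hidden, feat_dim), nn.ReLU())
    def make_classifier():
        return nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU(),
                             nn.Linear(hidden, hidden), nn.ReLU(),
                             nn.Linear(hidden, num_classes))
    return G, make_classifier(), make_classifier()
```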
Experiment on Digit Dataset {#experiment-on-digit-dataset .unnumbered}
===========================
We report the accuracy after training for 20,000 iterations except for the adaptation between MNIST and USPS. Due to the lack of training samples in those datasets, we stopped training after 200 epochs (13 iterations per epoch) to prevent over-fitting. We followed the protocol presented by [@ganin2014unsupervised] in the following three adaptation scenarios. **SVHN$\rightarrow$MNIST** In this adaptation scenario, we used the standard training set as the training samples and the testing set as the testing samples for both the source and target domains.
**SYN DIGITS$\rightarrow$SVHN** We used 479400 source samples and 73257 target samples for training, 26032 samples for testing.
**SYN SIGNS$\rightarrow$GTSRB** We randomly selected 31367 samples for target training and evaluated the accuracy on the rest.
**MNIST$\leftrightarrow$USPS** In this setting, we followed the two different protocols provided by ADDA [@tzeng2017adversarial] and PixelDA [@bousmalis2016unsupervised]. The former protocol provides the setting where a part of the training samples is utilized during training: 2,000 training samples are picked for MNIST and 1,800 samples are used for USPS. The latter allows utilizing all training samples during training. We utilized the architecture used as a classification network in PixelDA [@bousmalis2016unsupervised] and added a Batch Normalization layer to the architecture.
Experiment on VisDA Classification Dataset {#experiment-on-visda-classification-dataset .unnumbered}
==========================================
This section describes the architecture we used and the details of the compared methods.
**Class Balance Loss** In addition to the feature alignment loss, we used a class balance loss to improve accuracy in this experiment; note that we incorporated this loss into the compared methods as well. The goal is to assign the target samples to each class equally, since without this loss the target samples can be aligned in an unbalanced way. The loss is calculated as follows: $${\mathbb{E}_{\mathbf{x_{t}}\sim X_{t}}}\sum_{k=1}^{K} \log p(y=k|{\mathbf x_{t}})$$ This term was multiplied by the constant $\lambda=0.01$ and added to the objective in Step 2 and Step 3 of our method. The loss was also used in MMD and DANN when updating the parameters of the networks.
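As a rough illustration, the term above could be computed as in the following Python sketch (PyTorch-style tensors; the function name, the sign convention, and the way the term is combined with the task loss are our assumptions, not the released implementation):

```python
import torch
import torch.nn.functional as F

def class_balance_loss(logits_t: torch.Tensor) -> torch.Tensor:
    """Hypothetical implementation of E_{x_t} sum_k log p(y=k|x_t).

    logits_t: (batch, K) classifier outputs for target samples.
    Returns a scalar; maximizing it pushes the per-sample class
    probabilities towards a uniform assignment over the K classes.
    """
    log_probs = F.log_softmax(logits_t, dim=1)   # log p(y=k|x_t)
    return log_probs.sum(dim=1).mean()           # sum over classes, mean over batch

# Usage sketch (assumed combination): subtract the weighted term so that
# minimizing the total loss maximizes the balance term (lambda = 0.01).
# total_loss = task_loss - 0.01 * class_balance_loss(logits_t)
```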
For the fully-connected layers of the classification networks, we set the number of neurons to $1000$. For a fair comparison, we used exactly the same architecture for the other methods.
**MMD** We calculated the maximum mean discrepancy (MMD) [@long2015learning] on the last layer of the feature generator networks, using RBF kernels with the following standard deviation parameters: $$\sigma = [0.1,0.05,0.01,0.0001,0.00001]$$ We varied the number of kernels and their parameters but did not observe a significant difference in performance. We report the performance after 5 epochs; we did not see any improvement after that epoch.
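For reference, a small NumPy sketch of a biased multi-kernel RBF MMD estimate between source and target feature batches is shown below; the exact kernel parameterization used for the $\sigma$ values above is an assumption on our part:

```python
import numpy as np

def rbf_kernel(x, y, sigma):
    """Gaussian kernel matrix k(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    d2 = np.sum(x**2, 1)[:, None] + np.sum(y**2, 1)[None, :] - 2.0 * x @ y.T
    return np.exp(-d2 / (2.0 * sigma**2))

def mmd2(xs, xt, sigmas=(0.1, 0.05, 0.01, 0.0001, 0.00001)):
    """Biased multi-kernel MMD^2 between source and target feature batches."""
    val = 0.0
    for s in sigmas:
        val += rbf_kernel(xs, xs, s).mean() \
             + rbf_kernel(xt, xt, s).mean() \
             - 2.0 * rbf_kernel(xs, xt, s).mean()
    return val

# Example with random 1000-dimensional features (the FC layer size above):
# xs, xt = np.random.randn(64, 1000), np.random.randn(64, 1000)
# print(mmd2(xs, xt))
```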
**DANN** To train the model of [@ganin2014unsupervised], we used a two-layered domain classification network with $100$ neurons in the hidden layer, together with Batch Normalization, ReLU, and dropout layers. Experimentally, we did not see any improvement when the network architecture was changed. In the original method ([@ganin2014unsupervised]), the learning rate is decreased every iteration; however, we did not observe an improvement from this schedule, so we fixed the learning rate to $1.0\times10^{-3}$. In addition, we did not use a gradient reversal layer for this baseline but updated the discriminator and the generator separately. We report the accuracy after 1 epoch.
Experiments on Semantic Segmentation {#experiments-on-semantic-segmentation-1 .unnumbered}
====================================
We describe the details of our experiments on semantic segmentation.
Details {#details .unnumbered}
-------
**Datasets** GTA [@richter2016playing], Synthia [@ros2016synthia], and Cityscapes [@cordts2016cityscapes] are vehicle-egocentric image datasets; GTA and Synthia are synthetic, whereas Cityscapes is a real-world dataset. GTA is collected from the open world of the realistically rendered computer game Grand Theft Auto V (GTA, or GTA5). It contains 24,996 images whose semantic segmentation annotations are fully compatible with the classes used in Cityscapes. Cityscapes is collected in 50 cities in Germany and nearby countries. We only used the densely pixel-level annotated data collected in 27 cities, which contains 2,975 training, 500 validation, and 1,525 test images; we used the training and validation sets. Please note that the labels of Cityscapes are used only for evaluation and never during training. Similarly, we used the training split of the Synthia dataset to train our model.
**Training Details** During training, we ignored the pixel-wise loss for pixels annotated as *backward* (*void*); therefore, no *backward* label was predicted at test time. The weight decay ratio was set to $2 \times 10^{-5}$ and we used no augmentation methods.
**Network Architecture** We applied our method to FCN-8s based on the VGG-16 network: the convolution layers of the original VGG-16 network are used as the generator and the fully-connected layers as the classifiers. For DRN-D-105, we followed the implementation of <https://github.com/fyu/drn> and applied our method to dilated residual networks [@Yu2016; @Yu2017] as base networks, using the [*DRN-D-105*]{} model. The last convolution layers are used as the classifier networks and all lower layers as the generator.
**Evaluation Metrics** As evaluation metrics, we use intersection-over-union (IoU) and pixel accuracy. We use the evaluation code[^1] released along with the VisDA challenge [@peng2017visda]. It calculates the PASCAL VOC intersection-over-union, i.e., $\textrm{IoU} = \frac{\textrm{TP}}{\textrm{TP}+\textrm{FP}+\textrm{FN}}$, where TP, FP, and FN are the numbers of true positive, false positive, and false negative pixels, respectively, determined over the whole test set. To discuss our results further, we also compute the pixel accuracy, $\textrm{pixelAcc.} = \frac{\Sigma_{i} n_{ii}}{\Sigma_{i} t_{i}}$, where $n_{ij}$ denotes the number of pixels of class $i$ predicted to belong to class $j$ and $t_{i} = \Sigma_{j} n_{ij}$ denotes the total number of pixels of class $i$ in the ground truth segmentation.
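A generic NumPy sketch of these two metrics, computed from a confusion matrix, could look as follows (for illustration only; this is not the VisDA evaluation code itself):

```python
import numpy as np

def confusion_matrix(gt, pred, num_classes):
    """n[i, j] = number of pixels of ground-truth class i predicted as class j."""
    mask = (gt >= 0) & (gt < num_classes)   # drop ignored labels such as void
    idx = num_classes * gt[mask].astype(int) + pred[mask].astype(int)
    return np.bincount(idx, minlength=num_classes**2).reshape(num_classes, num_classes)

def iou_and_pixel_acc(conf):
    tp = np.diag(conf)                      # true positives per class
    fp = conf.sum(axis=0) - tp              # false positives per class
    fn = conf.sum(axis=1) - tp              # false negatives per class
    iou = tp / np.maximum(tp + fp + fn, 1)  # per-class IoU = TP / (TP + FP + FN)
    pixel_acc = tp.sum() / conf.sum()       # sum_i n_ii / sum_i t_i
    return iou, pixel_acc
```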
Additional Results {#additional-results .unnumbered}
==================
Training via Gradient Reversal Layer {#training-via-gradient-reversal-layer .unnumbered}
------------------------------------
In our main paper, we describe a training procedure that consists of three training steps, in which the number of generator updates ($n$) is a hyper-parameter of our method. We found that introducing a gradient reversal layer (GRL) [@ganin2014unsupervised] enables updating our model in only one step and works well in many settings. This improvement makes training faster and removes the hyper-parameter from our method. We provide the details of the improvement and some experimental results here.
**Training Procedure** We simply applied a gradient reversal layer when updating the classifiers and the generator in an adversarial manner. The layer flips the sign of the gradients during back-propagation; therefore, maximizing the discrepancy with respect to the classifiers and minimizing it with respect to the generator are carried out simultaneously. We publicize the code with this implementation.
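A minimal PyTorch-style sketch of such a layer is given below (the standard identity-forward, sign-flipped-backward construction; the scaling constant `lambd` and the function names are illustrative, not taken from the released code):

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips (and scales) gradients in the backward pass."""

    @staticmethod
    def forward(ctx, x, lambd=1.0):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # The gradient flowing into the generator is reversed, so maximizing the
        # discrepancy w.r.t. the classifiers and minimizing it w.r.t. the
        # generator happen within a single backward pass.
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# Usage sketch: pass the generator features through grad_reverse before the
# discrepancy term, then call loss.backward() once and step both optimizers.
```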
**Results** The experimental results on semantic segmentation are shown in Tables \[tb:gta2city\_sup\] and \[tb:synthia2city\_sup\] and Fig. \[fig:vis\_gta2city\_sup\]. Our model with GRL shows the same level of performance as the model trained with our proposed training procedure.
Sensitivity to Hyper-Parameter {#sensitivity-to-hyper-parameter .unnumbered}
------------------------------
The number of generator updates $n$ is the hyper-parameter peculiar to our method, so we show additional experimental results related to it. We used the adaptation from SVHN to MNIST and conducted experiments with $n=5,6$; the accuracy was 96.0% and 96.2% on average. The accuracy seems to increase with $n$, though it saturates, and the training time required to obtain high accuracy can increase as well. However, considering the results of GRL on semantic segmentation, the relationship between the accuracy and $n$ seems to depend on the datasets being adapted.
![image](example_dann.pdf){width="\linewidth"}
[^1]: https://github.com/VisionLearningGroup/taskcv-2017-public/blob/master/segmentation/eval.py
---
abstract: 'Supernova (SN) blast waves inject energy and momentum into the interstellar medium (ISM), control its turbulent multiphase structure and the launching of galactic outflows. Accurate modelling of the blast wave evolution is therefore essential for ISM and galaxy formation simulations. We present an efficient method to compute the input of momentum, thermal energy, and the velocity distribution of the shock-accelerated gas for ambient media (densities of 0.1 $\le$ $n_{_{0}}$ \[$\rm cm^{-3}] \le$ 100) with uniform (and with stellar wind blown bubbles), power-law, and turbulent (Mach numbers $\mathcal{M}$ from 1 $-$ 100) density distributions. Assuming solar metallicity cooling, the blast wave evolution is followed to the beginning of the momentum conserving snowplough phase. The model recovers previous results for uniform ambient media. The momentum injection in wind-blown bubbles depends on the swept-up mass and the efficiency of cooling when the blast wave hits the wind shell. For power-law density distributions with $n(r) \sim$ $r^{-2}$ (for $n(r) > n_{_{\rm floor}}$) the amount of momentum injection is solely regulated by the background density $n_{_{\rm floor}}$ and is comparable to that in a uniform medium with $n_{_{\rm uni}}$ = $n_{_{\rm floor}}$. However, in turbulent ambient media with log-normal density distributions the momentum input can increase by a factor of 2 (compared to the homogeneous case) for high Mach numbers. The average momentum boost can be approximated as $p_{_{\rm turb}}/\mathrm{p_{_{0}}}\ =23.07\, \left(\frac{n_{_{0,\rm turb}}}{1\,{\rm cm}^{-3}}\right)^{-0.12} + 0.82 (\ln(1+b^{2}\mathcal{M}^{2}))^{1.49}\left(\frac{n_{_{0,\rm turb}}}{1\,{\rm cm}^{-3}}\right)^{-1.6}$. The velocity distributions are broad as gas can be accelerated to high velocities in low-density channels. The model values agree with results from recent, computationally expensive, three-dimensional simulations of SN explosions in turbulent media.'
author:
- |
S. Haid$^{1}$[^1], S. Walch$^{1}$, T. Naab$^{2}$, D. Seifried$^{1}$, J. Mackey$^{1,3}$ and A. Gatto$^{2}$\
$^{1}$I. Physikalisches Institut, Universität zu Köln, Zülpicher-Strasse 77, 50937 Cologne, Germany\
$^{2}$Max-Planck-Insitut für Astrophysik, Karl-Schwarzschild-Strasse 1, 85741 Garching, Germany\
$^{3}$Dublin Institute for Advanced Studies, School of Cosmic Physics, 31 Fitzwilliam Place, Dublin 2, Ireland
bibliography:
- 'astro.bib'
title: 'Supernova-blast waves in wind-blown bubbles, turbulent, and power-law ambient media'
---
\[firstpage\]
ISM: Supernova remnants, shock wave, turbulence
Introduction {#section1}
============
Supernovae (SN) play a fundamental role in setting the properties of the multi-phase interstellar medium (ISM) (e.g. [@salpeter55; @deavillez04; @joung06a; @kim13; @walch15]). They not only enrich the ISM with metals but also inject energy and momentum leading to the dispersal of molecular clouds (MC), the driving of turbulent motions as well as galactic outflows (e.g. [@maclow03; @dib06; @gent13; @girichidis15]). Therefore, SN explosions may locally (and globally) control star formation [@agertz13; @hennebelle14; @iffrig14; @walch14a]. Spatially and temporally correlated SNe can interact and drive the expansion of coherent shells, often termed as ’super-bubbles’ (e.g. [@mccray87; @maclow88; @tenoriotagle88; @sharma14]). Large-scale super-shells [e.g. Carina Flare; @dawson08; @palous09; @dawson11] may sweep up enough mass to create new MCs, which in turn could spawn new stars and star clusters [@elmegreen77; @wunsch10; @ntormousi11]. On galactic scales SNe might drive fountain flows or even galactic winds (e.g. [@larson74; @maclow99; @ostriker10; @dallavecchia12; @hill12; @creasey13; @girichidis15]). Therefore, SNe might play an important role for regulating the efficiency of galaxy formation and determine galaxy morphology (e.g. [@dekel86; @goldbaum11; @brook12; @aumer13; @hopkins14; @marinacci14; @uebler14]). All of the above conclusions about the impact of SN explosions have been made on the basis of (at the time) computationally expensive numerical simulations with varying degrees of accuracy.
The evolution of blast waves has long been a focus of theoretical studies (e.g. [@sedov46; @taylor50]), and their importance for galactic astrophysics was realised early on. A key parameter (apart from the explosion energy) determining the fate of a SN remnant (SNR) is the density of the ambient interstellar medium. In numerous analytical studies the evolution of blast waves - also in the presence of cooling - was (mostly) investigated for homogeneous or power-law density distributions [@cox72; @chevalier76; @mckee77; @cowie81; @cox81; @cioffi88; @ostriker88; @franco94; @blondin98].
For more realistic density distributions similar to the observed ISM it is more challenging (or even impossible) to make accurate analytical predictions. The ISM is structured and is subject to supersonic turbulent motions, which lead to the observed log-normal shape of the column density probability distribution function [PDF; @kainulainen09; @schneider11]. Numerical and analytic work confirms a log-normal surface density [@maclow03] as well as volume density PDF in isothermal supersonic flows [@vazquezsemadeni93; @padoan97; @padoan97a; @kritsuk06; @federrath08; @ostriker01; @walch11b; @shetty12; @ward14]. In addition, the structure of the ISM around massive stars is strongly affected by the massive stars’ ionizing radiation (e.g. [@kesseldeynet03; @dale05; @gritschneder09; @walch12]) and stellar winds (e.g. [@weaver77]). These structural changes affect the impact of SN explosions (e.g. [@rogers13; @walch14a; @geen15]).
The efficiency with which energy and momentum from a SN explosion is transferred to the ambient medium depends on the mean ambient density $n_{_{0}}$ and its turbulent Mach number $\mathcal{M}$. Direct numerical simulations indicate that in dense environments ($n_{_{0, \rm turb}}$ = 100 $\mathrm{cm^{-3}}$) and low-Mach-number regimes ($\mathcal{M}$ $<$10) the input of momentum is moderate in the presence of cooling [@walch14a; @kim14] with a momentum transfer of $\sim$ 10 times the initial SN momentum $\mathrm{p_{_{0}}}$ ($\rm p_{_{0}}$ $\sim$ $10^{4} - 3 \times 10^{4}$ $\rm M_{\odot}\, km\,s^{-1}$, in this work $\rm p_{_{0}}$ = 14181 $\rm M_{\odot}\, km\,s^{-1}$), while the momentum input can be $\sim$ 2 times larger for densities $n_{_{0, \rm turb}}<$ 0.1 $\mathrm{cm^{-3}}$. For lower densities, however, the energy and momentum transfer can be significantly higher. Recent numerical simulations have shown that varying assumptions for typical ambient densities of SN explosions can result in very different evolutionary paths of the ISM. In the most extreme case of SN mainly going off in the diffuse phase, the SNRs can interact without significant cooling and the system can go into thermal runaway or start driving a hot outflow [@gatto15; @girichidis15; @li15].
In cosmological simulations of galaxy formation with typical resolution elements of several hundred parsecs, all the above details - in particular the first phases of blast wave evolution - are unresolved in dense environments, leading to discrepancies between the theoretical expectations and the simulated reality (see e.g. [@schaye15]). In general, this long-known ’over-cooling problem’ appears when the main momentum creating stages, the Sedov-Taylor and the pressure driven snowplough phase, stay unresolved and become artificially short [@balogh01; @stinson06; @creasey11; @tomassetti15]. The thermal energy is radiated away too quickly and the momentum input is unresolved as too much mass is accelerated to too low velocities [@hu15], in particular if the time step is not reduced accordingly [@dallavecchia12; @kim14]. The properties of the hot phase within the SNR are also predicted inaccurately and the effect on the global filling factor of the ISM is then biased [@mckee77; @agertz13; @keller15]. A plausible way to overcome these inaccuracies might be the construction of sub-resolution feedback models with information extracted from small-scale resolved numerical simulations of SNRs. However, this computationally expensive process has to cover all the complexity of SNRs and their surroundings [@martizzi14; @thompson14; @walch14a; @kim14].
To better understand the evolution of blast waves in the complex ISM we present an efficient 1-dimensional model, based on the thin-shell approach [@ostriker88], to compute the momentum input from SNe for uniform (see Section \[section4\_1\]), radial power-law (see Section \[section4\_2\]), wind-blown bubble (see Section \[sec\_bubble\]) or turbulent environmental density distributions (see Section \[section5\_1\]). In addition to previous studies [e.g. @cioffi88; @ostriker88] we combine the computation of all blast wave phases and their transitions in a single code using tabulated cooling functions. This way we can cover a wide range of ambient medium parameters. The model is easily customised to different SN scenarios as shown in case of a pre-existing wind bubble or a turbulent environment. We test the code results against recent, highly resolved numerical simulations [@martizzi14; @thompson14; @walch14a; @kim14] and show that we are able to achieve comparable results at almost negligible computational costs.
The paper is structured as follows. In Section \[section2\] we discuss the set of equations which govern the evolution of the SNR and the momentum transfer to the ISM. Section \[section3\] introduces the model which forms the basis for this work. We discuss the cases of uniform and power-law density distributions in Section \[section4\]. In Section \[sec\_bubble\] we show the momentum input in a wind-blown bubble. In Section \[section5\] we extend our model to apply it to a turbulent environment and conclude in Section \[section7\].
The evolution of supernova remnants {#section2}
===================================
![Schematic time evolution (times and radius are not to scale) of a SN blast wave radius in a homogeneous environment. $\mathrm{p_{_{0}}}$ is the initial radial momentum of the SN ejecta. The Pre-Sedov-Taylor phase (red) terminates at $t= t_{_{\mathrm{ST}}}$ with the beginning of the energy conserving (non-radiative) Sedov-Taylor (ST) phase ($r{_{\mathrm{S}}} \propto t^{2/5}$). With radiative losses becoming more important (blue) the blast wave passes through a transition phase $(t=t_{_{\mathrm{TR}}})$ and approaches the fully radiative pressure driven snowplough (PDS) phase at $(t=t_{_{\mathrm{PDS}}})$. The shock radius evolves as $r_{_{\mathrm{S}}} \propto t^{2/7}$ until the momentum conserving snowplough (MCS) phase is reached at $(t=t_{_{\mathrm{MCS}}})$. The swept-up material can only gain radial momentum until the end of the PDS phase.[]{data-label="fig:stages"}](prod_EVO.eps){width="48.00000%"}
When a massive star explodes as a core-collapse SN, gas (typically $\mathrm{\sim 2-5\,M_{\odot}}$) is ejected with supersonic velocities ($v_{_{\mathrm{eject}}} \mathrm{\sim 6000 - 7000\,km\,s^ {-1}}$; [@blondin98; @janka12]), and drives a blast wave into the ISM. The evolution of the blast wave can be characterised by the time evolution $t$ of the shock radius $r_{_{\rm S}}$, $$\label{eq:powerlaw}
r_{_{\rm S}}\propto t^{\eta},$$ where $t$ is the time after the explosion and $\eta$ is the expansion parameter [@klein94; @cohen98; @kushnir10]. It can be separated into five different phases (see Fig. \[fig:stages\]; [@mckee77; @cioffi88; @ostriker88; @petruk06; @li15]).
- **Pre-Sedov-Taylor (PST) phase:** After the initial explosion the density profile of the ejected gas can be approximated with a steep power-law. In this case the shocked ambient medium decelerates the ejecta. The expansion parameter $\eta$ in this ejecta-dominated phase is smaller than one [@chevalier82]. As both shocks merge, the SN ejecta move radially outwards with constant velocity $v_{_{\mathrm{eject}}}$ and sweep up the ambient ISM until the swept-up mass is comparable to the ejecta mass $M_{_{\mathrm{eject}}}$. Part of the kinetic energy of the SN ejecta is converted into heat while the shock wave radius evolves as $r_{_{\mathrm{S}}} \propto t$.
- **Sedov-Taylor (ST) phase:** At the end of the PST phase about 72 per cent of the initial SN energy is converted into thermal energy and the energy conserving ST phase starts at $t=t_{_{\mathrm{ST}}}$ [@taylor50; @sedov58; @mckee77], $$t_{_{\mathrm{ST}}}=\left[ r_{_{\rm S,ST}} \left(\frac{\xi E_{_{\rm SN}}}{\rho_{0}}\right)^{-1/5}\right]^{5/2}$$ with the factor $\xi \sim 2$ and the shock radius $r_{_{\mathrm{S,ST}}}$, which can be computed as $$\label{eq:initalR}
r_{_{\rm S,ST}}=\left(\frac{3}{4}\frac{M_{_{\rm eject}}}{\pi \rho_{_{0}}} \right)^{1/3}.$$
During the energy conserving ST phase the shock evolves adiabatically with $r_{_{\rm S}} \propto
t^{2/5}$ and the radial momentum of the swept-up mass increases.
- **Transition (TR) Phase:** The energy conserving phase ends when the rate-of-change in temperature due to adiabatic expansion is comparable to radiative losses [@ostriker88; @petruk06]. In this TR phase, starting at $t=t_{_{\mathrm{TR}}}$, the post-shock cooling time $t_{_{\rm cool}}$ becomes comparable to the age of the remnant (see Section \[section2\_2\_2\]) $$t_{_{\mathrm{TR}}} \sim t_{_{\rm cool}} \rm .$$ The radial momentum can still significantly increase. As the shock front decelerates, the faster post-shock gas compresses the shocked material and forms a thin, dense shell at the end of the TR phase [@ostriker88; @cioffi88].
- **Pressure driven snowplough (PDS) phase:** At the beginning of the PDS, at $t=t_{_{\mathrm{PDS}}}$, a dense shell has formed behind the radiative shock [@falle75]. Typically $t_{_{\mathrm{PDS}}}$ is a few times $t_{_{\mathrm{TR}}}$ (see Section \[section2\_2\_2\]). The further evolution is dominated by radiation. The homogeneous pressure inside the bubble drives the expansion into the low pressure environment [@cox72; @gaffet83; @cioffi88; @cohen98]. The shock velocity and further momentum input to the ISM decrease.
- **Momentum-conserving snowplough (MCS) phase:** The MCS phase starts at $t=t_{_{\mathrm{MCS}}}$ once the excess thermal energy is radiated away. The momentum of the shell cannot increase any more. Momentum is conserved and inertia becomes the main driver of the further expansion [@cioffi88]. We therefore stop and compare our models at $t_{_{\mathrm{MCS}}}$.
The Ambient Medium {#section2_1}
------------------
The structure and the mean density of the ambient medium have a significant influence on the evolution of a blast wave. Here, we consider the general case of a radial power-law density profile [@ostriker88] $$\label{eq:density}
\rho(r) = \rho_{_{0}}Br^{-\omega},$$ where $\rho_{_{\mathrm{0}}}$ is the central density, $\omega$ is the power-law index and $B$ can be used to normalize the radius [@truelove99].
The mass density is related to the number density, $n$, by $\rho = n \mu m_{_{\mathrm{H}}}$, with $m_{_{\mathrm{H}}}$ being the proton mass and the mean molecular weight $\mu$ (ionized gas with $\mu_{_{\rm i}}\ \mathrm{=0.61}$; atomic gas with $\mu_{_{\rm a}}\ \mathrm{=1.27}$).
The total mass of the SNR, $M$, is $$M(r)=M_{_{\rm eject}}+\frac{4}{3-\omega}\pi \rho_{_{0}} B r_{_{\rm S}}^{3 -\omega}\ \ \mathrm{for}\ \omega \neq 3 ,
\label{eq:mass}$$ where $M_{_{\rm eject}}$ is the mass of the SN ejecta. The second term corresponds to the swept-up mass. As the PST phase is dominated by the mass of the ejecta, we assume a constant density, $\rho_{_{0}}$ until $t_{_{\mathrm{ST}}}$. In the following we describe in detail our numerical model considering the different phases starting with the ST phase.
### Sedov-Taylor phase {#section2_2_1}
At the beginning of the adiabatic ST phase a certain percentage of the initial kinetic energy has thermalized (approximately 75 per cent in a homogeneous medium). The fraction of kinetic to thermal energy stays constant and the total energy is conserved [@chevalier76; @cioffi88].
At $r_{_{\rm S, ST}}$ (Eq. \[eq:initalR\]) the adiabatic expansion begins with the radial evolution of the shock, described by the Sedov solution [@sedov46; @newman80; @ostriker88; @klein94; @truelove99; @breitschwerdt12], $$r_{_{\rm S}}(t)=\left(\frac{\xi E}{\rho_{_{0}}B}\right)^{\frac{1}{5-\omega}}t^{\frac{2}{5-\omega}}
\label{eq:initialSTrad}$$ with $\xi=(5-\omega)(10-3\omega)/8\pi$ and the expansion parameter $\eta = 2/(5-\omega)$.
The expansion speed can be derived by considering the time derivatives of the shock radius $r_{_{\rm S}}$ in the ST stage [@cavaliere76]: $$\label{eq:STvel}
\frac{d}{dt}(r_{_{\rm S}})=v=\frac{2}{5-\omega}\frac{r_{_{\rm S}}}{t}.$$ Here $v$ is the shock velocity. The post-shock velocity $v'$ is $$v' = 3/4 v.$$
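To make these relations concrete, a short Python sketch (cgs units) evaluating $r_{_{\rm S,ST}}$, $t_{_{\rm ST}}$ and the adiabatic ST expansion for a uniform ambient medium ($\omega = 0$, $B = 1$) is given below; the parameter values are illustrative only and the numerical constants are standard cgs values:

```python
import numpy as np

# cgs constants
M_SUN, M_H, PC, YR = 1.989e33, 1.6726e-24, 3.086e18, 3.156e7

def sedov_taylor_onset(E_sn=1e51, M_eject=2.0 * M_SUN, n0=1.0, mu=1.27, omega=0.0):
    """Return (r_ST [cm], t_ST [s]) from Eq. (eq:initalR) and the t_ST expression above."""
    rho0 = n0 * mu * M_H
    xi = (5.0 - omega) * (10.0 - 3.0 * omega) / (8.0 * np.pi)
    r_st = (3.0 * M_eject / (4.0 * np.pi * rho0)) ** (1.0 / 3.0)
    t_st = (r_st * (xi * E_sn / rho0) ** (-0.2)) ** 2.5
    return r_st, t_st

def sedov_radius(t, E_sn=1e51, n0=1.0, mu=1.27, omega=0.0):
    """Shock radius r_S(t) and velocity v(t) during the ST phase (uniform medium, B = 1)."""
    rho0 = n0 * mu * M_H
    xi = (5.0 - omega) * (10.0 - 3.0 * omega) / (8.0 * np.pi)
    r = (xi * E_sn / rho0) ** (1.0 / (5.0 - omega)) * t ** (2.0 / (5.0 - omega))
    v = 2.0 / (5.0 - omega) * r / t
    return r, v

if __name__ == "__main__":
    r_st, t_st = sedov_taylor_onset(n0=1.0)
    print(f"r_ST = {r_st / PC:.2f} pc, t_ST = {t_st / YR:.0f} yr")
    r, v = sedov_radius(1e4 * YR, n0=1.0)
    print(f"r_S(10 kyr) = {r / PC:.1f} pc, v = {v / 1e5:.0f} km/s")
```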
### Transition phase {#section2_2_2}
Between the ST and PDS phases, there is an intermediate period of non-self-similar behaviour which, therefore, cannot be described by a power-law solution as in Eq. \[eq:powerlaw\]. We treat the TR phase independently, which allows a more realistic modelling of the SNR [e.g. @cioffi88; @petruk06]. The description of the ST phase as energy conserving is accurate as long as cooling plays a minor role and the energy loss due to radiation is negligible.
Following @blondin98 $t _{_{\mathrm{TR}}}$ is defined as the time at which the cooling time is comparable to the age of the remnant. We obtain similar results when the rate of change in temperature of the SNR, $T$, due to the adiabatic expansion becomes comparable to the radiative losses [@petruk06]: $$\label{eq:t_trans}
\frac{d}{dt_{_{\rm TR}}}\left(T\right)_{\rm exp} \sim \frac{d}{dt_{_{\rm TR}}}\left(T\right)_{\rm cool}.$$
During the TR phase the post-shock gas velocity approaches the shock speed [@cioffi88], $$v'=K_{_{01}}\nu_{1}v,$$ with the velocity moment, $K_{_{\mathrm{01}}}$, and the fraction $\nu_{1}$ of the shock velocity $v$ (see Eq. \[eq:STvel\]).
The velocity moment, $K_{_{\mathrm{01}}}$, is unity in self-similar blast waves but changes whenever this condition is violated, thus at $t _{_{\mathrm{TR}}}$, $K_{ _{\mathrm{01,TR}}}\ \mathrm{=0.857}$ [@cioffi88 but see also @ostriker88, for more details].
We follow @cioffi88 and assume that the TR phase lasts until $$\label{eq:euler}
t_{_{\mathrm{TR}}} c = t_{_{\mathrm{PDS}}}$$ where $c$ = $(1+\eta) / (\eta ^{\eta /(1+ \eta)})$ with $\eta = (4(3-\omega)-2\omega)/(5-\omega)$. We follow the approximation by @petruk06 and assume $c$ = 1.83 for the homogeneous medium and $c$ = 1 for $\omega$ = 2. During this period, $\nu_{1}$ changes as $$\nu_{1}=\frac{3}{4}+0.25\left(\frac{\left(\frac{t}{t_{_{\mathrm{TR}}}}\right)^{2.1}-1}{\left(\frac{1}{c}\right)^{2.1}-1}\right).$$
As radiative cooling becomes important, $\nu_{1}$ increases from the ST value of 3/4 to a value of one at $t_{_{\rm PDS}}$. A thin, dense, radiatively cooling shell forms [@gaffet83; @ostriker88; @cioffi88; @petruk06].
The large thermal pressure gradient across the shock drives the expansion under the influence of radiative cooling [@cioffi88]. We use a set of coupled ordinary differential equations for the further evolution of the SNR starting at $t_{_{\mathrm{TR}}}$, throughout the PDS phase until $t_{_{\mathrm{MCS}}}$. The time evolution of mean momentum and shock radius then read (see @ostriker88, their Eq. (2.9) and appendix D): $$\label{eq:PDSmv}
\frac{d}{dt}(\bar{p})= \frac{4(3-\omega) \pi}{3} K_{_{\mathrm{pres}}}\bar{P}_{_{\rm th}}r_{_{\rm S}}^{2}$$ $$\label{eq:PDSvel}
\frac{d}{dt}(r_{_{\rm S}})=\frac{3}{4r_{_{\rm S}}^{3}\pi \bar{\rho}}\frac{1}{K_{_{\mathrm{01}}}\nu_{1}}(\bar{p})$$
where $K_{_{\mathrm{pres}}}$ is the pressure moment and $\bar{P}_{_{\rm th}}$ is the mean thermal pressure within the SNR, $$\label{eq:pressure}
\bar{P}_{_{\rm th}}=\frac{E_{_{\mathrm{th}}}}{2\pi r_{_{\rm S}}^{3}},$$ which depends on the thermal energy $E_{_{\mathrm{th}}}$ of the SNR changing as $$\label{eq:energyrate}
\frac{d}{dt}(E_{_{\mathrm{th}}})=-V \Lambda (\bar{T})\bar{n}^{2}.$$ $\Lambda$ is the cooling function (see Section \[section3\]) in a volume $V$ with a mean number density $\bar{n}$ and a mean temperature $\bar{T}$. We consider two volumes, namely that of the shock and the interior. Note that Eq. \[eq:energyrate\] is used throughout the entire evolution of the SN blast wave from $t_{_{\rm ST}}$ until the end [@ostriker88; @bisnovatyi95]. During the ST phase almost no thermal energy is radiated away. Internal structures have minor influence compared to the shock and are therefore neglected.
The pressure moment, $K_{_{\mathrm{pres}}}$, can be interpreted as the weighted mean interior pressure of the SNR (see [@ostriker88], Eq. D10a for further details). At the beginning of the TR phase in our SN-model $K_{_{\mathrm{pres, TR}}}$ = 0.932 and approaches $K_{_{\mathrm{pres, PDS}}}$ = 1 [@cioffi88; @ostriker88; @bisnovatyi95].
### Pressure driven snowplough phase {#section2_2_3}
The PDS is the first fully radiative phase. It starts with the formation of a thin shocked shell, which contains most of the mass of the SNR and encloses a roughly isobaric and hot cavity [@blondin98]. Since we restrict ourselves to one dimension, we neglect instabilities or deviations from spherical geometry [@franco94].
The evolution during the PDS is also described by the equations introduced in Section \[section2\_2\_2\] with $K_{_{\mathrm{pres}}}$ = $K_{_{\mathrm{01}}}$ = $\nu_{1}$ = 1. With a dense, uniform, thin shell we can model the flow using a self-similar solution, so that a power-law expansion as in Eq. \[eq:powerlaw\] is valid. As we neglect the influence of the inner parts, the expansion parameter $\eta$ in this case is [@ostriker88; @gaffet83], $$\eta= \frac{2}{2+3\gamma-\omega},$$ where $\gamma$ = 5/3 is the adiabatic index of a mono-atomic gas.
During the PDS almost all thermal energy is radiated away. The thermal pressure inside the cavity becomes equal to the ambient thermal pressure at $t_{_{\mathrm{MCS}}}$. At this point we stop the calculation of the PDS phase and assume that afterwards the radial momentum stays constant.
The numerical setup {#section3}
===================
We study the evolution of a single SNR from the ST to the MCS phase by solving the set of ODEs (Eqs. \[eq:PDSmv\] and \[eq:PDSvel\], together with Eq. \[eq:energyrate\]), based on the thin-shell approach [@cioffi88; @ostriker88] described in Section \[section2\_1\], via a fifth-order Runge-Kutta-Fehlberg integration scheme [@butcher96] with adaptive step-sizing. This spherically-symmetric, 1-dimensional SN model assumes no instabilities in the shell, no shell perforation, and no internal structures. An advantage of the presented SN model is that we can easily and efficiently calculate the evolution of SNe in a large number of different ambient media.
We assume solar metallicity and we model radiative cooling for $\mathrm{10^{4}\ K< T < 10^{8}\ K}$ using the cooling function by @sutherland93. For $\mathrm{T < 10^{4}\ K}$ a cooling function by @koyama00 [@koyama02] is used with $$\label{eq:cooling}
\begin{split}
\mathrm{\Lambda}{}={}& \mathrm{ \Gamma\left[ 10^{7}exp\left( \frac{-1.184\times 10^{5}}{T+1000} \right)\right.} \\
&\mathrm{+1.4 \times 10^{-2}\sqrt{T}exp\left.\left(\frac{-92}{T} \right)\right]\ erg\ cm^3\ s^{-1}}
\end{split}$$ with a fixed heating rate $\Gamma$ [@koyama02; @walch14a], $$\label{gamma}
\mathrm{\Gamma=2\times 10^{-26}\ erg\ s^{-1}.}$$
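For reference, Eq. \[eq:cooling\] with the heating rate of Eq. \[gamma\] translates directly into the following Python sketch; above $10^{4}$ K the model uses the tabulated @sutherland93 cooling, for which we only indicate a placeholder:

```python
import numpy as np

GAMMA = 2.0e-26  # fixed heating rate [erg s^-1], Eq. (gamma)

def lambda_koyama(T):
    """Cooling function Lambda(T) [erg cm^3 s^-1] of Eq. (eq:cooling), valid for T < 1e4 K."""
    return GAMMA * (1.0e7 * np.exp(-1.184e5 / (T + 1000.0))
                    + 1.4e-2 * np.sqrt(T) * np.exp(-92.0 / T))

def net_cooling_rate(n, T):
    """Net volumetric energy loss in the cold regime, assuming the common convention
    of n^2 Lambda cooling balanced against n Gamma heating [erg cm^-3 s^-1]."""
    return n**2 * lambda_koyama(T) - n * GAMMA

# For 1e4 K < T < 1e8 K the model interpolates the tabulated solar-metallicity
# cooling of Sutherland & Dopita (1993); a table lookup would replace lambda_koyama there.
```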
The SN is initialised at the beginning of the ST phase by adding $10^{51}$ erg of total energy $E_{_{\rm SN}}$ [@ostriker88] and 2 M$_{_{\odot}}$ [@draine11] of ejecta mass at the initial ST radius, Eq. \[eq:initalR\], corresponding to an initial momentum input of $\rm p_{_{\mathrm{0}}}$ = 14181 $\mathrm{M_{\odot}\,km\,s^{-1}}$.
We run simulations with different combinations of ambient medium densities and density distributions (Eq. , see Table \[tab:overview\_runs\]). The initial number densities for a uniform distribution $n_{_{0,\rm uni}}$ and the central density of the power-law distribution $n_{_{0,\rm power}}$ vary in a range of 0.1 $-$ 100 $\mathrm{cm^{-3}}$ ($n_{_{0,\rm uni}}$ = $n_{_{0,\rm power}}$ = 0.1, 0.3, 1, 3, 10, 30, 100 $\mathrm{cm^{-3}}$).
At radii smaller than $R_{_{ST}}$ we assume the density to be homogeneous as the mass of the ejecta dominates the first phase. At larger radii we consider different density distributions (constant, power-law, turbulent) in the ambient medium. For the power-law distribution we assume a density floor, $n_{_{\mathrm{floor}}}$:
$$\label{eq:alldensities}
n_{_{\rm power}}(r) = \begin{cases}
n_{_{0, \rm power}} & \mathrm{for}\ r \leq r_{_{\rm ST}}\\
n_{_{0, \rm power}} \left(\frac{r}{r_{_{\rm ST}}}\right)^{-\omega}& \mathrm{for}\ r > r_{_{\rm ST}}\\
& \ \ \ \mathrm{and}\ n_{_{\rm power}}(r) \geq n_{{\rm floor}} \\
n_{_{\rm floor}} & \mathrm{for}\ r > r_{_{\rm ST}}\\
& \ \ \ \mathrm{and}\ n_{_{\rm power}}(r) < n_{{\rm floor}}.
\end{cases}$$
Without this lower limit the mean of the ambient density would drop to non-physical values and the sound speed of the ambient medium with a fixed pressure would increase to infinity [@chevalier76; @cavaliere76; @greif11; @hennebelle12].
A self-consistent treatment of the chemical evolution is not included and it is not possible to consider multiple ionization states of the ambient medium. For simplicity, we choose a neutral environment with solar abundances with $\mu_{_{\rm a}}$ = 1.27. Some studies [e.g.: @cioffi88; @petruk06] consider the SN environment to be ionized. To compare with these results we rerun the simulations in uniform media and for a turbulent example with $\mu_{_{\rm i}}$ = 0.61 (see Section \[section4\_1\] and Section \[section5\_1\]).
A simulation is terminated at the beginning of the MCS phase, $t_{_{\rm MCS}}$ (see Section \[section2\_2\_3\]), after which the momentum is constant. For all environments we assume a universal ambient pressure, because $P \propto nT \sim \rm const$ [@mckee77]. All parameters of the model and the performed simulations are summarized in Table \[tab:overview\_runs\].
The computational effort to run a single SN depends on the number of time steps. The initial step-size is chosen to be a fraction of the ST time, which depends on the density of the ambient medium. During the computation we use adaptive step-size control. We compare the local, relative error of the radius and the thermal energy obtained from the applied integration scheme with a global tolerance of $10^{-3}$ at densities of $n_{_{0,\rm uni}}$ $\leq$ 50 $\rm cm^{-3}$ and $10^{-4}$ for denser environments. In case the local error exceeds the global tolerance the time-step is adjusted. On a single core (clock speed 3.40 GHz) a simulation needs between 4$\times 10^{3}$ ($n_{_{0,\rm uni}}$ = 3 $\rm cm^{-3}$) and 1.3$\times 10^{4}$ ($n_{_{0,\rm uni}}$ = 100 $\rm cm^{-3}$) time-steps, which corresponds to a CPU time of 1.5 s to 6 s.
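To illustrate the structure of the integration (not the production code), a strongly simplified Python sketch for a uniform ambient medium is given below. It integrates Eqs. \[eq:PDSmv\] and \[eq:PDSvel\] together with Eq. \[eq:energyrate\] using SciPy's adaptive Runge-Kutta driver instead of the Runge-Kutta-Fehlberg scheme, sets $K_{_{\rm pres}} = K_{_{01}} = \nu_{1} = 1$ throughout, treats the whole remnant as a single uniform sphere for the cooling term, and replaces the tabulated cooling above $10^{4}$ K by a crude power-law placeholder. All of these simplifications are ours, so the numbers from this toy setup should not be expected to reproduce the tabulated model values:

```python
import numpy as np
from scipy.integrate import solve_ivp

M_SUN, M_H, PC, YR, KB = 1.989e33, 1.6726e-24, 3.086e18, 3.156e7, 1.3807e-16
E_SN, M_EJ, MU, GAMMA = 1e51, 2.0 * M_SUN, 1.27, 2.0e-26

def cooling(T):
    """Toy Lambda(T): Koyama & Inutsuka below 1e4 K, rough power-law stand-in above."""
    T = max(T, 10.0)
    if T < 1e4:
        return GAMMA * (1e7 * np.exp(-1.184e5 / (T + 1000.0))
                        + 1.4e-2 * np.sqrt(T) * np.exp(-92.0 / T))
    return 1.0e-22 * (T / 1e6) ** -0.7      # placeholder, NOT Sutherland & Dopita

def rhs(t, y, n0):
    """y = [p (radial momentum), r (shock radius), E_th]; uniform ambient medium."""
    p, r, e_th = y
    e_th = max(e_th, 0.0)
    rho0 = n0 * MU * M_H
    mass = M_EJ + 4.0 / 3.0 * np.pi * rho0 * r**3   # Eq. (eq:mass), omega = 0
    p_th = e_th / (2.0 * np.pi * r**3)              # Eq. (eq:pressure)
    dp_dt = 4.0 * np.pi * p_th * r**2               # Eq. (eq:PDSmv), K_pres = 1
    dr_dt = p / mass                                # Eq. (eq:PDSvel), K_01 = nu_1 = 1
    vol = 4.0 / 3.0 * np.pi * r**3                  # crude: SNR as one uniform sphere
    n_bar = mass / (MU * M_H) / vol
    t_bar = e_th / (1.5 * (mass / (MU * M_H)) * KB)
    de_dt = -vol * cooling(t_bar) * n_bar**2        # Eq. (eq:energyrate)
    return [dp_dt, dr_dt, de_dt]

def run(n0=1.0, t_end=1.0e6 * YR):
    rho0 = n0 * MU * M_H
    r0 = (3.0 * M_EJ / (4.0 * np.pi * rho0)) ** (1.0 / 3.0)   # Eq. (eq:initalR)
    xi = 50.0 / (8.0 * np.pi)                                 # (5-w)(10-3w)/(8 pi), w = 0
    t0 = (r0 * (xi * E_SN / rho0) ** -0.2) ** 2.5
    e_th0 = 0.72 * E_SN                       # ~72 per cent thermalized at t_ST (Section 2)
    p0 = np.sqrt(2.0 * E_SN * M_EJ)           # ~14181 Msun km/s; rough initialization (our choice)
    sol = solve_ivp(rhs, (t0, t_end), [p0, r0, e_th0], args=(n0,),
                    method="RK45", rtol=1e-6, max_step=1e3 * YR)
    p_final = sol.y[0, -1] / (M_SUN * 1e5)    # convert to Msun km/s
    print(f"n0 = {n0}: r = {sol.y[1, -1] / PC:.1f} pc, p = {p_final:.2e} Msun km/s")

run(1.0)
```

A production version would additionally track the transition times, the velocity moments $K_{_{01}}$, $\nu_{1}$, $K_{_{\rm pres}}$, the separate shock and interior volumes, and would stop at $t_{_{\rm MCS}}$ when the interior pressure equals the ambient pressure.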
Blastwave evolution in idealised environments {#section4}
=============================================
Homogeneous density distribution {#section4_1}
--------------------------------
We apply our model to follow the evolution of blast waves for a single SN in homogeneous media with densities of $n_{_{\mathrm{uni}}}$ = 0.1 $-$ 100 $\mathrm{cm^{-3}}$, covering the more tenuous ISM up to average densities of MCs. We assume both an ionized ambient medium with $\mu_{_{\rm i}}$ and a neutral ambient medium with $\mu_{_{\rm a}}$.
![Model predictions for the end of the ST phase $t_{_{\rm TR}}$ (black triangles) and the beginning of the pressure driven snowplough phase $t_{_{\rm PDS}}$ (black circles) in ambient media with different number densities $n_{_{0, \rm uni}}$ and different states of ionization of the ambient gas. Full symbols show the case of a neutral ambient medium with solar abundances ($\mu_{_{\rm a}}$), and open symbols show the case of a fully ionized ambient medium with $\mu_{_{\rm i}}$. Our results are consistent with previous works by @blondin98 [here BW98] and @petruk06 [here P06] but differ significantly from @cioffi88 [here CO88] and @franco94 [here FM94] for several reasons (see details in the text).[]{data-label="fig:transition"}](prod_TRANS.eps){width="50.00000%"}
The transition times $t_{_{\mathrm{TR}}}$ and $t_{_{\mathrm{PDS}}}$ (see Fig. \[fig:transition\]) of SNe in homogeneous media, obtained in this work, can each be fitted with a power-law that depends on the number density $n_{_{0, \rm uni}}$ and the mean molecular weight $\mu$ (see Section \[section4\_1\]):\
$t_{_{\mathrm{TR},\mu_{_{\rm a}}}} = 4.15\,(n_{_{0, \rm uni}} / 1\,\mathrm{cm^{-3}})^{-0.53} \times 10^{4}$ yr\
$t_{_{\mathrm{PDS},\mu_{_{\rm a}}}} = 7.80\,(n_{_{0, \rm uni}} / 1\,\mathrm{cm^{-3}})^{-0.53} \times 10^{4}$ yr\
$t_{_{\mathrm{TR},\mu_{_{\rm i}}}} = 3.18\,(n_{_{0, \rm uni}} / 1\,\mathrm{cm^{-3}})^{-0.54} \times 10^{4}$ yr\
$t_{_{\mathrm{PDS},\mu_{_{\rm i}}}} = 5.80\,(n_{_{0, \rm uni}} / 1\,\mathrm{cm^{-3}})^{-0.54} \times 10^{4}$ yr.\
The definitions for the respective transition times are not unique. Different numerical setups (e.g. @petruk06), cooling functions [e.g. @cioffi88] and assumptions for the ambient medium (mean molecular weight in ionized, $\mu_{_{\rm i}}$, or neutral, $\mu_{_{\rm a}}$, media) can lead to different results. Fig. \[fig:transition\] compares $t_{_{\mathrm{TR}}}$ and $t_{_{\mathrm{PDS}}}$ from previous works [@cioffi88; @franco94; @blondin98; @petruk06] to values obtained from this work (black triangles, black circles) in uniform ambient media with number densities between 0.1 $\mathrm{cm^{-3}}$ and 100 $\mathrm{cm^{-3}}$.
Our results are consistent with previous studies by @blondin98 and @petruk06 assuming the ambient medium to be ionized (open symbols). The differences in low density environments are less than 10 per cent; only at $n_{_{0, \rm uni}}$ = 100 $\mathrm{cm^{-3}}$ do the values differ by $\sim$ 40 per cent. In models with a neutral medium (full symbols), $t_{_{\mathrm{TR}}}$ and $t_{_{\mathrm{PDS}}}$ are significantly shifted to later times. @cioffi88 and @franco94 use different setups and their results do not agree with the findings of the other authors. For a detailed comparison of important times in the evolution of SNRs we refer to @kim14 and @petruk06.\
---------------------------------------------------- ---------------------------------------------------
![image](prod_MASS_homo.eps){width="1.\textwidth"} ![image](prod_ETH_homo.eps){width="1.\textwidth"}
![image](prod_RAD_homo.eps){width="1.\textwidth"} ![image](prod_MOM_homo.eps){width="1.\textwidth"}
---------------------------------------------------- ---------------------------------------------------
In Fig. \[fig:evoconst\], top left panel, we show the evolution of the swept-up mass of the SNR. Initially it is dominated by the ejecta mass. The swept-up mass increases rapidly during the ST phase. The final swept-up mass, $M_{_{tot}}$, is $\sim$ 1290 \[660\] $\mathrm{M_{\odot}}$ in dense environments and increases up to about 8870 \[4590\] $\mathrm{M_{\odot}}$ in an ambient medium with $n_{_{0,\rm uni}}$ = 0.1 $\mathrm{cm^{-3}}$ (values in square brackets refer to the ionised case, see below). This significant increase is a consequence of a 30 times longer evolution in lower-density environments. It will be discussed in more detail in Section \[section5\_3\].
In Fig. \[fig:evoconst\] (top right panel) we show the evolution of the thermal energy starting from the ST phase (71.7 per cent of the initial SN energy) until the onset of the MCS phase (end of lines). Here and in all following plots, the beginning of the TR phase is indicated by triangles and the onset of the PDS phase by circles. Filled symbols and thick solid lines show the results for a neutral ambient medium. The open triangles, circles, and dashed lines correspond to the same models assuming an ionised ambient medium. Hereafter, the values for ionised ambient media are given within square brackets.
As expected, for the highest density ($n_{_{0,\rm uni}}$ = 100 $\mathrm{cm^{-3}}$, black line) the ST phase terminates already after 3.6 \[2.8\] kyr while for the lowest density ($n_{_{0,\rm uni}}$ = 0.1 $\mathrm{cm^{-3}}$, dark yellow line) the ST lasts until 150 \[112\] kyr.
As the density of the shell increases, the post-shock gas starts to radiate. The significant drop of the thermal energy at $t_{_{\rm TR}}$ occurs at much earlier times for $n_{_{0,\rm uni}}$ = 100 $\mathrm{cm^{-3}}$ than for $n_{_{0,\rm uni}}$ = 0.1 $\mathrm{cm^{-3}}$. For all densities the PDS phase starts at about 1.8 $t_{_{\rm TR}}$. For high densities ($n_{_{0,\rm uni}}$ = 100 $\mathrm{cm^{-3}}$) the PDS phase of 1.9 \[1.4\] kyr is short compared to 185 \[159\] kyr in an ambient density of $n_{_{0,\rm uni}}$ = 0.1 $\mathrm{cm^{-3}}$. The bubble stays over-pressured and drives the evolution throughout the PDS stage. Cooling becomes inefficient (the curves flatten toward the end of the evolution) as the temperature of the SNR drops below $\mathrm{10^{4}}$ K [@koyama02; @sutherland93 see Eq. \[gamma\]].
The time evolution of the shell radius is shown in the bottom left panel of Fig. \[fig:evoconst\]. For all densities the radius evolves as $r_{_{\rm S}} \propto t^{\eta}$ with $\eta = 2/5$ in the ST phase. At $t=t_{_{\rm TR}}$, $\eta$ shifts towards 2/7 and the SNR enters the PDS stage. For the highest density the shell expands to a radius of 3.4 \[3.6\] pc during the ST and to 4.2 \[4.4\] pc in the PDS phase. For the lowest density the TR radius is about 59.5 \[61.6\] pc expanding to 73.5 \[76.2\] pc in the transition phase and finally reaches 85.3 \[90.0\] pc at the end of the PDS. The final expansion radius significantly decreases from low to high density environments, because the cooling of the shell occurs earlier and therefore the interior pressure drops more rapidly in denser media.
In the bottom right panel of Fig. \[fig:evoconst\] we show the corresponding evolution of the radial shell momentum. During the ST phase the SN momentum increases significantly from $\rm{p_{_{0}}}$ $\approx$ 1.4 $\times 10^{4}\,\mathrm{M_\odot\,km\,s^{-1}}$ by a factor of $\sim$ 8 \[6\] for $n_{_{0,\rm uni}}$ = 100 $\mathrm{cm^{-3}}$ and up to a factor of 20 \[14\] at $n_{_{0,\rm uni}}$ = 0.1 $\mathrm{cm^{-3}}$. The following transition phase further increases the momentum by $\sim$ 40 per cent with respect to the ST values. At the beginning of the MCS phase the shell momentum varies between 13.4 \[9.3\] $\rm{p_{_{0}}}$ for the highest density and 30.9 \[21.3\] $\rm{p_{_{0}}}$ for an ambient density of 0.1 $\mathrm{cm^{-3}}$. However, the momentum increase during the PDS is almost negligible because the pressure inside the SNR is lowered to values similar to the ambient pressure (see Section \[section3\]). Within a high density environment ($n_{_{0,\rm uni}}$ = 100 $\mathrm{cm^{-3}}$) the increase is only 0.9 $\rm{p_{_{0}}}$. The final radial momentum converges as the temperature inside the SNR drops. Shortly before the onset of the MCS phase a final plateau forms. The temperature has dropped below 10$^{4}$ K and the photoelectric heating starts to compensate the radiative cooling [@koyama02].\
\
In Fig. \[fig:evoconst\_iter\] we compare the final momenta in a density range of $n_{_{0,\rm uni}}$ = 0.1 $-$ 100 $\mathrm{cm^{-3}}$ from our model with recent numerical simulations [@kim14; @li15; @martizzi14] and with previous works [@cioffi88]. We show the results for atomic (full black squares) and ionized media (open black squares). The SN model in an ionized medium with a density of $n_{_{0,\rm uni}}$ = 1 $\mathrm{cm^{-3}}$ has a radial momentum input of $\mathrm{2.3 \times 10^{5}\,M_{\odot}\,km\,s^{-1}}$, which is in good agreement with the $\mathrm{2.17 \times 10^{5}\,M_{\odot}\,km\,s^{-1}}$ found by @kim14, with the $\mathrm{2.66 \times 10^{5}\,M_{\odot}\,km\,s^{-1}}$ found by @li15, and with the semi-analytic solution by @cioffi88.
For neutral and ionised gas the final momentum input is
$p_{\mu_{_{\rm a}}}\ =22.44\, (n_{_{0, \rm uni}} / 1\,\mathrm{cm^{-3}})^{-0.12}\ \mathrm{p_{_{0}}} $\
$p_{\mu_{_{\rm i}}}\ =16.52\, (n_{_{0, \rm uni}} / 1\,\mathrm{cm^{-3}})^{-0.12}\ \mathrm{p_{_{0}}} $,\
respectively. Numerical simulations by @kim14 find a lower factor of 19.75 and an exponent of -0.16.
![image](prod_momdens_homo.eps){width="80.00000%"}
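For convenience, the fitted scalings above can be evaluated directly; a trivial Python helper (the coefficients are the fit values quoted above) is:

```python
P0 = 14181.0  # M_sun km/s, initial SN momentum

def p_final(n0, ionised=False):
    """Final momentum input [M_sun km/s] from the fits above (uniform medium)."""
    a = 16.52 if ionised else 22.44
    return a * n0 ** -0.12 * P0

for n0 in (0.1, 1.0, 10.0, 100.0):
    print(n0, round(p_final(n0)), round(p_final(n0, ionised=True)))
```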
Power-law density distribution {#section4_2}
------------------------------
---------------------------------------------------- ---------------------------------------------------
![image](prod_MASS_grad.eps){width="1.\textwidth"} ![image](prod_ETH_grad.eps){width="1.\textwidth"}
![image](prod_RAD_grad.eps){width="1.\textwidth"} ![image](prod_MOM_grad.eps){width="1.\textwidth"}
---------------------------------------------------- ---------------------------------------------------
We now assume a power-law ambient medium density distribution following Eq. with $\omega$ = 2. We vary $n_{_{0,\rm power}}$ = 0.1 $-$ 100 $\mathrm{cm^{-3}}$ [@weaver77; @band88].
In the top left panel of Fig. \[fig:evograd\] we show the corresponding evolution of the swept-up mass. We find two distinct regimes for the mass evolution. While the shock travels through the power-law part of the density profile, the swept-up mass grows as $M$ $\propto$ $t^{1.95}$; in this regime $\sim$ 155 $\rm M_{\odot}$ are swept up for a high central density ($n_{_{0,\rm power}}$ = 100 $\mathrm{cm^{-3}}$) compared to 6 $\rm M_{\odot}$ for $n_{_{0,\rm power}}$ = 0.1 $\mathrm{cm^{-3}}$. Once the uniform density floor is reached, the swept-up mass is quickly dominated by the surrounding uniform medium with $n_{_{\rm floor}}$. Independent of $n_{_{0,\rm power}}$ the swept-up mass is $\sim$ 5000 $\rm M_{\odot}$ at $t_{_{\rm TR}}$ and 1.3 $\times 10^{4}$ $\rm M_{\odot}$ at $t_{_{\rm MCS}}$. Compared to the uniform ambient medium with $n_{_{0,\rm uni}}$ = 0.01 $\mathrm{cm^{-3}}$, the total swept-up mass in the power-law distribution is $\sim$ 20 per cent smaller, because the expansion is shorter in time and in radius as slightly less momentum is created during the evolution.
In Fig. \[fig:evograd\] (top right panel) we show the evolution of the thermal energy normalized to the initial SN energy. The initial thermal energy is 0.82 $E_{_{\rm SN}}$ (results from Eq. and the momentum at $t_{_{\rm ST}}$). Starting with energy conservation during the ST phase, thermal energy is radiated away at the same $t_{_{\rm TR}}$ (triangles, $t_{_{\rm TR}}\sim$ 510 kyr) independent of the central density. The thermal energy drops significantly during the PDS phase (circles, $t_{_{\rm PDS}}\sim$ 1 Myr) to 0.26 $E_{_{\rm SN}}$. For all central densities the thermal energy is lost only within the last $\sim$ 300 kyr of the simulation ($t_{_{\rm MCS}}\sim$ 1.2 Myr). For comparison, the thermal energy retained at $t_{_{\rm PDS}}$ in a uniform ambient medium with $n_{_{0,\rm uni}}$ = 0.01 $\mathrm{cm^{-3}}$ is 0.4 $E_{_{\rm SN}}$.
The time evolution of the shell radius is shown in Fig. \[fig:evograd\] (bottom left panel). For all densities the radius evolves with an expansion parameter $\eta \sim 2/(5 - \omega)$ in the ST phase turning to $\eta \sim$ 2/7 as it reaches the PDS phase within the homogeneous medium. For the highest central density ($n_{_{0,\rm power}}$ = 100 $\mathrm{cm^{-3}}$) the shell expands to 155 pc during the ST phase. At $t_{_{\rm PDS}}$ the radius is 204 pc and finally the shell has expanded to 215 pc. These values are almost independent of the central density and are more comparable to the expansion radius in a homogeneous ambient medium with $n_{_{0,\rm uni}}$ = 0.01 $\mathrm{cm^{-3}}$, which expands to 230 pc.
The radial momentum (Fig. \[fig:evograd\]; bottom right panel) depends, among other factors, on the swept-up mass, which couples the thermal energy to the ambient medium. In a power-law medium, where $n(r)$ decreases rapidly, the mass of the SN ejecta dominates the initial evolution. The momentum increases between 2.4 $\rm{p_{_{0}}}$ ($n_{_{0,\rm power}}$ = 0.1 $\mathrm{cm^{-3}}$) and 5.1 $\rm{p_{_{0}}}$ ($n_{_{0,\rm power}}$ = 100 $\mathrm{cm^{-3}}$) before $n(r)$ = $n_{_{\rm floor}}$ is reached. From this point onwards, the momentum increases more rapidly. At $t_{_{\rm TR}}$ all simulations converge to a common value of $\sim$ 25.3 $\rm{p_{_{0}}}$, increase to 36.3 $\rm{p_{_{0}}}$ at $t_{_{\rm PDS}}$ and finally to 37.0 $\rm{p_{_{0}}}$. For comparison, the momentum in a homogeneous medium with $n_{_{0,\rm uni}}$ = 0.01 $\mathrm{cm^{-3}}$ at $t_{_{\rm TR}}$ is 26.6 $\rm{p_{_{0}}}$ and 39.0 $\rm{p_{_{0}}}$ at $t_{_{\rm MCS}}$.\
------------------------------------------------------------ --------------------------------------------------------
![image](prod_PARTDENS_gradOVER.eps){width="1.\textwidth"} ![image](prod_PRES_gradOVER.eps){width="1.\textwidth"}
![image](prod_RAD_gradOVER.eps){width="1.\textwidth"} ![image](prod_MOM_gradOVER.eps){width="1.\textwidth"}
------------------------------------------------------------ --------------------------------------------------------
In Fig. \[fig:evogradover\] we illustrate the impact of different values of $n_{_{\rm floor}}$ ($n_{_{\rm floor}}$ = 10$^{-2}$, 10$^{-4}$ cm$^{-3}$) on the remnant evolution in power-law environments. For comparison, we show the case of a homogeneous ambient medium with $n_{_{0,\rm uni}}$ = 1 cm$^{-3}$ (black, solid line), $n_{_{0,\rm uni}}$ = 10$^{-2}$ cm$^{-3}$ (green, dashed line) and $n_{_{0,\rm uni}}$ = 10$^{-4}$ cm$^{-3}$ (dark yellow, dashed line). We compare the case of a SNR expanding into a warm ionized medium (WIM case; green lines) with $n_{_{\rm floor}}$ =10$^{-2}$ $\mathrm{cm^{-3}}$, $T$ = 7000 K, and $P/k_{_{\rm b}}$ = 70 $\rm cm^{-3}$ K; or into a hot ionized medium (HIM case; dark yellow lines) with $n_{_{\rm floor}}$ = 10$^{-4}$ cm$^{-3}$, T = 3$\times10^{5}$ K, and $P/k_{_{\rm b}}$ = 30 $\rm cm^{-3}$ K, respectively [@mckee95]. A plain power-law with no density floor (red lines) is also shown. We terminate the latter simulation at 30 Myr. The density distributions are shown in the top left panel of Fig. \[fig:evogradover\].
In the top right panel of Fig. \[fig:evogradover\] we show the interior pressure, $P/k_{_{\rm b}}$ (full lines) and the counteracting ambient pressure (dotted lines). Assuming an isothermal environment, the ambient pressure is directly proportional to the density distribution. The homogeneous ambient medium is isobaric, whereas in the WIM and HIM the pressure decreases with increasing radius down to the isobaric floor. The pressure in the ambient medium with a plain power-law would decrease to zero at infinity. The pressure inside the bubble decreases and drops significantly at $t_{_{\rm TR}}$ when radiation becomes important. When the ambient pressure is equal to the interior pressure, the simulation terminates at 98 kyr (homogeneous medium), 1.3 Myr (WIM) and 26 Myr (HIM).
The expansion radius of the SNR (left bottom panel) increases with lower ambient densities. In a homogeneous medium the radius is the smallest as the shock sweeps up mass with a constant density. The power-law media with homogeneous surroundings show similar behaviour but different final radii depending on the ambient pressure. The final radius in the WIM is $\sim$ 200 pc ($t_{_{\rm MCS}}$ = 1.1 Myr) and in the HIM $\sim$ 1020 pc ($t_{_{\rm MCS}}$ = 5.6 Myr). For the plain power-law the density drops with the radius; the counteracting swept-up mass is missing and the simulation is terminated (at 30 Myr) without a dense shell having formed [@ostriker88; @truelove99; @petruk06].
The final radial momentum input (Fig. \[fig:evogradover\], bottom right panel) increases from 22.9 $\rm p_{_{0}}$ in the homogeneous medium and almost doubles to 39.0 $\rm p_{_{0}}$ assuming a WIM. In the HIM the momentum input is 68.3 $\rm p_{_{0}}$. The momentum in the plain power-law environment increases continuously.
To summarize, we find that the momentum injection in a power-law environment is small compared to the uniform medium, because the decreasing density suppresses the coupling of the momentum to the gas. If the power-law environment is surrounded by a homogeneous density floor the final momentum can increase. However, the momentum input is always smaller or equal to the case of a uniform ambient medium with $n_{_{0, \rm uni}}$ = $n_{_{\rm floor}}$, independent of $n_{_{0, \rm power}}$.
Blast wave evolution in wind-driven bubbles {#sec_bubble}
===========================================
---------------------------------------------------------- -----------------------------------------------------
![image](prod_PARTDENS_weaver.eps){width="1.\textwidth"} ![image](prod_ETH_weaver.eps){width="1.\textwidth"}
![image](prod_RAD_weaver.eps){width="1.\textwidth"} ![image](prod_MOM_weaver.eps){width="1.\textwidth"}
---------------------------------------------------------- -----------------------------------------------------
![image](prod_MOM_over_weaver.eps){width="80.00000%"}
During the lifetime of a massive star strong stellar winds interact with the ambient medium and blow low-density bubbles [@weaver77]. The subsequent SNe explode in these bubbles and the evolution of the blast wave is modified. Here we discuss the evolution of SN blast waves in wind-blown bubbles. We assume a simple model for a constant wind expanding into an initially cold (80 K) homogeneous medium with four different initial densities ($n_{_{0,\rm uni}}$ = 1, 10, 100, 1000 $\rm cm^{-3}$). In these cold environments the wind-blown bubble expands supersonically and drives a strong shock into the ambient ISM. The shock is radiative and cools down to $T_{_{\rm s, SH }}$.
We assume a 20 M$_{_{\odot}}$ O-star with a constant wind velocity of v$_{_{\omega}}$= 2000 km s$^{-1}$ and a constant mass-loss rate of $\dot{\rm M}_{_{\omega}}$ = $10^{-7}$ M$_{_{\odot}}$ yr$^{-1}$ over a lifetime of $t_{_{B}}$ = 10 Myr. The SN has an ejecta mass M$_{_{\rm eject}}$ = 2 M$_{_{\odot}}$ [@puls09]. The expansion radius $r_{_{\rm s, B}}$ of a wind-blown bubble from a constant stellar wind without heat transfer is given by [@weaver77; @pittard13] $$r_{_{\rm s,B}}(t) = \left( \frac{125}{154\pi}\right)^{1/5} \left( \frac{L_{_{\rm \omega}}}{\rho_{_{0, \rm uni}}}\right)^{1/5} t^{3/5}$$ where $ \rho_{_{0, \rm uni}}$ is the density of the initial homogeneous ambient medium with $\mu$ = 1. $L_{_{\rm \omega}}$ is the mechanical luminosity $$L_{_{\rm \omega}} = \frac{1}{2}\dot{M}_{_{\rm \omega}}v_{_{\rm \omega}}^{2}.$$
The average density $\rho_{_{\rm B}}$ within the bubble without mixing is [@dyson73; @garciasegura95; @pittard13] $$\rho_{_{\rm B}}(t) = \frac{3 \dot{M}_{_{\omega}}t}{4\pi r_{_{\rm s,B}}^{3} }.$$
The density of the wind-shocked shell $\rho_{_{\rm s, B}}$ can be estimated by the isothermal shock jump condition ($\gamma$ = 1), $$\rho_{_{\rm s, B}} = \rho_{_{0, \rm uni}} \frac{v_{_{\rm s, B}}^{2}}{c_{_{0}}^{2}}$$ where $c_{_{0}}$ is the sound-speed of the ambient medium with $c_{_{0}} = (\gamma P_{_{0}} / \rho_{_{0, \rm uni}})^{1/2}$. The wind bubble expands supersonically with the velocity $v_{_{\rm s, B}}$ $$\frac{d}{dt}(r_{_{\rm s,B}}) =v_{_{\rm s, B}} = \frac{3}{5}\frac{r_{_{\rm s,B}}}{t}.$$
The shell thickness $\delta r_{_{\rm s, B}}$ is $$\delta r_{_{\rm s, B}} = \frac{c_{_{0}}^{2}}{3}\frac{r_{_{\rm s, B}}}{v_{_{\rm s, B}}^{2}}.$$
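A short Python sketch evaluating these bubble relations in cgs units is given below; the isothermal sound speed is computed from the quoted ambient temperature (our reading of the setup, with $\gamma = 1$), and the default parameters are the fiducial wind values given above:

```python
import numpy as np

M_SUN, M_H, PC, YR, KB = 1.989e33, 1.6726e-24, 3.086e18, 3.156e7, 1.3807e-16

def wind_bubble(n0=1.0, T0=80.0, mdot=1e-7 * M_SUN / YR, v_w=2e8, t=10e6 * YR, mu=1.0):
    """Return shell radius [pc], velocity [km/s], interior and shell number
    densities [cm^-3], and shell thickness [pc] from the relations above."""
    rho0 = n0 * mu * M_H
    L_w = 0.5 * mdot * v_w**2                              # mechanical luminosity
    r = (125.0 / (154.0 * np.pi)) ** 0.2 * (L_w / rho0) ** 0.2 * t ** 0.6
    v = 0.6 * r / t                                        # shell expansion velocity
    c0 = np.sqrt(KB * T0 / (mu * M_H))                     # isothermal sound speed (assumption)
    rho_b = 3.0 * mdot * t / (4.0 * np.pi * r**3)          # mean interior density
    rho_sh = rho0 * (v / c0) ** 2                          # isothermal jump condition
    dr_sh = c0**2 / 3.0 * r / v**2                         # shell thickness
    return r / PC, v / 1e5, rho_b / (mu * M_H), rho_sh / (mu * M_H), dr_sh / PC

print(wind_bubble(n0=1.0))
```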
In Fig. \[fig:evoweaver\] we show the evolution of a SN in each of the four pre-existing wind-blown bubbles. The densities inside the bubble, $n_{_{\rm B}}$, are 3.7, 14.8, 59.1 and 235.1 $\times 10^{-4}$ $ \rm cm^{-3}$ for ambient densities of $n_{_{0,\rm uni}}$ = 1, 10, 100, 1000 $\rm cm^{-3}$ (top left panel, dashed line). The interior is separated from the ambient medium by a dense shell. The density contrast between the interior and the shell is constant at 1.5 $\times\, 10^{-5}$. The thicknesses of the shells are 0.7, 1.2, 1.8 and 2.9 pc. The density of the SNR follows this evolution until the evolution stalls.
The SN evolution in the low density interior is dominated by the ST phase, which immediately ends when the blast wave hits the dense shell (top left panel). Within $\sim$ 2 kyr 80 per cent of the initial thermal energy is radiated away, almost independently of the shell density. The remaining thermal energy is related to the hot, low-density interior of the SNR. Previous works [e.g. @dwarkadas07] show a similar behaviour of rapid cooling at the shock boundary. Recent numerical simulations [@fierlinger15] point out that 1.5 per cent of the SN energy is left after the SNR stalls at the boundary.
Initially the radial evolution (bottom left panel) is that within a homogeneous medium. For the densest ambient medium ($n_{_{0,\rm uni}}$ = 1000 $\rm cm^{-3}$) the wall of the wind-blown cavity is reached after $\sim$ 4.9 kyr and 22.0 pc, while it takes $\sim$ 12.2 kyr and 87.6 pc for $n_{_{0,\rm uni}}$ = 10 $\rm cm^{-3}$. The final radius corresponds to the inner radius of the bubble.
The density distribution of the wind-bubble is assumed to be static and the shell has no momentum. While in the ST phase, the momentum input by the SN is small because of the low gas density within the bubble. Once the remnant reaches the shell, which is massive compared to the swept-up mass from the SN, it cools quickly and cannot accelerate the shell. As a result the evolution of the SNR stalls. The final momentum input (bottom right panel) lies between $\sim$ 2.4 and 2.9 $\rm p_{_{0}}$.
The density difference between the interior and the shock as well as the density of the wind-blown shell itself determine the final radial momentum. Assuming isothermal behaviour, the ambient temperature of the initial environment is linked to the shell temperature, which in turn affects the thickness of the shell. Therefore, in Fig. \[fig:evoweaver\_temp\] we show the influence of the densities, $n _{_{\rm B}}$, and the temperature of the ambient interstellar medium on the momentum input. We choose $n _{_{\rm B}}$ = 3.7 $\times$ 10$^{-4}$ and 0.37 $\rm cm^{-3}$, where the first corresponds to a wind-blown bubble with an initial density $n _{_{0, \rm uni}}$ = 1 $\rm cm^{-3}$ and the latter corresponds to a bubble which is filled by ionised gas as would be the case for an HII region. We increase the temperatures from 80 K to 800 K and to the temperature (3175 K) which corresponds to $v_{_{\rm s, B}}$ = $c_{_{0}}$. The dashed lines show the momenta of SNe in uniform media with n$_{_{\rm B}}$ = $n _{_{0, \rm uni}}$.
For the low density case ($n _{_{\rm B}}$ = 3.7 $\times$ 10$^{-4}$ $\rm cm^{-3}$) we show how the final momentum increases with temperature from 2.9 $\rm p_{_{0}}$ at 80 K to 4.4 $\rm p_{_{0}}$ at 800 K and up to 6.5 $\rm p_{_{0}}$ at 3175 K. At a higher interior density ($n _{_{\rm B}}$ = 0.37 $\rm cm^{-3}$) the momentum in the cold (80 K) ambient medium is 19.3 $\rm p_{_{0}}$ and is comparable to the corresponding homogeneous medium. Recent numerical results of SNe exploding into bubbles blown by a stellar wind and ionizing radiation give a factor of $\sim$ 10 [@geen15].
This shows that the ambient density and temperature are essential for the evolution of a SNR in a wind-blown bubble. Higher temperatures broaden the wind-blown shell and reduce the density contrast. This results in less cooling and an increase in radial momentum [e.g. @walch14a]. The influence of the wind-blown bubble on the evolution of the SNR diminishes as the swept-up mass increases compared to the mass of the shell. SNRs with a high density inside the bubble and a small difference between the swept-up mass and the mass of the wind-blown shell show a behaviour that is comparable to that in a uniform medium with that bubble density.
Blast wave evolution in turbulent environments {#section5}
==============================================
We study the evolution of a SNR expanding in a more realistic ambient medium, which is subject to isothermal, supersonic turbulence [@klessen98; @klessen00; @kainulainen09; @schneider11; @federrath13]. Numerical simulations suggest that the volume-weighted density PDF of gas shaped by isothermal turbulent motions can be described by a log-normal distribution [@vazquezsemadeni93; @padoan97; @nordlund97; @federrath08] , $$\label{eq:lognormal}
q(z)=\frac{1}{\sqrt{2\pi \sigma_{\ln\rho}^{2}}} \exp \left[-\frac{(z-\bar{z})^ {2}}{2\sigma_{\ln\rho}^{2}}\right],$$ where $z=\ln(\rho/\rho_{_{0,\rm turb}})$ with a mean density of the gas $\rho_{_{0,\rm turb}}$. The median is $\bar{z}= - \sigma^{2}_{_{\ln\rho}}/2$ [@vazquezsemadeni94; @thompson14]. The dispersion of the density distribution $\sigma_{\ln\rho}^{2}$ can be related to the Mach number $\mathcal{M}$ of turbulent motions [@federrath08; @thompson14], $$\label{eq:sigma}
\sigma^{2}_{_{\ln\rho}} \sim \ln(1+b^{2}\mathcal{M}^{2}).$$ The turbulent driving factor $b$ is assumed to be 0.5, corresponding to a natural mix of divergence-free (solenoidal) and curl-free (compressive) driving [@federrath08; @brunt10; @krumholz14].
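As a quick numerical check of this relation, the snippet below (ours; the function names are not from the paper) computes the dispersion for a given Mach number and draws densities from the corresponding log-normal PDF with median $\bar{z}=-\sigma^{2}_{_{\ln\rho}}/2$, so that the sample mean recovers the mean density. For $\mathcal{M}$ = 10 and $b$ = 0.5 it gives $\sigma_{_{\ln\rho}}\approx 1.8$, the width quoted in Section \[section5\_1\] for $\mathcal{M}$ = 10.

```python
import numpy as np

# Minimal sketch: dispersion of the log-normal density PDF from the turbulent
# Mach number, and random sampling of densities.  Function names are ours.

def sigma_lnrho(mach, b=0.5):
    """Dispersion of ln(rho/rho_0) for a given Mach number and driving factor b."""
    return np.sqrt(np.log(1.0 + b**2 * mach**2))

def sample_densities(n0, mach, n_samples, b=0.5, rng=None):
    """Draw densities from the log-normal PDF with median z_bar = -sigma^2/2,
    so that the mean density of the samples equals n0."""
    if rng is None:
        rng = np.random.default_rng(0)
    s = sigma_lnrho(mach, b)
    z = rng.normal(loc=-0.5 * s**2, scale=s, size=n_samples)
    return n0 * np.exp(z)

n_i = sample_densities(n0=1.0, mach=10.0, n_samples=12)
print(sigma_lnrho(10.0), n_i.mean(), n_i.min(), n_i.max())
```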
The dispersion of the volume density PDF can also be related to that of the surface density PDF, $\sigma_{_{\ln\Sigma}}$ [@brunt10; @brunt10b; @brunt10c]. In this case, the dispersion reads $$\sigma^{2}_{_{\ln\Sigma}}=\ln(1+Qb^{2}\mathcal{M}^{2}),$$ with the conversion factor $$Q=\sigma_{\ln\Sigma}^{2}/\sigma_{\ln\rho}^{2}.$$
Approximating the turbulent structures of the ambient medium {#section5_1}
------------------------------------------------------------
We adapt our model to compute the SNR evolution in turbulent ambient media, where the density structure is described by the log-normal PDF in Eq. \[eq:lognormal\]. Since the blast wave evolution is primarily determined by the mean density of the swept-up material [@ostriker88; @padoan97], we assume that small-scale density fluctuations along the radial direction of the SNR have a negligible effect on the evolution. We assume that in different directions, the blast wave will encounter gas with different mean densities.
In this simplified model we abstain from following winding shock fronts between structures with a large density gradient [e.g. @martizzi14] or interactions between different radial directions. The first constraint arises from the simple set of equations used in our model. It is not designed to follow the detailed dynamical evolution but gives a statistical expectation for SNR in turbulent media. For the latter, we assume no physical interaction between the different cones, and that during the ST and TR phases the radially outward directed velocities of the SNR are large, so that the interaction has only minor effects. At later phases the extent of the different radial directions is sufficient to neglect an interacting boundary.
To model the mean densities in different radial directions, the ambient medium in our model is discretized (see Fig. \[fig:explanation\], bottom panel) into $N_{_{\rm cones}}$ cones. The cones are defined by equal solid angles and have equal surface areas and volumes. For each cone, we randomly draw a mean density, $n_{_{i}}$, from the log-normal density distribution and run the 1-dimensional model of the evolution of a SNR for a uniform medium (see Section \[section2\]). The total momentum $p_{_{\rm turb}}$ injected by a SN in this pseudo 3-dimensional turbulent medium is derived from the sum over all cone momenta $p_{i}$, $$\sum_{i}^{N_{_{\rm cones}}}p_{_{\rm i}} = p_{_{\rm turb}}.$$ Each cone is initialised with the same fraction of the total SN energy, i.e. $E_{_{\rm SN}} / N_{_{ \rm cones}}$. As the expansion radius in each cone is different, the symmetry of the SN bubble is broken [@walch14a].\
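A minimal numerical sketch of this cone procedure is given below. It is not the model itself: instead of integrating the ST, TR and PDS phases in every cone, the per-cone momentum is approximated by scaling the homogeneous-medium power law of Section \[section5\_2\] with $1/N_{_{\rm cones}}$, which is enough to illustrate how sampling low-density cones boosts the total momentum. All names and this approximation are ours.

```python
import numpy as np

# Sketch of the cone discretization described above (names and the per-cone
# momentum approximation are ours, not the model itself).  Each cone i of mean
# density n_i is assumed to contribute 1/N_cones of the momentum a full SN would
# inject into a uniform medium of density n_i; the actual model instead integrates
# the ST, TR and PDS phases in every cone with energy E_SN / N_cones.

def p_uniform(n):
    """Approximate final SN momentum (units of p_0) in a uniform medium."""
    return 23.07 * n**-0.12          # leading term of the fit in Section 5.2

def p_turbulent(n0, mach, n_cones=192, b=0.5, seed=0):
    rng = np.random.default_rng(seed)
    sigma = np.sqrt(np.log(1.0 + b**2 * mach**2))
    n_i = n0 * np.exp(rng.normal(-0.5 * sigma**2, sigma, size=n_cones))
    p_i = p_uniform(n_i) / n_cones   # momentum carried by each cone
    return p_i.sum()                 # total p_turb in units of p_0

print(p_turbulent(n0=1.0, mach=10.0))   # typically ~27-30 p_0 for this setup
```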
In Fig. \[fig:mv3\] we show results using 12 cones, which is the minimum number needed to divide the unit sphere into equal surface area pixels [see @gorski05]. With $N_{_{\rm cones}}$ = 12 the log-normal PDF is not well sampled (see Section \[section\_acc\] for a further discussion). The turbulent Mach number is 10 and the mean number density of the ambient medium is $n_{_{0,\rm turb}}$ = 1 $\mathrm{cm^{-3}}$. The sampled densities $n_{_{i}}$ have values between $\mathrm{3 \times 10^{-3}\,cm^{-3}}$ and $\mathrm{4.5\,cm^{-3}}$ according to a PDF with a width of $\sigma_{_{\ln\rho}}$ = 1.8 for $\mathcal{M}$ = 10. Fig. \[fig:mv3\] shows the equal initial momenta (upside down triangles) as well as the individual momenta $p_{_{\rm i}}$ at the end of the individual ST (triangles), TR (circles) and PDS (squares) phase for a neutral ($\mu_{_{\rm a}}$, black) and ionized ($\mu_{_{\rm i}}$, red) medium for all mean cone densities $n_{_{\rm i}}$ (green line and corresponding y-axis on the right-hand side).
The mean momentum per cone, $\left< p_{_{\rm i}} \right>$, $$\left< p_{_{\rm i}} \right> = \frac{p_{_{\rm turb}}}{N_{_{\rm cones}}},$$ in a neutral \[ionized\] medium at $t_{_{\rm TR}}$ is 1.7 \[1.2\] $\rm{p_{_{0}}}$, which increases up to 2.4 \[1.7\] $\rm{p_{_{0}}}$ at $t_{_{\rm PDS}}$ ($\rm{p_{_{0}}}$ = 14181 $\rm M_{\odot}\,km\,s^{-1}$). At $t_{_{\mathrm{MCS}}}$ the mean momentum per cone is 2.6 \[1.9\] $\rm{p_{_{0}}}$, as indicated by the black horizontal line (red line for ionized ambient medium). This corresponds to a total momentum of 31.2 \[22.8\] $\rm{p_{_{0}}}$ ($\rm 2.16 \times 10^{5}\,M_{\odot}\,km\,s^{-1}$). Note that $t_{_{\rm TR}}$, $t_{_{\rm PDS}}$ and $t_{_{\rm MCS}}$ are different for cones with different densities. However, since the momentum stays constant after $t_{_{\rm MCS}}$, $p(t_{_{\rm MCS}})$ is considered as the final momentum.
The blast wave simulation in a homogeneous medium with $n_{_{0,\rm uni}}$ = 1 $\mathrm{cm^{-3}}$ injects 22.3 \[16.4\] $\rm{p_{_{0}}}$ of momentum. Therefore, the increase in momentum is a direct consequence of turbulence. For higher $\mathcal{M}$, the PDF becomes broader. The blast wave encounters more low density regions, which are subject to less radiative cooling and allow for a higher momentum injection.
![Schematic representation of the model for the blast wave evolution into a turbulent medium. *Top panel*: Sampling of densities from a log-normal PDF, which represents turbulent density structures. The number of sampling points corresponds to the number of cones with equal surface areas. *Bottom panel*: Homogeneously assigning the densities to the cones. The blast wave evolution is then computed for each cone separately. The total momentum input is the sum of the individual solutions. []{data-label="fig:explanation"}](prod_EXPL2.eps){width="45.00000%"}
![Example for the SN momentum injection in a turbulent medium sampled with 12 cones. The number densities are randomly drawn from a log-normal PDF with a mean number density $n_{_{0,\rm turb}}$ = 1 $\mathrm{cm^{-3}}$ and a turbulent Mach number $\mathcal{M}$ = 10. We show the values at $t_{_{\mathrm{ST}}}$ (upside down triangles), $t_{_{\mathrm{TR}}}$ (triangles), $t_{_{\mathrm{PDS}}}$ (circles) and $t_{_{\mathrm{MCS}}}$ (squares) within an ionized ambient medium ($\mu_{_{\rm i}}$, red symbols) and an atomic medium ($\mu_{_{\rm a}}$, black symbols). The individual radial momentum for each cone $p_{_{\rm i}}$ is shown as a function of the sampled density $n$. At $t_{_{\mathrm{MCS}}}$ the mean momentum per cone is 2.6 \[1.9\] $\rm{p_{_{0}}}$ (black \[red\] horizontal line). The underlying log-normal PDF is indicated with a green line. []{data-label="fig:mv3"}](prod_OVER.eps){width="50.00000%"}
Accuracy of the model {#section_acc}
---------------------
![Effect of the number of cones $N_{_{\mathrm{cones}}}$ on the accuracy of the turbulent SN model for the mean density (top panel) and momentum input (bottom panel). The number densities are randomly sampled from a log-normal PDF with a fixed mean density $n_{_{0,\rm turb}}$ = 1 $\mathrm{cm^{-3}}$ and Mach number $\mathcal{M}$ = 10. Each of the 6 data sets consists of 50 SN simulations. Mean values and the standard deviation are shown in red. The mean ambient density (blue line; top panel) is well sampled and the momentum injection converges to 29.4 $\rm{p_{_{0}}}$ (blue line; bottom panel).[]{data-label="fig:healpix"}](prod_DENS_stat.eps "fig:"){width="49.00000%"} ![Effect of the number of cones $N_{_{\mathrm{cones}}}$ on the accuracy of the turbulent SN model for the mean density (top panel) and momentum input (bottom panel). The number densities are randomly sampled from a log-normal PDF with a fixed mean density $n_{_{0,\rm turb}}$ = 1 $\mathrm{cm^{-3}}$ and Mach number $\mathcal{M}$ = 10. Each of the 6 data sets consists of 50 SN simulations. Mean values and the standard deviation are shown in red. The mean ambient density (blue line; top panel) is well sampled and the momentum injection converges to 29.4 $\rm{p_{_{0}}}$ (blue line; bottom panel).[]{data-label="fig:healpix"}](prod_MOM_stat.eps "fig:"){width="49.00000%"}
The fidelity of the SN model depends on the number of sampled densities, i.e. $N_{_{\rm cones}}$. We need a sufficient number in order to accurately represent the underlying density distribution.
We compute the evolution of 50 individual SN explosions in turbulent media, each with an increasing number of equal-volume cones (sampling points of the PDF) from 12 to 384. For each of the 50 runs we use a different random seed to sample the number densities in each cone from the log-normal density PDF with $n_{_{0,\rm turb}}$ = 1 $\mathrm{cm^{-3}}$ and $\mathcal{M}$ = 10.
Fig. \[fig:healpix\] presents all 6 sets ($N_{_{\rm cones}}$ = 12, 24, 48, 96, 192, 384; different symbols) with 50 SN simulations each. In the top panel the sampled mean densities of the individual simulations, $\left< n \right> = \sum_{\rm i}^{N_{\rm cones}} n_{_{\rm i}} / N_{_{\rm cones}}$, are shown. Independent of the number of cones, the mean ambient density ($n_{_{0,\rm turb}}$ = 1 $\mathrm{cm^{-3}}$; blue dashed line) is well sampled by the overall mean of the individual simulations (red bars). The variance decreases from 1.2 to 0.9 with increasing number of cones from 12 to 384.
The bottom panel shows the final momentum $p_{_{\rm turb}}$ (normalized to the initial momentum) of the same simulations. The overall mean converges to 29.4 $\rm{p_{_{0}}}$ at the highest numbers of cones (blue dashed line). The variance is similar in all runs at about 4 $\rm{p_{_{0}}}$.
To summarize, we show that the combination of high-$\mathcal{M}$-turbulence and small $N_{_{\mathrm{cones}}}$ may not accurately represent the turbulent PDF structure. Individual realizations might over- or under-predict the mean densities, but larger samples and a higher number of cones reduce the variance in the mean density and the momentum input.
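The self-contained snippet below (ours) mimics the sampling part of this test: for each $N_{_{\mathrm{cones}}}$ it draws 50 realizations of cone densities from the log-normal PDF ($n_{_{0,\rm turb}}$ = 1 $\mathrm{cm^{-3}}$, $\mathcal{M}$ = 10, $b$ = 0.5) and reports the mean and scatter of the recovered mean density, illustrating how the sampling noise shrinks as $N_{_{\mathrm{cones}}}$ increases.

```python
import numpy as np

# Minimal, self-contained sketch of the sampling test above: how well N_cones
# random draws from the log-normal PDF recover the mean density n_0 = 1 cm^-3
# at Mach 10 (b = 0.5).  Numbers are illustrative, not the paper's.

def mean_density(n_cones, seed, n0=1.0, mach=10.0, b=0.5):
    rng = np.random.default_rng(seed)
    sigma = np.sqrt(np.log(1.0 + b**2 * mach**2))
    n_i = n0 * np.exp(rng.normal(-0.5 * sigma**2, sigma, size=n_cones))
    return n_i.mean()

for n_cones in (12, 24, 48, 96, 192, 384):
    means = [mean_density(n_cones, seed) for seed in range(50)]
    print(n_cones, np.mean(means), np.std(means))
```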
Momentum distribution in turbulent media {#section5_2}
----------------------------------------
![image](prod_Mode_MCS.eps){width="80.00000%"}
We perform simulations of SNRs in turbulent media with mean densities of $n_{_{0,\rm turb}}$ = 0.1 $-$ 100 $\mathrm{cm^{-3}}$ and Mach numbers $\mathcal{M}$ = 1 $-$ 100. Based on the previous section, we decided to use sets of 20 realizations for each turbulent setup with $N_{_{\mathrm{cones}}}$ = 192 and evaluate the total radial momenta up to $t_{_{\mathrm{MCS}}}$ of the lowest density cone (Fig. \[fig:momdens\]).
The mean shell momenta lie between 13.0 $\rm{p_{_{0}}}$ ($n_{_{0,\rm turb}}$ = 100 $\mathrm{cm^{-3}}$, $\mathcal{M}$ = 1) and 30.6 $\rm{p_{_{0}}}$ ($n_{_{0,\rm turb}}$ = 0.1 $\mathrm{cm^{-3}}$, $\mathcal{M}$ = 1). Higher supersonic turbulence ($\mathcal{M}$ = 100) boosts the momentum by 60 per cent ($n_{_{0,\rm turb}}$ = 100 $\mathrm{cm^{-3}}$) up to 88 per cent ($n_{_{0,\rm turb}}$ = 0.1 $\mathrm{cm^{-3}}$) compared to the low-$\mathcal{M}$-turbulence value.
The radial momentum input of a single SN in a turbulent medium can be quantified in terms of the mean density and the width (Mach number) of the underlying density PDF: $$\begin{gathered}
p_{_{\rm turb}}/\mathrm{p_{_{0}}}\ =23.07\, (n_{_{0,\rm turb}}/ 1\,\mathrm{cm^{-3}})^{-0.12}\\ + 0.82 (\ln(1+b^{2}\mathcal{M}^{2}))^{1.49} (n_{_{0,\rm turb}}/ 1\ \mathrm{cm^{-3}})^{-0.17}.\end{gathered}$$ The first term corresponds to the momentum transfer from a single SN into a homogeneous medium. The second term depends on a combination of the turbulent Mach number (width of the PDF) and the mean density. The factor in the first term is higher compared to the value (22.44) obtained for the uniform medium. The difference results from the additional turbulent term. The fit was generated over all data points by a Bees algorithm coupled with Levenberg-Marquardt provided by the fitting tool MAGIX [$\chi^{2} \sim 8; $ @bernst11; @moeller13].
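The fit is straightforward to evaluate; the short transcription below (ours) reproduces, to within the quoted fit accuracy, the values given above, e.g. $\sim$ 13 $\rm{p_{_{0}}}$ for $n_{_{0,\rm turb}}$ = 100 $\mathrm{cm^{-3}}$, $\mathcal{M}$ = 1 and $\sim$ 31 $\rm{p_{_{0}}}$ for $n_{_{0,\rm turb}}$ = 0.1 $\mathrm{cm^{-3}}$, $\mathcal{M}$ = 1.

```python
import numpy as np

# Direct transcription (ours) of the fitted momentum input quoted above.
# The result is in units of p_0 = 14181 M_sun km/s.

def p_turb_fit(n0, mach, b=0.5):
    """Total radial momentum of one SN in a turbulent medium, in units of p_0."""
    return (23.07 * n0**-0.12
            + 0.82 * np.log(1.0 + b**2 * mach**2)**1.49 * n0**-0.17)

for n0 in (0.1, 1.0, 100.0):
    print(n0, p_turb_fit(n0, mach=1.0), p_turb_fit(n0, mach=100.0))
```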
In Fig. \[fig:momdens\] we compare our results to direct, 3-dimensional (magneto-) hydrodynamical simulations from different authors, namely @iffrig14, @martizzi14, @kim14, @li15 and @walch14a (coloured symbols). We compare at times similar to our $t_{_{\rm MCS}}$. As the methodology for setting up the ISM conditions varies from author to author, we explain each set of simulations in more detail.
@iffrig14 [dark blue diamonds] simulate SNR in highly-resolved (maximum grid resolution 0.05 pc) turbulent MCs with magnetic fields, self-gravity and a cooling function similar to Eq. . The initial conditions for the SN explosion evolve from a spherical cloud with a density gradient $\propto$ r$^{-2}$ embedded in a low density environment. The assumed velocity field in the MC represents a Kolmogorov spectrum with a random component. The authors conclude that the influence of magnetic fields is small; rather, the position, and therefore the ambient density, of the SN in the MC determines the final momentum. It is well approximated by the solution of 3-dimensional SNR simulations in a homogeneous medium, with 18 $\rm{p_{_{0}}}$ for $n_{_{0}}$ = 1 $\mathrm{cm^{-3}}$.
@kim14 [red squares] pre-evolve the ambient medium from a thermally unstable state with small density perturbations. The SN explodes into a two-phase environment in pressure balance. The fitted final momentum input is comparable to SNe in homogeneous media. The difference to our final momentum in low-$\mathcal{M}$-turbulent environments is smaller than 15 per cent.
@walch14a [dark yellow circles] use an SPH particle code to perform highly-resolved (maximum resolution 0.1 M$_{_{\odot}}$) hydrodynamic simulations with interpolated cooling tables by @plewa95 [ for $T$ $\ge$ 10$^{4}$ K] and the cooling function from @koyama02 [ for $T$ $<$ 10$^{4}$ K]. The ambient medium is initialized with fractal sub-structures, which represent a log-normal density PDF. The resulting variance is translated to a turbulent Mach number, $\mathcal{M}$ = 4.4 [@walch11b]. The normalized final momentum $p$ = 25.6 $\rm{p_{_{0}}}$ is $\sim$ 9 per cent higher compared to values obtained from our SN model ($n_{_{0,\rm turb}}$ = 1 $\mathrm{cm^{-3}}$, $\mathcal{M}$ = 4.4).
@martizzi14 [orange triangles] perform hydrodynamic simulations in an ambient medium with a log-normal density field but cooling only by @sutherland93 at temperatures above 10$^{4}$ K. The variance of the distribution follows a parametrization by @lemaster09. The spatial correlations are parametrized by a Burgers power-spectrum. The initial velocity field is set to zero. Within these structures (maximum grid resolution 0.05 pc) the SNR evolves along the path of least resistance but cools significantly (down to 10$^{4}$ K) when dense structures are hit and merge with the shock. This results in a final momentum input of 7.3 $\rm{p_{_{0}}}$ in a supersonic environment ($\mathcal{M}$ = 30, $n_{_{0,\rm turb}}$ = 100 $\mathrm{cm^{-3}}$), which is lower than in their fiducial simulation in a homogeneous medium. The final value is a factor of $\sim$ 2.6 lower than that of a similar simulation with our model.
@li15 [green circles] create an (artificial) environment with randomly distributed cold clouds and a hot inter-cloud medium with a SN in the centre. The results show no distinctive phases and an expansion between the cold and dense regions along a path of least resistance. Initially the radial momentum input is lower than in the homogeneous comparison and shows an increasing power-law behaviour with radius. As the shock expands further it interacts with the medium in non-radial directions. At the end the momentum is almost constant and similar to values from uniform media. The momentum of the homogeneous runs (18.8 $\rm{p_{_{0}}}$) is comparable to the late-phase input from structured media, 17.7 $\rm{p_{_{0}}}$ ($n_{_{0,\rm turb}}$ = 1 $\mathrm{cm^{-3}}$).
To summarize, we find that the momentum input from SNRs in low-$\mathcal{M}$-turbulent structures is comparable to that of SNRs in homogeneous media. We find similar values compared to different 3-dimensional numerical simulations, under the assumption of an atomic medium. We show that high-$\mathcal{M}$-turbulent structures boost the radial momentum input. We conclude that turbulence could be important for the momentum input. However, more 3-dimensional models with very high resolution will be required to address the impact of a highly turbulent substructure.
Velocity-mass distribution in turbulent media {#section5_3}
---------------------------------------------
The SN model assumes that the swept-up ambient material is condensed into a small volume at the shock front [@klein94]. The density profile inside the SNR can be neglected as the mass is only a small fraction of the total mass. We show the distribution of the shock velocity and the swept-up mass for mean densities $n_{_{0,\rm turb}}$ of 1 $\mathrm{cm^{-3}}$ (Fig. \[fig:massvel\], top panel) and 100 $\mathrm{cm^{-3}}$ (Fig. \[fig:massvel\], bottom panel) with turbulent Mach numbers of 1 and 10, both with $N_{_{\mathrm{cones}}}$ = 384. The distributions are evaluated at fixed times between $t = 10^{2.5}$ yr and $t = 10^{4.5}$ yr. In dense environments ($n_{_{0,\rm turb}}$ = 100 $\mathrm{cm^{-3}}$) the simulations terminate earlier, explaining why in Fig. \[fig:massvel\] (bottom panel) the distributions at $t = 10^{4.5}$ yr are missing.
As expected, the swept-up mass continuously increases during the decelerating expansion of the SNR. At $10^{2.5}$ yr the swept-up mass in a low density and low-$\mathcal{M}$-turbulence environment ($\mathcal{M} =1$, $n_{_{0,\rm turb}}$ = 1 $\mathrm{cm^{-3}}$) is 6.5 $\rm M_{\odot}$. For the case of $n_{_{0,\rm turb}}$ = 100 $\mathrm{cm^{-3}}$ the swept-up mass is 29.8 $\rm M_{\odot}$. In general, higher-$\mathcal{M}$-turbulence results in lower swept-up masses, by 12 per cent in low- and 24 per cent in high-density environments. At $10^{4}$ yr the swept-up masses have increased to 280 $\rm M_{\odot}$ and 1279 $\rm M_{\odot}$ in the low- and high-density ambient medium. At this time the SNR evolution in the latter case has almost reached the end of the PDS, whereas in the former medium the PDS lasts longer, until $\sim 10^{5}$ yr.
The mean velocity at $t = 10^{2.5}$ yr is 2569 $\rm km\,s^{-1}$ in the low density environment. High-$\mathcal{M}$-turbulence increases the value to 3096 $\rm km\,s^{-1}$. The SNR slows down by $\sim$ 50 per cent in high density structures with $n_{_{0,\rm turb}}$ = 100 $\mathrm{cm^{-3}}$. Typically, at each plotted time the mean velocity decreases by $\sim$ 50 per cent compared with the previous time. At $t = 10^{4}$ yr, the velocities have dropped to 323 $\rm km\,s^{-1}$ in low density structures with $n_{_{0,\rm turb}}$ = 1 $\mathrm{cm^{-3}}$ and trans-sonic turbulence. In a high density environment the mean velocity is 151 $\rm km\,s^{-1}$.
At the end of the simulations, the distributions within an environment with trans-sonic turbulence cover a small velocity range. High-$\mathcal{M}$-turbulence broadens the mass-(shock-) velocity distribution and therefore, a small fraction of the swept-up mass remains at high velocities.
Similar behaviour is found in numerical simulations by @walch14a. At 0.2 Myr the velocity distribution in a dense ($n_{_{0,\rm turb}}$ = 100 $\mathrm{cm^{-3}}$) fractal environment shows that about 2 per cent of a cloud mass of $10^{5}$ $\rm M_{\odot}$ are accelerated to velocities larger than $\sim$ 20 $\rm km\,s^{-1}$.
![Evolution of the mass-velocity distribution at times between $10^{2.5}$ yr and $10^{4.5}$ yr with different turbulent Mach numbers of $\mathcal{M}$ = 1 (solid lines) and $\mathcal{M}$ =100 (dashed lines). *Top panel:* Low density environment with a mean ambient density $n_{_{0,\rm turb}}$ = 1 $\mathrm{cm^{-3}}$. *Bottom panel:* Ambient medium with a density $n_{_{0,\rm turb}}$ = 100 $\mathrm{cm^{-3}}$.[]{data-label="fig:massvel"}](prod_MaVe_M1.eps "fig:"){width="49.00000%"} ![Evolution of the mass-velocity distribution at times between $10^{2.5}$ yr and $10^{4.5}$ yr with different turbulent Mach numbers of $\mathcal{M}$ = 1 (solid lines) and $\mathcal{M}$ =100 (dashed lines). *Top panel:* Low density environment with a mean ambient density $n_{_{0,\rm turb}}$ = 1 $\mathrm{cm^{-3}}$. *Bottom panel:* Ambient medium with a density $n_{_{0,\rm turb}}$ = 100 $\mathrm{cm^{-3}}$.[]{data-label="fig:massvel"}](prod_MaVe_M10.eps "fig:"){width="49.00000%"}
Summary and discussion {#section7}
======================
We present a fast model to follow the evolution of SN blast waves in their momentum generating phases (ST, TR and PDS phase). We test the model for homogeneous and power-law density distributions and extend it to the evolution of SNR in wind-blown bubbles and a turbulent ISM. Previous analytic work is combined in our SN model and extended by the inclusion of a cooling function, a detailed treatment of the thermal energy, and a transition phase between the adiabatic and radiative phase.
The main results are summarized in the following:
- We recover recent numerical results [e.g. @kim14; @martizzi14; @li15] of a single SN in a homogeneous medium as well as the analytic Sedov-Taylor solution. The final momentum for a density range between 1 $-$ 100 $\mathrm{cm^{-3}}$ is $\sim$ 13 $-$ 31 p$_{_{0}}$ (p$_{_{0}}$ = 14181 M$_{_{\odot}}$ km s$^{-1}$). We obtain reliable values for the radial momentum, the expansion radius and the thermal energy with a small computational effort of a few seconds. The results depend solely on the ambient density.
- In ambient media with a power-law density distribution and a surrounding density floor, the final momentum clearly exceeds the homogeneous results, by at most a factor of 2. This is independent of the central density and is controlled by the value of the density floor. The inner power-law part has only a minor effect.
- The momentum input of SNRs in wind-blown bubbles depends on the initial ambient medium. Low initial temperatures result in dense shells, where the incoming SN shock cools efficiently. The momentum input is only $\sim$ 3 p$_{_{0}}$. Higher temperatures of the initial ambient medium delay the radiative cooling in the wind-blown shell. The momentum input increases by a factor of up to 10. A high density inside the bubble and a small difference between the swept-up mass and the mass of the wind-blown shell lead to behaviour that is comparable to a uniform medium with that bubble density.
- We use the SN model to approximate the lower limit of the momentum input in turbulent ambient media. To do this we randomly sample densities from a log-normal density distribution with a given dispersion, which is related to the Mach number of the turbulent gas. For low turbulent Mach numbers ($\mathcal{M}$ $\sim$ 1) the momentum input is very similar to that in homogeneous media ($\sim$ 13 $-$ 31 p$_{_{0}}$). We obtain the largest momentum input in turbulent media with $\mathcal{M}$ $\sim$ 100, exceeding the low-$\mathcal{M}$ value by as much as a factor of 2 in a low density environment ($n_{_{0,\rm turb}}$ = 0.1 $\mathrm{cm^{-3}}$). We have parametrised the momentum input as a function of Mach number and average environmental density as follows: $$\begin{gathered}
p_{_{\rm turb}}/\mathrm{p_{_{0}}}\ =23.07\, (n_{_{0,\rm turb}}/ 1\,\mathrm{cm^{-3}})^{-0.12}\\ + 0.82 (\ln(1+b^{2}\mathcal{M}^{2}))^{1.49} (n_{_{0,\rm turb}}/ 1\ \mathrm{cm^{-3}})^{-0.17}.\end{gathered}$$ Under the assumption of a neutral ambient medium we find values comparable to recent numerical simulations [e.g. @kim14; @martizzi14; @walch14a].
- The model is computationally cheap and can be used for a variety of parameters. It is an accurate alternative to recent SN sub-grid models.
Acknowledgments {#acknowledgments .unnumbered}
===============
We thank J. P. Ostriker for the useful suggestions and discussion, which added significantly to the presented paper. SH, SW and DS acknowledge the support by the Bonn-Cologne Graduate School for physics and astronomy as well as the SFB 956 on the ’Conditions and impact of star formation’. JM acknowledges funding from a Royal Society - Science Foundation Ireland University Research Fellowship. We acknowledge the support by the DFG Priority Program 1573 ’The physics of the interstellar medium’. We thank Anika Schmiedeke for her help with the MAGIX fitting and Thomas Möller for providing this tool. We thank the anonymous referees for constructive input.
\[appendix\]
\[lastpage\]
[^1]: E-mail: haid@ph1.uni-koeln.de
---
abstract: |
Generalizing results of Jónsson and Tarski, Maddux introduced the notion of a *pair-dense* relation algebra and proved that every pair-dense relation algebra is representable. The notion of a pair below the identity element is readily definable within the equational framework of relation algebras. The notion of a triple, a quadruple, or more generally, an element of size (or measure) $n>2$ is not definable within this framework, and therefore it seems at first glance that Maddux’s theorem cannot be generalized. It turns out, however, that a very far-reaching generalization of Maddux’s result is possible if one is willing to go outside of the equational framework of relation algebras, and work instead within the framework of the first-order theory. Moreover, this generalization sheds a great deal of light not only on Maddux’s theorem, but on the earlier results of Jónsson and Tarski.
In the present paper, we define the notion of an atom below the identity element in a relation algebra having measure $n$ for an arbitrary cardinal number $n>0$, and we define a relation algebra to be *measurable* if its identity element is the sum of atoms each of which has some (finite or infinite) measure. The main purpose of the present paper is to construct a large class of new examples of *group relation algebras* using systems of groups and corresponding systems of quotient isomorphisms (instead of the classic example of using a single group and forming its complex algebra), and to prove that each of these algebras is an example of a measurable set relation algebra. In a subsequent paper, the class of examples will be greatly expanded by adding a third ingredient to the mix, namely systems of “shifting” cosets. The expanded class of examples—called *coset relation algebras*—will be large enough to prove a representation theorem saying that every atomic, measurable relation algebra is essentially isomorphic to a coset relation algebra.
address: |
Mills College\
5000 MacArthur Boulevard\
Oakland CA 94613\
USA
author:
- Steven Givant
title: Relation algebras and groups
---
[^1]
Introduction {#S:1}
============
The calculus of relations was created by DeMorgan[@dm], Peirce (see, for example, [@pe]), and Schröder[@sc] in the second half of the nineteenth century. It was intended as an algebraic theory of binary relations analogous in spirit to Boole’s algebraic theory of classes, and much of the early work in the theory consisted of a clarification of some of the important operations on and between binary relations and a study of the laws that hold for these operations on binary relations.
It was Peirce[@pe] who ultimately determined the list of fundamental operations, namely the Boolean operations on and between binary relations (on a *base set* $U$) of forming (binary) unions, intersections, and (unary) complements (with respect to the universal binary relation $U\times U$); and relative operations of forming the (binary) relational composition—or relative product—of two relations $R$ and $S$ (a version of functional composition), $$R\mathbin{\vert} S=\{{(\alpha,\beta)}: {(\alpha,\gamma)}\in R\text{ and
}{(\gamma,\beta)}\in S\text{ for some $\gamma$ in $U$}\};$$ a dual (binary) operation of relational addition—or forming the relative sum—of $R$ and $S$, $$R\mathbin{\dag} S=\{{(\alpha,\beta)}: {(\alpha,\gamma)}\in R\text{ or
}{(\gamma,\beta)}\in S\text{ for all $\gamma$ in $U$}\};$$ and a unary operation of relational inverse (a version of functional inversion)—or forming the converse—of $R$, $$R{^{-1}}=\{{(\beta,\alpha)}:{(\alpha,\beta)}\in R\}{\textnormal{{{\hspace*{.5pt}}}.\ }}$$ He also specified some distinguished relations on the set $U$: the empty relation ${\varnothing}$, the universal relation $U\times U$, the identity relation $${id_{U}}=\{{(\alpha,\alpha)}:\alpha\in U\},$$ and its complement the diversity relation $${di_{U}}=\{{(\alpha,\beta)}:\alpha,\beta\in U\text{ and }\alpha\neq
\beta\}{\textnormal{{{\hspace*{.5pt}}}.\ }}$$
Tarski, starting with[@t41], gave an abstract algebraic formulation of the theory. As several of Peirce’s operations are definable in terms of the remaining ones, he reduced the number of primitive operations to the Boolean operations of addition $\,+\,$ and complement $\,-\,$, and the relative operations of relative multiplication $\,;\,$ and converse $\,{^{\scriptstyle\smallsmile}}\,$, with an identity element ${1{{\hspace*{-.5pt}}}\textnormal{\rq}}$ as the unique distinguished constant. Thus, the models for his set of axioms are algebras of the form $${{\mathfrak {A}}}=( A{\,,}+{\,,}-{\,,};{\,,}\,{^{\scriptstyle\smallsmile}}{\,,}{1{{\hspace*{-.5pt}}}\textnormal{\rq}}){\textnormal{,}\ }$$ where $A$ is a non-empty set called the *universe* of ${{\mathfrak {A}}}$, while $\,+\,$ and $\,;\,$ are binary operations called *addition* and *relative multiplication*, $\,-\,$ and $\,{^{\scriptstyle\smallsmile}}\,$ are unary operations called *complement* and *converse*, and ${1{{\hspace*{-.5pt}}}\textnormal{\rq}}$ is a distinguished constant called the *identity element*. He defined a relation algebra to be any algebra of this form in which a set of ten equational axioms is true. These ten axioms are true in any set relation algebra, and the set-theoretic versions of three of them play a small role in this paper, namely the associative law for relational composition, and first and second involution laws for relational converse:$$R{\mid}(S{\mid}T)=(R{\mid}S){\mid}T,\qquad
(R{^{-1}}){^{-1}}=R{\textnormal{,}\ }\qquad (R{\mid}S){^{-1}}=S{^{-1}}{\mid}R{^{-1}}{\textnormal{{{\hspace*{.5pt}}}.\ }}$$
Tarski raised the problem whether all relation algebras—all models of his axioms—are representable in the sense that they are isomorphic to set relation algebras, that is to say, they are isomorphic to subalgebras of (full) set relation algebras $${\mathfrak{Re}({E})}=( {\ensuremath{\textit{Sb}{({E}\/)}}}{\,,}\cup{\,,}\sim{\,,}\,{\mid}\,{\,,}\,{}{^{-1}}{\,,}{id_{U}})$$ in which the universe ${\ensuremath{\textit{Sb}{({E}\/)}}}$ consists of all subrelations of some equivalence relation $E$ on a base set $U$, and the operations are the standard set-theoretic ones defined above, except that complements are formed with respect to $E$ (which may or may not be the universal relation $U\times U$). Tarski and Jónsson[@jt52] proved several positive representation theorems for classes of relation algebras with special properties. However, a negative solution to the general problem was ultimately given by Lyndon[@lyn1], who constructed an example of a finite relation algebra that possesses no representation at all. Since then, quite a number of papers have appeared in which representation theorems for various special classes of relation algebras have been established, or else new examples of non-representable relation algebras have been constructed. In particular, Maddux[@ma91], generalizing earlier theorems of Jónsson-Tarski[@jt52], defined the notion of a pair-dense relation algebra, and proved that every pair-dense relation algebra—every relation algebra in which the identity element is a sum of “pairs", or what might be called singleton and doubleton elements—is representable.
In trying to generalize Maddux’s theorem, a problem arises. The property of being a pair below the identity element ${1{{\hspace*{-.5pt}}}\textnormal{\rq}}$ is naturally expressible in the equational language of relation algebras. Generally speaking, however, the size of an element below the identity element—even for small sizes like $3$, $4$ or $5$—is *not* expressible equationally. To overcome this difficulty another way must be found of expressing size, using the first-order language of relation algebras. This leads to the notion of a measurable atom.
For an element $x$ below the identity element—a *subidentity element*—the *square* on $x$ (or the *square with side* $x$) is defined to be the element $x;1;x$. In set relation algebras with unit $E=U\times U$, such squares are just Cartesian squares, that is to say, they are relations of the form $X\times X$ for some subset $ X$ of the base set $U$. A *subidentity atom* $x$ is said to be *measurable* if its square $x;1;x$ is the sum (or the supremum) of a set of non-zero functional elements, and the number of non-zero functional elements in this set is called the *measure*, or the *size*, of the atom $x$. If the set is finite, then the atom is said to have *finite measure*, or to be *finitely measurable*. The name comes from the fact that, for set relation algebras in which the unit $E$ is the universal relation $U\times U$, the number of non-zero functional elements beneath the square on a subidentity atom is precisely the same as the number of pairs of elements that belong to that atom. For instance, in such an algebra, a subidentity atom consists of a single ordered pair just in case its square is a function; it consists of two ordered pairs just in case its square is the sum of two non-empty functions; it consists of three ordered pairs just in case its square is the sum of three non-empty functions; and so on.
In fact, the atoms below the square $x;1;x$ of a measurable subidentity atom $x$ may be thought of as “permutations" of $x$, and they form a group ${{G}_{x}}$ under the restricted operations of relative multiplication and converse, with $x$ as the identity element of the group. Moreover, the set of atoms below an arbitrary rectangle $x;1;y$ (with $x$ and $y$ measurable atoms) also form a group, one that is isomorphic to a quotient of ${{G}_{x}}$.
A relation algebra is defined to be *measurable* if its identity element is the sum of a set of measurable atoms. If each of the atoms in this set is in fact finitely measurable, then the algebra is said to be *finitely measurable*. The pair-dense relation algebras of Maddux are finitely measurable, and in fact each subidentity atom has measure one or two. The purpose of this paper and [@ag] is to construct two classes of measurable relation algebras: the class of group relation algebras, which is constructed in this paper; and the broader class of coset relation algebras, which is constructed in [@ag] and whose construction depends on the construction of group relation algebras and the results in this paper. In [@ga], an analysis of atomic, measurable relation algebras is carried out, and it is proved that every atomic, measurable relation algebra is essentially isomorphic to a coset relation algebra. If the given algebra is actually finitely measurable, then the assumption of it being atomic is unnecessary. The results were announced without proofs in [@ga02]. Except for basic facts about groups, the article is intended to be self-contained. For more information about relation algebras, the reader may consult [@ga17], [@ga18], [@hh02], or [@ma06].
Complex algebras of groups {#S:2}
==========================
In the 1940’s, J. C. C. McKinsey observed that the complex algebra of a group is a [relation algebra]{}. Specifically, let $\langle G{\,,}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{\,,}{^{-1}}{\,,}e\rangle$ be a group and ${\ensuremath{\textit{Sb}{({G}\/)}}}$ the collection of all subsets, or *complexes*, of $G$. The group operations of multiplication (or composition) and inverse can be extended to operations on complexes in the obvious way: $$H{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}K=\{ h{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}k : h\in H \text{ and } k\in
K\}$$ and $$H{^{-1}}= \{ h{^{-1}}: h\in H\}.$$ In order to simplify notation, we shall often identify elements with their singletons, writing, for example, $g{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}H$ for $\{ g\}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}H$, so that $$g{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}H =\{ g{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}h : h\in H\}.$$
The collection ${\ensuremath{\textit{Sb}{({G}\/)}}}$ of complexes contains the singleton set $\{
e\}$ and is closed under the Boolean operations of union and complement, as well as under the group operations of complex multiplication and inverse. Thus, it is permissible to form the algebra $${{\mathfrak {Cm}}(G\/)}= \langle
{\ensuremath{\textit{Sb}{({G}\/)}}}{\,,}\cup{\,,}\sim
{\,,}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{\,,}{^{-1}}{\,,}\{ e\}\rangle,$$ and it is easy to check that this is a relation algebra. In fact, it is representable via a slight modification of the Cayley representation of the group. In more detail, for each element $g$ in $G$, let ${R_{g}}$ be the binary relation on $G$ defined by $${R_{g}}=
\{{(h,h{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}g)}:h\in G\}{\textnormal{{{\hspace*{.5pt}}}.\ }}$$ The correspondence $g\longmapsto{R_{g}}$ is a slightly modified version of the Cayley representation of $G$ as a group of permutations in which the operation of relational composition is used instead of functional composition. In particular, $$\begin{aligned}
{3}
{R_{g}}&={id_{G}}&&\qquad\text{if and only if}\qquad &g &= e,\\
{R_{g}}{^{-1}}&={R_{k}}&&\qquad\text{if and only if}\qquad &g{^{-1}}&= k,\\ {R_{f}}{\mid}{R_{g}}&={R_{k}}&&\qquad\text{if and only if}\qquad &f{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}g &=k.\end{aligned}$$ For each subset $X$ of $G$, write $$S_X={{\textstyle\bigcup_{g\in X}}}{R_{g}},$$ and take $A$ to be the set of all relations $S_X$ for $X{\subseteq}G$. Using the properties of the relations ${R_{g}}$ displayed above, and also the complete distributivity of the operations of relational composition and converse over unions, it is a simple matter to check that $A$ is a subuniverse of the set relation algebra ${\mathfrak{Re}({E})}$ with $E$ the universal relation on the set $G$, so that the correspondence mapping each set $X$ to the relation ${{S}_{X}}$ is an embedding of ${\mathfrak{Cm}({G})} $ into ${\mathfrak{Re}({E})}$. We shall call this mapping the *Cayley representation* of ${{\mathfrak {Cm}}(G\/)}$.
There is a natural extension of the Cayley representation of a group $G$ to a representation of a quotient group $G/H$. If $\langle{H_{{\gamma}}}:{\gamma}<\kappa\rangle$ is a coset system for a normal subgroup $H$ of $G$, then define the representative of a coset ${H_{{\alpha}}}$ to be the binary relation $${R_{{\alpha}}}={{\textstyle\bigcup_{{\gamma}<{\kappa}}}}{H_{{\gamma}}}\times({H_{{\gamma}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{\alpha}}}).$$ (To minimize the number of parentheses that are used, we adopt here and everywhere below the standard convention that multiplications—in this case Cartesian products—take precedence over additions—in this case, unions.) Notice that, strictly speaking, ${R_{{\alpha}}}$ is not the Cayley representation of ${H_{{\alpha}}}$, which is the set of ordered pairs $$\{{({H_{{\gamma}}},{H_{{\gamma}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{\alpha}}})}:{\gamma}<\kappa\}{\textnormal{{{\hspace*{.5pt}}}.\ }}$$
The notion of a relation representing a coset can be taken one step further. If $\varphi$ is an isomorphism from a quotient group $G/H$ to another quotient group $F/K$, then $F/K$ is identical with $G/H$ except for the “shape" of its elements, and therefore it makes sense to identify each coset ${H_{{\gamma}}}$ in $G/H$ with its image $\varphi({H_{{\gamma}}})=K_{{\gamma}}$ in $F/K$. One can then take the representative of a coset ${H_{{\alpha}}}$ to be the relation $${R_{{\alpha}}}={{\textstyle\bigcup_{{\gamma}<{\kappa}}}}{H_{{\gamma}}}\times\varphi({H_{{\gamma}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{\alpha}}}) ={{\textstyle\bigcup_{{\gamma}<{\kappa}}}}{H_{{\gamma}}}\times(K_{{\gamma}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}K_{{\alpha}}).$$ Notice that each relation ${R_{{\alpha}}}$ is a union of rectangles, that is to say, it is a union of relations of the form $X\times Y$, and these rectangles are mutually disjoint, because the cosets ${H_{{\gamma}}}$ are mutually disjoint, as are the cosets $K_{{\gamma}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}K_{{\alpha}}$, for distinct ${\gamma}<{\kappa}$.
To illustrate this idea with a concrete example, consider the two groups $\mathbb Z_6$ and $\mathbb Z_9$ (the integers modulo $6$ and the integers modulo $9$), and the canonical isomorphism $\varphi$ between the quotients $$\mathbb Z_6/\{0,3\}\qquad\text{and}\qquad \mathbb Z_9/\{0,3,6\}$$ that maps the cosets $$\begin{gathered}
H_0 =\{0,3\} \quad\text{to}\quad K_0 =\{0,3, 6\},\quad H_1
=\{1,4\}\quad\text{to}\quad K_1 =\{1,4, 7\},\\ H_2
=\{2,5\}\quad\text{to}\quad K_2 =\{2,5, 8\}.\end{gathered}$$ Using this correspondence, define three relations as follows: $$\begin{aligned}
R_0&=[H_0\times (K_0{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}K_0)]\cup[H_1\times
(K_1{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}K_0)]\cup[H_2\times (K_2{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}K_0)]\\ &=[H_0\times
K_0]\cup[H_1\times K_1]\cup[H_2\times K_2]\\ &=\{(a,b): a\in \mathbb
Z_6\text{\ ,\ }b\in \mathbb Z_9\text{ and } b\equiv a\text{ mod }
3\}{\textnormal{,}\ }\\ R_1&=[H_0\times (K_0{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}K_1)]\cup[H_1\times
(K_1{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}K_1)]\cup[H_2\times (K_2{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}K_1)]\\ &=[H_0\times
K_1]\cup[H_1\times K_2]\cup[H_2\times K_0]\\ &=\{(a,b): a\in \mathbb
Z_6\text{\ ,\ }b\in \mathbb Z_9\text{ and } b\equiv a+1\text{ mod }
3\}{\textnormal{,}\ }\\ R_2&=[H_0\times (K_0{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}K_2)]\cup[H_1\times (K_1{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}K_2)]\cup[H_2\times (K_2{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}K_2)]\\ &=[H_0\times K_2]\cup[H_1\times K_0]\cup[H_2\times K_1]\\
&=\{(a,b): a\in \mathbb Z_6\text{\ ,\ }b\in \mathbb Z_9\text{ and }
b\equiv a+2\text{ mod } 3\}{\textnormal{{{\hspace*{.5pt}}}.\ }}\end{aligned}$$
![The relations $R_0$, $R_1$, and $R_2$.[]{data-label="F:fig0"}](figure0version2)
(See Figure \[F:fig0\].) The relations $R_0$, $R_1$, and $R_2$ are *representatives* of the cosets $H_0$, $H_1$, and $H_2$ respectively, and together they give a kind of representation of $\mathbb Z_3$ that has the flavor of the Cayley representation of $\mathbb Z_3$. (Notice, however, that this is not a real representation of $\mathbb Z_3$, since we cannot form the composition of these relations.) This is a key idea in the construction of measurable algebras of binary relations from *systems* of groups and quotient isomorphisms.
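Readers who wish to experiment with this example may find the following sketch (ours) useful: it constructs the relations $R_0$, $R_1$, and $R_2$ from the cosets $H_i$ and their images $K_i$, and verifies that the three relations are mutually disjoint, that their union is $\mathbb Z_6\times\mathbb Z_9$, and that $R_{\alpha}$ consists exactly of the pairs ${(a,b)}$ with $b\equiv a+\alpha$ mod $3$.

```python
from itertools import product

# Sketch (ours) of the Z_6 / Z_9 example above: build R_0, R_1, R_2 from the
# cosets H_i and K_i and check that they partition Z_6 x Z_9.

Z6, Z9 = range(6), range(9)
H = [{0, 3}, {1, 4}, {2, 5}]                 # cosets of {0,3} in Z_6
K = [{0, 3, 6}, {1, 4, 7}, {2, 5, 8}]        # their images under the isomorphism phi

def K_mul(i, j):
    """Complex product K_i ∘ K_j inside Z_9 (again a coset of {0,3,6})."""
    return {(a + b) % 9 for a in K[i] for b in K[j]}

R = [set().union(*(set(product(H[g], K_mul(g, a))) for g in range(3)))
     for a in range(3)]

assert all(R[a].isdisjoint(R[b]) for a in range(3) for b in range(3) if a != b)
assert set().union(*R) == set(product(Z6, Z9))
# R_alpha consists of the pairs (a, b) with b ≡ a + alpha (mod 3)
assert all((b - a) % 3 == alpha for alpha in range(3) for (a, b) in R[alpha])
print("R_0, R_1, R_2 partition Z_6 x Z_9")
```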
Systems of groups and quotient isomorphisms {#S:3}
===========================================
Fix a system $$G=\langle
{G_{x}}:x\in I\,\rangle$$ of groups $\langle {G_{x}}{\,,}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{\,,}{^{-1}}{\,,}{e_{x}}\rangle$ that are pairwise disjoint, and an associated system $$\varphi=\langle{\varphi}_{xy}:{(x,y)}\in {\mathcal{E}}\,\rangle$$ of quotient isomorphisms. Specifically, ${\mathcal{E}}$ is an equivalence relation on the index set $I$, and for each pair ${(x,y)}$ in ${\mathcal{E}}$, the function ${{\varphi}_{xy}}$ is an isomorphism from a quotient group of ${G_{x}}$ to a quotient group of ${G_{y}}$. We shall call $${\mathcal{F}}={(G,\varphi)}$$ a *group pair*. The set $I$ is the *group index set*, and the equivalence relation ${\mathcal{E}}$ is the (*quotient*) *isomorphism index set*, of ${\mathcal{F}}$. The normal subgroups of ${G_{x}}$ and ${G_{y}}$ from which the quotient groups are constructed are uniquely determined by ${{\varphi}_{{{xy}}}}$, and will be denoted by ${H_{{{xy}}}}$ and $K_{xy}$ respectively, so that ${{\varphi}_{{{xy}}}}$ maps ${G_{x}/H_{{xy}}}$ isomorphically onto ${G_{y}/K_{{xy}}}$.
For a fixed enumeration $\langle {H_{{xy},{\gamma}}}:{\gamma}<{{\kappa}_{{xy}}}\rangle$ (without repetitions) of the cosets of ${H_{{xy}}}$ in ${G_{x}}$ (indexed by some ordinal number ${{\kappa}_{{{xy}}}}$), the isomorphism ${{\varphi}_{{xy}}}$ induces a *corresponding*, or *associated*, coset system of $K_{{xy}}$ in ${G_{y}}$, determined by the rule $${K_{{xy},{\gamma}}}={{\varphi}_{{xy}}}({H_{{xy},{\gamma}}})$$ for each ${\gamma}<{{\kappa}_{{xy}}}$. In what follows we shall always assume that the given coset systems for ${H_{{xy}}}$ in ${G_{x}}$ and for $K_{{xy}}$ in ${G_{y}}$ are associated in this manner. Furthermore, there is no loss of generality in assuming that the first elements in the enumeration of the coset systems are always the normal subgroups themselves, so that $${H_{{xy},0}} ={H_{{xy}}}\qquad\text{and}\qquad{K_{{xy},0}} =K_{{xy}}.$$
\[D:compro\] For each pair ${(x,y)}$ in ${\mathcal{E}}$ and each $\alpha <{{\kappa}_{{xy}}}$, define a binary relation $R_{{{xy}},{\alpha}}$ by$$R_{{{xy}},{{\alpha}}}= {\textstyle \bigcup}_{{\gamma}<
{\kappa}_{{xy}}} H_{{xy},{\gamma}}\times {{\varphi}_{{xy}}}[H_{{xy},{\gamma}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}H_{{xy},{\alpha}}]= {\textstyle \bigcup}_{{\gamma}< {\kappa}_{{xy}}} H_{{xy},{\gamma}}\times
(K_{{xy},{\gamma}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}K_{{xy},{\alpha}}){\textnormal{{{\hspace*{.5pt}}}.\ }}$$
The index ${\alpha}$ enumerating the relations $R_{{{xy}},{\alpha}}$ coincides with the index enumerating the coset system for the subgroup ${H_{{{xy}}}}$, and therefore is dependent upon the particular, often arbitrarily chosen, enumeration of the cosets. It would be much better if the index enumerating the relations were independent of the particular coset system that has been employed. This can be accomplished by using the cosets themselves as indices, writing, for instance, for each coset $L$ of ${H_{{{xy}}}}$, that is to say, for each element $L$ in ${G_{x}}/{H_{{{xy}}}}$, $$R_{{{xy}},{L}}={\textstyle \bigcup}\{H\times{\varphi}(H{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}L):H\in{G_{x}}/{H_{{{xy}}}}\}$$ instead of $R_{{{xy}},{\alpha}}$. In fact, it is really our intention that the relations be indexed by the cosets and not by the indices of the cosets. However, adopting this notation in practice eventually becomes notationally a bit unwieldy. For that reason, we shall continue to use the coset indices ${\alpha}$, but we view these only as convenient abbreviations for the cosets themselves. In places where the distinction is important, we shall point it out.
Notice that the relation $R_{{{xy}}, 0}$ encodes the isomorphism ${{\varphi}_{{{xy}}}}$.
In proofs, we shall use repeatedly the fact that operations such as forward and inverse images of sets under functions, Cartesian multiplication of sets, intersection of sets, complex group composition, relational composition, and relational converse are all distributive over arbitrary unions, and we shall usually simply refer to this fact by citing *distributivity*.
\[L:i-vi\] The relations $R_{{xy}, \alpha} $[[,]{.nodecor} ]{}for $\alpha <{{\kappa}_{{{xy}}}}$[[,]{.nodecor} ]{}are non-empty and partition the set ${{G_{x}}\times {G_{y}}}$.
Obviously, the relations are non-empty, because the cosets used to construct them are non-empty. The sequence $\langle H_{{xy},{\gamma}} : {\gamma}< {\kappa}_{{xy}}\,\rangle$ is a coset system for ${H_{{xy}}}$ in $G_{x}$, so these cosets are mutually disjoint and have ${{G}_{{x}}}$ as their union. Similarly, the cosets in the corresponding sequence $\langle K_{{xy},{\gamma}} : {\gamma}<
{\kappa}_{{xy}}\,\rangle$ are mutually disjoint and have ${{G}_{{y}}}$ as their union. The sequence obtained by multiplying each ${K_{{xy},{\gamma}}}$ on the right by a fixed coset ${K_{{xy},\alpha}}$ lists the cosets of $K_{{xy}}$ in some permuted order. These observations and the distributivity of Cartesian multiplication yield $$\begin{aligned}
{\textstyle \bigcup}_{\alpha}R_{{{xy}},{{\alpha}}} &= {\textstyle \bigcup}_{\alpha}{\textstyle \bigcup}_{\gamma}H_{{xy},{\gamma}}\times (K_{{xy},{\gamma}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}K_{{xy},{\alpha}}) =
{\textstyle \bigcup}_{\gamma}{\textstyle \bigcup}_{\alpha}H_{{xy},{\gamma}}\times (K_{{xy},{\gamma}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}K_{{xy},{\alpha}})\\ &= {\textstyle \bigcup}_{\gamma}H_{{xy},{\gamma}}\times
\bigl({\textstyle \bigcup}_{\alpha}K_{{xy},{\gamma}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}K_{{xy},{\alpha}}\bigr) =
{\textstyle \bigcup}_{\gamma}H_{{xy},{\gamma}}\times G_{y}\\ &= \bigl({\textstyle \bigcup}_{\gamma}H_{{xy},{\gamma}}\bigr)\times G_{y}= G_{x}\times G_{y}.\end{aligned}$$
The cosets $H_{{xy},{\gamma}}$ and $H_{{xy},{\delta}}$ are disjoint whenever ${\gamma}\neq{\delta}$, and so are the cosets $K_{{xy},{\alpha}}$ and $K_{{xy},{\beta}}$—and therefore also the cosets $K_{{xy},{\delta}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}K_{{xy},{\alpha}}$ and $K_{{xy},{\gamma}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}K_{{xy},{\beta}}$—whenever ${\alpha}\neq{\beta}$. Consequently, $$[H_{{xy},{\gamma}}\cap H_{{xy},{\delta}}] \times [(K_{{xy},{\gamma}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}K_{{xy},{\alpha}})\cap (K_{{xy},{\delta}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}K_{{xy},{\beta}})]={\varnothing}\tag{1}$$ whenever ${\gamma}\neq{\delta}$ or $\alpha \neq \beta$. For distinct ${\alpha},{\beta}$, a simple computation leads to $$\begin{aligned}
R_{{xy},{\alpha}}\cap R_{{xy},{\beta}} &= \big[ {\textstyle \bigcup}_{\gamma}H_{{xy},{\gamma}} \times
(K_{{xy},{\gamma}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}K_{{xy},{\alpha}})\big]\cap \big[ {\textstyle \bigcup}_{\delta}H_{{xy},{\delta}} \times (K_{{xy},{\delta}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}K_{{xy},{\beta}})\big] \\ &=
{\textstyle \bigcup}_{{\gamma},{\delta}}
[H_{{xy},{\gamma}} \times (K_{{xy},{\gamma}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}K_{{xy},{\alpha}})]\cap
[H_{{xy},{\delta}} \times (K_{{xy},{\delta}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}K_{{xy},{\beta}})] \\ &=
{\textstyle \bigcup}_{{\gamma},{\delta}}[H_{{xy},{\gamma}}\cap H_{{xy},{\delta}}] \times
[(K_{{xy},{\gamma}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}K_{{xy},{\alpha}})\cap (K_{{xy},{\delta}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}K_{{xy},{\beta}})] \\ &={\varnothing},\end{aligned}$$ by the definition of $R_{{xy}, \alpha}$ and $R_{{xy}, \beta}$, the distributivity of intersection and Cartesian multiplication, and (1).
Let $U$ be the union of the disjoint system of groups, and $E$ the equivalence relation on $U$ induced by the isomorphism index set ${\mathcal{E}}$, $$U={\textstyle \bigcup}\{{{G}_{x}}:x\in I\}\qquad\text{and}\qquad
E={\textstyle \bigcup}\{{{G}_{x}}\times{{G}_{y}}:{(x,y)}\in {\mathcal{E}}\}{\textnormal{{{\hspace*{.5pt}}}.\ }}$$ Write $${\mathcal{I}}=\{ ({(x,y)},{\alpha}) : {(x,y)}\in {\mathcal{E}} \text{ and }{\alpha}< {\kappa}_{{xy}}\}$$ for the *relation index set* of the group pair ${\mathcal{F}}$, that is to say, for the set of indices of the relations $R_{{xy},
\alpha}$. For each subset ${\mathcal{X}}$ of ${\mathcal{I}}$, define $$S_{{\mathcal{X}}}= {\textstyle \bigcup}\{ R_{{{xy}},{{\alpha}}} : ((x,y),{\alpha})\in {{\mathcal{X}}}\},$$ and let $A$ be the collection of all of the relations ${{S}_{{\mathcal{X}}}}$ so defined.
\[T:disj\] The set $A$ is the universe of a complete and atomic Boolean algebra of subsets of $E$[[. ]{.nodecor}]{}The distinct elements in $A$ are the relations ${{S}_{{\mathcal{X}}}}$ for distinct subsets ${\mathcal{X}}$ of ${\mathcal{I}}$[[,]{.nodecor} ]{}and the atoms are the relations $R_{{{xy}},{{\alpha}}}$ for $({(x,y)},{\alpha})$ in ${\mathcal{I}}$[[. ]{.nodecor}]{}The unit is the relation $E=S_{{\mathcal{I}}}$, and the operations of union, intersection, and complement in $A$ are determined by $${\textstyle \bigcup}_\xi S_{{{\mathcal{X}}}_\xi} = S_{{\mathcal{Y}}},\qquad {\textstyle \bigcap}_\xi S_{{{\mathcal{X}}}_\xi}
= S_{{\mathcal{Y}}},\qquad S_{\mathcal{I}}{\sim}S_{{\mathcal{X}}} = S_{{\mathcal{Y}}}$$ where ${{\mathcal{Y}}}={\textstyle \bigcup}_\xi {{\mathcal{X}}}_\xi$ in the first case[[,]{.nodecor} ]{}${{\mathcal{Y}}}={\textstyle \bigcap}_\xi {{\mathcal{X}}}_\xi$ in the second case[[,]{.nodecor} ]{}and ${{\mathcal{Y}}}={\mathcal{I}}{\sim}{{\mathcal{X}}}$ in the third case [[(]{.nodecor}]{}for any system $({{{\mathcal{X}}}_{\xi}}:\xi<\lambda)$ of subsets[[,]{.nodecor} ]{}and any subset ${\mathcal{X}}$[[,]{.nodecor} ]{}of ${\mathcal{I}}$[[)]{.nodecor}]{}[[. ]{.nodecor}]{}
The system of rectangles $\langle{{G}_{x}}\times{{G}_{y}} :{(x,y)}\in{\mathcal{E}}\rangle$ is easily seen to be a partition of $E$. Combine this with Lemma \[L:i-vi\] and the definition of the relations ${{S}_{{\mathcal{X}}}}$ to arrive at the desired result.
Although the set $A$ is always a complete Boolean set algebra of binary relations, it is not in general closed under the operations of relational composition and converse, nor does it necessarily contain the identity relation ${id_{U}}$ on the set $U$. Such closure depends on the properties of the quotient isomorphisms. We begin by characterizing when $A$ contains ${id_{U}}$.
\[T:identthm1\] For each element $x$ in $I$[[,]{.nodecor} ]{}the following conditions are equivalent[[. ]{.nodecor}]{}
1. The identity relation ${id_{{G_{x}}}}$ on ${G_{{x}}}$ is in $A$[[. ]{.nodecor}]{}
2. $R_{{{xx}}, 0}={id_{{G_{x}}}}$[[. ]{.nodecor}]{}
3. ${{\varphi}_{{{xx}}}}$ is the identity automorphism of ${G_{{x}}}/\{{e_{{x}}}\}$[[.]{.nodecor} ]{}
Consequently[[,]{.nodecor} ]{}$A$ contains the identity relation ${id_{U}}$ on the base set $U$ if and only if [(iii)]{.nodecor} holds for each ${x}$ in $I$[[.]{.nodecor} ]{}
Suppose (i) holds, with the intention of deriving (iii). From the assumption in (i), and the definition of the set $A$, it is clear that ${id_{{G_{x}}}}$ must be a (non-empty) union of some of the relations $R_{{yz},\alpha}$. Each relation $R_{{yz},\alpha}$ in such a union is a subset of the rectangle ${G_{y}}\times{G_{z}}$, by Partition Lemma \[L:i-vi\], and it is simultaneously a subset of the square ${G_{x}}\times {G_{x}}$, because ${id_{{G_{x}}}}$ is a subset of ${G_{x}}\times{G_{x}}$. The rectangle and the square are disjoint whenever $x\neq y$ or $x\neq z$, so $x=y=z$, and therefore $$\tag{1}\label{Eq:idt1.1}
{\textstyle \bigcup}_{{\gamma}}{H_{{{xx}},{\gamma}}}\times ({K_{{{xx}},{\gamma}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{K_{{{xx}},\alpha}})=
R_{{{xx}},\alpha} {\subseteq}{id_{{G_{x}}}}= {\textstyle \bigcup}\{{(g,g)}:g\in {G_{x}}\}{\textnormal{,}\ }$$ by the definitions of $R_{{{xx}},\alpha}$ and ${id_{{G_{x}}}}$. This inclusion implies that the cosets ${H_{{{xx}},{\gamma}}}$ and ${K_{{{xx}},{\gamma}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{K_{{{xx}},\alpha}}$ on the left side of contain exactly one element each, and this element is the same for both cosets, for if this were not the case, then the Cartesian product of the two cosets would contain a pair of the form ${(g,h)}$ with $g\neq h$, in contradiction to . Thus, for each ${\gamma}<{{\kappa}_{{{xx}}}}$, there is an element $g$ in ${G_{x}}$ such that $$\tag{2}\label{Eq:idt1.2}
{H_{{{xx}},{\gamma}}} = \{g\}\qquad\text{and}\qquad {K_{{{xx}},{\gamma}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{K_{{{xx}},\alpha}} = \{g\}{\textnormal{{{\hspace*{.5pt}}}.\ }}$$
Take ${\gamma}=0$ in , and apply the convention that ${H_{{{xx}},0}}$ and ${K_{{{xx}},0}}$ coincide with the subgroups ${H_{{{xx}}}}$ and $K_{{{xx}}}$ respectively; these subgroups are the identity cosets of the quotient groups ${G_{x}}/{H_{{{xx}}}}$ and ${G_{x}}/K_{{{xx}}}$, so $$\tag{3}\label{Eq:idt1.4}
{H_{{{xx}}}}= {H_{{{xx}},0}} = \{g\}\qquad\text{and}\qquad {K_{{{xx}},\alpha}}={K_{{{xx}},0}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{K_{{{xx}},\alpha}} = \{g\}{\textnormal{{{\hspace*{.5pt}}}.\ }}$$ By assumption ${H_{{{xx}}}}$ is a normal subgroup of ${G_{x}}$. The only normal subgroup that has exactly one element is the trivial subgroup $\{e_x\}$, so the element $g$ in must coincide with $e_x$. Use the right side of with $g=e_x$ to see that $\alpha$ must be $0$.
Invoke one more time to obtain, for each ${\gamma}<{{\kappa}_{{{xx}}}}$, an element $g$ in ${G_{x}}$ such that $$\tag{4}\label{Eq:idt1.3} {H_{{{xx}},{\gamma}}}
=\{g\}={K_{{{xx}},{\gamma}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{K_{{{xx}},\alpha}}={K_{{{xx}},{\gamma}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{K_{{{xx}},0}}={K_{{{xx}},{\gamma}}}{\textnormal{{{\hspace*{.5pt}}}.\ }}$$ The isomorphism ${{\varphi}_{{{xx}}}}$ is assumed to map ${H_{{{xx}},{\gamma}}}$ to ${K_{{{xx}},{\gamma}}}$ for each ${\gamma}$, so shows that ${{\varphi}_{{{xx}}}}$ maps each singleton $\{g\}$ to itself. It follows that ${{\varphi}_{{{xx}}}}$ is the identity isomorphism on ${G_{x}}/\{{e_{x}}\}$. Thus, (iii) holds.
If (iii) holds, then $$R_{{{xx}}, 0}={\textstyle \bigcup}\{\{g\}\times(\{g\}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}\{ e_x\}):g\in{G_{x}}\}=\{{(g,g)}:g\in {G_{x}}\}={id_{{{G}_{x}}}}{\textnormal{,}\ }$$ by the definition of $R_{{{xx}},0}$, so (ii) holds. On the other hand, if (ii) holds, then (i) obviously holds, by the definition of $A$.
To derive the final assertion of the theorem, assume first that (iii) holds for each $x$ in $I$. The identity relation ${id_{{G_{x}}}}$ is then in $A$ for every $x$, by the equivalence of (iii) with (i). The union, over all $x$, of these identity relations is the identity relation ${id_{U}}$. Since $A$ is closed under arbitrary unions, it follows that ${id_{U}}$ is in $A$.
Now assume that ${id_{U}}$ is in $A$. The squares ${{G}_{x}}\times {{G}_{x}}$ are all in $A$, by Lemma \[L:i-vi\] and the definition of $A$, so the intersection of each of these squares with ${id_{U}}$ is in $A$, by the closure of $A$ under intersection. This intersection is just ${id_{{G_{x}}}}$, so (i) holds, and therefore also (iii), for each $x$.
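As a quick sanity check on condition (ii) of the theorem just proved, consider a toy instance (hypothetical data, not part of the formal development) in which condition (iii) holds: $x=y$, $G_x=\mathbb{Z}_5$, $H_{xx}=K_{xx}=\{e_x\}$, so that every coset is a singleton and $\varphi_{xx}$ is the identity automorphism of $G_x/\{e_x\}$. The sketch below builds $R_{xx,0}$ directly from its definition and confirms that it is the identity relation on $G_x$.

```python
n = 5
singletons = [frozenset({g}) for g in range(n)]   # coset gamma of {0} in Z_5 is {gamma}

def cprod(a, b):
    # Complex product of two cosets in Z_5.
    return frozenset((u + v) % n for u in a for v in b)

# R_{xx,0} = union_gamma H_{xx,gamma} x (K_{xx,gamma} o K_{xx,0}),
# with H_{xx,gamma} = K_{xx,gamma} = {gamma} under assumption (iii).
R_xx_0 = {(h, k)
          for gamma in range(n)
          for h in singletons[gamma]
          for k in cprod(singletons[gamma], singletons[0])}

assert R_xx_0 == {(g, g) for g in range(n)}   # condition (ii): R_{xx,0} = id_{G_x}
```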
In order to prove the next two theorems, it is convenient to formulate two lemmas that will be used in both proofs.
\[L:rect2\] Suppose that each of $$\langle
M_{\alpha}:{\alpha}<\kappa\rangle{\textnormal{,}\ }\qquad \langle
N_{\alpha}:{\alpha}<\kappa\rangle{\textnormal{,}\ }\qquad \langle
P_{\beta}:{\beta}<\lambda\rangle{\textnormal{,}\ }\qquad\langle
Q_{\beta}:{\beta}<\lambda\rangle$$ is a sequence of non-empty[[,]{.nodecor} ]{}pairwise disjoint sets[[.]{.nodecor} ]{}If
1. ${{\textstyle\bigcup_{{\alpha}<{\kappa}}}}M_{\alpha}\times N_{\alpha}{\subseteq}{{\textstyle\bigcup_{{\beta}<{\lambda}}}}P_{\beta}\times Q_{\beta}$[[,]{.nodecor}]{}
then there is a uniquely determined mapping ${\vartheta}$ from ${\kappa}$ into ${\lambda}$ such that
2. $M_{\alpha}{\subseteq}P_{{\vartheta}({\alpha})}\qquad
\text{and}\qquad N_{\alpha}{\subseteq}Q_{{\vartheta}({\alpha})}$
for each ${\alpha}<{\kappa}$[[.]{.nodecor} ]{}If equality holds in [(i)]{.nodecor}[[,]{.nodecor} ]{}then equality holds in [(ii)]{.nodecor}[[,]{.nodecor} ]{}and ${\vartheta}$ is a bijection[[. ]{.nodecor}]{}
Consider, first, arbitrary non-empty sets $M$ and $N$. Assume $$\tag{1}\label{Eq:rect1.1}
M\times N={\textstyle \bigcup}_{{\beta}<{\lambda}} P_{\beta}\times Q_{\beta}{\textnormal{,}\ }$$ with the intention of proving that ${\lambda}=1$ (recall that ${\lambda}$ is an ordinal), and $$\tag{2}\label{Eq:rect1.2}M=
P_0\qquad \text{and}\qquad N= Q_0{\textnormal{{{\hspace*{.5pt}}}.\ }}$$ It is obvious from that $P_{\beta}\times Q_{\beta}{\subseteq}M\times N$, and therefore $P_{\beta}{\subseteq}M$ and $Q_{\beta}{\subseteq}N$, for each ${\beta}<{\lambda}$. Consequently, $$\tag{3}\label{Eq:rect1.3}
{{\textstyle\bigcup_{{\beta}<{\lambda}}}} P_{\beta}{\subseteq}M\qquad\text{and}\qquad
{{\textstyle\bigcup_{{\beta}<{\lambda}}}} Q_{\beta}{\subseteq}N{\textnormal{{{\hspace*{.5pt}}}.\ }}$$ Use the distributivity of Cartesian multiplication, , and to obtain $${{\textstyle\bigcup_{{\alpha},{\beta}<{\lambda}}}} P_{\alpha}\times Q_{\beta}=\bigl({{\textstyle\bigcup_{{\beta}<{\lambda}}}} P_{\beta}\bigr)\times \bigl({{\textstyle\bigcup_{{\beta}<{\lambda}}}}
Q_{\beta}\bigr) {\subseteq}M\times N = {{\textstyle\bigcup_{{\gamma}<{\lambda}}}} P_{\gamma}\times Q_{\gamma}{\textnormal{{{\hspace*{.5pt}}}.\ }}\tag{4}\label{E:E1}$$ The inclusion of the first union in the last one in \[E:E1\] implies that every pair ${(g,h)}$ in a rectangle $P_{\alpha}\times Q_{\beta}$ must belong to some rectangle $P_{\gamma}\times Q_{{\gamma}}$. This cannot happen if $\alpha \neq \beta$, because in such a case either $\alpha\neq{\gamma}$ or $\beta\neq{\gamma}$, and therefore either $P_{\alpha}$ must be disjoint from $P_{\gamma}$, or else $Q_\beta$ must be disjoint from $Q_{\gamma}$. It follows that there is exactly one $\beta$ that is less than ${\lambda}$. Since ${\lambda}$ is assumed to be an ordinal, this forces ${\lambda}=1$ and $\beta=0$. Thus, assumes the form $$M\times N=P_0\times Q_0{\textnormal{,}\ }$$ and clearly, holds in this case.
Next, suppose that the equality in is replaced with set-theoretic inclusion, so that $$\tag{5}\label{Eq:rect1.5}
M\times N{\subseteq}{\textstyle \bigcup}_{{\beta}<{\lambda}} P_{\beta}\times Q_{\beta}{\textnormal{{{\hspace*{.5pt}}}.\ }}$$ There is then a unique index ${\beta}<{\lambda}$ such that $$\tag{6}\label{Eq:rect1.6}M{\subseteq}P_{\beta}\qquad
\text{and}\qquad N{\subseteq}Q_{\beta}{\textnormal{{{\hspace*{.5pt}}}.\ }}$$ For the proof, form the intersection of both sides of with $M\times
N$, and use , the distributivity of intersection, and simple set theory to obtain $$\begin{gathered}
\tag{7}\label{Eq:rect1.7}
M\times N=(M\times N)\cap(M\times N)=(M\times N)\cap[{{\textstyle\bigcup_{{\beta}<{\lambda}}}} (P_{\beta}\times
Q_{\beta})]\\
={{\textstyle\bigcup_{{\beta}<{\lambda}}}} [(M\times N)\cap (P_{\beta}\times Q_{\beta})]={{\textstyle\bigcup_{{\beta}<{\lambda}}}} (M\cap P_{\beta})\times (N\cap Q_{\beta}) {\textnormal{{{\hspace*{.5pt}}}.\ }}\end{gathered}$$ Drop all terms in the union on the right side of that are empty. The equality of the first and last expressions in shows that holds with $P_\beta$ and $Q_\beta$ replaced by $M\cap P_\beta$ and $N\cap Q_\beta$ respectively. Use the implication from to to conclude that there can only be one index $\beta$ on the right side of for which the intersection is not empty, and for that $\beta$ we have $$M=M\cap P_\beta\qquad\text{and}\qquad N=N\cap Q_{\beta}{\textnormal{,}\ }$$ so that holds.
Turn now to the proof of the implication from (i) to (ii). Fix an arbitrary index ${\alpha}<{\kappa}$. From (i), it follows immediately that $$M_{\alpha}\times N_{\alpha}{\subseteq}{{\textstyle\bigcup_{{\beta}<{\lambda}}}}P_{\beta}\times Q_{\beta}{\textnormal{{{\hspace*{.5pt}}}.\ }}$$ Apply the implication from to to obtain a unique ${\beta}<{\lambda}$ such that $$\tag{8}\label{Eq:rect.1.new}
M_{\alpha}{\subseteq}P_{{\beta}}\qquad\text{and}\qquad N_{\alpha}{\subseteq}Q_{{\beta}}{\textnormal{{{\hspace*{.5pt}}}.\ }}$$ The desired function is the mapping ${\vartheta}$ that sends $\alpha$ to the corresponding ${\beta}$, so that holds for each $\alpha<\kappa$.
Assume finally that equality holds in (i). There are then uniquely determined mappings ${\vartheta}$ from ${\kappa}$ to ${\lambda}$ and $\psi$ from ${\lambda}$ to ${\kappa}$ such that $$\begin{aligned}
M_{\alpha}{\subseteq}P_{{\vartheta}({\alpha})}\qquad&\text{and}\qquad N_{\alpha}{\subseteq}Q_{{\vartheta}({\alpha})}\tag{9}\label{Eq:rect1.8} \\ \intertext{ for each
${\alpha}<{\kappa}$, and}
P_{\beta}{\subseteq}M_{\psi({\beta})}\qquad&\text{and}\qquad Q_{\beta}{\subseteq}N_{\psi({\beta})}\tag{10}\label{Eq:rect1.9}\\ \intertext{for each
${\beta}<{\lambda}$. Combine \eqref{Eq:rect1.8} and \eqref{Eq:rect1.9} to
arrive at} M_{\alpha}{\subseteq}P_{{\vartheta}({\alpha})}{\subseteq}M_{\psi({\vartheta}({\alpha}))}
\qquad&\text{and}\qquad P_{\beta}{\subseteq}M_{\psi({\beta})}{\subseteq}P_{{\vartheta}(\psi({\beta}))}\tag{11}\label{Eq:rect1.10}\end{aligned}$$ for each ${\alpha}<{\kappa}$ and ${\beta}<{\lambda}$. The sets $M_{\alpha}$ are pairwise disjoint, as are the sets $P_{\beta}$, so the inclusions in force $${\psi({\vartheta}({\alpha}))}={\alpha}\qquad\text{and}\qquad
{{\vartheta}(\psi({\beta}))}={\beta}$$ for each ${\alpha}<{\kappa}$ and ${\beta}<{\lambda}$. This implies that the mappings ${\vartheta}$ and $\psi$ are bijections and inverses of each other.
\[L:PQnormsg\] Suppose $P$ and $Q$ are normal subgroups of groups $G$ and $\bar
G$[[,]{.nodecor} ]{}with coset systems $$\langle P_{\gamma}: {\gamma}<
{\kappa}\rangle\qquad\text{and}\qquad \langle Q_{\gamma}: {\gamma}< {\kappa}\rangle$$ respectively[[.]{.nodecor} ]{}If the mapping $P_{\gamma}\longmapsto Q_{\gamma}$ is an isomorphism from ${G/P}$ onto $\bar G/Q$[[,]{.nodecor} ]{}then for all $\alpha,{\beta}<{\kappa}$[[,]{.nodecor} ]{}we have
- ${\textstyle \bigcup\limits}_{\gamma}P_{\gamma}\times (Q_{\gamma}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}Q_{\alpha})= {\textstyle \bigcup\limits}_{\gamma}(P_{\gamma}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}P_{\alpha}{^{-1}})\times Q_{\gamma}$[[,]{.nodecor}]{}
- ${\textstyle \bigcup\limits}_{\gamma}P_{\gamma}\times (Q_{\gamma}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}Q_{\alpha}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}Q_{\beta})= {\textstyle \bigcup\limits}_{\gamma}(P_{\gamma}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}P_{\alpha}{^{-1}})\times
(Q_{\gamma}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}Q_{\beta})$[[.]{.nodecor} ]{}
Fix an index $\alpha<{\kappa}$, and observe that $ \langle P_{\gamma}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}P_{\alpha}{^{-1}}: {\gamma}< {\kappa}\rangle$ is also an enumeration of the cosets of $P$. Consequently, for each ${\gamma}< {\kappa}$ there exists a unique $\bar{\gamma}< {\kappa}$ such that $$\tag{1}\label{Eq:P.1}
P_{\gamma}= P_{\bar{\gamma}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}P_{\alpha}{^{-1}}{\textnormal{{{\hspace*{.5pt}}}.\ }}$$ The mapping $P_{\gamma}\longmapsto Q_{\gamma}$ is assumed to be an isomorphism, so (1) implies that $$\tag{2}\label{Eq:Q.1}
Q_{\gamma}= Q_{\bar{\gamma}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}Q_{\alpha}{^{-1}}.$$ Use , , and the inverse properties of groups to get $$\tag{3}\label{Eq:P.2}
{\textstyle \bigcup\limits}_{\gamma}P_{\gamma}\times (Q_{\gamma}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}Q_{\alpha}) = {\textstyle \bigcup\limits}_{\gamma}(P_{\bar{\gamma}} {\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}P_{\alpha}{^{-1}})\times (Q_{\bar{\gamma}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}Q_{\alpha}{^{-1}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}Q_{\alpha}) = {\textstyle \bigcup\limits}_{\gamma}(P_{\bar{\gamma}} {\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}P_{\alpha}{^{-1}})\times
Q_{\bar{\gamma}}{\textnormal{{{\hspace*{.5pt}}}.\ }}$$ As ${\gamma}$ varies over ${\kappa}$, so does $\bar{\gamma}$, and vice versa, so the occurrence of $\bar{\gamma}$ in the union on the right side of may be replaced by ${\gamma}$ to arrive at (i).
Exactly the same reasoning also gives $$\begin{aligned}
{\textstyle \bigcup\limits}_{\gamma}P_{\gamma}\times (Q_{\gamma}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}Q_{\alpha}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}Q_{\beta}) &=
{\textstyle \bigcup\limits}_{\gamma}(P_{\bar{\gamma}} {\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}P_{\alpha}{^{-1}})\times (Q_{\bar{\gamma}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}Q_{\alpha}{^{-1}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}Q_{\alpha}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}Q_{\beta})\notag\\
&= {\textstyle \bigcup\limits}_{\gamma}(P_{\bar{\gamma}} {\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}P_{\alpha}{^{-1}})\times (Q_{\bar{\gamma}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}Q_{\beta})\notag\\
&= {\textstyle \bigcup\limits}_{\gamma}(P_{\gamma}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}P_{\alpha}{^{-1}})\times (Q_{\gamma}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}Q_{\beta}){\textnormal{,}\ }\notag\end{aligned}$$ which proves (ii).
The next task is to establish necessary and sufficient conditions for the set $A$ to be closed under converse, and in particular, for $A$ to contain the converse of every atomic relation. As we shall see in the next theorem, $A$ will contain the converse of every atomic relation if and only if the isomorphism ${{\varphi}_{{{yx}}}}$ is the inverse of the isomorphism ${{\varphi}_{{{xy}}}}$ for every pair ${(x,y)}$ in ${\mathcal{E}}$. Since ${{\varphi}_{{{xy}}}}$ maps ${{G}_{x}}/{H_{{{xy}}}}$ to ${{G}_{y}}/K_{{{xy}}}$, and ${{\varphi}_{{{yx}}}}$ maps ${{G}_{y}}/{H_{{{yx}}}}$ to ${{G}_{x}}/K_{{{yx}}}$, if these two isomorphisms are inverses of one another, then we must have $$\begin{aligned}
{{G}_{x}}/{H_{{{xy}}}}={{G}_{x}}/K_{{{yx}}}\qquad&\text{and}\qquad {{G}_{y}}/K_{{{xy}}}={{G}_{y}}/{H_{{{yx}}}}{\textnormal{,}\ }\\ \intertext{so that}
K_{{{yx}}}={H_{{{xy}}}}\qquad&\text{and}\qquad K_{{{xy}}}={H_{{{yx}}}}{\textnormal{{{\hspace*{.5pt}}}.\ }}\end{aligned}$$ As mentioned earlier, the enumeration of the cosets of the subgroup ${H_{{{yx}}}}$ can be chosen freely. Under the given assumption, we can and shall always adopt the following convention regarding the choice of this enumeration.
\[Co:convention\] If ${{\varphi}_{{{xy}}}}$ and ${{\varphi}_{{{yx}}}}$ are inverses of one another, then the coset enumeration $\langle {H_{{{yx}},{\gamma}}}:{\gamma}<{{\kappa}_{{{yx}}}}\rangle$ is chosen so that ${{\kappa}_{{{yx}}}}={{\kappa}_{{{xy}}}}$ and ${H_{{{yx}},{\gamma}}}={K_{{{xy}},{\gamma}}}$ for all ${\gamma}<{{\kappa}_{{{xy}}}}$. It then follows that $${K_{{{yx}},{\gamma}}} ={{\varphi}_{{{yx}}}}({H_{{{yx}},{\gamma}}})={{\varphi}_{{{xy}}}}{^{-1}}({K_{{{xy}},{\gamma}}})=
\ {H_{{{xy}},{\gamma}}}$$ for all ${\gamma}<{{\kappa}_{{{xy}}}}$.
The next theorem characterizes when $A$ is closed under converse.
\[T:convthm1\] For each pair ${(x,y)}$ in ${\mathcal{E}}$[[,]{.nodecor} ]{}the following conditions are equivalent[[. ]{.nodecor}]{}
1. There are an $\alpha<{{\kappa}_{{{xy}}}}$ and a ${\beta}<{{\kappa}_{{{yx}}}}$ such that $R_{{{xy}},{\alpha}}{^{-1}}=R_{{{yx}},{\beta}}$[[. ]{.nodecor}]{}
2. For every $\alpha<{{\kappa}_{{{xy}}}}$ there is a ${\beta}<{{\kappa}_{{{yx}}}}$ such that $R_{{{xy}},{\alpha}}{^{-1}}=R_{{{yx}},{\beta}}$[[. ]{.nodecor}]{}
3. ${{\varphi}_{{{xy}}}}{^{-1}}={{\varphi}_{{{yx}}}}$.
Moreover[[,]{.nodecor} ]{}if one of these conditions holds[[,]{.nodecor} ]{}then we may assume that ${{\kappa}_{{{yx}}}}={{\kappa}_{{{xy}}}}$[[,]{.nodecor} ]{}and the index ${\beta}$ in [(i)]{.nodecor} and [(ii)]{.nodecor} is uniquely determined by $
{H_{{{xy}},{\alpha}}}{^{-1}}={H_{{{xy}},{\beta}}}$[[. ]{.nodecor}]{}The set $A$ is closed under converse if and only if [(iii)]{.nodecor} holds for all ${(x,y)}$ in ${\mathcal{E}}$[[.]{.nodecor} ]{}
Observe, first of all, that without using any of the hypotheses in (i)–(iii), only the definition of the relation $R_{{{xy}},{\alpha}}$, Lemma \[L:PQnormsg\](i), the distributivity of relational converse, and the definition of relational converse, we have $$\begin{gathered}
\tag{1}\label{Eq:ct1.03}
R_{{{{xy}}},{{\alpha}}}{^{-1}}= \bigl[{\textstyle \bigcup}_{\gamma}H_{{{xy}},{\gamma}}\times
(K_{{{xy}},{\gamma}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}K_{{{xy}},{\alpha}})\bigr]{^{-1}}= \bigl[{\textstyle \bigcup}_{\gamma}(H_{{{xy}},{\gamma}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{xy}},\alpha}}{^{-1}})\times
K_{{{xy}},{\gamma}}\bigr]{^{-1}}\\ = {\textstyle \bigcup}_{\gamma}\bigl[(H_{{{xy}},{\gamma}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{xy}},\alpha}}{^{-1}})\times K_{{{xy}},{\gamma}} \bigr]{^{-1}}=
{\textstyle \bigcup}_{\gamma}K_{{{xy}},{\gamma}}\times (H_{{{xy}},{\gamma}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{xy}},\alpha}}{^{-1}}){\textnormal{{{\hspace*{.5pt}}}.\ }}\end{gathered}$$
Assume now that (iii) holds, with the intention of deriving (ii). Choose ${\beta}<{{\kappa}_{{{xy}}}}$ so that $$\tag{2}\label{Eq:ct1.08} {H_{{{xy}},\beta}} = {H_{{{xy}},{\alpha}}}{^{-1}}{\textnormal{{{\hspace*{.5pt}}}.\ }}$$ In view of assumption (iii), Convention \[Co:convention\] may be applied to write ${{\kappa}_{{{yx}}}}={{\kappa}_{{{xy}}}}$, and $$\tag{3}\label{Eq:ct1.09} {H_{{{yx}},{\gamma}}} = {K_{{{xy}},{\gamma}}}{\textnormal{,}\ }\qquad {K_{{{yx}},{\gamma}}} =
{H_{{{xy}},{\gamma}}}$$ for each ${\gamma}<{{\kappa}_{{{xy}}}}$. Use the definition of the relation $R_{{{yx}},\beta}$, together with , , and , to conclude that$$\begin{gathered}
R_{{{yx}},\beta}={\textstyle \bigcup}_{\gamma}H_{{{yx}},{\gamma}}\times
(K_{{{yx}},{\gamma}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}K_{{{yx}},{\beta}})
= {\textstyle \bigcup}_{\gamma}K_{{{xy}},{\gamma}}\times (H_{{{xy}},{\gamma}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}H_{{{xy}},{\beta}})\\
= {\textstyle \bigcup}_{\gamma}K_{{{xy}},{\gamma}}\times (H_{{{xy}},{\gamma}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}H_{{{xy}},{\alpha}}{^{-1}})=R_{{{xy}},{\alpha}}{^{-1}}{\textnormal{{{\hspace*{.5pt}}}.\ }}\end{gathered}$$ Thus, (ii) holds.
The implication from (ii) to (i) is obvious. Consider now the implication from (i) to (iii). Fix $\alpha<{{\kappa}_{{{xy}}}}$ and ${\beta}<{{\kappa}_{{{yx}}}}$ as in (i), so that $$R_{{{yx}},{\beta}}=
R_{{{xy}},\alpha}{^{-1}}{\textnormal{{{\hspace*{.5pt}}}.\ }}\tag{4}\label{Eq:new6}$$ Use , the definition of $R_{{{yx}},{\beta}}$, and (with ${\gamma}$ replaced by another variable, say $\eta$) to obtain $${\textstyle \bigcup}_{{\gamma}<{{\kappa}_{{{yx}}}}}{H_{{{yx}},{\gamma}}}\times ({K_{{{yx}},{\gamma}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{K_{{{yx}},{\beta}}})=
{\textstyle \bigcup}_{\eta<{{\kappa}_{{{xy}}}}} K_{{{xy}},{\eta}}\times( H_{{{xy}},{\eta}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}H_{{{xy}},{\alpha}}{^{-1}}){\textnormal{{{\hspace*{.5pt}}}.\ }}\tag{5}\label{Eq:new8}$$ Apply Lemma \[L:rect2\] to to see that there must be a bijection ${\vartheta}$ from ${{\kappa}_{{{xy}}}}$ to ${{\kappa}_{{{xy}}}}$ such that $${H_{{{yx}},{\gamma}}}={K_{{{xy}},{\vartheta}({\gamma})}}\qquad\text{and}\qquad {K_{{{yx}},{\gamma}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{K_{{{yx}},{\beta}}} =
{H_{{{xy}},{\vartheta}({\gamma})}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{xy}},{\alpha}}}{^{-1}} \tag{6}\label{Eq:new8.9}$$ for all ${\gamma}<{{\kappa}_{{{yx}}}}$.
Take ${\gamma}=0$ in . It follows from the first equation that ${H_{{{yx}},0}}={K_{{{xy}},{\vartheta}(0)}}$. Since ${H_{{{yx}},0}}$ is a subgroup of ${G_{y}}$, the same must be true of ${K_{{{xy}},{\vartheta}(0)}}$. The only subgroup in the coset enumeration of $K_{{{xy}}}$ is ${K_{{{xy}},0}}$, so ${\vartheta}(0)=0$, and therefore $${H_{{{yx}}}}={H_{{{yx}},0}}={K_{{{xy}},0}}=K_{{{xy}}}.$$ Apply this observation to the second equation, and use the fact that ${\vartheta}(0)=0$, to arrive at $$\begin{gathered}
\tag{7}\label{Eq:ct1.13}
{K_{{{yx}},{\beta}}}=K_{{{yx}}} {\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{K_{{{yx}},{\beta}}}={K_{{{yx}},0}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{K_{{{yx}},{\beta}}}\\={H_{{{xy}},{\vartheta}(0)}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{xy}},{\alpha}}}{^{-1}}={H_{{{xy}},0}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{xy}},{\alpha}}}{^{-1}}={H_{{{xy}}}}
{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{xy}},{\alpha}}}{^{-1}}={H_{{{xy}},{\alpha}}}{^{-1}}.\end{gathered}$$ (Recall that $K_{{{yx}}}$ and ${H_{{{xy}}}} $ are the identity cosets of their respective coset systems.)
Multiply the left and right sides of the second equation in , on the right, by ${K_{{{yx}},{\beta}}}{^{-1}}$, use the inverse law for groups, and use the equality of the first and last cosets in , to arrive at $$\begin{gathered}
\tag{8}\label{Eq:new11}
{K_{{{yx}},{\gamma}}}={K_{{{yx}},{\gamma}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{K_{{{yx}},{\beta}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{K_{{{yx}},{\beta}}}{^{-1}}= {H_{{{xy}},{\vartheta}({\gamma})}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{xy}},{\alpha}}}{^{-1}} {\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{K_{{{yx}},{\beta}}}{^{-1}}\\
= {H_{{{xy}},{\vartheta}({\gamma})}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{xy}},{\alpha}}}{^{-1}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{xy}},{\alpha}}}={H_{{{xy}},{\vartheta}({\gamma})}}\end{gathered}$$ for every ${\gamma}<{{\kappa}_{{{yx}}}}$. Consequently, $${{\varphi}_{{{yx}}}}({K_{{{xy}},{\vartheta}({\gamma})}})={{\varphi}_{{{yx}}}}({H_{{{yx}},{\gamma}}}) = {K_{{{yx}},{\gamma}}}
={H_{{{xy}},{\vartheta}({\gamma})}},$$ by , the definition of ${K_{{{yx}},{\gamma}}}$, and . As ${\gamma}$ runs through the indices less than ${{\kappa}_{{{yx}}}}$, the image ${\vartheta}({\gamma})$ runs through the indices less than ${{\kappa}_{{{xy}}}}$, so the preceding string of equalities shows that $${{\varphi}_{{{yx}}}}({K_{{{xy}},{\delta}}})={H_{{{xy}},{\delta}}}\tag{9}\label{Eq:new8.10}$$ for every ${\delta}<{{\kappa}_{{{xy}}}}$. Since ${{\varphi}_{{{xy}}}}$ maps each coset ${H_{{{xy}},{\delta}}}$ to ${K_{{{xy}},{\delta}}}$, it follows from that ${{\varphi}_{{{yx}}}}$ is the inverse of ${{\varphi}_{{{xy}}}}$. This completes the proof that conditions (i)–(iii) are equivalent.
If one of the three conditions holds, then all three conditions hold by the equivalence just established. Consequently, using the proof of the implication from (iii) to (ii), we may assume that ${{\kappa}_{{{yx}}}}={{\kappa}_{{{xy}}}}$, and choose ${\beta}<{{\kappa}_{{{xy}}}}$ so that holds. This proves the second assertion of the theorem.
Turn to the proof of the final assertion of the theorem. Assume first that (iii) holds for all ${(x,y)}$ in ${\mathcal{E}}$. The atoms in $A$ are just the relations of the form $R_{{{xy}},\alpha}$, so from the equivalence of (ii) with (iii), it follows that the converse of every atom in $A$ is again an atom in $A$. The elements of $A$ are just the unions of these various atoms, by Theorem \[T:disj\], and the converse of a union of atoms is again a union of atoms, by the preceding observation and the distributivity of converse. Thus, the converse of every element in $A$ belongs to $A$, so $A$ is closed under converse.
Assume now that $A$ is closed under converse, and fix an arbitrary pair ${(x,y)}$ in ${\mathcal{E}}$. The relation $R_{{{xy}},0}$ is a subset of ${{G}_{x}}\times{{G}_{y}}$ and belongs to $A$, by Lemma \[L:i-vi\] and the definition of $A$. It follows that the converse relation $R_{{{xy}},0}{^{-1}}$ is a subset of ${{G}_{y}}\times {{G}_{x}}$, and it belongs to $A$ by assumption. Consequently, there must be a non-empty set ${\varGamma}{\subseteq}{{\kappa}_{{{yx}}}}$ such that $$R_{{{xy}},0}{^{-1}}={\textstyle \bigcup}_{{\beta}\in{\varGamma}}R_{{{yx}},{\beta}}{\textnormal{,}\ }\tag{10}\label{Eq:ct1.15}$$ by Boolean Algebra Theorem \[T:disj\]. The pair ${({e_{x}},{e_{y}})}$ belongs to the relation $R_{{{xy}},0}$, by the definition of $R_{{{xy}},0}$ (in fact, the pair is in ${H_{{{xy}},0}}\times {K_{{{xy}},0}}$, which is one of the rectangles that make up $R_{{{xy}}, 0}$), so the converse pair ${({e_{y}},{e_{x}})}$ belongs to $R_{{{xy}}, 0}{^{-1}}$. For similar reasons, the relation $R_{{{yx}}, 0}$ contains the pair ${({e_{y}},{e_{x}})}$, and it is the only relation of the form $R_{{{yx}},
{\beta}}$ that contains this pair, because the atomic relations in $A$ are pairwise disjoint. It follows from this observation and that $0$ must be one of the indices in ${\varGamma}$. In other words, $$\tag{11}\label{Eq:ct1.16}
R_{{{yx}}, 0}{\subseteq}R_{{{xy}}, 0}{^{-1}}{\textnormal{{{\hspace*{.5pt}}}.\ }}$$ Reverse the roles of $x$ and $y$ in this argument to obtain $$\tag{12}\label{Eq:ct1.17}
R_{{{xy}}, 0}{\subseteq}R_{{{yx}}, 0}{^{-1}}{\textnormal{{{\hspace*{.5pt}}}.\ }}$$ Combine with , and use the monotony and first involution laws for converse, to arrive at $$R_{{{xy}}, 0}{^{-1}}{\subseteq}(R_{{{yx}}, 0}{^{-1}}){^{-1}}=R_{{{yx}},0}{\subseteq}R_{{{xy}}, 0}{^{-1}}{\textnormal{{{\hspace*{.5pt}}}.\ }}$$ The first and last terms are equal, so equality must hold everywhere. In particular, $R_{{{xy}}, 0}{^{-1}}=R_{{{yx}}, 0}$. This shows that condition (i) is satisfied for the pair ${(x,y)}$ in the case $\alpha=0$. Invoke the equivalence of (i) with (iii) to conclude that (iii) holds for all pairs ${(x,y)}$.
It is natural to ask whether, in analogy with Identity Theorem \[T:identthm1\], one can add another condition to those already listed in Converse Theorem \[T:convthm1\], for example, the condition that $R_{{{xy}}, \alpha}{^{-1}}$ be in $A$ for some $\alpha<{{\kappa}_{{{xy}}}}$. It turns out, however, that in the absence of additional hypotheses, this condition is not equivalent to the conditions listed in the theorem. We return to this question at the end of the next section.
Notice that condition (ii) in the preceding theorem, combined with the second assertion of the theorem, provides a concrete method of computing the converse of a relation $R_{{{xy}},
\alpha}$ in terms of the structure of the quotient group ${G_{x}}/{H_{{{xy}}}}$: just compute the index ${\beta}$ such that $
{H_{{{xy}},{\alpha}}}{^{-1}}={H_{{{xy}},{\beta}}}$, for then we have $R_{{{xy}},{\alpha}}{^{-1}}=R_{{{xy}},{\beta}}$. This method, in turn, provides a concrete way of computing the converse of any relation in $A$.
The final and most difficult task is to characterize when the set $A$ is closed under relational composition, and in particular, when it contains the composition of two atomic relations. There is one case in which the relative product of two atomic relations is empty, and therefore automatically in $A$.
\[L:emptycomp\] If ${({x},{y})}$ and $ {(w,z)}$ are in ${\mathcal{E}}$[[,]{.nodecor} ]{}and if $y\ne
w$[[,]{.nodecor} ]{}then $$R_{{{xy}},{\alpha}} {\mid}R_{{{wz}}, {\beta}}={\varnothing}$$ for all ${\alpha}<{{\kappa}_{{{xy}}}}$ and ${\beta}<{{\kappa}_{{{wz}}}}$[[.]{.nodecor} ]{}
Indeed, $$R_{{{{xy}}},{{\alpha}}}{\subseteq}G_{x}\times G_{y} \quad\text{and}\quad
R_{{{{wz}}},{{\beta}}}{\subseteq}G_w \times G_z,$$ by Lemma \[L:i-vi\]. Therefore, $$R_{{{{xy}}},{{\alpha}}} {\mid}R_{{{{wz}}},{{\beta}}}{\subseteq}(G_{x}\times G_{y}) {\mid}(G_w \times G_z){\textnormal{,}\ }$$ by monotony. If $y\neq w$, then the sets $G_{y}$ and $G_w $ are disjoint, and therefore the relational composition of $G_{x}\times G_{y}$ and $G_w
\times G_z$ is empty.
To clarify the underlying ideas of the remaining case when $y=w$, we again use cosets as indices of the atomic relations for a few moments. It is natural to conjecture that (under suitable hypotheses) the composition of the relations corresponding to cosets $H$ and $\bar H$ of ${H_{{{xy}}}}$ and ${H_{{{yz}}}}$ respectively is precisely the relation corresponding to the group composition of the two cosets, $$R_{{{xy}},{H}}{\mid}R_{{{yz}},{\bar H}}=R_{{{xz}},{H {\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}\bar H}}.$$
This form of the conjecture is incorrect. The first difficulty is that the cosets $H$ and $\bar H$ live in disjoint groups, and therefore cannot be composed. To write the conjecture in a meaningful way, one must first translate the coset $H$ to its copy, the coset $K={{\varphi}_{{{xy}}}}(H)$ of $K_{{{xy}}}$ in ${G_{{y}}}$, where $\bar H$ “lives", and then compose this translation with $\bar H$ to arrive at a coset $$M=K{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}\bar H$$ of $K_{{{xy}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{yz}}}}$.
The second difficulty is that compositions of subrelations of $${{G}_{x}}\times {{G}_{y}}\qquad\text{and}\qquad {{G}_{y}}\times {{G}_{z}}$$ should be a subrelation of ${{G}_{x}}\times {{G}_{z}}$, and therefore should have ${{xz}}$ as part of the index. The relations indexed with ${{xz}}$ are constructed with the help of cosets of ${H_{{{xz}}}}$, so it is necessary to translate the composite coset $K{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}\bar H$ from ${{G}_{y}}$ back to ${{G}_{x}}$ using the mapping ${{\varphi}_{{{xy}}}}{^{-1}}$, so that it can be written as a union of cosets of ${H_{{{xz}}}}$. A more reasonable form of the original conjecture might look like $$R_{{{xy}},{H}}{\mid}R_{{{yz}},{\bar H}}=R_{{{xz}},{{{\varphi}_{{{xy}}}}{^{-1}}[{M}]}}=R_{{{xz}},{{{\varphi}_{{{xy}}}}{^{-1}}[K{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}\bar H]}}{\textnormal{{{\hspace*{.5pt}}}.\ }}$$
The third difficulty is that the relation on the right side of this last equation has not been defined. At this point, we can only speak in a meaningful way about relations $R_{{{xz}},{\hat H}}$ for single cosets ${\hat H}$ of ${H_{{{xz}}}}$. It therefore is necessary to rewrite the preceding conjecture in the form $$R_{{{xy}},{H}}{\mid}R_{{{yz}},{\bar H}}={\textstyle \bigcup}\{R_{{{xz}},{\hat H}} :{\hat H}{\subseteq}{{{\varphi}_{{{xy}}}}{^{-1}}[K
{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}\bar H]}\}.$$
In order for the conjecture to be true, the subgroup ${{\varphi}_{{{xy}}}}{^{-1}}[K_{{{xy}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{yz}}}}]$ must include the subgroup ${H_{{{xz}}}}$, so that the coset ${{\varphi}_{{{xy}}}}{^{-1}}[K {\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}\bar H]$ can really be written as a union of cosets $\hat H$ of ${H_{{{xz}}}}$. Moreover, it is natural to suspect that some sort of composition of the mappings ${{\varphi}_{{{xy}}}}$ and ${{\varphi}_{{{yz}}}}$ should equal the mapping ${{\varphi}_{{{xz}}}}$, $$\begin{aligned}
&\ {{\varphi}_{{{xy}}}}&&\ {{\varphi}_{{{yz}}}}\\
{G_{x}}/{H_{{{xy}}}}&\longmapsto{G_{y}}/K_{{{xy}}}{\textnormal{,}\ }\qquad&\qquad {G_{y}}/{H_{{{yz}}}}
&\longmapsto{G_{z}}/K_{{{yz}}},\end{aligned}$$ $$\begin{aligned}
&{{\varphi}_{{{xz}}}}\\
{G_{x}}/{H_{{{xz}}}}&\longmapsto{G_{z}}/K_{{{xz}}}.\end{aligned}$$ However, the subgroup $K_{{{xy}}}$ may not coincide with the subgroup ${H_{{{yz}}}}$ at all, so it is not meaningful to speak about the composition of ${\varphi_{xy}}$ with ${\varphi_{yz}}$. In order to be able to compose quotient isomorphisms, one has first to form a common quotient group using the complex product of the subgroups, and then compose the induced isomorphisms ${{\hat{\varphi}}_{{{xy}}}}$ and ${{\hat{\varphi}}_{{{yz}}}}$, $$\begin{aligned}
&\ {{\hat{\varphi}}_{{{xy}}}}& &\ {{\hat{\varphi}}_{{{yz}}}}\\
{G_{x}}/({H_{{{xy}}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{xz}}}})&\longmapsto {G_{y}}/(K_{{{xy}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{yz}}}})& &\longmapsto{G_{z}}/(K_{{{xz}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}K_{{{yz}}}).\end{aligned}$$ What really should be true is that the composition of the induced mappings ${{\hat{\varphi}}_{{{xy}}}}$ and ${{\hat{\varphi}}_{{{yz}}}}$ should equal the induced mapping ${{\hat{\varphi}}_{{{xz}}}}$. These conditions do indeed prove to be necessary and sufficient for the conjecture to hold. We formulated them in the conventional notation, using the subscripts of the cosets in place of the cosets.
\[T:compthm\] For all pairs ${(x,y)}$ and ${(y,z)}$ in ${\mathcal{E}}$[[,]{.nodecor} ]{}the following conditions are equivalent[[. ]{.nodecor}]{}
1. The relation $R_{{{xy}}, 0} {\mid}R_{{{yz}}, 0}$ is in $A$[[. ]{.nodecor}]{}
2. For each ${\alpha}<{{\kappa}_{{{xy}}}}$ and each ${\beta}<{{\kappa}_{{{yz}}}}$[[,]{.nodecor} ]{}the relation $R_{{{xy}},{\alpha}}{\mid}R_{{{yz}},{\beta}}$ is in $A$[[. ]{.nodecor}]{}
3. For each ${\alpha}<{{\kappa}_{{{xy}}}}$ and each ${\beta}<{{\kappa}_{{{yz}}}}$[[,]{.nodecor} ]{}$$R_{ {{xy}}, \alpha} {\mid}R_{ {{yz}},{\beta}}={\textstyle \bigcup}\{R_{{{xz}},{\gamma}} :
{H_{{{xz}},{\gamma}}} {\subseteq}{{\varphi}_{{{xy}}}}{^{-1}}[ {K_{{{xy}},\alpha}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{yz}},{\beta}}}]\}{\textnormal{{{\hspace*{.5pt}}}.\ }}$$
4. $ {H_{{{xz}}}}{\subseteq}{{\varphi}_{{{xy}}}}{^{-1}}[K_{{{xy}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{yz}}}}]$ and ${{\hat{\varphi}}_{{{xy}}}}{\mid}{{\hat{\varphi}}_{{{yz}}}}={{\hat{\varphi}}_{{{xz}}}}$[[,]{.nodecor} ]{}where ${{\hat{\varphi}}_{{{xy}}}}$ and ${{\hat{\varphi}}_{{{xz}}}}$ are the mappings induced by ${{\varphi}_{{{xy}}}}$ and ${{\varphi}_{{{xz}}}}$ on the quotient of ${G_{{x}}}$ modulo the normal subgroup ${{\varphi}_{{{xy}}}}{^{-1}}[K_{{{xy}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{yz}}}}]$[[,]{.nodecor} ]{}while ${{\hat{\varphi}}_{{{yz}}}}$ is the isomorphism induced by ${{\varphi}_{{{yz}}}}$ on the quotient of ${G_{{y}}}$ modulo the normal subgroup $K_{{{xy}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{yz}}}}$.
Consequently[[,]{.nodecor} ]{}the set $A$ is closed under relational composition if and only if [(iv)]{.nodecor} holds for all pairs ${(x,y)}$ and ${(y,z)}$ in ${\mathcal{E}}$[[.]{.nodecor} ]{}
Let ${P_{0}}$ be the normal subgroup of ${G_{y}}$ generated by $K_{{{xy}}}$ and ${H_{{{yz}}}}$, $$\begin{aligned}
{P_{0}}
&=K_{{{xy}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{yz}}}}.\tag{1}\label{Eq:cot1}\\ \intertext{Choose a
coset system
$\langle {P_{\xi}}:\xi<\mu\rangle$ for ${P_{0}}$ in ${G_{y}}$, and write}
{M_{\xi}} &={{\varphi}_{{{xy}}}}{^{-1}}[{P_{\xi}}\,]\tag{2}\label{Eq:cot2}\\
\intertext{for $\xi<\mu$. The isomorphism properties of ${{\varphi}_{{{xy}}}}$
imply that} {M_{0}}=
{{\varphi}_{{{xy}}}}{^{-1}}[{P_{0}}]&={{\varphi}_{{{xy}}}}{^{-1}}[K_{{{xy}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{yz}}}}]\tag{3}\label{Eq:cot3.0}\\
\intertext{is a normal subgroup of ${G_{x}}$ that includes ${H_{{{xy}}}}$
(the inverse image of $K_{{{xy}}}$ under ${{\varphi}_{{{xy}}}}$), and that
the sequence $\langle {M_{\xi}}:\xi<\mu\rangle$ is a coset system
for ${M_{0}}$ in ${G_{x}}$. Moreover, the isomorphism ${{\varphi}_{{{xy}}}}$ induces
a quotient isomorphism ${{\hat{\varphi}}_{{{xy}}}}$ from ${G_{x}/{M_{0}}}$ to ${G_{y}/{P_{0}}}$ that maps
${M_{\xi}}$ to $ {P_{\xi}}$\ for each $\xi<\mu$. Similarly, write} {N_{\xi}} &={{\varphi}_{{{yz}}}}[{P_{\xi}}\,]\tag{4}\label{Eq:cot3}\end{aligned}$$ for $\xi<\mu$, and observe that ${N_{0}}$ is a normal subgroup of ${G_{z}}$ that includes $K_{{{yz}}}$ (the image of ${H_{{{yz}}}}$ under ${{\varphi}_{{{yz}}}}$), and that the sequence $\langle {N_{\xi}}:\xi<\mu\rangle$ is a coset system for ${N_{0}}$ in ${G_{z}}$. Moreover, the isomorphism ${{\varphi}_{{{yz}}}}$ induces a quotient isomorphism ${{\hat{\varphi}}_{{{yz}}}}$ from ${G_{y}/{P_{0}}}$ to ${G_{z}/{N_{0}}}$ that maps ${P_{\xi}}$ to $ {N_{\xi}}$ for each $\xi<\mu$.
Since ${P_{0}}$ is a union of cosets of $K_{{{xy}}}$, each coset of ${P_{0}}$ is a union of cosets of $K_{{{xy}}}$. Thus, there is a partition $\langle {\varGamma}_\xi:\xi<\mu\rangle$ of ${{\kappa}_{{{xy}}}}$ such that $$\begin{aligned}
{P_{\xi}}&={\textstyle \bigcup}\{{K_{{{xy}},{\lambda}}}:{\lambda}\in{\varGamma}_\xi\}\tag{5}\label{Eq:cot4}\\
\intertext{for each $\xi<\mu$. Apply ${{\varphi}_{{{xy}}}}{^{-1}}$ to both sides of
\eqref{Eq:cot4}, and use the distributivity of inverse images,
together with \eqref{Eq:cot2}, to obtain} {M_{\xi}}&={\textstyle \bigcup}\{{H_{{{xy}},{\lambda}}}:{\lambda}\in{\varGamma}_\xi\}\tag{6}\label{Eq:cot5}\\ \intertext{for
$\xi<\mu${\textnormal{{{\hspace*{.5pt}}}.\ }}Carry out a completely analogous argument with
${H_{{{yz}}}}$ in place of $K_{{{xy}}}$ to obtain a partition $\langle
{\varDelta}_\xi:\xi<\mu\rangle$ of ${{\kappa}_{{{yz}}}}$ such that} {P_{\xi}}&={\textstyle \bigcup}\{{H_{{{yz}},{\lambda}}}:{\lambda}\in{\varDelta}_\xi\}\tag{7}\label{Eq:cot6}\\
\intertext{for each $\xi<\mu$. Apply ${{\varphi}_{{{yz}}}}$ to both sides of
\eqref{Eq:cot6}, and use the distributivity of forward images, to
obtain} {N_{\xi}}&={\textstyle \bigcup}\{{K_{{{yz}},{\lambda}}}:{\lambda}\in{\varDelta}_\xi\}\tag{8}\label{Eq:cot7}\end{aligned}$$ for $\xi<\mu$.
It is well known from group theory that the intersection of the two normal subgroups $K_{{{xy}}}$ and ${H_{{{yz}}}}$ in ${{G}_{y}}$ is again a normal subgroup in ${{G}_{y}}$, and that a coset system for this intersection is just the system of intersecting cosets, $$\langle {K_{{{xy}},{\lambda}}}\cap{H_{{{yz}},\chi}}:{\xi}<\mu\text{ and }{\lambda}\in {{\varGamma}_{\xi}}\text{ and
}\chi\in{{\varDelta}_{\xi}}\,\rangle{\textnormal{{{\hspace*{.5pt}}}.\ }}\tag{9}\label{Eq:cot8}$$ In particular, the cosets in are non-empty and mutually disjoint. Moreover,$$\begin{gathered}
{P_{\xi}} ={P_{\xi}}\cap{P_{\xi}} =({\textstyle \bigcup}\{{K_{{{xy}},{\lambda}}}:{\lambda}\in{\varGamma}_\xi\})\cap
({\textstyle \bigcup}\{{H_{{{yz}},{\lambda}}}:{\lambda}\in{\varDelta}_\xi\})\\
= {\textstyle \bigcup}\{
{K_{{{xy}},{\lambda}}}\cap{H_{{{yz}},\chi}}:{\lambda}\in {{\varGamma}_{\xi}}\text{ and
}\chi\in{{\varDelta}_{\xi}}\}
\end{gathered}$$ for each ${\xi}<\mu$, by , , and the distributivity of intersection. The composition of the relations $$\tag{10}\label{Eq:cot9} (H_{{{xy}},{\delta}}
{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}H_{{{xy}},{\alpha}}{^{-1}}) \times (K_{{{xy}},{\delta}} \cap H_{{{yz}},{\chi}})
\quad\text{and}\quad(K_{{{xy}},{\zeta}} \cap H_{{{yz}},{\vartheta}}) \times
(K_{{{yz}},{\vartheta}} {\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}K_{{{yz}},{\beta}})$$ is empty if either ${\delta}\not={\zeta}$ or ${\chi}\not={\vartheta}$, and it is $$\tag{11}\label{Eq:cot10}
(H_{{{xy}},{\delta}} {\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}H_{{{xy}},{\alpha}}{^{-1}}) \times (K_{{{yz}},{\chi}} {\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}K_{{{yz}},{\beta}})$$ when ${\delta}={\zeta}$ and ${\chi}={\vartheta}$.
If ${\rho}<\mu$ and ${\delta}\in{{\varGamma}_{{\rho}}}$, then $$K_{{{xy}},{\delta}} = K_{{{xy}},{\delta}} \cap P_{\rho}= K_{{{xy}},{\delta}}\cap
({{\hspace*{.1pt}}}{\textstyle \bigcup}_{{\chi}\in{{\varDelta}_{{\rho}}}} H_{{{yz}},{\chi}} ) =
{\textstyle \bigcup}_{{\chi}\in{{\varDelta}_{{\rho}}}} (K_{{{xy}},{\delta}}\cap
H_{{{yz}},{\chi}}){\textnormal{,}\ }\tag{12}\label{Eq:cot11}$$ by , , and the distributivity of intersection. A completely analogous argument shows that $$H_{{{yz}},{\vartheta}} = {\textstyle \bigcup}_{{\zeta}\in{{\varGamma}_{{\rho}}}} K_{{{xy}},{\zeta}}\cap
H_{{{yz}},{\vartheta}}\tag{13}\label{Eq:cot13.0}$$ for ${\rho}<\mu$ and ${\vartheta}\in{{\varDelta}_{{\rho}}}$.
Without any special assumptions on $A$, we now prove that$$\begin{gathered}
{K_{{{xy}},{\alpha}}}{\subseteq}{P_{\xi}}\ \text{ and }\ {H_{{{yz}},{\beta}}}{\subseteq}{P_{{\sigma}}}\ \text{ implies }\\ R_{{{{xy}}},{{\alpha}}} {\mid}R_{{{{yz}}},{{\beta}}} = {\textstyle \bigcup}_{{\rho}<\mu}{M_{{\rho}}}\times({N_{{\rho}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}N_{\xi}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}N_{\sigma}).\tag{14}\label{Eq:cot12}\end{gathered}$$
Successive transformations of terms lead to the following values for $R_{{{{xy}}},{{\alpha}}} {\mid}R_{{{{yz}}},{{\beta}}}$. $$\big[{\textstyle \bigcup}_{ {\delta}<{{\kappa}_{{{xy}}}}}
H_{{{xy}},{\delta}} \times (K_{{{xy}},{\delta}} {\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}K_{{{xy}},{\alpha}}) \big]
{\bigm\vert}\big[{\textstyle \bigcup}_{ {\vartheta}<{{\kappa}_{{{yz}}}}} H_{{{yz}},{\vartheta}} \times
(K_{{{yz}},{\vartheta}} {\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}K_{{{yz}},{\beta}}) \big]{\textnormal{,}\ }$$ by the definitions of $R_{{{{xy}}},{{\alpha}}}$ and $R_{{{{yz}}},{{\beta}}}$. $$\big[{\textstyle \bigcup}_{ {\rho}< \mu}{\textstyle \bigcup}_{{\delta}\in{{\varGamma}_{{\rho}}}}
H_{{{xy}},{\delta}} \times (K_{{{xy}},{\delta}} {\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}K_{{{xy}},{\alpha}}) \big]
{\bigm\vert}\big[{\textstyle \bigcup}_{ {\gamma}<
\mu}{\textstyle \bigcup}_{{\vartheta}\in{{\varDelta}_{{\gamma}}}} H_{{{yz}},{\vartheta}} \times
(K_{{{yz}},{\vartheta}} {\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}K_{{{yz}},{\beta}}) \big]{\textnormal{,}\ }$$ because the sets ${\varGamma}_\rho$ (for $\rho<\mu$) and ${\varDelta}_{\gamma}$ (for ${\gamma}<\mu$) partition ${{\kappa}_{{{xy}}}}$ and ${{\kappa}_{{{yz}}}}$ respectively. $$\big[ {\textstyle \bigcup}_{{\rho},{\delta}} (H_{{{xy}},{\delta}} {\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}H_{{{xy}},{\alpha}}{^{-1}}) \times K_{{{xy}},{\delta}} \big]{\bigm\vert}\big[
{\textstyle \bigcup}_{{\gamma},{\vartheta}} H_{{{yz}},{\vartheta}} \times (K_{{{yz}},{\vartheta}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}K_{{{yz}},{\beta}}) \big]{\textnormal{,}\ }$$ by Lemma \[L:PQnormsg\](i)[[. ]{.nodecor}]{}$$\begin{gathered}
\big[ {\textstyle \bigcup}_{{\rho},{\delta}} (H_{{{xy}},{\delta}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}H_{{{xy}},{\alpha}}{^{-1}})\times (\,{\textstyle \bigcup}_{{\chi}\in{{\varDelta}_{{\rho}}}}
K_{{{xy}},{\delta}}\cap H_{{{yz}},{\chi}})\big]{\bigm\vert}\\ \big[
{\textstyle \bigcup}_{{\gamma},{\vartheta}} (\,{\textstyle \bigcup}_{{\zeta}\in{{\varGamma}_{{\gamma}}}}
K_{{{xy}},{\zeta}}\cap H_{{{yz}},{\vartheta}}) \times (K_{{{yz}},{\vartheta}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}K_{{{yz}},{\beta}})\big]{\textnormal{,}\ }\end{gathered}$$ by and [[. ]{.nodecor}]{}$$\begin{gathered}
\big[ {\textstyle \bigcup}_{{\rho},{\delta},{\chi}} (H_{{{xy}},{\delta}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}H_{{{xy}},{\alpha}}{^{-1}})
\times (K_{{{xy}},{\delta}}\cap H_{{{yz}},{\chi}})\big]{\bigm\vert}\\ \big[
{\textstyle \bigcup}_{{\gamma},{\vartheta},{\zeta}} (K_{{{xy}},{\zeta}}\cap H_{{{yz}},{\vartheta}}) \times
(K_{{{yz}},{\vartheta}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}K_{{{yz}},{\beta}})\big]{\textnormal{,}\ }\end{gathered}$$ by the distributivity of Cartesian multiplication. $$\begin{gathered}
{\textstyle \bigcup}_{{\rho},{\delta},{\chi},{\gamma},{\vartheta},{\zeta}}
[ (H_{{{xy}},{\delta}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}H_{{{xy}},{\alpha}}{^{-1}}) \times (K_{{{xy}},{\delta}}\cap
H_{{{yz}},{\chi}})]{\bigm\vert}\\ [ (K_{{{xy}},{\zeta}}\cap H_{{{yz}},{\vartheta}}) \times
(K_{{{yz}},{\vartheta}} {\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}K_{{{yz}},{\beta}}) ]{\textnormal{,}\ }\end{gathered}$$ by the distributivity of relational composition. The relational composition inside the brackets of the preceding union is precisely of the form . Apply the conclusions of about this relative product, and in particular use , the distributivity of complex group composition, , and to obtain $$\begin{aligned}
R_{{{{xy}}},{{\alpha}}} {\bigm\vert}R_{{{{yz}}},{{\beta}}} =& {\textstyle \bigcup}_{{\rho},{\delta},{\chi}}
(H_{{{xy}},{\delta}} {\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}H_{{{xy}},{\alpha}}{^{-1}}) \times (K_{{{yz}},{\chi}} {\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}K_{{{yz}},{\beta}}) \\ =& {\textstyle \bigcup}_{\rho}[ (\,{\textstyle \bigcup}_{\delta}H_{{{xy}},{\delta}}){\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}H_{{{xy}},{\alpha}}{^{-1}}] \times [ (\,{\textstyle \bigcup}_{\chi}K_{{{yz}},{\chi}}) {\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}K_{{{yz}},{\beta}}] \\ =& {\textstyle \bigcup}_{\rho}(M_{\rho}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}H_{{{xy}},{\alpha}}{^{-1}}) \times (N_{\rho}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}K_{{{yz}},{\beta}}) \\ =&
{\textstyle \bigcup}_{\rho}(M_{\rho}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}M_{\xi}{^{-1}}) \times (N_{\rho}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}N_{\sigma}) \\
=& {\textstyle \bigcup}_{\rho}M_{\rho}\times (N_{\rho}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}N_{\xi}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}N_{\sigma}){\textnormal{{{\hspace*{.5pt}}}.\ }}\end{aligned}$$ The fourth step is a consequence of the following well-known property of cosets: $K_{{{yz}}}$ is a normal subgroup of ${{N}_{0}}$, so if the coset ${K_{{{yz}},{\beta}}}$ is included in the coset ${{N}_{{\sigma}}}$, then $${{N}_{{\rho}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{K_{{{yz}},{\beta}}}={{N}_{{\rho}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{{N}_{{\sigma}}}$$ (and similarly for the passage from ${{M}_{{\rho}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{xy}},\alpha}}{^{-1}}$ to ${{M}_{{\rho}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{{M}_{\xi}}{^{-1}}$). This completes the proof of .
Assume $$\tag{15}\label{Eq:cot14}
{H_{{{xz}}}}{\subseteq}{M_{0}}={{{\varphi}}_{{{xy}}}}{^{-1}}[K_{{{xy}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{yz}}}}]{\textnormal{{{\hspace*{.5pt}}}.\ }}$$ There must then be a partition $\langle{{\varPsi}_{{\rho}}}:{\rho}<\mu\rangle$ of ${{\kappa}_{{{xz}}}}$ such that $$\tag{16}\label{Eq:cot15}
{M_{{\rho}}} = {\textstyle \bigcup}_{{\delta}\in{{\varPsi}_{{\rho}}}}{H_{{{xz}},{\delta}}}$$ for each ${\rho}<\mu$. Apply ${{\varphi}_{{{xz}}}}$ to both sides of , and use the correspondence $$\tag{17}\label{Eq:cot16}
{{\varphi}_{{{xz}}}}{{\hspace*{.1pt}}}[{H_{{{xz}},{\delta}}}]={K_{{{xz}},{\delta}}}$$ and the distributivity of forward images, to see that the function ${{\hat{\varphi}}_{{{xz}}}}$ defined by $$\tag{18}\label{Eq:cot19.0}
{{\hat{\varphi}}_{{{xz}}}}({M_{{\rho}}})={{{\varphi}}_{{{xz}}}}[{M_{{\rho}}}]={{{\varphi}}_{{{xz}}}}[{{\hspace*{.1pt}}}{\textstyle \bigcup}_{\delta\in
{{{\varPsi}}_{{\rho}}}}{H_{{{xz}},{\delta}}}]={\textstyle \bigcup}_{{\delta}\in{{{\varPsi}}_{{\rho}}}}{K_{{{xz}},{\delta}}}$$ is a well-defined isomorphism on the quotient group ${G_{x}}/{M_{0}}$. A simple computation using and shows that $$\tag{19}\label{Eq:cot13}
({{\hat{\varphi}}_{{{xy}}}}{\mid}{{\hat{\varphi}}_{{{yz}}}})({M_{{\rho}}})={{\hat{\varphi}}_{{{yz}}}}\bigl({{\hat{\varphi}}_{{{xy}}}}({M_{{\rho}}}))
={{\hat{\varphi}}_{{{yz}}}}({P_{{\rho}}})={N_{{\rho}}}$$ for each ${\rho}<\mu$. Combine and to conclude that the following three conditions are equivalent: $$ {{\hat{\varphi}}_{{{xy}}}}{\mid}{{\hat{\varphi}}_{{{yz}}}} ={{\hat{\varphi}}_{{{xz}}}}{\textnormal{,}\ }\quad {{\hat{\varphi}}_{{{xz}}}}({M_{{\rho}}})={N_{{\rho}}}\text{ for all ${\rho}$}{\textnormal{,}\ }\quad {N_{{\rho}}}={\textstyle \bigcup}_{{\delta}\in{{\varPsi}_{{\rho}}}}{K_{{{xz}},{\delta}}}\text{ for all
${\rho}$}{\textnormal{{{\hspace*{.5pt}}}.\ }}\tag{20}\label{Eq:cot18}$$
Turn now to the task of establishing the equivalences in the theorem. Assume (iv), with the goal of deriving (iii). Fix $\alpha<{{\kappa}_{{{xy}}}}$ and ${\beta}<{{\kappa}_{{{yz}}}}$, and choose $\xi,{\sigma}<\mu$ so that ${K_{{{xy}},{\alpha}}}{\subseteq}{P_{\xi}}$ and ${H_{{{yz}},{\beta}}}{\subseteq}{P_{{\sigma}}}$. The multiplication rules for cosets imply that $${P_{{\xi}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{P_{\sigma}}={K_{{{xy}},\alpha}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{yz}},{\beta}}}{\textnormal{{{\hspace*{.5pt}}}.\ }}\tag{21}\label{Eq:cot19}$$ Choose $\pi$ so that $$\begin{aligned}
{P_{\xi}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{P_{{\sigma}}}&={P_{\pi}}{\textnormal{,}\ }\tag{22}\label{Eq:cot20}\\ \intertext{and use the
isomorphism properties of ${{\hat{\varphi}}_{{{yz}}}}$, together with
\eqref{Eq:cot3}, to obtain}
{N_{\xi}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{N_{{\sigma}}}&={N_{\pi}}{\textnormal{{{\hspace*{.5pt}}}.\ }}\tag{23}\label{Eq:cot21}\end{aligned}$$ Compute: $$\begin{aligned}
R_{{{{xy}}},{{\alpha}}} {\mid}R_{{{{yz}}},{{\beta}}}&=
{\textstyle \bigcup}_{{\rho}<\mu}{M_{{\rho}}}\times({N_{{\rho}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{N_{\pi}})\\ &= {\textstyle \bigcup}_{\rho}({{\hspace*{.1pt}}}{\textstyle \bigcup}_{{\delta}\in{{\varPsi}_{{\rho}}}} H_{{{xz}},{\delta}})
\times(N_{\rho}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}N_\pi) \\ &= {\textstyle \bigcup}_{{\rho},{\delta}}
H_{{{xz}},{\delta}}\times (N_{\rho}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}N_\pi) \\ &= {\textstyle \bigcup}_{{\rho},{\delta}}
H_{{{xz}},{\delta}}\times (K_{{{xz}},{\delta}} {\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}N_\pi)
\\ &= {\textstyle \bigcup}_{{\rho},{\delta}} H_{{{xz}},{\delta}}\times
[ K_{{{xz}},{\delta}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}( {\textstyle \bigcup}_{{\gamma}\in{{\varPsi}_{\pi}}}
K_{{{xz}},{\gamma}})]\\ &= {\textstyle \bigcup}_{{\gamma}} {\textstyle \bigcup}_{{\rho},{\delta}}
H_{{{xz}},{\delta}}\times (K_{{{xz}},{\delta}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}K_{{{xz}},{\gamma}})\\ &=
{\textstyle \bigcup}_{{\gamma}} {\textstyle \bigcup}_{{\delta}<{{\kappa}_{{{xz}}}}} H_{{{xz}},{\delta}}\times
(K_{{{xz}},{\delta}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}K_{{{xz}},{\gamma}}) \\ &= {\textstyle \bigcup}_{{\gamma}}
R_{{{{xz}}},{{\gamma}}}{\textnormal{{{\hspace*{.5pt}}}.\ }}\end{aligned}$$ The first equality uses and , the second uses , the third uses the distributivity of Cartesian multiplication, the fourth uses the multiplication rules for cosets and the inclusion of ${K_{{{xz}},{\delta}}}$ in ${N_{{\rho}}}$ (because ${\delta}$ is in ${{{\varPsi}}_{{\rho}}}$, and (iv) holds, which implies that the last equation in holds), the fifth uses the assumption of (iv), which implies that the last equation in holds with $\pi$ in place of $\rho$, the sixth uses the distributivity of complex group composition, the seventh uses the fact that the sets ${\varPsi}_{\rho}$ (for ${\rho}<\mu$) partition ${{\kappa}_{{{xz}}}}$, and the last uses the definition of $R_{{{xz}},{\gamma}}$. Summarizing, $$R_{{{{xy}}},{{\alpha}}} {\mid}R_{{{{yz}}},{{\beta}}}={\textstyle \bigcup}_{{\gamma}\in {{\varPsi}_{\pi}}} R_{{{{xz}}},{{\gamma}}}{\textnormal{{{\hspace*{.5pt}}}.\ }}\tag{24}\label{Eq:cot22}$$
In order to complete the derivation of (iii), it is necessary to characterize the relations $R_{{{xz}},{\gamma}}$ such that ${\gamma}\in{{\varPsi}_{\pi}}$. First, $${M_{\pi}}={{\varphi}_{{{xy}}}}{^{-1}}[{P_{\pi}}]={{\varphi}_{{{xy}}}}{^{-1}}[{P_{\xi}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{P_{{\sigma}}}]={{\varphi}_{{{xy}}}}{^{-1}}[{K_{{{xy}},\alpha}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{yz}},{\beta}}}],
\tag{25}\label{Eq:cot23}$$ by (with $\pi$ in place of $\rho$), , and . Second, $${M_{\pi}} =
{{\varphi}_{{{xz}}}}{^{-1}}[{N_{\pi}}]={{\varphi}_{{{xz}}}}{^{-1}}[{{\hspace*{.1pt}}}{\textstyle \bigcup}_{{\gamma}\in{{\varPsi}_{\pi}}}{K_{{{xz}},{\gamma}}}]={{\hspace*{.1pt}}}{\textstyle \bigcup}_{{\gamma}\in{{\varPsi}_{\pi}}}{{\varphi}_{{{xz}}}}{^{-1}}[{K_{{{xz}},{\gamma}}}]={\textstyle \bigcup}_{{\gamma}\in{{\varPsi}_{\pi}}}{H_{{{xz}},{\gamma}}}{\textnormal{,}\ }\tag{26}\label{Eq:cot24}$$ by the assumption in (iv), which implies that the second and third equations in hold, the distributivity of inverse images, and .
{H_{{{xz}},{\gamma}}}{\subseteq}{M_{\pi}}{\textnormal{,}\ }\qquad {\gamma}\in {{\varPsi}_{\pi}}{\textnormal{{{\hspace*{.5pt}}}.\ }}$$ The equivalence of the first and last formulas permits us to rewrite in the desired form of (iii): $$R_{{{xy}}, \alpha}{\mid}R_{{{yz}},{\beta}}
= {\textstyle \bigcup}\{R_{{{xz}},{\gamma}} : {\gamma}<{{\kappa}_{{{xz}}}}\text{ and }{H_{{{xz}},{\gamma}}}{\subseteq}{{\varphi}_{{{xy}}}}{^{-1}}[{K_{{{xy}},\alpha}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{yz}},{\beta}}}]\}{\textnormal{{{\hspace*{.5pt}}}.\ }}$$
The implication from (iii) to (ii) follows from the definition of $A$ as the set of arbitrary unions of atomic relations (see Boolean Algebra Theorem \[T:disj\]), and the implication from (ii) to (i) is obvious. Consider finally the implication from (i) to (iv). Under the assumption of (i), the definition of $A$ implies that $R_{{{xy}},0}{\mid}R_{{{yz}}, 0}$ must be a union of atomic relations of the form $R_{{{xz}},\zeta}$. In more detail, the inclusions and the equality $${K_{{{xy}},0}}{\subseteq}{P_{0}},\qquad {H_{{{yz}},0}}{\subseteq}{P_{0}},\qquad {N_{{\rho}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{N_{0}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{N_{0}}={N_{{\rho}}}$$ (which hold by , , and the fact that ${N_{0}}$ is the identity coset), together with , imply (without using the hypothesis of (i)) that $$\tag{27}\label{Eq:cot25}
R_{{{xy}}, 0}{\mid}R_{{{yz}}, 0}={\textstyle \bigcup}_{{\rho}<\mu}{M_{{\rho}}}\times {N_{{\rho}}}{\textnormal{{{\hspace*{.5pt}}}.\ }}$$ The cosets ${M_{{\rho}}}$ and ${N_{{\rho}}}$ are subsets of ${G_{x}}$ and ${G_{z}}$ respectively, for each ${\rho}<\mu$ (see and ), so the right-hand side—and therefore also the left-hand side—of must be a subset of the rectangle ${{G_{x}}\times {G_{z}}}$. The only atomic relations in $A$ that are not disjoint from this rectangle are those of the form $R_{{{xz}},\zeta}$. Conclusion: there is a subset ${\varPhi}$ of ${{\kappa}_{{{xz}}}}$ such that $$\tag{28}\label{Eq:cot26}
R_{{{xy}}, 0}{\mid}R_{{{yz}}, 0}={\textstyle \bigcup}_{\zeta\in{\varPhi}}R_{{{xz}},\zeta}.$$
The pair ${({e_{x}}\,,{e_{z}})}$ belongs to the rectangle ${M_{0}}\times {N_{0}}$, and therefore also to the composition $R_{{{xy}},0}{\mid}R_{{{yz}}, 0}$, by . The pair also belongs to the rectangle $${H_{{{xz}},0}}\times{K_{{{xz}},0}} = {H_{{{xz}}}}\times K_{{{xz}}}{\textnormal{,}\ }$$ and therefore to the relation $$\tag{29}\label{Eq:cot27}
R_{{{xz}}, 0}={\textstyle \bigcup}_{{\gamma}<{{\kappa}_{{{xz}}}}}{H_{{{xz}},{\gamma}}}\times{K_{{{xz}},{\gamma}}}{\textnormal{{{\hspace*{.5pt}}}.\ }}$$ The relations of the form $R_{{{xz}}, \zeta}$ are pairwise disjoint, and $R_{{{xz}}, 0}$ is the only one that contains the pair ${({e_{x}}\,,{e_{z}})}$. In view of , the only way this can happen is if $0$ is one of the indices in ${\varPhi}$, so that $$R_{{{xz}}, 0}{\subseteq}R_{{{xy}}, 0}{\mid}R_{{{yz}},0}
\tag{30}\label{Eq:cot28}{\textnormal{{{\hspace*{.5pt}}}.\ }}$$Use and to rewrite in the form $${\textstyle \bigcup}_{{\gamma}<{{\kappa}_{{{xz}}}}}{H_{{{xz}},{\gamma}}}\times{K_{{{xz}},{\gamma}}}{\subseteq}{\textstyle \bigcup}_{{\rho}<\mu}{M_{{\rho}}}\times {N_{{\rho}}}{\textnormal{{{\hspace*{.5pt}}}.\ }}\tag{31}\label{Eq:cot28.0}$$
In view of Lemma \[L:rect2\], the inclusion in implies that for every ${\gamma}<{{\kappa}_{{{xz}}}}$ there is a ${\rho}<\mu$ such that $${H_{{{xz}},{\gamma}}}{\subseteq}{M_{{\rho}}}\qquad\text{and}\qquad{K_{{{xz}},{\gamma}}}{\subseteq}{N_{{\rho}}}{\textnormal{{{\hspace*{.5pt}}}.\ }}\tag{32}
\label{Eq:cot29}$$ In particular, when ${\gamma}=0$, the subgroup ${H_{{{xz}},0}}$ is included in ${M_{{\rho}}}$ for some ${\rho}<\mu$. This inclusion forces the group identity element $e_x$ to belong to ${M_{{\rho}}}$, since it belongs to ${H_{{{xz}},0}}$. The only coset of ${M_{0}}$ that includes the group identity element is ${M_{0}}$ itself, so ${\rho}=0$, that is to say, holds.
On the basis of alone, we saw that a partition $\langle{{\varPsi}_{{\rho}}}:{\rho}<\mu\rangle$ of ${{\kappa}_{{{xz}}}}$ exists for which holds. The derivations of – are based only on and , so these statements hold in the present situation as well. It is evident from that the coset ${H_{{{xz}},{\delta}}}$ is included in ${M_{{\rho}}}$ for each ${\delta}\in{{\varPsi}_{{\rho}}}$, and therefore the corresponding coset ${K_{{{xz}},{\delta}}}$ must be included in ${N_{{\rho}}}$ for each ${\delta}\in{{\varPsi}_{{\rho}}}$ by . Thus,$${\textstyle \bigcup}_{{\delta}\in{{\varPsi}_{{\rho}}}}{K_{{{xz}},{\delta}}}{\subseteq}{N_{{\rho}}}\tag{33}\label{Eq:cot30}$$ for each ${\rho}<\mu$. The cosets ${K_{{{xz}},{\delta}}}$ (for ${\rho}<\mu$ and $\delta$ in ${{{\varPsi}}_{{\rho}}}$) form a partition of ${G_{z}}$, as do the cosets ${N_{{\rho}}}$ (for ${\rho}<\mu$). These two facts force the inclusion in to be an equality. Use this equality and the equivalence of the first and third formulas in to conclude that ${{\hat{\varphi}}_{{{xy}}}}{\mid}{{\hat{\varphi}}_{{{yz}}}}
={{\hat{\varphi}}_{{{xz}}}}$, as desired in (iv).
Turn now to the final assertion of the theorem. If $A$ is closed under relational composition, then (ii) holds for all pairs ${(x,y)}$ and ${(y,z)}$ in ${\mathcal{E}}$, so that (iv) must hold by the equivalence of (ii) and (iv) proved above.
On the other hand, if (iv) holds for all pairs ${(x,y)}$ and ${(y,z)}$ in ${\mathcal{E}}$, then (ii) holds as well. Combine this with Lemma \[L:emptycomp\] to conclude that $A$ is closed under the composition of any two of its atomic relations. The elements in $A$ are just the various possible unions of atomic relations, and the composition of two such unions is a union of compositions of atomic relations, by the distributivity of relational composition, and hence a union of elements in $A$. Since $A$ is closed under arbitrary unions, it follows that the composition of any two elements in $A$ is again an element in $A$, as was to be shown.
Notice that part (iii) of the previous theorem provides a concrete way of computing the composition of any two relations $R_{{{xy}},\alpha}$ and $R_{{{yz}},{\beta}}$ in terms of the structure of the quotient group ${G_{y}}/(K_{{{xy}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{yz}}}})$, the mapping ${{\varphi}_{{{xy}}}}$, and the cosets ${H_{{{xz}},{\gamma}}}$. One first computes the complex product ${K_{{{xy}},{\alpha}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{yz}},{\beta}}}$, and then the inverse image of this complex under ${{\varphi}_{{{xy}}}}$. This inverse image is a union of cosets ${H_{{{xz}},{\gamma}}}$, and one computes the set ${\varGamma}$ of indices ${\gamma}$ for which the corresponding cosets are part of this union.
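To make the recipe concrete, here is a minimal computational sketch in Python. It is only an illustration of the three steps just described, under simplifying assumptions: finite groups are represented as Python sets of elements, the multiplication of $G_y$ and the inverse-image operation of $\varphi_{xy}$ are supplied by the caller, and all function and variable names are ours rather than notation from this paper.

```python
def composition_indices(K_xy_alpha, H_yz_beta, mul_y, phi_xy_preimage, H_xz_cosets):
    """Return the set of indices gamma with R_{xz,gamma} below R_{xy,alpha} | R_{yz,beta}.

    K_xy_alpha, H_yz_beta : subsets of G_y (a coset of K_xy and a coset of H_yz)
    mul_y                 : the group multiplication of G_y
    phi_xy_preimage       : sends a union of K_xy-cosets of G_y to its inverse
                            image under phi_xy, a subset of G_x
    H_xz_cosets           : dict sending each index gamma to the coset H_{xz,gamma}
    """
    # Step 1: the complex product K_{xy,alpha} o H_{yz,beta}, computed inside G_y.
    product = {mul_y(k, h) for k in K_xy_alpha for h in H_yz_beta}
    # Step 2: its inverse image under phi_xy, a subset of G_x.
    preimage = phi_xy_preimage(product)
    # Step 3: collect the indices gamma whose coset H_{xz,gamma} lies in the preimage.
    return {gamma for gamma, coset in H_xz_cosets.items() if coset <= preimage}


# Degenerate check: G_x = G_y = G_z = Z_4 (written additively) with all the
# distinguished subgroups trivial, so phi_xy acts as the identity map and
# every coset is a singleton.
mul = lambda a, b: (a + b) % 4
singleton_cosets = {g: {g} for g in range(4)}
print(composition_indices({1}, {2}, mul, lambda s: set(s), singleton_cosets))  # {3}
```

In this degenerate case the atomic relations are just the Cayley representations, and the output $\{3\}$ says that composing the translations by $1$ and by $2$ yields the translation by $3$, as expected.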
It is natural to ask whether, in condition (i) of the Composition Theorem, one can replace the condition that $R_{{{xy}}, 0} {\mid}R_{{{yz}},
0}$ be in $A$ by the condition that $R_{{{xy}}, \alpha} {\mid}R_{{{yz}},
{\beta}}$ be in $A$ for some $\alpha<{{\kappa}_{{{xy}}}}$ and some ${\beta}<{{\kappa}_{{{yz}}}}$. It turns out that an additional hypothesis is needed for this to be true. We will return to this matter at the end of the next section.
\[L:domains\] Suppose ${(x,y)} $ and ${(y,z)} $ are in ${\mathcal{E}}$[[. ]{.nodecor}]{}If ${{\varphi}_{{{yx}}}}
={{\varphi}_{{{xy}}}}{^{-1}}$[[,]{.nodecor} ]{}then $$\begin{gathered}
{H_{{{xz}}}}{\subseteq}{{\varphi}_{{{xy}}}}{^{-1}}[K_{{{xy}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{yz}}}}]\quad\text{and}\quad
{H_{{{yz}}}}{\subseteq}{{\varphi}_{{{yx}}}}{^{-1}}[K_{{{yx}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{xz}}}}]\\ \intertext{together imply}
{{\varphi}_{{{xy}}}} [{H_{{{xy}}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{xz}}}}]=K_{{{xy}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{yz}}}}={H_{{{yx}}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{yz}}}}.\end{gathered}$$
The function ${{\varphi}_{{{xy}}}}$ is an isomorphism from ${G_{x}}/{H_{{{xy}}}}$ onto ${G_{y}}/K_{{{xy}}}$[[,]{.nodecor} ]{}and the complex product $K_{{{xy}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{yz}}}}$ is a normal subgroup of ${G_{y}}$ that includes $K_{{{xy}}}$. Therefore, the inverse image $$\label{Eq:domains1}\tag{1}{{\varphi}_{{{xy}}}}{^{-1}}[K_{{{xy}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{yz}}}}]$$ is a normal subgroup of ${G_{x}}$ that includes ${{\varphi}_{{{xy}}}}{^{-1}}[K_{{{xy}}}]={H_{{{xy}}}}$. By assumption, also includes ${H_{{{xz}}}}$, so $$\begin{aligned}
{H_{{{xy}}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{xz}}}}&{\subseteq}{{\varphi}_{{{xy}}}}{^{-1}}[K_{{{xy}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{yz}}}}].\\ \intertext{Apply ${{\varphi}_{{{xy}}}}$ to
both sides of this inclusion to obtain}
{{\varphi}_{{{xy}}}}[{H_{{{xy}}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{xz}}}}]&{\subseteq}K_{{{xy}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{yz}}}}.\label{Eq:domains2}\tag{2}\\ \intertext{The same
argument with the roles of $x$ and $y$ reversed yields}
{{\varphi}_{{{yx}}}}[{H_{{{yx}}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{yz}}}}]&{\subseteq}K_{{{yx}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{xz}}}}.\label{Eq:domains3}\tag{3}\end{aligned}$$ The assumption ${{\varphi}_{{{yx}}}}={{\varphi}_{{{xy}}}}{^{-1}}$ implies that $$\label{Eq:domains5}\tag{4}{H_{{{yx}}}}=K_{{{xy}}}\qquad
\text{and}\qquad K_{{{yx}}}={H_{{{xy}}}}$$ (see Convention \[Co:convention\]). Use these three equalities to rewrite the inclusion in as $$\tag{5}\label{Eq:domains6}{{\varphi}_{{{xy}}}}{^{-1}}[K_{{{xy}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{yz}}}}]{\subseteq}{H_{{{xy}}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{xz}}}}.$$ Combine with and to arrive at the desired conclusion.
\[L:sfimage\] If ${{\varphi}_{{{yx}}}}
={{\varphi}_{{{xy}}}}{^{-1}}$ for all pairs ${(x,y)}$ in ${\mathcal{E}}$[[,]{.nodecor} ]{}and if $${{\varphi}_{{{xy}}}} [{H_{{{xy}}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{xz}}}}]=K_{{{xy}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{yz}}}}$$ for all ${(x,y)}$ and ${(y,z)}$ in ${\mathcal{E}}$[[,]{.nodecor} ]{}then $${{\varphi}_{{{yz}}}} [K_{{{xy}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{yz}}}}]=K_{{{xz}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}K_{{{yz}}}\qquad\text{and}\qquad {{\varphi}_{{{xz}}}}[{H_{{{xy}}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{xz}}}}]=K_{{{xz}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}K_{{{yz}}}{\textnormal{{{\hspace*{.5pt}}}.\ }}$$
To derive the first equation, observe that if the pairs ${(x,y)}$ and ${(y,z)}$ are in ${\mathcal{E}}$ then so are the pairs ${(x,z)}$ and ${(z,x)}$. Use the assumption that ${{\varphi}_{{{yx}}}} ={{\varphi}_{{{xy}}}}{^{-1}}$, Convention \[Co:convention\], the commutativity of normal subgroups, and the hypotheses of the lemma (with $y$, $z$, and $x$ in place of $x$, $y$, and $z$ respectively) to arrive at $${{\varphi}_{{{yz}}}}[{K_{{{xy}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{yz}}}}}]={{\varphi}_{{{yz}}}}[{H_{{{yx}}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{yz}}}}]
={{\varphi}_{{{yz}}}}[{H_{{{yz}}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{yx}}}}]=K_{{{yz}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{zx}}}}=K_{{{yz}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}K_{{{xz}}}{\textnormal{{{\hspace*{.5pt}}}.\ }}$$ An entirely analogous argument yields $${{\varphi}_{{{xz}}}}[{{H_{{{xy}}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{xz}}}}}]={{\varphi}_{{{xz}}}}[{H_{{{xz}}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{xy}}}}]=K_{{{xz}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{zy}}}}=K_{{{xz}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}K_{{{yz}}}{\textnormal{{{\hspace*{.5pt}}}.\ }}$$
\[T:domainofmap\] If $A$ is closed under converse and composition[[,]{.nodecor} ]{}then $$\begin{gathered}
{{\varphi}_{{{xy}}}} [{H_{{{xy}}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{xz}}}}]=K_{{{xy}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{yz}}}}{\textnormal{,}\ }\qquad {{\varphi}_{{{yz}}}}
[K_{{{xy}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{yz}}}}]=K_{{{xz}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}K_{{{yz}}}{\textnormal{,}\ }\\
{{\varphi}_{{{xz}}}}[{H_{{{xy}}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{xz}}}}]=K_{{{xz}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}K_{{{yz}}}\end{gathered}$$ for all ${(x,y)}$ and ${(y,z)}$ in ${\mathcal{E}}$[[.]{.nodecor} ]{}In other words,
1. ${{\hat{\varphi}}_{{{xy}}}}$ maps ${G_{x}}/({H_{{{xy}}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{xz}}}})$ isomorphically to ${G_{y}}/(K_{{{xy}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{yz}}}})$[[,]{.nodecor} ]{}
2. ${{\hat{\varphi}}_{{{yz}}}}$ maps ${G_{y}}/(K_{{{xy}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{yz}}}})$ isomorphically to ${G_{z}}/(K_{{{xz}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}K_{{{yz}}})$[[,]{.nodecor} ]{}
3. ${{\hat{\varphi}}_{{{xz}}}}$ maps ${G_{x}}/({H_{{{xy}}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{xz}}}})$ isomorphically to ${G_{z}}/(K_{{{xz}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}K_{{{yz}}})$[[.]{.nodecor} ]{}
Assume $A$ is closed under converse and composition, and consider pairs ${(x,y)}$ and ${(y,z)}$ in ${\mathcal{E}}$. The assumed closure of $A$ under converse means that part (iii) of the Converse Theorem \[T:convthm1\] may be applied to obtain $$\tag{1}\label{Eq:image1}{{\varphi}_{{{yx}}}} ={{\varphi}_{{{xy}}}}{^{-1}}{\textnormal{{{\hspace*{.5pt}}}.\ }}$$ The assumed closure of $A$ under composition means that part (iv) of the Composition Theorem \[T:compthm\] may be applied to obtain $${H_{{{xz}}}}{\subseteq}{{\varphi}_{{{xy}}}}{^{-1}}[K_{{{xy}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{yz}}}}]\qquad\text{and}\qquad {H_{{{yz}}}}
{\subseteq}{{\varphi}_{{{yx}}}}{^{-1}}[K_{{{yx}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{xz}}}}]{\textnormal{{{\hspace*{.5pt}}}.\ }}$$ Invoke Lemma \[L:domains\] to obtain $$\tag{2}\label{Eq:image2}
{{\varphi}_{{{xy}}}} [{H_{{{xy}}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{xz}}}}]=K_{{{xy}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{yz}}}}{\textnormal{{{\hspace*{.5pt}}}.\ }}$$ This argument establishes the first equation for all pairs ${(x,y)}$ and ${(y,z)}$ in ${\mathcal{E}}$. Use Lemma \[L:sfimage\] to obtain the second and third equations.
The mappings ${{\hat{\varphi}}_{{{xy}}}}$ and ${{\hat{\varphi}}_{{{xz}}}}$ are defined to be the isomorphisms induced on the quotient group of ${G_{x}}$ modulo ${{\varphi}_{{{xy}}}}{^{-1}}[K_{{{xy}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{yz}}}}]$ by the isomorphisms ${{\varphi}_{{{xy}}}}$ and ${{\varphi}_{{{xz}}}}$ respectively, and ${{\hat{\varphi}}_{{{yz}}}}$ is defined to be the isomorphism induced on the quotient group of ${G_{y}}$ modulo $
K_{{{xy}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{yz}}}}$ by the isomorphism ${{\varphi}_{{{yz}}}}$, by part (iv) of the Composition Theorem \[T:compthm\]. In view of the preceding proof, this immediately gives assertions (i)–(iii) of the theorem.
Group frames
============
In the preceding section, necessary and sufficient conditions are given for a Boolean algebra $A$ of binary relations constructed from a group pair ${\mathcal{F}}$ to contain the identity relation and to be closed under the operations of relational converse and composition. In each case, one of these conditions is formulated strictly in terms of the quotient isomorphisms. It is natural to single out the group pairs that satisfy these quotient isomorphism conditions, because precisely these group pairs lead to algebras of binary relations and in fact to measurable relation algebras.
\[D:cosfra\] A *group* *frame* is a group pair $${\mathcal{F}}=(\langle {G_{x}}:x\in
I\,\rangle{\,,}\langle{\varphi}_{{xy}}:{(x,y)}\in {\mathcal{E}}\,\rangle)$$ satisfying the following *frame conditions* for all pairs ${(x,y)}$ and ${(y,z)}$ in ${\mathcal{E}}$[[. ]{.nodecor}]{}
1. ${{\varphi}_{xx}}$ is the identity automorphism of ${G_{x}}/\{{e_{x}}\}$ for all $x$[[. ]{.nodecor}]{}
2. ${{\varphi}_{{{yx}}}}={{\varphi}_{{{xy}}}}{^{-1}}$[[. ]{.nodecor}]{}
3. ${{\varphi}_{{{xy}}}}[{H_{{{xy}}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{xz}}}}] =K_{{{xy}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{yz}}}}$[[. ]{.nodecor}]{}
4. ${{\hat{\varphi}}_{{{xy}}}}{\mid}{{\hat{\varphi}}_{{{yz}}}}={{\hat{\varphi}}_{{{xz}}}}$[[.]{.nodecor} ]{}
Given a group frame ${\mathcal{F}}$, let $A$ be the collection of all possible unions of relations of the form $R_{{{xy}}, \alpha}$ for ${(x,y)}$ in ${\mathcal{E}}$ and $\alpha <{{\kappa}_{{{xy}}}}$. Call $A$ the set of *frame relations* constructed from ${\mathcal{F}}$.
\[T:closed\] If ${\mathcal{F}}$ is a group frame[[,]{.nodecor} ]{}then the set of frame relations constructed from ${\mathcal{F}}$ is the universe of a complete[[,]{.nodecor} ]{}atomic[[,]{.nodecor} ]{}measurable set relation algebra with base set and unit $$U={\textstyle \bigcup}\{{G_{x}}:x\in I\}\qquad\text{and}\qquad E={\textstyle \bigcup}\{{{G}_{x}}\times{{G}_{y}}:{(x,y)}\in{\mathcal{E}}\}$$ respectively[[. ]{.nodecor}]{}The atoms in this algebra are the relations of the form $R_{{{xy}}, \alpha}$[[,]{.nodecor} ]{}and the subidentity atoms are the relations of the form $R_{{{xx}},
0}$[[. ]{.nodecor}]{}The measure of $R_{{{xx}}, 0}$ is just the cardinality of the group ${{G}_{x}}$[[. ]{.nodecor}]{}
Let $A$ be the set of frame relations constructed from ${\mathcal{F}}$. This set is the universe of a complete and atomic Boolean algebra of binary relations with base set $U$ and unit $E$, and its atoms are the relations of the form $R_{{{xy}},\alpha}$[[,]{.nodecor} ]{}by Boolean Algebra Theorem \[T:disj\]. The identity relation ${id_{U}}$ is in $A$, and the subidentity atoms are the relations of the form $R_{{{xx}}, 0}$, by Theorem \[T:disj\], Identity Theorem \[T:identthm1\], and frame condition (i). The closure of $A$ under the operations of converse and composition follows from Converse Theorem \[T:convthm1\], Composition Theorem \[T:compthm\], and frame conditions (ii)–(iv).
The measure of a subidentity atom $R_{{{xx}}, 0}$ is, by definition, the number of non-zero functional atoms below the square $$R_{{{xx}},
0}{\mid}E{\mid}R_{{{xx}}, 0}={{G}_{x}}\times{{G}_{x}}{\textnormal{{{\hspace*{.5pt}}}.\ }}$$ These non-zero functional atoms are just the relations $R_{{{xx}}, \alpha}$ for $\alpha< {{\kappa}_{{{xx}}}}$, that is to say, they are just the Cayley representations of the elements in ${{G}_{x}}$, by Partition Lemma \[L:i-vi\]. Consequently, there are as many of them as there are elements in ${{G}_{x}}$.
The theorem justifies the following definition.
\[D:gradef\] Suppose that ${\mathcal{F}}$ is a group frame[[.]{.nodecor} ]{}The set [relation algebra]{} constructed from ${\mathcal{F}}$ in Group Frame Theorem \[T:closed\] is called the (*full*) *[group relation algebra]{}* on ${\mathcal{F}}$ and is denoted by ${\mathfrak{G}{[{\mathcal{F}}]}}$ [[(]{.nodecor}]{}and its universe by ${G{[{\mathcal{F}}]}}$[[)]{.nodecor}]{}[[.]{.nodecor} ]{}A *general group relation algebra* is defined to be an algebra that is embeddable into a full group relation algebra[[.]{.nodecor} ]{}
The task of verifying that a given group pair satisfies the frame conditions, and therefore yields a full group relation algebra, that is to say, an example of a measurable relation algebra, can be complicated and tedious. Fortunately, a few simplifications are possible. To describe them, it is helpful to assume that the group index set $I$ is linearly ordered, say by a relation $\,<\,$. Roughly speaking, under the assumption of condition (i), condition (ii) holds in general just in case it holds for each pair ${(x,y)}$ in ${\mathcal{E}}$ with $x<y$. In other words, under the assumption of (i), it is not necessary to check condition (ii) for the case $x=y$, nor is it necessary to check both cases ${(x,y)}$ and ${(y,x)}$ when $x\neq y$. Also, under the assumption of condition (i) and the modified form of condition (ii) just described, conditions (iii) and (iv) will hold in general if they hold for all pairs ${(x,y)}$ and ${(y,z)}$ in ${\mathcal{E}}$ with $x<y<z$. In other words, under the assumption of (i) and the modified (ii), it is not necessary to check conditions (iii) and (iv) in any case in which at least two of the three indices $x$, $y$, and $z$ are equal, nor is it necessary to check all six permutations of an appropriate triple ${(x,y,z)}$ of distinct indices. Here is the precise formulation of the theorem.
\[T:simpfr\] A group pair ${\mathcal{F}}$ is a group frame if and only if the following four conditions are satisfied[[. ]{.nodecor}]{}
1. ${{\varphi}_{xx}}$ is the identity automorphism of ${G_{x}}/\{{e_{x}}\}$ for every ${x}$ in $I$[[. ]{.nodecor}]{}
2. ${{\varphi}_{{{yx}}}}={{\varphi}_{{{xy}}}}{^{-1}}$ for every pair $ {(x,y)}$ in ${\mathcal{E}}$ with $x<y$[[. ]{.nodecor}]{}
3. ${{\varphi}_{{{xy}}}}[{H_{{{xy}}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{xz}}}}]=K_{{{xy}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{yz}}}}$ and ${{\varphi}_{{{yz}}}}[K_{{{xy}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{yz}}}}]=K_{{{xz}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}K_{{{yz}}}$ for all pairs ${(x,y)}$ and ${(y,z)}$ in ${\mathcal{E}}$ with $x<y<z$[[. ]{.nodecor}]{}
4. ${{\hat{\varphi}}_{{{xy}}}}{\mid}{{\hat{\varphi}}_{{{yz}}}}={{\hat{\varphi}}_{{{xz}}}}$ for all pairs ${(x,y)}$ and ${(y,z)}$ in ${\mathcal{E}}$ with $x<y<z$[[. ]{.nodecor}]{}
By its very definition, a frame must satisfy conditions (i)–(iv) of the theorem. To establish the reverse implication, suppose that $${\mathcal{F}}=(\langle {G_{x}}:x\in
I\,\rangle{\,,}\langle{\varphi}_{{xy}}:{(x,y)}\in {\mathcal{E}}\,\rangle)$$ is a group pair satisfying conditions (i)–(iv) of the theorem. It must be shown that the four frame conditions hold. Obviously, the first frame condition holds, since it coincides with condition (i) of the theorem. To verify the second frame condition, assume that ${(x,y)}$ is a pair in ${\mathcal{E}}$. If ${x}={y}$, then ${{\varphi}_{{{xy}}}}$ is the identity automorphism, by condition (i) of the theorem, and therefore $$\tag{1}\label{Eq:sf1}
{{\varphi}_{{{yx}}}}={{\varphi}_{{{xy}}}}{^{-1}}{\textnormal{{{\hspace*{.5pt}}}.\ }}$$ If $x<y$, then holds, by condition (ii) of the theorem. If $y<x$, then ${{\varphi}_{{{xy}}}}={{\varphi}_{{{yx}}}}{^{-1}}$, by condition (ii) of the theorem (with the roles of $x$ and $y$ reversed), so must also hold.
Turn now to the task of verifying the last two frame conditions. Assume that ${(x,y)}$ and ${(y,z)}$ are pairs in ${\mathcal{E}}$, and consider first the case when $x=y$. The mapping ${{\varphi}_{{{xy}}}}$ is then the identity automorphism of ${G_{x}}/\{e_x\} $, by condition (i), so that $$ {H_{{{xy}}}}={H_{{{xx}}}}=\{{{e}_{x}}\}=K_{{{xx}}}=K_{{{xy}}}{\textnormal{,}\ }\qquad
{H_{{{xz}}}}={H_{{{yz}}}}{\textnormal{,}\ }\qquad K_{{{xz}}}=K_{{{yz}}}{\textnormal{,}\ }$$ and therefore $${H_{{{xy}}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{xz}}}}={H_{{{xz}}}}{\textnormal{,}\ }\quad
K_{{{xy}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{yz}}}}={H_{{{yz}}}}={H_{{{xz}}}}{\textnormal{,}\ }\quad K_{{{xz}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}K_{{{yz}}}=K_{{{yz}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}K_{{{yz}}}=K_{{{yz}}}{\textnormal{{{\hspace*{.5pt}}}.\ }}$$ It follows that $$\begin{gathered}
{{\varphi}_{{{xy}}}}[{H_{{{xy}}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{xz}}}}]={{\varphi}_{{{xx}}}}[{H_{{{xz}}}}]={H_{{{xz}}}}=K_{{{xy}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{yz}}}}\\
\intertext{and}
{{\varphi}_{{{yz}}}}[K_{{{xy}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{yz}}}}]={{\varphi}_{{{yz}}}}[{H_{{{yz}}}}]=K_{{{yz}}}=K_{{{xz}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}K_{{{yz}}}{\textnormal{{{\hspace*{.5pt}}}.\ }}\end{gathered}$$ For the same reasons, the isomorphism ${{\hat{\varphi}}_{{{xy}}}}$ induced by ${{\varphi}_{{{xy}}}}$ on the quotient group ${G_{x}}/({H_{{{xy}}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{xz}}}}) $ must coincide with the identity automorphism of ${G_{x}}/{H_{{{xz}}}}$, the isomorphism ${{\hat{\varphi}}_{{{yz}}}}$ induced by ${{\varphi}_{{{yz}}}}$ on the quotient group ${G_{y}}/(K_{{{xy}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{yz}}}}) $ must coincide with ${{\varphi}_{{{yz}}}}$, and the isomorphism ${{\hat{\varphi}}_{{{xz}}}}$ induced by ${{\varphi}_{{{xz}}}}$ on the quotient group ${G_{x}}/({H_{{{xy}}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{xz}}}}) $ must coincide with the isomorphism ${{\varphi}_{{{xz}}}}$[[. ]{.nodecor}]{}Consequently, $${{\hat{\varphi}}_{{{xy}}}}{\mid}{{\hat{\varphi}}_{{{yz}}}}={{\varphi}_{{{yz}}}}={{\varphi}_{{{xz}}}}={{\hat{\varphi}}_{{{xz}}}}{\textnormal{{{\hspace*{.5pt}}}.\ }}$$
The case when $y=z$ is treated in a completely symmetric fashion. Consider, next, the case when $x=z$[[. ]{.nodecor}]{}The mapping ${{\varphi}_{{{xz}}}}$ is then the identity automorphism of ${G_{x}}/\{e_x\} $, by condition (i), so that $${H_{{{xz}}}}={H_{{{xx}}}}=\{{{e}_{x}}\}=K_{{{xx}}}=K_{{{xz}}}{\textnormal{,}\ }\qquad
{H_{{{yz}}}}={H_{{{yx}}}}{\textnormal{,}\ }\qquad K_{{{yz}}}=K_{{{yx}}}{\textnormal{,}\ }$$ and therefore $$\begin{gathered}
{H_{{{xy}}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{xz}}}}={H_{{{xy}}}}{\textnormal{,}\ }\quad
K_{{{xy}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{yz}}}}=K_{{{xy}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{yx}}}}=K_{{{xy}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}K_{{{xy}}} =\\
K_{{{xy}}}={H_{{{yx}}}}={H_{{{yz}}}}{\textnormal{,}\ }\quad K_{{{xz}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}K_{{{yz}}}=K_{{{yz}}}=K_{{{yx}}}{\textnormal{{{\hspace*{.5pt}}}.\ }}\end{gathered}$$ In the second string of equations, use is being made of the fact that the second frame condition holds, and therefore $K_{{{xy}}}={H_{{{yx}}}}$ (see the remark preceding Theorem \[T:convthm1\]). It follows that $$\begin{gathered}
{{\varphi}_{{{xy}}}}[{H_{{{xy}}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{xz}}}}]={{\varphi}_{{{xy}}}}[{H_{{{xy}}}}]=K_{{{xy}}}=K_{{{xy}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{yz}}}}\\
\intertext{and}
{{\varphi}_{{{yz}}}}[K_{{{xy}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{yz}}}}]={{\varphi}_{{{yx}}}}[K_{{{xy}}}]={{\varphi}_{{{yx}}}}[{H_{{{yx}}}}]=K_{{{yx}}}=K_{{{xz}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}K_{{{yz}}}{\textnormal{{{\hspace*{.5pt}}}.\ }}\end{gathered}$$ These equations imply that the isomorphism ${{\hat{\varphi}}_{{{xy}}}}$ induced by ${{\varphi}_{{{xy}}}}$ on the quotient group ${G_{x}}/({H_{{{xy}}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{xz}}}}) $ must coincide with ${{\varphi}_{{{xy}}}}$, the isomorphism ${{\hat{\varphi}}_{{{yz}}}}$ induced by ${{\varphi}_{{{yz}}}}$ on the quotient group ${G_{y}}/(K_{{{xy}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{yz}}}}) $ must coincide with ${{\varphi}_{{{yz}}}}$, which is the same as ${{\varphi}_{{{yx}}}}$ and therefore also the same as ${{\varphi}_{{{xy}}}}{^{-1}}$, by frame condition (ii) (which has been shown to hold by conditions (i) and (ii) of the theorem) and Converse Theorem \[T:convthm1\], and the isomorphism ${{\hat{\varphi}}_{{{xz}}}}$ induced by ${{\varphi}_{{{xz}}}}$ on the quotient group ${G_{x}}/({H_{{{xy}}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{xz}}}}) $ must coincide with the identity automorphism of ${G_{x}}/{H_{{{xy}}}} $[[. ]{.nodecor}]{}Consequently, $${{\hat{\varphi}}_{{{xy}}}}{\mid}{{\hat{\varphi}}_{{{yz}}}}={{\varphi}_{{{xy}}}}{\mid}{{\varphi}_{{{yx}}}}={{\varphi}_{{{xy}}}}{\mid}{{\varphi}_{{{xy}}}}{^{-1}}={{\hat{\varphi}}_{{{xz}}}}{\textnormal{{{\hspace*{.5pt}}}.\ }}$$
Assume now that the indices $x$, $y$, and $z$ are all distinct from one another. If $x<y<z$, then $$\begin{gathered}
{{\varphi}_{{{xy}}}}[{H_{{{xy}}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{xz}}}}]=K_{{{xy}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{yz}}}}{\textnormal{,}\ }\qquad {{\varphi}_{{{yz}}}}[K_{{{xy}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{yz}}}}]=K_{{{xz}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}K_{{{yz}}}{\textnormal{,}\ }\tag{2}\label{Eq:simpfr1}\\
{{\hat{\varphi}}_{{{xy}}}}{\mid}{{\hat{\varphi}}_{{{yz}}}}={{\hat{\varphi}}_{{{xz}}}}{\textnormal{,}\ }\tag{3}\label{Eq:simpfr2}\end{gathered}$$ by conditions (iii) and (iv) of the theorem, where ${{\hat{\varphi}}_{{{xy}}}}$ is the isomorphism from ${G_{{x}}}/({{H_{{{xy}}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{xz}}}}})$ to ${G_{{y}}}/({K_{{{xy}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{yz}}}}})$ that is induced by ${{\varphi}_{{{xy}}}}$, while ${{\hat{\varphi}}_{{{yz}}}}$ is the isomorphism from ${G_{{y}}}/({K_{{{xy}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{yz}}}}})$ to ${G_{{z}}}/({K_{xz}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}K_{yz}})$ that is induced by ${{\varphi}_{{{yz}}}}$. It follows from that ${{\hat{\varphi}}_{{{xz}}}}$ must be the isomorphism from ${G_{{x}}}/({{H_{{{xy}}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{xz}}}}})$ to ${G_{{z}}}/({K_{xz}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}K_{yz}})$ that is induced by ${{\varphi}_{{{xz}}}}$ (this is not part of the assumption in condition (iii)). Consequently, $$\tag{4}\label{Eq:simpfr3}
{{\varphi}_{{{xz}}}}[{H_{{{xy}}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{xz}}}}]=K_{{{xz}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}K_{{{yz}}}.$$ The corresponding equations for the remaining pairs of mappings follow readily from , , and condition (ii). In more detail, $$\tag{5}\label{Eq:simpfr10}
{{\varphi}_{{{yx}}}}={{\varphi}_{{{xy}}}}{^{-1}}{\textnormal{,}\ }\qquad{{\varphi}_{{{zy}}}}={{\varphi}_{{{yz}}}}{^{-1}}{\textnormal{,}\ }\qquad{{\varphi}_{{{zx}}}}={{\varphi}_{{{xz}}}}{^{-1}}{\textnormal{,}\ }$$ by (the already verified) frame condition (ii). In particular, $$\begin{gathered}
\tag{6}\label{Eq:simpfr4}
{H_{{{zx}}}} = K_{{{xz}}}{\textnormal{,}\ }\quad K_{{{zx}}}={H_{{{xz}}}}{\textnormal{,}\ }\quad{H_{{{zy}}}} =
K_{{{yz}}}{\textnormal{,}\ }\quad K_{{{zy}}}={H_{{{yz}}}},\\ {H_{{{yx}}}} = K_{{{xy}}}{\textnormal{,}\ }\quad
K_{{{yx}}}={H_{{{xy}}}}.\end{gathered}$$ Apply ${{\varphi}_{{{zx}}}}$ to both sides of , and use , to obtain $${{\varphi}_{{{zx}}}}[K_{{{xz}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}K_{{{yz}}}]={H_{{{xy}}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{xz}}}}.$$ With the help of , rewrite this last equation as $${{\varphi}_{{{zx}}}}[{H_{{{zx}}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{zy}}}}]=K_{{{zx}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{xy}}}}{\textnormal{{{\hspace*{.5pt}}}.\ }}$$ Use and repeatedly, together with the fact that the subgroups involved are normal, to obtain $${{\varphi}_{{{xy}}}}[K_{{{zx}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{xy}}}}]={{\varphi}_{{{xy}}}}[{H_{{{xy}}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{xz}}}}]=K_{{{xy}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{yz}}}}=K_{{{xy}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}K_{{{zy}}}=K_{{{zy}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}K_{{{xy}}}{\textnormal{{{\hspace*{.5pt}}}.\ }}$$ In other words, condition (iii) holds with the variables $x$, $y$, and $z$ replaced by $z$, $x$, and $y$ respectively. The other cases of the third frame condition are verified in a similar fashion.
Frame condition (iv) is a simple consequence of the preceding observations, together with and . For example, compose both sides of on the left with ${{\hat{\varphi}}_{{{yx}}}}$, and use , to obtain $$\tag{7}\label{Eq:simpfr5}
{{\hat{\varphi}}_{{{yx}}}}{\mid}{{\hat{\varphi}}_{{{xz}}}}={{\hat{\varphi}}_{{{yz}}}}{\textnormal{,}\ }$$ and compose both sides of on the right with ${{\hat{\varphi}}_{{{zy}}}}$, and use , to obtain $$\tag{8}\label{Eq:simpfr6}
{{\hat{\varphi}}_{{{xz}}}}{\mid}{{\hat{\varphi}}_{{{zy}}}}={{\hat{\varphi}}_{{{xy}}}}.$$ This argument shows that the two permuted versions of , the first obtained by transposing the first two indices $x$ and $y$ of the triple ${(x,y,z)}$, and the second by transposing the last two indices $y$ and $z$ of the triple, are valid in ${\mathcal{F}}$. All permutations of the triple ${(x,y,z)}$ may be obtained by composing these two transpositions in various ways. For example, if we transpose the first two indices of , permuting ${(x,y,z)}$ to ${(y,x,z)}$ and arriving at , and then transpose the last two indices of , permuting ${(y,x,z)}$ to ${(y,z,x)}$, we arrive at $${{\hat{\varphi}}_{{{yz}}}}{\mid}{{\hat{\varphi}}_{{{zx}}}}={{\hat{\varphi}}_{{{yx}}}}{\textnormal{{{\hspace*{.5pt}}}.\ }}$$ It follows that frame condition (iv) is valid in ${\mathcal{F}}$.
An examination of the preceding proof reveals that only condition (i) is used to verify the second frame condition in the case when $x=y$, and to verify the last two frame conditions when $x=y$ or $y=z$. Also, only conditions (i) and (ii) are used to verify the last two frame conditions when $x=z$. The following corollary, which will be needed in the construction of coset relation algebras, is a consequence of this observation. In formulating it and the succeeding two corollaries, we use the following simplified notation: if $f$ is the $\alpha$th element in some fixed enumeration of one of the groups ${G_{x}}$ in a group pair, then we write ${H_{{{xx}},f}}$ and $R_{{{xx}}, f}$ for ${H_{{{xx}},\alpha}}$ and $R_{{{xx}},\alpha}$ respectively.
\[C:compthma\] Let ${\mathcal{F}}$ be a group pair satisfying condition [(i)]{.nodecor} of Theorem [\[T:simpfr\]]{.nodecor}[[. ]{.nodecor}]{}The following conditions hold for all $x$ in $I$ and all pairs ${(x,y)}$ in ${\mathcal{E}}$[[. ]{.nodecor}]{}
1. $R_{{{xx}}, f}{^{-1}}=R_{{{xx}}, g}$ for $f$ in ${G_{x}}$ and $g=f{^{-1}}$[[. ]{.nodecor}]{}
2. $R_{{{xx}}, f}{\mid}R_{{{xy}},{\beta}}=R_{{{xy}},{\gamma}}$ for $f$ in ${G_{x}}$ and ${H_{{{xy}},{\gamma}}}= f {\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{xy}},{\beta}}}$[[. ]{.nodecor}]{}
3. $R_{{{xy}},\alpha}{\mid}R_{{{yy}}, g}=R_{{{xy}},{\gamma}}$ for $g$ in ${G_{y}}$ and ${K_{{{xy}},{\gamma}}}= {K_{{{xy}},\alpha}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}g$[[. ]{.nodecor}]{}
4. If ${\mathcal{F}}$ also satisfies condition [(ii)]{.nodecor} of Theorem [\[T:simpfr\]]{.nodecor}[[,]{.nodecor} ]{}then $$\begin{aligned}
R_{{{xy}},\alpha}{\mid}R_{{{yx}},{\beta}}&={\textstyle \bigcup}\{R_{{{xx}}, f} : f \in
{H_{{{xy}},\alpha}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{xy}},{\beta}}}\}\\&=\{{(g,g{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}f)}:g\in {G_{x}}\text{
and } f\in {H_{{{xy}},\alpha}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{xy}},{\beta}}}\}{\textnormal{{{\hspace*{.5pt}}}.\ }}\end{aligned}$$
In particular[[,]{.nodecor} ]{}each of these converses and compositions is in the set of frame relations[[.]{.nodecor} ]{}
As an example, we prove (iv). Write $z=x$ and use condition (i) from Theorem \[T:simpfr\] to see that $$\tag{1}\label{Eq:compthma1}
{H_{{{xz}}}}={H_{{{xx}}}}=\{e_x\}$$ and that $$\tag{2}\label{Eq:compthma2}
{{\varphi}_{{{xz}}}}={{\varphi}_{{{xx}}}}$$ is the identity automorphism of ${G_{x}}/\{e_x\}$[[. ]{.nodecor}]{}Consequently, $$\tag{3}\label{Eq:compthma3}
{H_{{{xz}},f}}={H_{{{xx}},f}}=\{f\}\qquad\text{and}\qquad R_{{{xz}}, f}=R_{{{xx}}, f}=\{{(g,g{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}f)}:g\in {G_{x}}\}$$ for every $f$ in ${G_{x}}$. The additional assumption of condition (ii) from Theorem \[T:simpfr\] implies that frame condition (ii) holds, by Theorem \[T:simpfr\], and therefore $$\tag{4}\label{Eq:compthma4}
{{\varphi}_{{{yx}}}}={{\varphi}_{{{xy}}}}{^{-1}}{\textnormal{{{\hspace*{.5pt}}}.\ }}$$ Invoke Convention \[Co:convention\] to write ${{\kappa}_{{{yx}}}}={{\kappa}_{{{xy}}}}$ and $$\tag{5}\label{Eq:compthma5}
{H_{{{yx}},{\gamma}}}={K_{{{xy}},{\gamma}}}\qquad\text{and}\qquad{K_{{{yx}},{\gamma}}}={H_{{{xy}},{\gamma}}}$$ for all indices ${\gamma}<{{\kappa}_{{{xy}}}}$. In particular, taking ${\gamma}=0$, we obtain $$\tag{6}\label{Eq:compthma6}
{H_{{{yx}}}}=K_{{{xy}}}\qquad\text{and}\qquad K_{{{yx}}}={H_{{{xy}}}}{\textnormal{{{\hspace*{.5pt}}}.\ }}$$
Since $$\tag{7}\label{Eq:compthma7}
{H_{{{xy}}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{xz}}}}={H_{{{xy}}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}\{e_x\}={H_{{{xy}}}}{\textnormal{,}\ }$$ by , the isomorphism ${{\hat{\varphi}}_{{{xz}}}}$ induced by ${{\varphi}_{{{xz}}}}$, which coincides with ${{\hat{\varphi}}_{{{xx}}}}$, by , is the identity automorphism of ${G_{x}}/{H_{{{xy}}}}$[[,]{.nodecor} ]{}by [[. ]{.nodecor}]{}Similarly, implies that the isomorphism ${{\hat{\varphi}}_{{{xy}}}}$ induced by ${{\varphi}_{{{xy}}}}$ coincides with ${{\varphi}_{{{xy}}}}$. Finally, and the preceding observation imply that the isomorphism ${{\hat{\varphi}}_{{{yz}}}}$ induced by ${{\varphi}_{{{yz}}}}$ coincides with ${{\varphi}_{{{xy}}}}{^{-1}}$. Combine these three observations to conclude that $${{\hat{\varphi}}_{{{xy}}}}{\mid}{{\hat{\varphi}}_{{{yz}}}}={{\varphi}_{{{xy}}}}{\mid}{{\varphi}_{{{xy}}}}{^{-1}}{\textnormal{,}\ }$$ which then is the identity automorphism on ${G_{x}}/{H_{{{xy}}}}$, and therefore $$\tag{8}\label{Eq:compthma8}
{{\hat{\varphi}}_{{{xy}}}}{\mid}{{\hat{\varphi}}_{{{yz}}}}={{\hat{\varphi}}_{{{xz}}}}{\textnormal{{{\hspace*{.5pt}}}.\ }}$$
The assumption on $z$, and , yield $${K_{{{xy}},\alpha}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{yz}},{\beta}}}={K_{{{xy}},\alpha}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{yx}},{\beta}}}={K_{{{xy}},\alpha}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{K_{{{xy}},{\beta}}}{\textnormal{,}\ }$$ and therefore, using also the isomorphism properties of ${{\varphi}_{{{xy}}}}{^{-1}}$, $$\begin{gathered}
\tag{9}\label{Eq:compthma9}
{{\varphi}_{{{xy}}}}{^{-1}}[{K_{{{xy}},\alpha}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{yz}},{\beta}}}]={{\varphi}_{{{xy}}}}{^{-1}}({K_{{{xy}},\alpha}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{K_{{{xy}},{\beta}}})\\
={{\varphi}_{{{xy}}}}{^{-1}}({K_{{{xy}},\alpha}}){\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{{\varphi}_{{{xy}}}}{^{-1}}({K_{{{xy}},{\beta}}})={H_{{{xy}},\alpha}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{xy}},{\beta}}}{\textnormal{{{\hspace*{.5pt}}}.\ }}\end{gathered}$$ Take $\alpha={\beta}=0$ in to arrive at$$\tag{10}\label{Eq:compthma10}
{{\varphi}_{{{xy}}}}{^{-1}}[K_{{{xy}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{yz}}}}]= {{\varphi}_{{{xy}}}}{^{-1}}[{K_{{{xy}},0}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{yz}},0}}]={H_{{{xy}},0}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{xy}},0}}={H_{{{xy}}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{xy}}}}={H_{{{xy}}}}{\textnormal{{{\hspace*{.5pt}}}.\ }}$$ Because ${H_{{{xz}}}}$ coincides with the trivial subgroup $\{e_x\}$, by , it may be concluded from that $$\tag{11}\label{Eq:compthma11}{H_{{{xz}}}}{\subseteq}{{\varphi}_{{{xy}}}}{^{-1}}[K_{{{xy}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{yz}}}}]{\textnormal{{{\hspace*{.5pt}}}.\ }}$$ Together, and show that condition (iv) in Composition Theorem \[T:compthm\] is satisfied in the case under consideration. Apply the implication from (iv) to (iii) in that theorem, together with , , and the definition of $R_{{{xx}}, f}$, to conclude that $$\begin{aligned}
R_{{{xy}},\alpha}{\mid}R_{{{yz}},{\beta}}&=R_{{{xy}},\alpha}{\mid}R_{{{yx}},{\beta}}\\
&={\textstyle \bigcup}\{R_{{{xx}}, f} : {H_{{{xx}},f}}{\subseteq}{{\varphi}_{{{xy}}}}{^{-1}}[{K_{{{xy}},\alpha}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{yz}},{\beta}}}]\}\\
&={\textstyle \bigcup}\{R_{{{xx}}, f} : f\in {H_{{{xy}},\alpha}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{xy}},{\beta}}}\}\\
&={\textstyle \bigcup}\{\{{(g,g{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}f)}:g\in{G_{x}}\}:f\in {H_{{{xy}},\alpha}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{xy}},{\beta}}}\}\\
&={\textstyle \bigcup}\{{(g,g{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}f)}:g\in{G_{x}}\text{ and }f\in
{H_{{{xy}},\alpha}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{xy}},{\beta}}}\}{\textnormal{{{\hspace*{.5pt}}}.\ }}\end{aligned}$$
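In the degenerate case $x=y=z$ covered by the corollary, the relations involved are the Cayley representations $R_{xx,f}=\{(g,g\circ f):g\in G_x\}$, and parts (i) and (ii) reduce to statements about a single group that can be confirmed mechanically. The following sketch checks them for the symmetric group on three letters; the permutation-tuple representation and all names are illustrative choices of ours, not notation from the paper.

```python
from itertools import permutations

# The symmetric group S_3 as permutation tuples; mul(p, q) applies q first and
# then p (ordinary composition of functions), and inv(p) is the inverse permutation.
G = list(permutations(range(3)))
mul = lambda p, q: tuple(p[q[i]] for i in range(3))
inv = lambda p: tuple(sorted(range(3), key=lambda i: p[i]))

def R(f):                        # Cayley relation R_{xx,f} = {(g, g o f) : g in G_x}
    return {(g, mul(g, f)) for g in G}

def converse(rel):
    return {(h, g) for (g, h) in rel}

def compose(rel1, rel2):
    return {(g, k) for (g, h) in rel1 for (h2, k) in rel2 if h == h2}

for f in G:
    assert converse(R(f)) == R(inv(f))              # part (i): the converse is R_{xx, f^{-1}}
    for g in G:
        assert compose(R(f), R(g)) == R(mul(f, g))  # part (ii) with singleton cosets
print("Cayley converse and composition formulas verified on S_3")
```

Part (iii) is symmetric to part (ii) and could be checked in exactly the same way.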
We return to the question that was posed after the Converse Theorem: can condition (i) in that theorem be replaced by the condition that $R_{{{xy}}, \alpha}{^{-1}}$ be in $A$ for some fixed $\alpha$?
\[C:convequiv\] If the set $A$ of frame relations contains the identity relation on the base set[[,]{.nodecor} ]{}then for any pair ${(x,y)}$ in ${\mathcal{E}}$[[,]{.nodecor} ]{}the following conditions are equivalent[[. ]{.nodecor}]{}
1. $R_{{{xy}},\alpha}{^{-1}}$ is in $A$ for some $\alpha<{{\kappa}_{{{xy}}}}$[[. ]{.nodecor}]{}
2. $R_{{{xy}},\alpha}{^{-1}}$ is in $A$ for all $\alpha<{{\kappa}_{{{xy}}}}$[[. ]{.nodecor}]{}
The implication from (ii) to (i) is obvious. To establish the reverse implication, assume that $R_{{{xy}},{\xi}}{^{-1}}$ is in $A$ for some ${\xi}<{{\kappa}_{{{xy}}}}$, and let $\alpha<{{\kappa}_{{{xy}}}}$ be an arbitrary index. Choose an element $f$ in ${G_{{x}}}$ such that $$\tag{1}\label{Eq:convequiv1}
f{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{xy}},{\xi}}}={H_{{{xy}},\alpha}}{\textnormal{{{\hspace*{.5pt}}}.\ }}$$ The assumption on $A$ implies that the group pair ${\mathcal{F}}$ satisfies condition (i) of Theorem \[T:simpfr\], by the Identity Theorem \[T:identthm1\]. Apply Corollary \[C:compthma\](ii) and to obtain $$\tag{2}\label{Eq:convequiv4}
R_{{{xx}}, f}{\mid}R_{{{xy}},{\xi}}=R_{{{xy}},\alpha}{\textnormal{{{\hspace*{.5pt}}}.\ }}$$ Form the converse of both sides of , and use the second involution law for relational composition, to arrive at $$\tag{3}\label{Eq:convequiv5}
R_{{{xy}},{\xi}}{^{-1}}{\mid}R_{{{xx}}, f}{^{-1}}=R_{{{xy}},\alpha}{^{-1}}.$$
Put $$\tag{4}\label{Eq:convequiv2}
f{^{-1}}=g{\textnormal{{{\hspace*{.5pt}}}.\ }}$$ Apply Corollary \[C:compthma\](i) to to obtain $$R_{{{xx}}, f}{^{-1}}= R_{{{xx}}, g}.$$ Use this equation to rewrite equation in the form $$\tag{5}\label{Eq:convequiv6}
R_{{{xy}},{\xi}}{^{-1}}{\mid}R_{{{xx}}, g}=R_{{{xy}},\alpha}{^{-1}}.$$ The relation $R_{{{xy}},{\xi}}{^{-1}}$ is in $A$, by assumption. Corollary \[C:compthma\](ii),(iii) and the distributivity of relational composition together imply that the set $A$ is closed under the composition of its elements with relations of the form $R_{{{xx}}, g}$. Consequently, the composition on the left side of is in $A$. Use to conclude that $R_{{{xy}},\alpha}{^{-1}}$ is in $A$.
Turn next to the question that was posed after the Composition Theorem: can condition (i) in that theorem be replaced by the condition that $R_{{{xy}}, \alpha} {\mid}R_{{{yz}}, {\beta}}$ be in $A$ for some fixed $\alpha$ and ${\beta}$?
\[C:compequiv\] If the set $A$ of frame relations contains the identity relation[[,]{.nodecor} ]{}then for any pairs ${(x,y)}$ and ${(y,z)}$ in ${\mathcal{E}}$[[,]{.nodecor} ]{}the following conditions are equivalent[[. ]{.nodecor}]{}
1. $R_{{{xy}},\alpha}{\mid}R_{{{yz}},{\beta}}$ is in $A$ for some $\alpha<{{\kappa}_{{{xy}}}}$ and some ${\beta}<{{\kappa}_{{{yz}}}}$[[. ]{.nodecor}]{}
2. $R_{{{xy}},\alpha}{\mid}R_{{{yz}},{\beta}}$ is in $A$ for all $\alpha<{{\kappa}_{{{xy}}}}$ and all ${\beta}<{{\kappa}_{{{yz}}}}$[[. ]{.nodecor}]{}
The implication from (ii) to (i) is obvious. To establish the reverse implication, use an argument similar to the one in the preceding proof. Assume that $R_{{{xy}},{\xi}}{\mid}R_{{{yz}},\eta}$ is in $A$ for some ${\xi}<{{\kappa}_{{{xy}}}}$ and $\eta<{{\kappa}_{{{yz}}}}$. Let $\alpha <{{\kappa}_{{{xy}}}}$ and ${\beta}<{{\kappa}_{{{yz}}}}$ be arbitrary, and choose elements $f$ in ${G_{x}}$ and $g$ in ${G_{z}}$ so that $$\begin{aligned}
f{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{{xy}},{\xi}}}={H_{{{xy}},\alpha}}\qquad&\text{and}\qquad{K_{{{yz}},\eta}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}g={K_{{{yz}},{\beta}}}{\textnormal{{{\hspace*{.5pt}}}.\ }}\tag{1}\label{Eq:compequiv1}\\ \intertext{The assumption
on $A$ implies that the group pair ${\mathcal{F}}$ satisfies condition (i)
of Theorem~\ref{T:simpfr}, by the Identity
Theorem~\ref{T:identthm1}. Apply Corollary~\ref{C:compthma}(ii),(iii)
and \eqref{Eq:compequiv1} to obtain}
R_{{{xx}}, f}{\mid}R_{{{xy}},{\xi}}=R_{{{xy}},\alpha}\qquad&\text{and}\qquad
R_{{{yz}},\eta}{\mid}R_{{{zz}}, g}=R_{{{yz}},{\beta}}{\textnormal{{{\hspace*{.5pt}}}.\ }}\end{aligned}$$ These equations and the associative law for relational composition lead immediately to $$\begin{gathered}
\tag{2}\label{Eq:compequiv2}R_{{{xy}},\alpha}{\mid}R_{{{yz}},{\beta}} = (R_{{{xx}}, f}{\mid}R_{{{xy}},{\xi}}){\mid}(R_{{{yz}},\eta}{\mid}R_{{{zz}},
g}) =\\
R_{{{xx}}, f}{\mid}(R_{{{xy}},{\xi}}{\mid}R_{{{yz}},\eta}){\mid}R_{{{zz}}, g}{\textnormal{{{\hspace*{.5pt}}}.\ }}\end{gathered}$$ The relation $R_{{{xy}},{\xi}}{\mid}R_{{{yz}},\eta}$ is in $A$, by assumption. Corollary \[C:compthma\] and the distributivity of relational composition over unions together imply that the set $A$ is closed under the composition of its elements with relations of the form $R_{{{xx}}, f}$ and $R_{{{zz}}, g}$. Consequently, the composition on the right side of is in $A$. Use to conclude that $R_{{{xy}},\alpha}{\mid}R_{{{yz}},{\beta}} $ is in $A$.
Examples
========
The easiest group frame to construct involves a kind of “power” of a quotient group. Fix a group $M$ and a normal subgroup $N$. For each element $x$ in a given index set $I$, let ${G_{x}}$ be an isomorphic copy of $M$ (chosen so that distinct copies are pairwise disjoint) and $\psi_x$ an isomorphism from the quotient group $M/N$ to the corresponding quotient group of ${G_{x}}$[[. ]{.nodecor}]{}Take the mapping ${\varphi_{xy}}$ to be the natural isomorphism between the quotient groups of ${G_{x}}$ and of ${G_{y}}$[[,]{.nodecor} ]{}defined by $${\varphi_{xy}} =\psi_x{^{-1}}{\mid}\psi_y$$ for distinct $x$ and $y$ in $I$. These mappings are all isomorphisms between copies of the single quotient group $M/N$. Take ${\varphi_{xx}}$ to be the identity automorphism of ${G_{x}}/\{{{e}_{x}}\}$, as required by the definition of a frame.
The resulting pair ${\mathcal{F}}={(G,\varphi)}$ is readily seen to be a group frame, and the corresponding group relation algebra ${\mathfrak{G}{[{\mathcal{F}}]}}$ is a measurable set relation algebra. If we take the indices $\alpha$ of the atomic relations in ${\mathfrak{G}{[{\mathcal{F}}]}}$ to be the corresponding cosets of $M/N$, then there are especially simple formulas for computing the converse of an atomic relation and the composition of two atomic relations in ${\mathfrak{G}{[{\mathcal{F}}]}}$: $$R_{xy,\alpha}{^{-1}}=
{R_{{y}{x},{\alpha{^{-1}}}}}\qquad\text{and}\qquad {R_{{x}{y},{\alpha}}}{\bigm\vert}{R_{{y}{z},{\beta}}} = {R_{{x}{z},{\alpha{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}\beta}}}$$ when $x$, $y$, and $z$ are distinct elements of $I$. (Here $\alpha{^{-1}}$ denotes the inverse of the coset $\alpha$, and $\alpha{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}\beta$ the product of the cosets $\alpha$ and $\beta$, in $M/N$.)
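These two formulas are easy to test mechanically on a small instance of the power construction. The sketch below takes $M$ to be the cyclic group of order four and $N$ its subgroup of order two, and realizes the atomic relation $R_{xy,\alpha}$ concretely as the set of pairs whose difference $h-g$ lies in the coset $\alpha$; this realization, the tags used to keep the copies of $M$ disjoint, and all names are our own illustrative assumptions, chosen to be consistent with the displayed formulas.

```python
# Power construction with M = Z_4 and N = {0, 2}; three disjoint tagged copies
# of M play the roles of G_x, G_y, G_z.  As an illustrative assumption, the
# atomic relation R_{xy,alpha} is realized as the set of pairs whose difference
# h - g lies in the coset alpha.
M, N = range(4), {0, 2}
cosets = [frozenset((n + a) % 4 for n in N) for a in (0, 1)]   # the two cosets of M/N

def R(x, y, alpha):
    return {((x, g), (y, h)) for g in M for h in M if (h - g) % 4 in alpha}

def converse(rel):
    return {(q, p) for (p, q) in rel}

def compose(rel1, rel2):
    return {(p, r) for (p, q1) in rel1 for (q2, r) in rel2 if q1 == q2}

coset_inv = lambda alpha: frozenset((-a) % 4 for a in alpha)
coset_mul = lambda alpha, beta: frozenset((a + b) % 4 for a in alpha for b in beta)

for alpha in cosets:
    assert converse(R('x', 'y', alpha)) == R('y', 'x', coset_inv(alpha))
    for beta in cosets:
        assert compose(R('x', 'y', alpha), R('y', 'z', beta)) == R('x', 'z', coset_mul(alpha, beta))
print("converse and composition formulas hold for M = Z_4, N = {0, 2}")
```

Replacing $N$ by the trivial subgroup or by all of $M$ reproduces the two extreme cases discussed next.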
If $N$ is the trivial (one-element) subgroup of $M$, then each atomic relation ${R_{{x}{y},{\alpha}}}$ is a function and in fact a bijection from ${G_{x}}$ to ${G_{y}}$[[. ]{.nodecor}]{}In this case, ${\mathfrak{G}{[{\mathcal{F}}]}}$ is an example of an atomic relation algebra with functional atoms. At the other extreme, if $N$ coincides with $M$, then there is only one atomic relation, namely $${R_{{x}{y},{0}}}={G_{x}}\times {G_{y}}{\textnormal{,}\ }$$ for each pair of distinct indices $x,y$ in $I$. In general, if the normal subgroup $N$ has order $\lambda$ and index $\kappa$ in $M$ (that is to say, if $N$ contains $\lambda$ elements and has $\kappa$ cosets in $M$), then there will be $\kappa$ distinct atomic relations of the form ${R_{{x}{y},{\alpha}}}$[[,]{.nodecor} ]{}and each of them will be the union of $\lambda$ pairwise disjoint bijections from ${G_{x}}$ to ${G_{y}}$.
In the general case of the power construction, ${\mathcal{E}}$ is allowed to be an arbitrary equivalence relation on $I$. Moreover, the group $M$ and normal subgroup $N$ are fixed for a given equivalence class of ${\mathcal{E}}$, but different equivalence classes may use different groups and normal subgroups.
The most trivial case of the power construction is when the fixed group $M$ is the one-element group. In this case, ${\mathfrak{G}{[{\mathcal{F}}]}}$ is just the full set relation algebra with base set and unit $$U={\textstyle \bigcup}\{{G_{x}}:x\in I\}\qquad\text{and}\qquad
E={\textstyle \bigcup}\{{{G}_{x}}\times{{G}_{y}}:{(x,y)}\in{\mathcal{E}} \}$$ respectively. Moreover, every full set relation algebra on an equivalence relation can be obtained as a group relation algebra in this fashion, using arbitrary equivalence relations ${\mathcal{E}}$ on $I$. The construction of full set relation algebras may therefore be viewed as the most trivial case of the construction of full group relation algebras, namely the case when all the groups have order one. This class may be characterized abstractly, up to isomorphisms, as the class of complete and atomic singleton-dense relation algebras.
It follows from this observation that the class of algebras embeddable into full group relation algebras coincides with the class of representable relation algebras. In particular, the class is equationally axiomatizable, by the results of Tarski [@t55]. However, the description of representable relation algebras in terms of group relation algebras seems much more advantageous, because the class of full group relation algebras is substantially more varied and interesting than the class of full set relation algebras.
A second example of the group relation algebra construction that is easy to describe is the one in which all of the groups are cyclic. Suppose $G=\langle{G_{x}}:x\in I\,\rangle$ is a family of (pairwise disjoint) cyclic groups and ${\mathcal{E}}$ an equivalence relation on $I$. To avoid unnecessary complications in notation, we consider here only the case when the groups are finite. Fix a generator $g_x$ of each group ${G_{x}}$. Let $\langle {\kappa_{{x}{y}}}:{(x,y)} \in
{\mathcal{E}}\,\rangle$ be a system of positive integers satisfying the following conditions for all appropriate pairs in ${\mathcal{E}}$[[. ]{.nodecor}]{}
1. $ {\kappa_{{x}{y}}}$ is a common divisor of the orders of ${G_{x}}$ and ${G_{y}}$[[. ]{.nodecor}]{}
2. $ {\kappa_{{x}{x}}}$ is equal to the order of ${G_{x}}$[[. ]{.nodecor}]{}
3. $ {\kappa_{{y}{x}}} = {\kappa_{{x}{y}}}$[[. ]{.nodecor}]{}
4. $\gcd( {\kappa_{{x}{y}}},{\kappa_{{y}{z}}}) = \gcd( {\kappa_{{x}{y}}},{\kappa_{{x}{z}}})=\gcd( {\kappa_{{x}{z}}},{\kappa_{{y}{z}}})$[[. ]{.nodecor}]{}
Condition (i) ensures that there are (uniquely determined) subgroups ${H_{{x}{y}}}$ and ${K_{{x}{y}}}$ of index ${\kappa_{{x}{y}}}$ in ${G_{x}}$ and ${G_{y}}$ respectively. The quotient groups ${G_{x}/H_{{x}{y}}}$ and ${G_{y}/K_{{x}{y}}}$ are therefore isomorphic, and in fact there is a uniquely determined isomorphism ${\varphi_{xy}}$ between them that maps the generator $g_x/{H_{{x}{y}}}$ of the first quotient to the generator $g_y/{K_{{x}{y}}}$ of the second. Conditions (ii) and (iii), and the definition of the quotient isomorphisms, ensure that frame conditions (i) and (ii) are satisfied. The complex product ${H_{{x}{y}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{x}{z}}}$ is a subgroup of ${G_{x}}$ of index $d = \gcd ({\kappa_{{x}{y}}}, {\kappa_{{x}{z}}})$. Condition (iv) says that the complex products ${K_{{x}{y}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{H_{{y}{z}}}$ and ${K_{{x}{z}}}{\mathbin{\raise2pt\hbox{$\scriptscriptstyle\circ$}}}{K_{{y}{z}}}$ also have index $d$. This, together with the definition of the quotient isomorphisms, ensures that frame conditions (iii) and (iv) are satisfied. It follows that the pair ${\mathcal{F}} =(G,\varphi)$ is a group frame. This construction using cyclic groups is due jointly to Hajnal Andréka and the author.
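The arithmetic bookkeeping in conditions (i) to (iv) is easy to automate. The following sketch checks the four conditions for a given system of group orders and integers $\kappa_{xy}$, assuming for simplicity that ${\mathcal{E}}$ is the universal relation on a finite index set; the example data (three cyclic groups of order twelve with $\kappa_{01}=4$, $\kappa_{02}=6$, and $\kappa_{12}=2$) and all names are ours.

```python
from math import gcd
from itertools import permutations

def admissible(order, kappa):
    """Check conditions (i)-(iv) for a system of cyclic groups and integers kappa_xy.

    order : dict x -> |G_x|        kappa : dict (x, y) -> kappa_xy, for all x, y in I
    """
    I = list(order)
    for x in I:
        if kappa[x, x] != order[x]:                                 # condition (ii)
            return False
        for y in I:
            if order[x] % kappa[x, y] or order[y] % kappa[x, y]:    # condition (i)
                return False
            if kappa[y, x] != kappa[x, y]:                          # condition (iii)
                return False
    # Condition (iv); only triples of distinct indices are tested here, since the
    # degenerate cases already follow from conditions (i)-(iii).
    for x, y, z in permutations(I, 3):
        d = gcd(kappa[x, y], kappa[y, z])
        if d != gcd(kappa[x, y], kappa[x, z]) or d != gcd(kappa[x, z], kappa[y, z]):
            return False
    return True

# Three cyclic groups of order 12 with kappa_01 = 4, kappa_02 = 6, kappa_12 = 2.
order = {0: 12, 1: 12, 2: 12}
kappa = {(0, 0): 12, (1, 1): 12, (2, 2): 12,
         (0, 1): 4, (1, 0): 4, (0, 2): 6, (2, 0): 6, (1, 2): 2, (2, 1): 2}
print(admissible(order, kappa))   # True
```

The sketch only verifies the arithmetic conditions; constructing the quotient isomorphisms themselves proceeds as described in the preceding paragraph.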
If every group in ${\mathcal{F}}$ has order one or two, then the group relation algebra ${\mathfrak{G}{[{\mathcal{F}}]}}$ is an example of a pair-dense relation algebra in the sense of Maddux [@ma91]. When ${\kappa_{{x}{y}}} = 2$, there are exactly two relations: ${R_{{x}{y},{0}}}$ and ${R_{{x}{y},{1}}}$[[. ]{.nodecor}]{}Each of them is a function, and in fact a bijection from ${G_{x}}$ to ${G_{y}}$, with exactly two pairs. When ${\kappa_{{x}{y}}} = 1$, there is only the one relation ${R_{{x}{y},{0}}}={G_{x}}\times {G_{y}}$[[. ]{.nodecor}]{}It contains either four pairs, two pairs, or one pair, according to whether both groups ${G_{x}}$ and ${G_{y}}$ have order two, exactly one of these groups has order two and the other order one, or both groups have order one. The class of such group relation algebras may be characterized abstractly, up to isomorphisms, as the class of complete and atomic pair-dense relation algebras.
A decomposition theorem
=======================
The isomorphism index set ${\mathcal{E}}$ of a group frame ${\mathcal{F}}={(G,{\varphi})}$ is an equivalence relation on the group index set $I$, and the unit $$E={\textstyle \bigcup}\{{G_{{x}}}\times{G_{{y}}}:{(x,y)}\in {\mathcal{E}} \}$$ of the corresponding full group relation algebra ${\mathfrak{G}{[{\mathcal{F}}]}}$ is an equivalence relation on the base set $U={\textstyle \bigcup}_{x\in I}{{G}_{x}}$. Call a group frame *simple* if the group index set $I$ is not empty, and the isomorphism index set ${\mathcal{E}}$ is the universal relation on $I$. It turns out that the frame ${\mathcal{F}}$ is simple if and only if the algebra ${\mathfrak{G}{[{\mathcal{F}}]}}$ is simple in the algebraic sense of the word, namely, it has more than one element, and every non-constant homomorphism on it must be injective; or, equivalently, it has exactly two ideals, the trivial ideal and the improper ideal.
\[T:simplegra\] Let ${\mathcal{F}}$ be a group frame[[. ]{.nodecor}]{}The group relation algebra ${\mathfrak{G}{[{\mathcal{F}}]}}$ is simple if and only if ${\mathcal{F}}$ is simple[[. ]{.nodecor}]{}
Suppose first that the frame ${\mathcal{F}}$ is simple. The isomorphism index set ${\mathcal{E}}$ is then the universal relation on the index set $I$, and consequently the unit $E$ of ${\mathfrak{G}{[{\mathcal{F}}]}}$ is the universal relation $U\times U$ on the base set $U$. Moreover, the base set $U$ is not empty, because the index set $I$ is not empty, and the groups indexed by $I$ are not empty. The algebra ${\mathfrak{Re}({E})}$ therefore consists of all binary relations on a non-empty base set. Such set relation algebras are well known to be simple. Moreover, ${\mathfrak{G}{[{\mathcal{F}}]}}$ is a subalgebra of ${\mathfrak{Re}({E})}$, by Group Frame Theorem \[T:closed\]. It is well known that subalgebras of simple relation algebras are simple, so ${\mathfrak{G}{[{\mathcal{F}}]}} $ must also be simple.
We postpone the proof of the reverse implication, that simplicity of ${\mathfrak{G}{[{\mathcal{F}}]}}$ implies that of ${\mathcal{F}}$, until after the next theorem.
It turns out that every full group relation algebra can be decomposed into the direct product of simple, full group relation algebras. Here is a sketch of the main ideas. The details are left to the reader. Given an arbitrary group frame $${\mathcal{F}}=(\langle {G_{x}}:x\in
I\,\rangle{\,,}\langle{\varphi}_{{xy}}:{(x,y)}\in {\mathcal{E}}\rangle){\textnormal{,}\ }$$ consider an equivalence class $J$ of the isomorphism index set ${\mathcal{E}}$. The universal relation $J\times J$ on $J$ is a subrelation of ${\mathcal{E}}$, and in fact it is a maximal connected component of ${\mathcal{E}}$ in the graph-theoretic sense of the word. The *restriction* of ${\mathcal{F}}$ to $J$ is defined to be the group pair $${\mathcal{F}}_J=(\langle {G_{x}}:x\in
J\,\rangle{\,,}\langle{\varphi}_{{xy}}:{(x,y)}\in J\times J\rangle).$$ Each such restriction of ${\mathcal{F}}$ to an equivalence class of the index set ${\mathcal{E}}$ inherits the frame properties of ${\mathcal{F}}$ and is therefore a simple group frame. Call such restrictions the *components* of ${\mathcal{F}}$. It is not difficult to check that every frame is the disjoint union of its components in the sense that the group system and the isomorphism system of ${\mathcal{F}}$ are obtained by respectively combining the group systems and the isomorphism systems of the components of ${\mathcal{F}}$.
Each component ${{{\mathcal{F}}}_{J}}$ gives rise to a full group relation algebra ${\mathfrak{G}{[{{{{\mathcal{F}}}_{J}} }]}}$ that is simple and is in fact a subalgebra of the full set relation algebra with base set and unit $${{U}_{J}}={\textstyle \bigcup}_{x\in J}{{G}_{x}}\qquad\text{and}\qquad{{E}_{J}}={{U}_{J}}\times{{U}_{J}}$$ respectively. The group relation algebra ${\mathfrak{G}{[{\mathcal{F}}]}}$ is isomorphic to the direct product of the simple group relation algebras ${\mathfrak{G}{[{{{{\mathcal{F}}}_{J}}}]}}$ constructed from the components of ${\mathcal{F}}$ (so $J$ varies over the equivalence classes of ${\mathcal{E}}$). In fact, if internal direct products are used instead of Cartesian direct products, then ${\mathfrak{G}{[{\mathcal{F}}]}}$ is actually equal to the internal direct product of the full group relation algebras constructed from its component frames.
\[T:cosdecomp\] Every full group relation algebra is isomorphic to a direct product of full group relation algebras on simple frames[[. ]{.nodecor}]{}
Return now to the proof of the reverse implication in Theorem \[T:simplegra\]. Assume that the frame ${\mathcal{F}}$ is not simple. If the group index set $I$ is empty, then the base set $U$ is also empty, and in this case ${\mathfrak{G}{[{\mathcal{F}}]}}$ is a one-element relation algebra with the empty relation as its only element. In particular, ${\mathfrak{G}{[{\mathcal{F}}]}}$ is not simple. On the other hand, if the group index set $I$ is not empty, then the isomorphism index set ${\mathcal{E}}$ has at least two equivalence classes, by the definition of a simple frame. The group relation algebra ${\mathfrak{G}{[{\mathcal{F}}]}}$ is isomorphic to the direct product of the group relation algebras on the component frames of ${\mathcal{F}}$, by Decomposition Theorem \[T:cosdecomp\], and there are at least two such components. Each of these components is a simple frame, so the corresponding group relation algebra is simple, by the first part of the proof of Theorem \[T:simplegra\]. It follows that ${\mathfrak{G}{[{\mathcal{F}}]}}$ is isomorphic to a direct product of at least two simple relation algebras, so ${\mathfrak{G}{[{\mathcal{F}}]}}$ cannot be simple. For example, the projection of ${\mathfrak{G}{[{\mathcal{F}}]}}$ onto one of the factor algebras is a non-constant homomorphism that is not injective.
Summary
=======
The present paper generalizes the notion of pair density from Maddux [@ma91] by introducing the notion of a measurable relation algebra. A large class of examples of such algebras has been constructed, namely the class of full group relation algebras. Unfortunately, the class is not large enough to represent all measurable relation algebras: there exist measurable relation algebras that are not essentially isomorphic to (full) group relation algebras, and in fact that are not representable as set relation algebras at all. The next paper in this series, [@ag], greatly extends the class of examples of measurable relation algebras by adding one more ingredient to the mix, namely systems of cosets that are used to modify the operation of relative multiplication. In the group relation algebras constructed in the present paper, the operation of relative multiplication is just relational composition, but in the coset relation algebras to be constructed in the next paper, the operation of relative multiplication is “shifted” by coset multiplication, so that in general it no longer coincides with composition. On the one hand, this shifting leads to examples of measurable relation algebras that are not representable as set relation algebras, see [@ag Theorem 5.2]. On the other hand, the class of coset relation algebras constructed from systems of group pairs and shifting cosets really is broad enough to include all measurable relation algebras. The task of the third paper in the series, [@ga], is to prove this assertion, namely that every measurable relation algebra is essentially isomorphic to a coset relation algebra, see [@ga Theorem 7.2].
Acknowledgment {#acknowledgment .unnumbered}
--------------
The author is very much indebted to Dr. Hajnal Andréka, of the Alfréd Rényi Mathematical Institute in Budapest, for carefully reading a draft of this paper and making many extremely helpful suggestions.
[99]{}
Andréka, H., Givant, S.: Coset relation algebras. Algebra Universalis (in press).
De Morgan, A.: On the syllogism, no. IV, and on the logic of relations. Transactions of the Cambridge Philosophical Society **10**, 331–358 (1864)
Givant, S.: Introduction to Relation Algebras. Springer International Publishing, Cham (2017)
Givant, S., Andréka, H.: Groups and algebras of relations. The Bulletin of Symbolic Logic **8**, 38–64 (2002)
Givant, S., Andréka, H.: A representation theorem for measurable relation algebras (submitted for publication)
Givant, S., Andréka, H.: Simple Relation Algebras. Springer International Publishing, Cham (2017)
Hirsch, R., Hodkinson, I.: Relation algebras by games. Studies in Logic and the Foundations of Mathematics, vol. 147, Elsevier Science, North-Holland Publishing Company, Amsterdam (2002)
Jónsson, B., Tarski, A.: Representation problems for relation algebras. Bulletin of the American Mathematical Society **54**, Abstract 89, 80, 1192 (1948)
Jónsson, B., Tarski, A.: Boolean algebras with operators. Part II. American Journal of Mathematics **74**, 127–162 (1952)
Lyndon, R.C.: The representation of relational algebras. Annals of Mathematics **51**, 707–729 (1950)
Maddux, R.D.: Pair-dense relation algebras. Transactions of the American Mathematical Society **328**, 83–131 (1991)
Maddux, R.D.: Relation algebras. Studies in Logic and the Foundations of Mathematics, vol. 150. Elsevier Science, North-Holland Publishing Company, Amsterdam (2006)
Monk, J.D.: On representable relation algebras. Michigan Mathematical Journal **11**, 207–210 (1964)
Peirce, C.S.: Note B. The logic of relatives. In: Peirce, C.S. (ed) Studies in logic by members of the Johns Hopkins University. pp. 187–203. Little, Brown, and Company, Boston (1883) \[Reprinted by John Benjamins Publishing Company, Amsterdam (1983)\]
Schröder, E.: Vorlesungen über die Algebra der Logik (exakte Logik), vol. III. Algebra und Logik der Relative, part 1. B.G. Teubner, Leipzig (1895) \[Reprinted by Chelsea Publishing Company, New York (1966)\]
Tarski, A.: On the calculus of relations. Journal of Symbolic Logic **6**, 73–89 (1941)
Tarski, A.: Contributions to the theory of models. III. Koninklijke Nederlandse Akademie van Wetenschappen, Proceedings, Series A, Mathematical Sciences **58** ($=$ Indagationes Mathematicae **17**), 56–64 (1955)
[^1]: This research was partially supported by Mills College.
---
abstract: 'Green hydrogen can help to decarbonize transportation, but its power sector interactions are not well understood. It may contribute to integrating variable renewable energy sources if production is sufficiently flexible in time. Using an open-source co-optimization model of the power sector and four options for supplying hydrogen at German filling stations, we find a trade-off between energy efficiency and temporal flexibility: for lower shares of renewables and hydrogen, more energy-efficient and less flexible small-scale on-site electrolysis is optimal. For higher shares of renewables and/or hydrogen, more flexible but less energy-efficient large-scale hydrogen supply chains gain importance as they allow disentangling hydrogen production from demand via storage. Liquid hydrogen emerges as particularly beneficial, followed by liquid organic hydrogen carriers and gaseous hydrogen. Large-scale hydrogen supply chains can deliver substantial power sector benefits, mainly through reduced renewable surplus generation. Energy modelers and system planners should consider the distinct flexibility characteristics of hydrogen supply chains in more detail when assessing the role of green hydrogen in future energy transition scenarios.'
author:
- 'Fabian Stöckl, Wolf-Peter Schill, Alexander Zerrahn'
title: 'Green hydrogen: optimal supply chains and power sector benefits'
---
*Keywords*: hydrogen supply chains, LOHC, power sector modeling, renewable integration
Introduction\[sec: Introduction-P2H2\]
======================================
The increasing use of renewable energy sources in all end-use sectors is a main strategy to reduce greenhouse gas emissions [@DeConinck2018]. This not only applies to the power sector, but also to other sectors such as transportation. Here, energy demand may be satisfied either directly by renewable electricity or indirectly by hydrogen and derived synthetic fuels produced with renewable electricity [@Armaroli2011; @Yan2019; @Staffell2019; @Brynolf2018; @DeLuna2019]. The potential role of hydrogen-based electrification for deep decarbonization is widely acknowledged [@Jacobson2017; @Luderer2018a; @Hanley2018; @NatureEnergy2016a].
Yet a central aspect is less understood so far: how hydrogen-based electrification interacts with the power sector. Hydrogen supply chains use different types of storage, which can temporally disentangle electricity demand for hydrogen production from hydrogen supply to users. Greater temporal flexibility allows for better use of variable renewable energy from wind and solar PV. This, in turn, impacts the optimal electricity generation and storage capacities in the power sector, their hourly use, carbon emissions, and costs. Yet more flexible hydrogen supply chains may be less energy-efficient as they incur more conversion steps.
We address this research gap by investigating four different supply chains of hydrogen for road-based passenger mobility for future scenarios with high shares of variable renewable electricity. Specifically, we examine least-cost options for the supply of electrolysis-based hydrogen at filling stations and how they interact with the power sector. To this end, we use an open-source cost-minimization model with a technology-rich well-to-tank perspective that co-optimizes the power sector and four hydrogen supply chains: small-scale on-site electrolysis at the filling station as well as three large-scale hydrogen production and distribution options.
Many previous power sector analyses that include a hydrogen sector lack detail to discuss different hydrogen production and distribution options [@Breyer2019a; @Brown2018a; @Gils2017a; @Michalski2017a]. Studies that include more techno-economic details of supply chains for (green) hydrogen mobility often rely on exogenous electricity price inputs, include only rudimentary power sectors, tie hydrogen production to the availability of surplus electricity generation, and/or are restricted to a single supply channel [@Emonts2019a; @Glenk2019; @Kluschke2019a; @Reuss2017a; @Runge2019a; @Robinius2017a; @Welder2018a; @Yang2007a].
Our integrated hydrogen and power sector model fills this gap in the literature. It minimizes overall system costs by endogenously optimizing electricity generation and storage capacities, their hourly dispatch, as well as capacity and hourly use for the hydrogen supply chains.
We parameterize the model to a 2030 setting for Germany. As the German government has repeatedly committed itself to an ambitious expansion of renewable energy sources and currently also promotes the use of green hydrogen [@BMWi2020], Germany constitutes a relevant case study.
Model and scenarios {#sec: Model}
===================
We use the established open-source power sector model DIETER [@Zerrahn2017; @Schill2017; @Schill2018; @Schill2020]. For transparency and reproducibility [@Pfenninger2017], the source code, input data, and a complete documentation of the model version used here are available under a permissive open-source license in a public repository [@Stoeckl2020] (see also [www.diw.de/dieter](www.diw.de/dieter)).
The model minimizes the total system costs of providing electricity and hydrogen. The objective function comprises annualized investment costs and hourly variable costs for electricity generation and storage technologies, electrolysis, as well as storage, conversion, and transportation of hydrogen. The main model inputs are availability and cost parameters for all technologies as well as hourly time series of electricity demand, hydrogen demand, and renewable capacity factors. Main decision variables are capacities in the power and hydrogen sectors as well as their hourly use. The optimization is subject to constraints, including market balances for electricity and hydrogen that equate supply and demand in each hour, capacity limits for generation and investment, and a minimum share of renewable energy in electricity supply. The model determines a long-run first-best equilibrium benchmark for a frictionless market. Assuming perfect foresight, DIETER is solved for all consecutive hours of an entire year, thereby capturing the variability of renewable energy sources. Model outputs comprise system costs, optimal capacities and their hourly use, and derived metrics such as emission intensities.
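In stylized form, and using our own simplified notation rather than the full DIETER formulation, the co-optimization can be sketched as
$$\min_{K,\,g,\,x}\;\;\sum_{i} c^{\mathrm{inv}}_{i}\,K_{i}\;+\;\sum_{i,h} c^{\mathrm{var}}_{i}\,g_{i,h}\;+\;C^{\mathrm{H}_2}$$
subject to, for every hour $h$,
$$\sum_{i} g_{i,h}\;=\;d^{\mathrm{el}}_{h}+x_{h}+s_{h}\,,\qquad g_{i,h}\;\le\;K_{i}\,,\qquad \sum_{h}\sum_{i\in \mathrm{RES}} g_{i,h}\;\ge\;\phi\,\sum_{h}\sum_{i} g_{i,h}\,,$$
where $K_{i}$ and $g_{i,h}$ denote installed capacity and hourly generation of technology $i$, $c^{\mathrm{inv}}_{i}$ and $c^{\mathrm{var}}_{i}$ annualized investment and variable costs, $C^{\mathrm{H}_2}$ the annualized investment and variable costs of the hydrogen supply chains, $d^{\mathrm{el}}_{h}$ conventional electricity demand, $x_{h}$ the endogenous electricity demand of the hydrogen supply chains, $s_{h}$ net electricity storage charging, and $\phi$ the minimum renewable share. The hydrogen market balance, storage dynamics, and further constraints are treated analogously but omitted here for brevity.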
The hydrogen sector is modeled with a well-to-tank perspective. It includes four options to provide filling stations with hydrogen: one small-scale on-site electrolysis option directly at the filling station, and three more centralized large-scale options with H~2~ delivery by trailer (Figure \[fig: Schaubild\]). Only one supply channel can be selected per filling station. Electricity demand along the hydrogen supply chains enters the model’s electricity market balance. This includes electricity used for hydrogen production, processing, and distribution facilities. Depending on the conversion steps along the supply chain, the four options also differ in how much electricity is required and when (for an illustration see \[sub: hydrogen sector data\]). All costs for hydrogen-related investments enter the model’s objective function. This endogenously captures the use of electricity for different purposes in each hour.
Small-scale on-site hydrogen production is restricted to proton exchange membrane (PEM) water electrolysis, which is superior to alkaline (ALK) electrolysis in several dimensions relevant for small-scale on-site production, including higher load flexibility [@Mittelsteadt2015a], lower footprint [@Mittelsteadt2015a], and easier handling [@Linde2016a]. The hydrogen is immediately compressed and stored at $700$- in high-pressure vessels at the filling station. For high-pressure storage and dispensing, the same assumptions apply as for the large-scale supply chains.
For large-scale hydrogen production, we consider both ALK and PEM electrolysis. The large-scale options allow for bulk hydrogen storage and, thus, greater temporal flexibility compared to the small-scale on-site option, which only comes with a short-term buffer storage at the filling station. The hydrogen is either compressed and stored at the production site at up to (GH$_2$), liquefied and stored in insulated tanks (LH$_2$), or bound to a liquid organic hydrogen carrier (LOHC, see [@Preuster2017]) in an exothermic hydrogenation reaction and stored in simple tanks. As LOHC, we assume dibenzyltoluene; see [@Eypasch2017a] for an exposition. GH$_2$ and LOHC can be stored without losses; LH$_2$ suffers from a boil-off of $\sim$ per day ($\sim$ per year), which lowers its potential for long-term H$_2$ storage. For GH$_2$, hydrogen may also be directly prepared for transportation after production, bypassing production site storage. Investments in storage capacity at large-scale production sites are unrestricted. Due to minimum filling level requirements, usable storage capacities can be lower than nominal capacities.
![\[fig: Schaubild\] Large-scale and small-scale on-site supply chains with specific production, processing, transportation, and storage requirements.](graphs/schaubild){width="80.00000%"}
For transportation, hydrogen is taken from the respective storage at the large-scale production site, re-compressed (if necessary), and transported (time consuming) in special trailers to the filling stations.
At filling stations, GH$_2$ from large-scale electrolysis is either re-compressed and stored at up to or directly compressed to for the high-pressure buffer storage (bypass option). LH$_2$ and LOHC are first stored in unconverted form, where boil-off for LH$_{2}$ is slightly higher at the filling station than at the large-scale production site ($\sim$ per day or $\sim$ per year). Spatial limitations and security aspects restrict these storage capacities to two trailer-loads for all three large-scale supply chains. LH$_2$ is then cryo-compressed and evaporated, and LOHC dehydrogenated and compressed to be stored in gaseous form at up to in high-pressure vessels used as buffer for dispensing. High pressure storage is limited to (one container with tubes [@HexagonComposites2016a]).
Twelve scenarios vary the share of renewable energy sources in electricity generation between $65$- in increments of five percentage points and the demand for hydrogen between $0$, $5$, $10$, and of private and public road-based passenger vehicle energy demand. A renewable share of exactly matches the target of the current German government for 2030. Larger shares reflect higher ambition levels, which may be required to achieve more progressive climate targets. Annual hydrogen demands are $9.1$, $18.1$, and at the filling stations, representing different future market penetrations of hydrogen-electric mobility. For clarity, we abstract from the provision of hydrogen for other purposes than mobility. For each scenario, we combine the small-scale on-site hydrogen supply option with each of the three large-scale options. Due to path dependencies and technology specialization, we do not expect parallel infrastructures for large-scale technologies to emerge in a plausible future setting.
As we aim to derive general insights on temporal flexibility, we abstract from an explicit representation of idiosyncratic spatial aspects and electricity network constraints. Moreover, to keep the analysis tractable, the DIETER version used here has no explicit representation of electricity transmission, focuses on Germany only, and abstracts from balancing within the European interconnection. We also do not use some features of the original model, such as demand-side flexibility beyond the hydrogen sector.
Results\[sec: Results\]
=======================
Optimal hydrogen supply chains depend on renewable penetration and hydrogen demand\[sub: optimal h2 chains\]
------------------------------------------------------------------------------------------------------------
Figure \[fig: 12 Panel Graph\] shows the cost-minimal combinations of small-scale on-site (OS) and large-scale hydrogen supply chains for the $12$ scenarios with hydrogen demand. We denote the resulting renewables-demand scenarios as $Res65$-$Dem5$, $Res65$-$Dem10$, and so on. The Figure also shows the Additional System Costs of Hydrogen (ASCH, see also Section \[sub: Metrics\]), defined as the difference in total system costs between a scenario that includes hydrogen and the respective baseline without hydrogen demand, related to total hydrogen supply.
![\[fig: 12 Panel Graph\]Optimal combinations of small-scale on-site (OS) and large-scale hydrogen supply chains and Additional System Costs of Hydrogen (ASCH) for the $12$ scenarios. Starting from the top left panel, the share of renewable energy sources increases to the bottom, and the demand for hydrogen increases to the right.](graphs/12-panel-ref.pdf){width="80.00000%"}
For combinations of relatively low shares of renewable energy sources ($65$-) and hydrogen demand ($5$- of road-based passenger traffic), small-scale electrolysis is the least-cost option. That is, the energy efficiency benefits of on-site electrolysis prevail over the flexibility benefits of large-scale options. Large-scale supply chains are increasingly part of the optimal solution for higher shares of renewables or greater hydrogen demand. In these scenarios, the flexibility they offer becomes more valuable. Among the three large-scale options, liquid hydrogen tends to have the highest shares in the optimal solution.
Comparing the Additional System Costs of Hydrogen, the solutions that include compressed gaseous hydrogen are always dominated by liquid hydrogen and often also by LOHC. This is because GH$_{2}$, while energy efficient, incurs comparatively high storage and transportation costs (see \[sub: hydrogen sector data\]). In contrast, solutions that include LH$_{2}$ lead to the lowest ASCH in most scenarios with high renewable shares ($75$-) or high hydrogen demand (). In general, solutions that include LH$_{2}$ or LOHC often lead to relatively similar cost outcomes. Yet, this is driven by different underlying mechanisms. LH$_{2}$ is overall more energy efficient; LOHC offers higher temporal flexibility due to cheap storage, yet requires substantial amounts of electricity for the dehydrogenation process at the filling station (see Section \[sub: utilization patterns\] and \[sub: hydrogen sector data\]).
Further, the Additional System Costs of Hydrogen generally increase with hydrogen demand and decrease with the share of renewable energy sources, mainly reflecting the availability of cheap renewable surplus energy (see Section \[sub: power sector outcomes\]).
Use patterns of hydrogen production and storage indicate differences in temporal flexibility\[sub: utilization patterns\]
-------------------------------------------------------------------------------------------------------------------------
Differences in hydrogen storage capabilities as well as the level and timing of electricity demand (\[sub: hydrogen sector data\]) lead to very different utilization patterns of the four hydrogen supply chains. We illustrate this for the optimal combination of temporally inflexible small-scale electrolysis and more flexible LH$_{2}$ in the $Res80$-$Dem25$ scenario.
Figure \[fig: LH2 in detail - a\] shows that LH$_{2}$ allows hydrogen production to be temporally disentangled from demand. On average, production is high during hours when (renewable) electricity is abundant and, thus, cheap. These are not necessarily hours of high hydrogen demand. At the filling station, dispensing LH$_{2}$ on time requires little electricity. Vice versa, large-scale hydrogen production is low during hours of high prices. In contrast, on-site electrolysis only includes a small high-pressure buffer storage and needs to produce almost on demand (Figure \[fig: LH2 in detail - b\]). Thus, through greater temporal flexibility, LH$_{2}$ allows exploiting phases of high renewable electricity supply and accordingly low electricity prices, which can overcompensate the overall higher electricity demand. Comparable production patterns also emerge for the other two large-scale supply chains GH$_2$ and LOHC.
The capacities of production site hydrogen storage and its hourly use vary substantially across the three large-scale options (Figure \[fig: Utilization of mass storage\]). LOHC has the highest overall storage capacity and a strongly seasonal use pattern. In contrast, GH$_2$ has a much smaller storage capacity and a pronounced short-term storage pattern. LH$_2$ storage is in between. Capacity deployment of GH$_2$ storage is small because of its relatively high specific investment costs. This changes in a sensitivity with cheap cavern storage (see \[sub: Sensitivity - Cavern Storage for GH2\]). For LH$_2$, storage investment costs are much lower, yet investment costs for liquefaction plants are high, impeding investments in larger LH$_2$ production capacities. LH$_2$ storage is also subject to a small, but relevant boil-off, which makes it less suitable for long-term storage. For LOHC, both investment costs for storage and hydrogenation plants are relatively low and investments, accordingly, high. As there is also no boil-off, LOHC storage is used for seasonal balancing.
Power sector outcomes reflect drivers for optimal hydrogen supply chains\[sub: power sector outcomes\]
------------------------------------------------------------------------------------------------------
Figure \[fig: 12 Panel Graph - Capacity\] summarizes power sector capacity impacts for the scenarios. Each bar shows the difference of optimal generation capacities compared to the respective baseline without H$_2$ demand. Generally, overall generation capacity increases with growing hydrogen demand and decreases with growing renewable penetration. A higher renewable share leads to higher renewable surplus generation. Large-scale electrolyzers and storage make use of this surplus that would otherwise be curtailed. In fact, in scenarios $Res80$-$Dem5$ and $Res80$-$Dem10$, overall electricity generation capacity hardly increases or even decreases because the additional electricity demand for hydrogen production is covered by renewable electricity that would otherwise not be used.
![\[fig: 12 Panel Graph - Capacity\]Electricity generation capacity changes compared to the respective baselines without hydrogen for optimal combinations of small-scale and large-scale hydrogen supply chains as shown in Figure \[fig: 12 Panel Graph\].](graphs/3mal4_capacity.pdf){width="80.00000%"}
Concerning specific technologies, the additional electricity demand for hydrogen supply yields larger optimal solar PV capacities. Additional investments in wind power are lower and the optimal wind power capacity even decreases in some $Res75$ or $Res80$ scenarios compared to the respective baseline. Additional wind power would lead to more sustained renewable surplus events, which would be harder to integrate. Offshore wind power is always deployed at the exogenous lower capacity bound of . Further, we find a slight increase in the natural gas generation capacity in most scenarios because this is the most economical conventional generation technology to be operated with relatively low full-load hours. Compared to the respective baselines, the supply of hydrogen further tends to increase the optimal electricity storage capacity in the scenarios with lower renewable penetration because temporally inflexible on-site hydrogen production prevails here. In contrast, the optimal electricity storage capacity decreases in the $Res80$ scenarios. Here, large-scale hydrogen supply chains add a substantial amount of flexibility to the power sector.
Figure \[fig: 12 Panel Graph - Energy\] shows the impact of hydrogen supply chains on yearly energy generation. Across scenarios, wind power is a major source of the additional electricity required for hydrogen supply. Much of this wind power would be curtailed in a power sector without hydrogen. The central driver for this result is that large-scale hydrogen supply chains allow to make better use of variable renewable energy sources, facilitated through longer-term storage. In the $Res75$ and $Res80$ scenarios, electricity generation from wind turbines increases substantially although wind capacity barely increases or even decreases (compare Figure \[fig: 12 Panel Graph - Capacity\]). Renewable curtailment decreases most in scenario $Res80$-$Dem25$ with LOHC, where full-load hours of wind power increase by . LOHC has the largest capability to integrate renewable surpluses by means of storage and also requires the largest amount of electricity.
Power generation from conventional generators also increases and supplies the part of the additional electricity that is not covered by renewables according to the specified share. In the $Res65$-$Dem25$ and $Res70$-$Dem25$ scenarios, with largely inflexible, small-scale electrolysis, this is mainly natural gas-fired power generation. With increasing shares of renewables, there is a shift to hard coal and lignite. In $Res80$-$Dem25$, the share of lignite in non-renewable power generation is highest. Here, the temporal flexibility of large-scale hydrogen supply chains allows increasing the full-load hours of conventional generation with the highest fixed and lowest variable costs, i.e., lignite. Likewise, the use of electricity storage increases compared to the baseline in scenario $Res65$-$Dem25$, where inflexible small-scale on-site hydrogen supply prevails, but is substituted by large-scale hydrogen flexibility in scenario $Res80$-$Dem25$.
![\[fig: 12 Panel Graph - Energy\]Yearly electricity generation changes compared to the respective baselines without hydrogen for optimal combinations of small-scale and large-scale hydrogen supply chains as shown in Figure \[fig: 12 Panel Graph\].](graphs/3mal4_energy_plus_curt.pdf){width="80.00000%"}
CO₂ emission intensity of hydrogen may not decrease with higher renewable shares
--------------------------------------------------------------------------------
We calculate the CO$_2$ emission intensity of the hydrogen supplied in two complementary ways (see Section \[sub: Metrics\]). The Additional System Emission Intensity of Hydrogen (ASEIH), shown in Figure \[fig: ASEIH\], takes the full power sector effects of hydrogen provision into account. It is defined as the difference of overall CO$_2$ emissions between a scenario with hydrogen and the respective baseline without hydrogen, relative to the total hydrogen demand. The ASEIH mirrors the changes in yearly electricity generation induced by hydrogen supply and ranges between $6$ and $13$ kg CO$_2$ per kg H$_2$.
Among the $Res65$ scenarios, the emission intensity of hydrogen is higher for high hydrogen demand ($Dem25$) because the greater role of flexible large-scale hydrogen infrastructure triggers an increase in coal-fired generation. For a renewable share of , the emission intensity is lower because overall power sector emissions decrease and the additional hydrogen demand largely integrates renewables without requiring additional fossil generation. In contrast, for high renewable shares of or , the ASEIH increases again because the flexibility related to the large-scale hydrogen supply chains allows integrating more coal-fired power generation. This is most pronounced for combinations of small-scale on-site electrolysis and LH$_2$, as the large-scale supply chain has a greater relevance in overall H$_2$ supply compared to OS+GH$_2$ or OS+LOHC. Under this metric, thus, the emission intensity of electrolysis-based hydrogen does not necessarily decrease with increasing renewable shares, absent further CO$_2$ regulation.
The second metric, Average Provision Emission Intensity of Hydrogen (APEIH), shown in Figure \[fig: APEIH\], does not capture the differences to an alternative power sector without hydrogen, but is based on CO$_2$ emissions prevailing in the hours of actual hydrogen production. The APEIH ranges between $7$ and $12$ kg CO$_2$ per kg H$_2$. The APEIH is highest for the $Res65$ scenarios and generally decreases with increasing renewable shares. It is lowest in supply chains with GH$_2$, slightly higher with LH$_2$, and highest with LOHC. This largely reflects the differences in energy efficiency among these options.
For lower renewable shares, the APEIH tends to be higher than the ASEIH; for high renewable shares, the APEIH tends to be lower than the ASEIH. That is, a greater renewable penetration decreases the CO$_2$ emissions of the electricity mix used to produce hydrogen (APEIH), but additional emissions induced by H$_2$ do not necessarily decrease (ASEIH). This also indicates that analyses on the emission intensity of (green) hydrogen should generally be interpreted with care.
Power sector benefits of hydrogen\[sub: benefits\]
--------------------------------------------------
We illustrate the power sector benefits of hydrogen supply in two different ways. First, the Average Provision Costs of Hydrogen (APCH) indicate hydrogen costs from a producer perspective. Across all scenarios, the APCH are between around $5$ and (Figure \[fig: APCH\]). These costs are below the uniform retail price of hydrogen in Germany of around by 2020. In general, the APCH increase with hydrogen demand in all scenarios. With increasing shares of renewable energy, the APCH generally increase slightly, with the exception of scenarios $Res80$-$Dem5$ and $Res80$-$Dem10$. Here, supply chain combinations that include LH$_2$ or LOHC lead to lower costs because they can make better use of periods with very low electricity prices, which are frequent in this setting.
In contrast to APCH, the Additional System Costs of Hydrogen (ASCH) metric indicates the costs of hydrogen from a power system perspective. ASCH, which are also shown in Figure \[fig: 12 Panel Graph\], are smaller than APCH in all scenarios. This difference is substantially more pronounced for higher renewable shares (Figure \[fig: diff APCH and system costs\]). The ASCH also include the benefits of better renewable energy integration compared to a system without hydrogen. Yet, these benefits cannot be fully internalized by customers at filling stations, as the difference to the more production-oriented APCH metric indicates.
Second, we illustrate the power sector benefits of different hydrogen supply chains with their impacts on the System Costs of Electricity (SCE, Section \[sub: Metrics\]). Here, the total benefits of integrating the power and hydrogen sectors are attributed to the costs of generating electricity. For renewable shares of and , hydrogen hardly has an impact (Figure \[fig: sce\]). Yet, SCE decrease markedly for higher renewable shares, up to more than for a combination of small-scale on-site electrolysis and LOHC in the $Res80$-$Dem25$ scenario. The main driver for these benefits, again, is reduced renewable curtailment.
![\[fig: sce\]Effect of hydrogen on System Costs of Electricity (SCE)](graphs/sce.pdf "fig:"){width="100.00000%"}\
Sensitivity analyses: impacts of central parameter assumptions on supply chains\[sub: sensitivites-main-body\]
--------------------------------------------------------------------------------------------------------------
Additional model runs show the impact of alternative assumptions for central parameters (see \[sub: Sensitivities\]). GH$_2$ and LOHC tend to improve relative to LH$_2$ if the transportation distance decreases, and vice versa, in particular if the share of large-scale production is high. If mass hydrogen storage could be placed at filling stations, this would greatly benefit the small-scale on-site supply chain. GH$_2$ becomes the dominant option for most scenarios if low-cost cavern storage can be developed. LH$_2$ would improve further if boil-off during storage could be avoided. In turn, LOHC would become dominant in most scenarios if free waste heat could be used for dehydrogenation as well as if existing transportation and storage infrastructure could be used without additional costs.
Qualitative effects of model limitations\[sec: limitations\]
============================================================
We briefly discuss some limitations of the study and how they may qualitatively impact results. Several research design choices we made for clarity and tractability lead to a power sector that is relatively flexibility-constrained. On the demand side, we abstract from a range of potential flexibility sources, such as power-to-heat options, battery-electric vehicles or the use of hydrogen for other purposes than mobility, e.g., high-temperature processes in industry. We also abstract from geographical balancing in the European interconnection. Accordingly, we may overestimate renewable surpluses and, in turn, the benefits of flexible hydrogen supply chains that make use of them. We also do not constrain investments in renewable electricity generation in Germany. A cap on renewable capacity deployment, reflecting public acceptance and planning issues, may further increase the relative importance of energy efficiency compared to flexibility.
We further do not consider potential transmission or distribution grid constraints for clarity and generalizability. These can increase the local value of flexible hydrogen supply, particularly in areas with very good renewable energy resources. For example, temporally flexible large-scale hydrogen supply chains may be particularly beneficial in Germany’s Northern region, where the best wind power resources are located.
Likewise, we abstract from hydrogen distribution via pipelines. These could resolve the efficiency-flexibility trade-off, but are likely to be economical only for transporting large amounts of hydrogen between major hubs.
Discussion\[sec: Discussion-P2H2\]
==================================
Our co-optimization of the power and hydrogen sectors highlights that small-scale on-site electrolysis is most beneficial for lower shares of renewable energy sources and low hydrogen demand because energy efficiency matters more than temporal flexibility. In such a setting, the power sector benefits of hydrogen are accordingly small. For higher shares of renewables or higher hydrogen demand, large-scale hydrogen infrastructure options gain importance. LH$_2$ provides the best combination of efficiency, flexibility, and investment cost over the majority of scenarios. In particular, temporally flexible large-scale supply chains make use of renewable surplus generation, which allows reducing optimal renewable capacity deployment. Yet this flexibility not only facilitates renewable integration in the power sector, but can also increase the use of conventional generation with low marginal costs. The emission intensity of hydrogen does not necessarily decrease with higher renewable shares, absent further CO$_2$ regulation. Overall, the Additional System Costs of Hydrogen are relatively similar among optimal supply chain combinations in many scenarios. Real-world investment choices should thus take additional factors into account that the model analysis does not capture. This includes aspects of operational safety and public acceptance, which may favor LOHC, or constraints to renewable energy deployment, which may favor the more energy-efficient options.
Energy system analysts and planners should consider the flexibility and efficiency trade-off of green hydrogen in more detail when assessing its role in future energy transition scenarios. This requires a sufficiently detailed representation of hydrogen supply chains in respective energy modeling tools. To realize flexibility benefits in actual energy markets, policy makers should further redesign tariffs and taxes such that they do not overly distort wholesale price signals along all steps of the hydrogen supply chain [cf. @Guerra2019], while enabling a fair distribution of the benefits between hydrogen and electricity consumers. Future research may aim to address some limitations of this study (cf. \[sec: limitations\]), or explore the efficiency-flexibility trade-off for different hydrogen carriers that allow long-range bulk transport of green hydrogen from remote areas with very good wind or PV resources, such as Patagonia or Australia. Likewise, extending our analysis to also include the reconversion of hydrogen to electricity in scenarios with full renewable supply would be promising [@Staffell2019; @Welder2019].
Acknowledgments\[sec: Acknowledgments\]
=======================================
We thank Markus Reuß and Philipp Runge for fruitful discussions and helpful comments. We are also grateful that Markus Reuß shared a spreadsheet tool to easily calculate electricity demand for compression. We further thank the participants of the following seminars and workshops for valuable feedback: Climate & Energy College at the University of Melbourne, Renewable Energy workshop at the Australian National University, Strommarkttreffen Berlin, Power-to-X Day at Dechema Frankfurt, and the BB2 research seminar at ifo Munich. We further thank Amine Sehli, Seyed Saeed Hosseinioun, and Justin Werdin for research assistance. Wolf-Peter Schill carried out parts of this work during a research stay at the Energy Transition Hub at the University of Melbourne. We gratefully acknowledge research funding by the German Federal Ministry of Education and Research via the Kopernikus P2X project, research grant 03SFK2B1.
Declaration of interests\[sec: declaration\]
============================================
The authors declare no competing interests.
Electronic Supplementary Information\[sec: suppl-info\]
=======================================================
Cost and emissions metrics\[sub: Metrics\]
------------------------------------------
**System Costs of Electricity (SCE)** are the total power sector costs related to overall electricity generation. They include all investment, fixed, and variable power sector costs, but exclude the investment, fixed, and (non-electricity) variable costs of the hydrogen supply chains. Using the SCE, the benefits of integrating the power and hydrogen sectors are completely attributed to electricity generation. The SCE treat all electricity generation equally, irrespective of later consumption for conventional electricity demand, demand for hydrogen production and distribution, or losses in the transformation process.
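As a stylized formula in our own notation, the SCE can be written as
$$\mathrm{SCE}\;=\;\frac{C^{\mathrm{power\;sector}}}{\sum_{i,h} g_{i,h}}\,,$$
with $C^{\mathrm{power\;sector}}$ denoting all power sector investment, fixed, and variable costs and $g_{i,h}$ the hourly electricity generation of technology $i$.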
**Additional System Costs of Hydrogen (ASCH)** are defined as the difference in total system costs between a scenario that includes hydrogen and the respective baseline without hydrogen demand, related to total hydrogen supply. The ASCH factor in the total power sector benefits of hydrogen supply. ASCH are not directly observable for market participants, but relevant from an energy sector planning perspective.
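In the same stylized notation, this corresponds to
$$\mathrm{ASCH}\;=\;\frac{C^{\mathrm{tot}}_{\mathrm{H}_2}-C^{\mathrm{tot}}_{\mathrm{baseline}}}{\sum_{h} q^{\mathrm{H}_2}_{h}}\,,$$
where $C^{\mathrm{tot}}_{\mathrm{H}_2}$ and $C^{\mathrm{tot}}_{\mathrm{baseline}}$ are total system costs with and without hydrogen demand, and $q^{\mathrm{H}_2}_{h}$ is hourly hydrogen supply at the filling stations.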
**Average Provision Costs of Hydrogen (APCH)**, in contrast, sum the annualized costs of the hydrogen infrastructure and yearly electricity costs for hydrogen production, related to total hydrogen supply. Yearly electricity costs are the product of the hourly shadow prices of the model’s energy balance and the hourly electricity demand along the hydrogen supply chain, summed up over all hours of a year. The APCH reflect a producer perspective (excluding taxes and fees that are potentially relevant in real-world settings). For alternative levelized costs of hydrogen (LCOH) concepts, see [@Kuckshinrichs2018].
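A stylized expression, again in our own notation, is
$$\mathrm{APCH}\;=\;\frac{C^{\mathrm{H}_2}_{\mathrm{infra}}+\sum_{h}\lambda_{h}\,x_{h}}{\sum_{h} q^{\mathrm{H}_2}_{h}}\,,$$
with $C^{\mathrm{H}_2}_{\mathrm{infra}}$ the annualized costs of the hydrogen infrastructure, $\lambda_{h}$ the hourly shadow price of the electricity market balance, and $x_{h}$ the hourly electricity demand along the hydrogen supply chain.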
The **Additional System Emission Intensity of Hydrogen (ASEIH)** relates the overall difference of CO$_2$ emissions between a scenario with hydrogen and the respective baseline without hydrogen to the total hydrogen supply. Analogously to the ASCH, this metric takes the full power sector effects of hydrogen provision into account. Like ASCH, ASEIH are not directly observable in an actual market, but relevant from an energy sector planning perspective.
The alternative **Average Provision Emission Intensity of Hydrogen (APEIH)** metric is calculated by multiplying hourly average emission intensities of electricity generation with respective hourly electricity consumption for hydrogen supply at all steps of the supply chain (including compression, dehydrogenation etc.) and relating this to overall hydrogen provision. Analogously to the APCH, the APEIH assume a producer perspective.
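In stylized form (our notation), the two emission metrics read
$$\mathrm{ASEIH}\;=\;\frac{E_{\mathrm{H}_2}-E_{\mathrm{baseline}}}{\sum_{h} q^{\mathrm{H}_2}_{h}}\,,\qquad
\mathrm{APEIH}\;=\;\frac{\sum_{h} e_{h}\,x_{h}}{\sum_{h} q^{\mathrm{H}_2}_{h}}\,,$$
where $E_{\mathrm{H}_2}$ and $E_{\mathrm{baseline}}$ denote yearly power sector CO$_2$ emissions with and without hydrogen, $e_{h}$ the hourly average emission intensity of electricity generation, $x_{h}$ the hourly electricity demand along the hydrogen supply chain, and $q^{\mathrm{H}_2}_{h}$ hourly hydrogen supply.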
Sensitivities\[sub: Sensitivities\]
-----------------------------------
We carry out a range of sensitivity calculations to explore how key parameter assumptions affect results. We investigate the effects of varying transportation distances, of mass storage being available for small-scale on-site hydrogen supply, of low-cost cavern storage being available for GH$_{2}$, of LH$_{2}$ storage without boil-off, and of cost-free supply of heat as well as of transportation and storage infrastructure for LOHC.
### Transportation distance\[sub: Sensitivity - Transportation\]
Alternatively to the baseline assumption of a overall transportation distance for hydrogen produced in large-scale facilities, we examine the effects of $200$ and transportation distances. In general, a shorter/longer transportation distance increases/decreases the shares of large-scale hydrogen supply chains in the optimal solution, see Figures \[fig: 12 Panel Graph sensitivity dist-\] and \[fig: 12 Panel Graph sensitivity dist+\]. Moreover, with a shorter transportation distance, large-scale technologies are now part of the optimal technology portfolio in some scenarios, while for a longer transportation distance, large-scale supply chains drop out in some scenarios.
![\[fig: 12 Panel Graph sensitivity dist-\]Optimal combinations of small-scale on-site and large-scale hydrogen supply chains and Additional System Costs of Hydrogen (ASCH) for different scenarios - sensitivity with overall transportation distance.](graphs/12-panel-dist-.pdf){width="80.00000%"}
![\[fig: 12 Panel Graph sensitivity dist+\]Optimal combinations of small-scale on-site and large-scale hydrogen supply chains and Additional System Costs of Hydrogen (ASCH) for different scenarios - sensitivity with overall transportation distance.](graphs/12-panel-dist+.pdf){width="80.00000%"}
In general, a longer/shorter transportation distance increases/decreases the overall costs of the large-scale hydrogen supply chain. The spread in costs across supply chain combinations within scenarios tends to increase with transportation distance. Yet, the overall least-cost options are robust, with LH$_2$ as the dominant large-scale supply chain in the optimal solution. Cost outcomes are fairly robust with respect to the transportation distance because the share of transportation-related costs in the overall costs of hydrogen provision is relatively small.
In more detail, a change in the average transportation distance has two effects on the costs of hydrogen supply. First, variable transportation costs (fuel and driver wage) are proportional to the transportation distance. For the sensitivity calculations with and overall transportation distances, the variable costs increase/decrease by . While the relative effect is the same for all three large-scale supply chains, the effect on absolute cost is highest for GH$_2$ and also more pronounced for LOHC than for LH$_2$, see Figure \[fig: Sensitivities- Variable transportation costs\].
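As a rough illustration (our own back-of-the-envelope approximation, not a formula from the model documentation), the distance-dependent part of the variable transportation costs per kilogram of hydrogen delivered can be thought of as
$$c^{\mathrm{var}}_{\mathrm{trans}}\;\approx\;\frac{2\,\bar d\left(c^{\mathrm{fuel}}+c^{\mathrm{wage}}/\bar v\right)}{m^{\mathrm{net}}}\,,$$
with $\bar d$ the one-way distance, $\bar v$ the average speed, $c^{\mathrm{fuel}}$ fuel costs per kilometer, $c^{\mathrm{wage}}$ the driver wage per hour, and $m^{\mathrm{net}}$ the net hydrogen loading capacity per trailer; (un-)loading times add a distance-independent component. This makes both effects visible: the costs scale linearly with $\bar d$, and dividing by $m^{\mathrm{net}}$ indicates why the absolute impact is largest for GH$_2$ with its comparatively low payload.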
Second, longer/shorter distances imply that each trailer is occupied for a longer/shorter time period. Consequently, the fleet capacity needs to be increased or can be reduced, respectively. Figure \[fig: Sensitivities - Fixed transportation capacity investment costs\] shows transportation capacity investment costs per kg of hydrogen supplied through a specific supply chain averaged over all $Res$-$Dem$-scenarios. The pattern is identical to the one for variable costs, yet with less impact in absolute terms.
### Mass storage for small-scale on-site hydrogen supply\[sub: Mass Storage for Decentralized Production\]
![\[fig: 12 Panel Graph sensitivity flexdec\]Optimal combinations of small-scale on-site and large-scale hydrogen supply chains and Additional System Costs of Hydrogen (ASCH) for different scenarios - sensitivity with mass storage available for small-scale on-site production.](graphs/12-panel-flexdec.pdf){width="80.00000%"}
Under baseline assumptions, mass hydrogen storage is not available at filling stations for small-scale supply because of space requirements and security concerns. Alternatively, we assume that relatively cheap mass storage at can be deployed at filling stations, with the same techno-economic assumptions as for large-scale GH$_2$ storage. Table \[tab: sensitivity: mass storage for dec\] gives an overview of the necessary changes with respect to compression processes and storage infrastructure.
Consequently, small-scale on-site production of hydrogen becomes more temporally flexible and loses its major disadvantage compared to large-scale production. Given that on-site hydrogen supply at filling stations is more energy-efficient, its share substantially increases for most supply-chain combinations and $Res$-$Dem$-scenarios (Figure \[fig: 12 Panel Graph sensitivity flexdec\]), except for those with the highest renewable surpluses, i.e., $Res80$-$Dem5$ and $Res80$-$Dem10$, where all demand is still supplied by large-scale technologies. Here, large-scale production of LH$_{2}$ and LOHC still profits from a larger optimal storage size and the according flexibility. GH$_{2}$ produced in large-scale infrastructures drops out completely. As expected, with the additional flexibility option, the ASCH decrease slightly and the spread in costs between different supply chain combinations within each scenario tends to decrease. Finally, the pattern of least-cost options across scenarios is robust, except for scenarios $Res75$-$Dem25$ and $Res80$-$Dem25$, where the cost-optimal technology portfolio now contains LOHC rather than LH$_{2}$.
### Cavern storage for GH₂\[sub: Sensitivity - Cavern Storage for GH2\]
![\[fig: 12 Panel Graph sensitivity cavern\]Optimal combinations of small-scale on-site and large-scale hydrogen supply chains and Additional System Costs of Hydrogen (ASCH) for different scenarios - sensitivity with cavern storage available for large-scale GH$_{2}$ production.](graphs/12-panel-cavern.pdf){width="80.00000%"}
Low-cost cavern storage would provide flexibility for large-scale GH$_2$ production at very low costs of , which is about one third of the costs of LOHC or LH$_2$ storage. Tables \[tab: production site auxiliaries\] and \[tab: transporation auxiliaries (before)\] list the altered requirements for compression processes.
If cavern storage is available, the share of large-scale GH$_2$ production increases substantially for all scenarios, see Figure \[fig: 12 Panel Graph sensitivity cavern\]. In contrast to the results under default assumptions, the ASCH of the supply chain (DEC+)GH$_2$ are now lower than for the other options in most scenarios, especially if the share of renewable energy sources is high or H$_2$ demand is low. Moreover, Figure \[fig: cav-sto\] illustrates that the use of cavern storage exhibits a seasonal pattern, as prevalent for LOHC in the baseline specification, yet with higher storage capacity due to low investment costs. Accordingly, the (non-)availability of cavern storage is a relevant driver of numerical model results.
![\[fig: cav-sto\]Temporal storage use patterns also including cavern storage for scenario $Res80$-$Dem25$](graphs/cavern-storage){width="6.75cm"}
### No boil-off for LH₂\[sub: Sensitivity - No boil-off for LH2\]
![\[fig: 12 Panel Graph sensitivity - no boil off\]Optimal combinations of small-scale on-site and large-scale hydrogen supply chains and Additional System Costs of Hydrogen (ASCH) for different scenarios - sensitivity with no boil-off for LH$_{2}$ storage.](graphs/12-panel-no-boil-off.pdf){width="80.00000%"}
We assess the effects of LH$_2$ boil-off during storage and transportation by counter-factually setting it to zero. Figure \[fig: 12 Panel Graph sensitivity - no boil off\] shows the results. The optimal shares of LH$_2$ compared to on-site hydrogen production at filling stations slightly increase in some cases, but effects are small. The average increase is $3.2$ percentage points, and the largest increase is $10.2$ percentage points in scenario $Res70$-$Dem5$. Likewise, the effect on H$_{2}$ costs is small, with an average cost reduction of and a maximum decrease of in scenario $Res80$-$Dem10$. The pattern of least-cost options is robust, with the combination containing LH$_{2}$ now additionally optimal for $Res75$-$Dem10$.
While the effect on costs and optimal technology shares is limited, LH$_2$ without boil-off is better suited as long-term or seasonal storage. Its use pattern changes substantially and resembles that of LOHC under default assumptions. Figure \[fig: Sens - no boil-off\] exemplarily illustrates this point for scenario $Res80$-$Dem25$.
Additionally, we find that LH$_2$ storage at the filling station becomes relatively more important if there is no boil-off. Under default assumptions, boil-off at the filling station was slightly higher than at the production site. Without boil-off, the two storage options are identical in terms of losses over time. Thus, the division of storage between the production and filling sites allows for a more efficient use of transportation capacities. This results in a decrease of transportation infrastructure costs of per kg of hydrogen in the scenario $Res80$-$Dem25$.
### Free heat supply for LOHC dehydrogenation\[sub: Sensitivity - Free Heat Supply for Dehydrogenation\]
![\[fig: 12 Panel Graph sensitivity free heat\]Optimal combinations of small-scale on-site and large-scale hydrogen supply chains and Additional System Costs of Hydrogen (ASCH) for different scenarios - sensitivity with free heat supply for dehydrogenation.](graphs/12-panel-free-heat.pdf){width="80.00000%"}
LOHC has a relatively high and temporally inflexible electricity demand for dehydrogenation, which may hold back its extended use. We carry out a sensitivity calculation where the required heat is available free of costs, for instance, because industrial waste heat is available. Figure \[fig: 12 Panel Graph sensitivity free heat\] shows the results. Compared to default assumptions, the share of LOHC increases in most scenarios. Also, the ASCH for combinations of small-scale on-site electrolysis at filling stations and LOHC decrease. With free heat supply, the LOHC supply chain is the least-cost solution for all scenarios with renewable shares of or .
### Free transportation and production site storage infrastructure for LOHC\[sub: Sensitivity - free infrastructure\]
![\[fig: 12 Panel Graph sensitivity free infra\]Optimal combinations of small-scale on-site and large-scale hydrogen supply chains and Additional System Costs of Hydrogen (ASCH) for different scenarios - sensitivity with free infrastructure for LOHC storage and transportation.](graphs/12-panel-free-infra.pdf){width="80.00000%"}
Proponents of LOHC argue that existing infrastructure may be used for the LOHC supply chain, especially storage at the production site and filling stations as well as transportation facilities [@Preuster2017]. To address this point in a sensitivity calculation, we assume that storage and transportation capacities do not incur additional costs. Note that the expected lifetime of trailers is $12$ years. The cost advantage of free transportation capacities would at most last for this time period. The results in Figure \[fig: 12 Panel Graph sensitivity free infra\] show that the optimal share of LOHC increases only moderately in many scenarios. In contrast, the ASCH decrease substantially for all supply chains containing LOHC. As for the sensitivity calculation with free heat supply for dehydrogenation, the supply chain involving LOHC is the least-cost option in the scenarios with high renewable penetration also in this case ( or ).
Key power sector data\[sub: power sector Data\]
-----------------------------------------------
We apply our model to 2030 scenarios for Germany. To embed the analysis in a plausible mid-term future setting, electricity generation and storage capacities lean on the medium scenario B of the Grid Development Plan 2019 (*Netzentwicklungsplan*, NEP [@NEP.2018]), an official projection of the German electricity market that transmission system operators base their investments on.
NEP capacities for wind power, both onshore and offshore, solar PV, and battery storage serve as lower bounds for investments. NEP capacities for fossil plants, biomass plants, and run-of-river hydro power serve as upper bounds, where natural gas capacities are split evenly between combined- and open-cycle gas turbines. Coal capacities are largely in line with current German coal phase-out plans that target at most $9$ and lignite and hard coal by 2030, respectively. Investments for pumped storage are bounded from below by today’s value and from above by the NEP value. Figure \[fig: apen\] summarizes the capacity bounds for the power sector.
![\[fig: apen\] Lower and upper bounds for capacity investments in the power sector](graphs/APEN){width="100.00000%"}
Cost and technical parameters for power plants [@Schroeder2013] and storage [@Pape2014; @Schmidt2017b] are based on established medium-term projections. Fuel costs and the CO$_{2}$ price of $29.4$€/t follow the middle NEP scenario B 2030. The hourly electricity load is representative for an average year and is taken from the Ten-Year Network Development Plan 2030 of the European Network of Transmission System Operators for Electricity [@tyndp.2018]. Annual load sums up to around $550$ Terawatt hours (TWh). Time series of hourly capacity factors for wind and PV are based on re-analysis data of the average weather year 2012 [@Pfenninger2016; @Staffell2016].
All input data is available in a spreadsheet provided together with the open-source model [@Stoeckl2020].
Key hydrogen sector data\[sub: hydrogen sector data\]
-----------------------------------------------------
In the following, we present key assumptions on hydrogen sector parameters and fuel demand that are central drivers of the results. Full account of all input data is given in \[sub: Data-suppl-info\].
### H₂ infrastructure
PEM electrolysis is six percentage points more efficient than the ALK technology ( versus ), but has about one-third higher specific investment costs ( versus ). Moreover, based on industry data [@Langas2015a], we assume that investment costs of large-scale electrolysis are lower than those of small-scale on-site production at filling stations.
Cost differences also exist for hydrogen transportation. Trailers for GH$_2$ require high pressure tubes (), for LH$_2$ an insulated tank (), and for LOHC only a simple standard tank (). Differences in variable costs are determined by the net loading capacity per trailer, where GH$_2$ is most expensive with , compared to and for LOHC and LH$_{2}$, respectively. Fuel consumption (Diesel), wages for drivers, and (un-)loading times are assumed to be identical across all supply chains.
Investment costs for hydrogen storage are the central parameter that determines whether flexibility of a supply chain is economical. The costs of GH$_{2}$ storage at () are substantially higher than for LH$_{2}$ () and LOHC (). LOHC has a degradation rate of per supply-cycle, entailing additional costs of . We interpret these costs as an LOHC rental rate. High-pressure gaseous (buffer) storage at the filling station is more expensive () and requires a high minimum filling level in order to ensure pressure above for dispensing. This reduces the effective available storage capacity further.
The techno-economic characteristics of the four hydrogen supply chains entail an efficiency-flexibility trade-off with respect to their electricity demand. Small-scale on-site production is relatively energy-efficient but needs to produce almost on demand due to a lack of cheap storage options. The three large-scale supply chains are less efficient, but (partly) provide cheap storage options that allow shifting energy-intensive electrolysis to hours with high (renewable) electricity supply. Electricity demand for the remaining, inflexible processes to prepare stored hydrogen for dispensing at the filling station (recompression, cryo-compression and evaporation, or dehydrogenation) is comparatively low. Figure \[fig: trade-off\] contrasts overall electricity demand with largely inflexible (i.e., non-shiftable) electricity demand at the filling station for different hydrogen supply chains across all scenarios. Within-channel deviations (min & max) are due to the choice of electrolysis technology and losses during storage.
![\[fig: trade-off\]The (realized) efficiency-flexibility trade-off for different hydrogen supply chains across all scenarios.](graphs/trade-off){width="6.75cm"}
### H₂ demand
H$_2$ demand for private and public road-based passenger transportation in Germany leans on a forecast for the year 2030 [@Schubert2014a]. To convert gasoline and diesel consumption to H$_2$ demand [@Hass2014a], shares of fuel consumption for 2030 are assumed to be identical to those in 2017 [@Radke2017a]. Table \[tab:Traffic Data Projection\] shows the resulting demands for the scenarios where , , or of private and public road-based passenger traffic in Germany in 2030 is fueled by hydrogen.
The hourly H$_2$ demand profile at the filling stations is assumed to be identical to today’s for gasoline and diesel fuel. As data for Germany is not available, we resort to U.S. data for hourly and weekly [@Nexant2008a] as well as for monthly [@US-EIA-2018a] demand characteristics. Moreover, each filling station dispenses at most hydrogen per day [@H2Mobility2010a]. This results in $976$, $1952$, and $4880$ filling stations for the $5$, $10$, and demand scenarios, respectively.
  Share of passenger traffic   H$_2$ demand (GWh/a)   H$_2$ demand (t/a)
  ---------------------------- ---------------------- --------------------
  $\unit[5]{\%}$               9,053                  271,610
  $\unit[10]{\%}$              18,160                 543,220
  $\unit[25]{\%}$              45,265                 1,358,050
  ---------------------------- ---------------------- --------------------

  : \[tab:Traffic Data Projection\]Traffic Data (2030 projection)
Finally, depending on the average loading capacity and time a car spends at the filling station, a small amount needs to be added to the average costs of hydrogen to cover dispenser costs (around for per car with an average filling time of and a filling station capacity of , compare [@Runge2019a]). These costs are identical across all supply chain combinations and, thus, have no effect on their ranking.
### Data tables\[sub: Data-suppl-info\]
In the following, we list all data and sources for the techno-economic parameters concerning the H$_2$ infrastructure. As parameter projections for 2030 are scarce, except for electrolysis, we resort to values for currently existing or planned sites. All cost parameters are stated in euros (). For conversion from U.S. dollar (), we assume an exchange rate of one. As the literature on cost parameters often does not provide information on the reference year, we refrain from correcting for inflation. Unless stated otherwise, is always short for . To calculate the electricity demand for compression and to scale investment costs, we follow [@Reuss2017a]. Pursuing a conservative approach, we always calculate the energy demand for hydrogen compression for the least favorable initial pressure conditions. All data are in terms of the lower heating value (LHV). The costs of water for electrolysis are not taken into account in this analysis as they are negligible in Germany. Finally, OPEX are always stated as a percentage of CAPEX.\
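The "CAPEX-scaled" entries in the tables below can be reproduced from "CAPEX-base", the scale exponent, and the reference capacity via the usual power-law scaling of investment costs. The sketch below is our reading of that convention and is meant as an illustration only; the helper function and the conversion of compressor throughput to an electrical rating via the specific electricity demand are assumptions, not part of the original model code.

```python
def scaled_capex_per_kg_h(capex_base, scale, ref_capacity_kg_h, elec_demand_kwh_per_kg=None):
    """Per-unit investment cost (EUR per kg/h of H2 throughput) implied by the
    scaling law CAPEX_total = capex_base * size**scale.  'size' is the H2
    throughput in kg/h when CAPEX-base refers to 1 kg, or the electrical rating
    in kW_el (throughput times specific electricity demand) when it refers to 1 kW_el."""
    if elec_demand_kwh_per_kg is None:
        size = ref_capacity_kg_h                           # CAPEX-comparison "1 kg"
    else:
        size = ref_capacity_kg_h * elec_demand_kwh_per_kg  # CAPEX-comparison "1 kW_el"
    return capex_base * size ** scale / ref_capacity_kg_h

# Approximately reproduces two entries of the production-site table below:
print(scaled_capex_per_kg_h(643_700, 2 / 3, 1030))        # liquefaction, ~63,700 EUR/(kg/h)
print(scaled_capex_per_kg_h(40_528, 0.4603, 206, 1.707))  # 2-stage compression, ~2,900 EUR/(kg/h)
```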
[Value]{}
-------------------------------------------------------------------------- -----------------------------------------
[Average transportation distance (one-way) [@Runge2019a; @Reuss2017a]]{} [250km]{}
[Average transportation speed [@Reuss2017a]]{} [$\unitfrac[50]{km}{h}$]{}
[Interest rate]{} [4%]{}
[Loading (LOHC) [@Eypasch2017a]]{} [6.2weight-%]{}
[LOHC costs^a^ [@Teichmann2012a]]{} [$\unitfrac[4]{\text{€}}{kg_{LOHC}}$]{}
: \[tab: general assumptions\]General assumptions
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  [a: LOHC has a degradation rate of $2\times\unit[0.1]{\%}$ (hydrogenation & dehydrogenation) [@Teichmann2012a] per supply-cycle, entailing additional costs of . We interpret these costs as an LOHC rental rate.]{}
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
: \[tab: general assumptions\]General assumptions
[ ]{}
[ALK]{} [PEM]{}
---------------------------------------------------------------------------------------- --------- ---------
[CAPEX $\left(\unitfrac{\text{€}}{kW_{el}}\right)$^a^ [@Schmidt2017a; @Langas2015a]]{} [550]{} [724]{}
[OPEX (%) [@Bertuccioli2014a]]{} [1.5]{} [1.5]{}
[Depreciation period (a)^a,d^ [@Schmidt2017a; @Bertuccioli2014a]]{} [10]{} [10]{}
[Efficiency (%)^c^ [@Bertuccioli2014a]]{} [66]{} [71]{}
[Pressure out (bar) [@Schmidt2017a; @Carmo2013a; @Bertuccioli2014a]]{} [30]{} [30]{}
[Scale advantage (%)^b^ [@Langas2015a]]{} [20]{} [20]{}
: \[tab: electrolysis cost assumptions\]Assumptions for different electrolysis technologies for 2030
[p[7cm]{}]{} [a: Based on a $\unit[10]{MW_{el}}$ electrolysis system with $2$ times the current R&D investment and production scale-up.]{}
[b: Cost advantage when scaling up from $\unit[2.2]{MW_{el}}$ to $\unit[10]{MW_{el}}$. The output of a $\unit[2.2]{MW_{el}}$ and $\unit[10]{MW_{el}}$ electrolyzer with an efficiency of $68.5$% (the center of our assumptions for ALK and PEM) is equal to and , respectively.]{}
[c: At the system level, including power supply, system control, gas drying (purity at least $99.4$%). Excluding external compression, external purification, and hydrogen storage.]{}
[d: $60,000$h of operation at a utilization rate of $70$%.]{}[\
]{}
------------------------------------------------------------ ------------------- ------------------------- ----------------------------- ------------------------- ---------------------------------------------------------------------------------
[GH~[2]{}~ (S)]{} [GH~[2]{}~ (L)]{} [GH~[2]{}~^[cav.]{}^ (L)]{} [LH~[2]{}~ (L)]{} [LOHC (L)]{}
[[@Elgowainy2015a]]{} [[@Elgowainy2015a]]{} [[@Stolzenburg2013a]]{} [[@McClaine2015a; @Teichmann2012a; @Reuss2017a; @Eypasch2017a; @Muller2015a]]{}
[Activity]{} [-]{} [compression]{} [compression]{} [liquefaction]{} [hydrogenation]{}
[CAPEX-base (€)]{} [-]{} [40,528]{} [40,528]{} [643,700]{} [74,657 [@Eypasch2017a]]{}
[CAPEX-comparison]{} [-]{} [$\unit[1]{kW_{el}}$]{} [$\unit[1]{kW_{el}}$]{} [1kg]{} [1kg]{}
[Scale]{} [-]{} [0.4603]{} [0.4603]{} [2/3]{} [2/3]{}
[Ref.-Capacity $\left(\unitfrac{kg}{h}\right)$]{} [-]{} [206]{} [206]{} [1030]{} [1030]{}
[CAPEX-scaled $\left(\unitfrac{\text{€}}{kg}\right)$^a^]{} [-]{} [2,923]{} [2,672]{} [63,739]{} [7,392 [@Eypasch2017a]]{}
[OPEX (%)]{} [-]{} [4]{} [4]{} [4]{} [4]{}
[Depreciation period (a)]{} [-]{} [15]{} [15]{} [30]{} [20]{}
[Pressure in (bar)]{} [-]{} [30]{} [30]{} [30 (20 nec.)]{} [30]{}
[Pressure out (bar)]{} [-]{} [250]{} [180]{} [2]{} [-]{}
[Compression stages]{} [-]{} [2]{} [2]{} [-]{} [-]{}
[Elec. Demand $\left(\unitfrac{kWh}{kg}\right)$]{} [-]{} [1.707]{} [1.402]{} [6.78]{} [0.37]{}
[Heat Demand $\left(\unitfrac{kWh}{kg}\right)$]{} [-]{} [-]{} [-]{} [-]{} [-8.9]{}
[Losses (%)]{} [-]{} [0.5]{} [0.5]{} [1.625]{} [3]{}
------------------------------------------------------------ ------------------- ------------------------- ----------------------------- ------------------------- ---------------------------------------------------------------------------------
: \[tab: production site auxiliaries\]Assumptions for different storage preparation processes (production site)
[p[13.9cm]{}]{} *Abbreviations:*[ cav.: cavern; (S): small-scale on-site supply chain; (L): large-scale supply chain]{}
[a: For $\unit[10]{MW_{el}}$ $\left(\unitfrac[206]{kg}{h}\right)$ electrolysis capacity, the maximum daily throughput is almost $5$t of hydrogen. For non-stacked processes such as liquefaction and hydrogenation, we assume a throughput of which would be equal to the hydrogen production of a $\unit[50]{MW_{el}}$ electrolyzer.]{}[\
]{}
--------------------------------------------------------------- ------------------- ------------------- ----------------------------- --------------------- -------------------
[GH~[2]{}~ (S)]{} [GH~[2]{}~ (L)]{} [GH~[2]{}~^[cav.]{}^ (L)]{} [LH~[2]{}~ (L)]{} [LOHC (L)]{}
[[@Parks2014a]]{} [[@Kruck2013a]]{} [[@US-DOE-2015a]]{} [[@Reuss2017a]]{}
[CAPEX-base (€)]{} [-]{} [450]{} [3.5]{} [13.31]{} [10]{}
[CAPEX-comparison]{} [-]{} [1kg]{} [1kg]{} [1kg]{} [1kg]{}
[Scale]{} [-]{} [1]{} [1]{} [1]{} [1]{}
[CAPEX-scaled $\left(\unitfrac{\text{€}}{kg}\right)$]{} [-]{} [450]{} [3.5]{} [13.31]{} [10]{}
[OPEX (%) [@Reuss2017a]]{} [-]{} [2]{} [2.5 [@Stolzenburg2014a]]{} [2]{} [2]{}
[Depreciation period (a) [@Parks2014a]]{} [-]{} [20]{} [30 [@Stolzenburg2014a]]{} [20]{} [20]{}
[Pressure range (bar)]{} [-]{} [15 - 250]{} [60 - 180]{} [-]{} [-]{}
[Min. filling level (%)^a^]{} [-]{} [6]{} [33.3]{} [5]{} [-]{}
[Boil-off $\left(\unitfrac{\%}{d}\right)$ [@Bouwkamp2017a]]{} [-]{} [-]{} [-]{} [0.2]{} [-]{}
[Storage bypass possibility]{} [-]{} [yes]{} [yes]{} [-]{} [-]{}
--------------------------------------------------------------- ------------------- ------------------- ----------------------------- --------------------- -------------------
: \[tab: production site storage\]Assumptions for different storage types (production site)
[p[12.7cm]{}]{} *Abbreviations:*[ cav.: cavern; (S): small-scale on-site supply chain; (L): large-scale supply chain]{}
[a: Calculated according to Boyle’s law in order to maintain the minimum pressure required. For the cavern, minimum pressure is calculated dependent on the required amount of cushion gas.]{}[\
]{}
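The minimum filling levels marked with footnote a follow from the pressure window alone: at (approximately) constant temperature the stored gas mass is proportional to pressure, so the share of capacity that must remain in the store equals the ratio of minimum to maximum pressure. A minimal sketch (function name and rounding are ours; the LH$_2$ minimum level is of a different, non-pressure-related nature):

```python
def min_filling_level(p_min_bar, p_max_bar):
    """Share of nominal capacity that must stay in a gaseous store so that the
    pressure does not drop below p_min (isothermal ideal gas, i.e. Boyle's law)."""
    return p_min_bar / p_max_bar

print(round(100 * min_filling_level(15, 250), 1))   # GH2 vessel, 15-250 bar  ->  6.0 %
print(round(100 * min_filling_level(60, 180), 1))   # cavern, 60-180 bar      -> 33.3 %
print(round(100 * min_filling_level(700, 950), 1))  # 700-bar buffer storage  -> ~74 %
```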
[GH~[2]{}~ (S) ]{} [GH~[2]{}~ (L) [@Elgowainy2015a]]{} [GH~[2]{}~^[cav.]{}^ (L) [@Elgowainy2015a]]{} [LH~[2]{}~ (L)]{} [LOHC (L)]{}
------------------------------------------------------------ -------------------- ------------------------------------- ----------------------------------------------- ------------------- --------------
  [Activity]{}                                                 [-]{}               [compression]{}     [compression]{}               [-]{}                 [-]{}
[CAPEX-base (€)]{} [-]{} [6000]{} [6000]{} [-]{} [-]{}
[CAPEX-comparison]{} [-]{} [$\unit[1]{kW_{el}}$]{} [$\unit[1]{kW_{el}}$]{} [-]{} [-]{}
[Scale]{} [-]{} [1]{} [1]{} [-]{} [-]{}
  [Ref.-Capacity $\left(\unitfrac{kg}{h}\right)$]{}            [-]{}               [720]{}             [720]{}                       [-]{}                 [-]{}
[CAPEX-scaled $\left(\unitfrac{\text{€}}{kg}\right)$^a^]{} [-]{} [13,784]{} [6,530]{} [-]{} [-]{}
[OPEX (%)]{} [-]{} [4]{} [4]{} [-]{} [-]{}
[Depreciation period (a)]{} [-]{} [15]{} [15]{} [-]{} [-]{}
[Min. Pressure in (bar)]{} [-]{} [15]{} [60]{} [-]{} [-]{}
[Pressure out (bar)]{} [-]{} [250]{} [250]{} [-]{} [-]{}
[Compression stages]{} [-]{} [2]{} [2]{} [-]{} [-]{}
[Elec. demand $\left(\unitfrac{kWh}{kg}\right)$]{} [-]{} [2.297]{} [1.088]{} [-]{} [-]{}
[Losses (%)]{} [-]{} [0.5]{} [0.5]{} [-]{} [-]{}
: \[tab: transporation auxiliaries (before)\]Assumptions for different transportation preparation processes
[p[13.3cm]{}]{} *Abbreviations:*[ cav.: cavern; (S): small-scale on-site supply chain; (L): large-scale supply chain]{}
[a: is equal to the trailer capacity. Thus, every compressor is required to have the capacity to load one trailer per hour.]{}[\
]{}
[All [@Teichmann2012a]]{} [GH~[2]{}~ (L) [@US-DOE-2015a]]{} [LH~[2]{}~ (L) [@US-DOE-2015a]]{} [LOHC (L) [@Reuss2017a]]{}
------------------------------------------------------------- --------------------------- ----------------------------------- ----------------------------------- ----------------------------
[Function]{} [tractor]{} [trailer]{} [trailer]{} [trailer]{}
[CAPEX (€)^a,b^]{} [223,031]{} [518,400]{} [865,260]{} [150,000]{}
[Capacity (kg)]{} [-]{} [720]{} [4,554]{} [1,800]{}
[Net capacity (kg)^c^]{} [-]{} [676.8]{} [4,326]{} [1,620]{}
[CAPEX-net $\left(\unitfrac{\text{€}}{kg}\right)$]{} [-]{} [763.93]{} [190]{} [92.59]{}
[OPEX (%)]{} [12]{} [2]{} [2]{} [2]{}
[Depreciation period (a) [@Teichmann2012a]]{} [12]{} [12]{} [12]{} [12]{}
[Losses $\left(\unitfrac{\%}{d}\right)$ [@Bouwkamp2017a]]{} [-]{} [-]{} [0.6]{} [-]{}
[(Un-)/Loading time (h)]{} [-]{} [1 / 1]{} [1 / 1]{} [1 / 1]{}
: \[tab: transporation processes\]Assumptions for different transportation processes
[p[12cm]{}]{} *Abbreviations:*[ (L): large-scale supply chain]{}
[a: CAPEX adjusted for a lifetime of $12$ years with an interest rate of $4$%.]{}
[b: The average fuel consumption of a tractor is assumed to be [@Teichmann2012a]. Moreover, we assume a price of for diesel and an hourly wage of drivers of . Fuel is not covered by the CO$_2$ tax.]{}
[c: For GH~[2]{}~, net-capacity is determined by the required outlet pressure. $5$% of LH~[2]{}~ remain in the trailer to avoid heating up of the trailer-tank. For LOHC, a maximum discharge-depth of $90$% is assumed [@Eypasch2017a]. Thus, transportation capacity of actually usable hydrogen is below the total amount of bound hydrogen. For all other processes, issues linked to a discharge-depth below $100$% are ignored either because the effect on costs is negligible (storage, degradation) or because we assume a heat-recovery system being installed (dehydrogenation).]{}[\
]{}
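Tractor and trailer investments enter the cost comparison as annualized payments. The sketch below shows the standard annuity (capital-recovery) conversion with the $4\,\%$ interest rate and the $12$-year depreciation period from the tables; whether the model applies exactly this factor is an assumption on our part.

```python
def annuity_factor(interest_rate, years):
    """Capital-recovery factor that converts an investment into equal annual payments."""
    q = (1.0 + interest_rate) ** years
    return interest_rate * q / (q - 1.0)

def annual_capex(capex_eur, interest_rate=0.04, years=12):
    return capex_eur * annuity_factor(interest_rate, years)

# Illustration with the trailer CAPEX from the table above (12 a, 4 %):
print(round(annual_capex(518_400)))  # GH2 tube trailer  -> ~55,200 EUR/a
print(round(annual_capex(150_000)))  # LOHC trailer      -> ~16,000 EUR/a
```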
[GH~[2]{}~ (S)]{} [GH~[2]{}~(L) [@Elgowainy2015a]]{} [LH~[2]{}~ (L)]{} [LOHC (L)]{}
--------------------------------------------------------- ------------------- ------------------------------------ ------------------- --------------
  [Activity]{}                                              [-]{}               [compression]{}                      [-]{}               [-]{}
[CAPEX-base (€)]{} [-]{} [40,035]{} [-]{} [-]{}
[CAPEX-comparison]{} [-]{} [$\unit[1]{kW_{el}}$]{} [-]{} [-]{}
[Scale]{} [-]{} [0.6038]{} [-]{} [-]{}
  [Ref.-Capacity $\left(\unitfrac{kg}{h}\right)$]{}         [-]{}               [676.8]{}                            [-]{}               [-]{}
[CAPEX-scaled $\left(\unitfrac{\text{€}}{kg}\right)$]{} [-]{} [4,744]{} [-]{} [-]{}
[OPEX (%)]{} [-]{} [4]{} [-]{} [-]{}
[Depreciation period (a)]{} [-]{} [15]{} [-]{} [-]{}
[Pressure in (bar)]{} [-]{} [15]{} [-]{} [-]{}
[Pressure out (bar)]{} [-]{} [250]{} [-]{} [-]{}
[Compression stages[@Reuss2017a]]{} [-]{} [4]{} [-]{} [-]{}
[Elec. demand $\left(\unitfrac{kWh}{kg}\right)$]{} [-]{} [2.105]{} [-]{} [-]{}
[Constraint $\left(\unitfrac{trailers}{h}\right)$^a^]{} [-]{} [1]{} [1]{} [1]{}
[Losses (%)]{} [-]{} [0.5]{} [2.5]{} [-]{}
: \[tab:first fillling site auxiliaries\]Assumptions for different filling storage preparation processes (1^st^ stage)
[p[10.4cm]{}]{} *Abbreviations:*[ (S): small-scale on-site supply chain; (L): large-scale supply chain]{}
[a: Own assumption to avoid congestion at the filling station.]{}[\
]{}
[GH~[2]{}~ (S)]{} [GH~[2]{}~ (L) [@Parks2014a]]{} [LH~[2]{}~ (C) [@US-DOE-2015a]]{} [LOHC (L) [@Reuss2017a]]{}
--------------------------------------------------------------- ------------------- --------------------------------- ----------------------------------- ----------------------------
[CAPEX-base (€)]{} [-]{} [450]{} [13.31]{} [10]{}
[CAPEX-comparison]{} [-]{} [1kg]{} [1kg]{} [1kg]{}
[Scale]{} [-]{} [1]{} [1]{} [1]{}
[CAPEX-scaled $\left(\unitfrac{\text{€}}{kg}\right)$]{} [-]{} [450]{} [13.31]{} [10]{}
[OPEX (%) [@Reuss2017a]]{} [-]{} [2]{} [2]{} [2]{}
[Depreciation period (a) [@Parks2014a]]{} [-]{} [20]{} [20]{} [20]{}
[Pressure range (bar)]{} [-]{} [15 - 250]{} [-]{} [-]{}
[Min. filling level (%)^a^]{} [-]{} [6]{} [5]{} [-]{}
[Boil-off $\left(\unitfrac{\%}{d}\right)$ [@Bouwkamp2017a]]{} [-]{} [-]{} [0.4]{} [-]{}
[Storage bypass possibility]{} [-]{} [yes]{} [-]{} [-]{}
: \[tab: first filling site storage\]Assumptions for different storage technologies (1^st^ stage)
[p[12.2cm]{}]{} *Abbreviations:*[ (S): small-scale on-site supply chain; (L): large-scale supply chain]{}
[a: Calculated according to Boyle’s law in order to maintain the minimum pressure required.]{}[\
]{}
   & [GH~[2]{}~ (S)]{} & [GH~[2]{}~ (L)]{} & [LH~[2]{}~ (L)]{} & [LH~[2]{}~ (L)]{} & [LOHC (L)]{} & [LOHC (L)]{}
   & [[@Elgowainy2015a]]{} & [[@Elgowainy2015a]]{} & [[@Nexant2008a; @Elgowainy2015a]]{} & [[@Nexant2008a; @Elgowainy2015a]]{} & [[@McClaine2015a; @Teichmann2012a; @Reuss2017a; @Eypasch2017a; @Muller2015a]]{} & [[@Elgowainy2015a]]{}
  [Activity]{} & [compression]{} & [compression]{} & [cryo-compr.-pump]{} & [evaporation]{} & [dehydrogenation]{} & [compression]{}
  [CAPEX-base (€)]{} & [40,035]{} & [40,035]{} & [567.1$\unitfrac{\text{€}}{kg}$ + 11,565€]{} & [900.9$\unitfrac{\text{€}}{kg}$ + 2,389€]{} & [55,707]{} & [40,035]{}
  [CAPEX-comparison]{} & [$\unit[1]{kW_{el}}$]{} & [$\unit[1]{kW_{el}}$]{} & [1kg]{} & [1kg]{} & [1kg]{} & [$\unit[1]{kW_{el}}$]{}
  [Scale]{} & [0.6038]{} & [0.6038]{} & [1]{} & [1]{} & [2/3]{} & [0.6038]{}
  [Ref.-Capacity $\left(\unitfrac{kg}{h}\right)$]{} & [45]{} & [45]{} & [45]{} & [45]{} & [45]{} & [45]{}
  [CAPEX-scaled $\left(\unitfrac{\text{€}}{kg}\right)$]{} & [17,014]{} & [19,070]{} & [824.1]{} & [954]{} & [15,662]{} & [22,220]{}
  [OPEX (%)]{} & [4]{} & [4]{} & [4]{} & [1]{} & [4]{} & [4]{}
  [Depreciation period (a)]{} & [10]{} & [10]{} & [10]{} & [10]{} & [20]{} & [10]{}
  [Pressure in (bar)]{} & [30]{} & [15]{} & [2]{} & [-]{} & [-]{} & [5 [@Eypasch2017a]]{}
  [Pressure out (bar)]{} & [950]{} & [950]{} & [-]{} & [950]{} & [5]{} & [950]{}
  [Compression stages [@Reuss2017a]]{} & [4]{} & [4]{} & [-]{} & [-]{} & [-]{} & [4]{}
  [Elec. demand $\left(\unitfrac{kWh}{kg}\right)$]{} & [2.947]{} & [3.559]{} & [0.1 [@Reuss2017a]]{} & [0.6 [@Reuss2017a]]{} & [-]{} & [4.585]{}
  [Heat demand $\left(\unitfrac{kWh}{kg}\right)$^a^]{} & [-]{} & [-]{} & [-]{} & [-]{} & [9.1]{} & [-]{}
  [Losses (%)]{} & [0.5]{} & [0.5]{} & [-]{} & [-]{} & [1]{} & [0.5]{}
[All [@US-DOE-2015a]]{}
--------------------------------------------------------- -------------------------
[CAPEX-base (€)]{} [600]{}
[CAPEX-comparison]{} [1kg]{}
[Scale]{} [1]{}
[CAPEX-scaled $\left(\unitfrac{\text{€}}{kg}\right)$]{} [600]{}
[OPEX (%)]{} [2]{}
[Depreciation period (a)]{} [20]{}
[Pressure range (bar)]{} [700 - 950]{}
[Min. filling level (%)^a^]{} [74]{}
: \[tab: second filling site storage\]Assumptions for different storage technologies (2^nd^ stage)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[a: Calculated according to Boyle’s law in order to maintain the minimum pressure required. For the cavern, minimum pressure is calculated dependent on the required amount of cushion gas.]{}
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
: \[tab: second filling site storage\]Assumptions for different storage technologies (2^nd^ stage)
[Refrigeration [@Elgowainy2015a]]{} [Dispenser [@Elgowainy2015a]]{}
--------------------------------------------------------------------------------- ------------------------------------- ---------------------------------
[CAPEX-base $\left(\nicefrac{\text{€}}{\textrm{pc.}}\right)$ [@US-DOE-2015a]]{} [70,000]{} [60,000]{}
[OPEX (%)]{} [2]{} [1]{}
[Depreciation period (a)]{} [15]{} [10]{}
[Elec. demand $\left(\unitfrac{kWh}{kg}\right)$]{} [0.325]{} [-]{}
[Max. temperature ($\unit{\text{\textdegree C}}$)^a^]{} [-40]{} [-40]{}
: \[tab: filling station equipment\]Assumptions for filling station equipment
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[a: Hydrogen is dispensed to cars in gaseous form at $\unit[700]{bar}$ and pre-cooled to $\unit[-40]{\text{\textdegree C}}$ in order to guarantee short filling times [@Elgowainy2015a].]{}
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
: \[tab: filling station equipment\]Assumptions for filling station equipment
[GH~[2]{}~ (S) [@Elgowainy2015a]]{} [GH~[2]{}~ (S) [@Elgowainy2015a]]{}
--------------------------------------------------------- ------------------------------------- -----------------------------------------
[Activity]{} [compression (mass storage)]{} [compression (high pressure storage)]{}
[CAPEX-base (€)]{} [40,035]{} [40,035]{}
[CAPEX-comparison]{} [$\unit[1]{kW_{el}}$]{} [$\unit[1]{kW_{el}}$]{}
[Scale]{} [0.6038]{} [0.6038]{}
[Ref.-Capacity $\left(\unitfrac{kg}{h}\right)$]{} [45]{} [45]{}
[CAPEX-scaled $\left(\unitfrac{\text{€}}{kg}\right)$]{} [11,972]{} [17,014]{}
[OPEX (%)]{} [4]{} [4]{}
[Depreciation period (a)]{} [15]{} [10]{}
[Pressure in (bar)]{} [30]{} [30]{}
[Pressure out (bar)]{} [250]{} [950]{}
[Compression stages[@Reuss2017a]]{} [4]{} [4]{}
[Elec. demand $\left(\unitfrac{kWh}{kg}\right)$]{} [1.654]{} [2.947]{}
[Losses (%)]{} [0.5]{} [0.5]{}
: \[tab: sensitivity: mass storage for dec\]Sensitivity: mass storage for small-scale on-site electrolysis
[p[12.5cm]{}]{} *Abbreviations:*[ (S): small-scale on-site supply chain]{}
[\
]{}
|
---
abstract: |
Let $\,T^{j,k}_{N}:L^{p}(B)\,
\rightarrow\,L^{q}([0,1])\,$ be the oscillatory integral operators defined by $\;\displaystyle T^{j,k}_{N}f(s):=\int_{B}
\,f(x)\,e^{\imath N{|x|}^{j}s^{k}}\,dx,
\quad (j,k)\in\{1,2\}^{2},\,$ where $\,B\,$ is the unit ball in ${\mathbb{R}}^{n}\,$ and $\,N\,>>1.$ We compare the asymptotic behaviour as $\,N\rightarrow +\infty\,$ of the operator norms $\,\parallel T^{j,k}_{N} \parallel_
{L^{p}(B)\rightarrow L^{q}([0,1])}\,$ for all $\,p,\,q\in [1,+\infty].\,$ We prove that, except for the dimension $n=1,\,$ this asymptotic behaviour depends on the linearity or quadraticity of the phase in $s$ only. We are led to this problem by an observation on inhomogeneous Strichartz estimates for the Schrödinger equation.
author:
- 'Ahmed A. Abdelhakim'
bibliography:
- 'mybibfile.bib'
title: '$L^p$-$L^q$ boundedness of integral operators with oscillatory kernels: Linear versus quadratic phases'
---
Strichartz estimates for the Schrödinger equation, Oscillatory integrals, $L^{p}$-$L^{q}$ boundedness. 35B45, 35Q55, 42B20
### 1. A remark on a counterexample to inhomogeneous Strichartz estimates for the Schrödinger equation and motivation {#a-remark-on-a-counterexample-to-inhomogeneous-strichartz-estimates-for-the-schrödinger-equation-and-motivation .unnumbered}
Consider the Cauchy problem for the inhomogeneous free Schrödinger equation with zero initial data $$\begin{aligned}
\label{shreq}
\imath \partial_{t}u+\Delta u\,=\, F(t,x),\qquad (t,x)\in (0,\infty)\times{\mathbb{R}}^{n},\qquad u(0,x)\,=\,0.\end{aligned}$$ Space-time estimates of the form $$\begin{aligned}
\label{est1}
||u||_{L^{q}_{t}\left(
\mathbb{R};L^{r}_{x}({\mathbb{R}}^{n})\right)}\;\lesssim\;
||F||_{L^{{\widetilde{q}}^{\prime}}_{t}
\left(\mathbb{R};L^{{\widetilde{r}}
^{\prime}}_{x}({\mathbb{R}}^{n})\right)},\end{aligned}$$ have been known as inhomogeneous Strichartz estimates. The results obtained so far (see [@damianoinhom; @Kato; @keeltao; @vilela; @YoungwooKoh]) are not conclusive when it comes to determining the optimal values of the Lebesgue exponents $\,q$, $r$, $\tilde{q}\,$ and $\,\tilde{r}\,$ for which the estimate (\[est1\]) holds. Trying to further understand this problem, we [@ahmed1] found new necessary conditions on the values of these exponents. The counterexample in [@ahmed1], like Example 6.10 in [@damianoinhom], contains an oscillatory factor with high frequency. More precisely, we used a forcing term given by $$\begin{aligned}
\label{myphase}
F(t,x)= e^{-\imath\, N^2\,t } \,\chi_{[0,\frac{\eta}{N}]}(t)\,
\chi_{B\left(\frac{\eta}{N}\right)}{(x)}\end{aligned}$$ where $\,\eta>0\,$ is a fixed small number, $\,N\gg 1\,$ and $ B\left(\frac{\eta}{N}\right)$ is the ball with radius $\,\eta/N\,$ about the origin. In [@damianoinhom], on the other hand, the stationary phase method is applied to the inhomogeneity $$\begin{aligned}
\label{fphase}
F (t,x)= e^{-2\imath\, N^2\,t^{2} }\,\chi_{[0,1]}(t)\,\chi_{B\left(\frac{\eta}{N}\right)}{(x)}.\end{aligned}$$ When $\,t\in [2,3],\,$ both data in (\[myphase\]) and (\[fphase\]) force the corresponding solution $u(t,x)$ to concentrate in a spherical shell centered at the origin with radius about $N.$ This agrees with the dispersive nature of the Schrödinger operator. The shell thickness is different in both cases though. It is about $1$ in the case of the data (\[myphase\]) but about $N$ in the case of (\[fphase\]). The necessary conditions obtained are respectively $$\begin{aligned}
\frac{1}{q}\geq\frac{n-1}{\widetilde{r}}-\frac{n}{r},
\qquad \quad\frac{1}{\widetilde{q}}\geq\frac{n-1}{r}-
\frac{n}{\widetilde{r}}\end{aligned}$$ and $$\begin{aligned}
\label{necess1}
|\frac{1}{r}-\frac{1}{\widetilde{r}}|\leq
\frac{1}{n}.\end{aligned}$$ Observe that the oscillatory function in (\[myphase\]) has a linear phase and is applied for the short time period of length $\:1/\sqrt{\text{frequency}}.\,$ The oscillatory function in (\[fphase\]) on the other hand has a quadratic phase and the oscillation is put to work for a whole time unit. We noticed that the phase in [@damianoinhom] need not be quadratic and we can get the necessary condition (\[necess1\]) using the data $$\begin{aligned}
\label{fphase1}
F_{l} (t,x)= e^{-\imath\, N^2\,t}\,
\chi_{[0,1]}(t)\,\chi_{B\left(\frac{\eta}{N}\right)}{(x)}\end{aligned}$$ where the phase in the oscillatory function is linear. Before we show this, we recall the following approximation of oscillatory integrals according to the principle of stationary phase.
\[stationary\] (see [@stein], Proposition 2 Chapter VIII and Lemma 5.6 in [@damianorem]) Consider the oscillatory integral $\;I(\lambda)=\displaystyle \int_{a}^{b}e^{\imath \lambda \phi(s)}\psi(s)\,d s.\;$ Let the phase $\,\phi \in C^5([a,b])\,$ and the amplitude $\psi\in C^3([a,b])$ be such that
(i)
: $\;\phi^{\prime} (z)=0\,$ for a point $\;z\,\in\, ]a+c, b-c[\;$ with $\,c\,$ a positive constant,
(ii)
: $\;|\phi^{\prime} (s)|\,\gtrsim\, 1,\;$ for all $\;s\,\in\, [a, a+c]\,\cup\, [b-c, b],$
(iii)
: $\;|\phi^{\prime\prime} (s)|\,\gtrsim\, 1,$
(iv)
: $\;\psi^{(j)}\,$ and $\,\phi^{(j+3)}\;$ are uniformly bounded on $[a,b]$ for all $j=0,1,2$.
$$\begin{aligned}
\hspace*{-1 cm}\mbox{Then}\qquad \qquad I(\lambda)\,=\, {\,\sqrt{\frac{2 \pi}{\lambda|\phi^{\prime\prime} (z)|}} \,\psi(z)\,e^{\imath \,\lambda\, \phi(z)+\imath\,\mbox{\small sgn}\left(
\phi^{\prime\prime} (z)\right)\, \frac{\pi}{4}}}+\mathcal{O}\left(\frac{1}{\lambda}\right),
\vspace*{-0.22 cm}\end{aligned}$$
where the implicit constant in the $\mathcal{O}-$symbol is absolute.
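As a quick illustration of the normalization in Lemma \[stationary\] (not needed in what follows), take $\,\psi\equiv 1\,$ and $\,\phi(s)=(s-\tfrac{1}{2})^{2}\,$ on $[0,1]$; the hypotheses hold with $c=\tfrac{1}{4}$, $z=\tfrac{1}{2}$ and $\phi^{\prime\prime}\equiv 2$, so the lemma gives $$\begin{aligned}
\int_{0}^{1}e^{\imath\lambda\left(s-\frac{1}{2}\right)^{2}}\,ds\,=\,\sqrt{\frac{\pi}{\lambda}}\,e^{\imath\frac{\pi}{4}}+\mathcal{O}\left(\frac{1}{\lambda}\right),\end{aligned}$$ which agrees with the direct computation (\[44\]) below (there with $\lambda=2N$).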
The norm of the inhomogeneous term $F_{l} $ in (\[fphase1\]) has the estimate $$\begin{aligned}
\label{normf}
\parallel F_{l} \parallel_{L^{\tilde{q}^{\prime}}
([0,1];L^{\tilde{r}^{\prime}}(\mathbb{R}^{n}))}\,\approx\,
{\eta}^{n-\frac{n}{\tilde{r}}}\,
{N}^{-n}\,{N}^{\frac{n}{\tilde{r}}}.\end{aligned}$$ For the solution of (\[shreq\]), we have the explicit formula $$\begin{aligned}
\label{solshro}
u(t,x) \,=\, (4 \pi)^{-\frac{n}{2}}\int_{0}^{t}
(t-s)^{-\frac{n}{2}}\int_{{\mathbb{R}}^{n}}
e^{\imath\frac{|x-y|^2}{4(t-s)}}\,F(s,y)\,dy\, ds.\end{aligned}$$ Let us estimate the solution $u_{l}(t,x)$ that corresponds to $F_{l}.$ We shall restrict our attention to the region $$\begin{aligned}
\Omega_{\eta,N}=\left\{
(t,x)\in [2,3] \times \mathbb{R}^{n}\!\!:\,
2(t-{3}/{4})N+\eta N^{-1}\,<|x|<\,2(t-{1}/{4})N-\eta N^{-1}\right\}.\end{aligned}$$ It will be momentarily seen that this is the region where we can exploit Lemma \[stationary\] to approximate $\, u_{l}(t,x).$ Substituting from (\[fphase1\]) into (\[solshro\]) then applying Fubini’s theorem we get $$\begin{aligned}
\label{corsol}
u_{l}(t,x)\:=\:
(4\pi)^{-\frac{n}{2}} \,\int_{B({\eta}/{N})}
\,I_{N}(t,x,y)\, d y\end{aligned}$$ where $\,I_{N}(t,x,y)\,$ is the oscillatory integral $$\begin{aligned}
\label{inoscints}
I_{N}(t,x,y)\;=\;\int_{0}^{1}\,
e^{\imath N^2\, \phi_{N}{(s,t,x,y)}}\,\psi{(s,t)}\, d s,\end{aligned}$$ with the phase $\;
\displaystyle \phi_{N}{(s,t,x,y)} =\frac{ |x-y|^2}{4\,N^2}\frac{1}{t-s}-s\,$ and amplitude $\, \psi{(s,t)}=(t-s)^{-\frac{n}{2}}.$ For simplicity, we write $\phi(.)$ and $\psi(.)$ in place of $\,\phi_{N}{(.,t,x,y)}\,$ and $\,\psi{(.,t)}\,$ respectively. Next, we verify the conditions (**i**) - (**iv**) for $\,\phi\,$ and $\,\psi.$ Let $\,(t,x)\in \Omega_{\eta,N}\,$ and $\,y\in B(\eta/N).$ Observe then that $\;\displaystyle t-{3}/{4}<{|x-y|}/{2N}<t-{1}/{4}\;$ and $\; t-s \in [1,3]. $ Therefore
(i)
: If $\,z\,$ is such that $\,\phi^{\prime}(z)=0\,$ then $\,\displaystyle
z =t-{|x-y|}/{2N}.\,$ Moreover, $\displaystyle \,z\in\:]{1}/{4},{3}/{4} [.$
(ii)
:   $\;\displaystyle \phi^{\prime}(s)\,=\,\frac{|x-y|^2}{4 N^2\,(t-s)^2}-1\,$ is monotonically increasing and vanishes only at $z$. Since $\;t-{3}/{4}<{|x-y|}/{2N}<t-{1}/{4}\;$ and $\,t\in[2,3],\,$ a direct computation gives $\,|\phi^{\prime}(s)|\,\gtrsim\,1\,$ for all $\,s\in[0,{1}/{8}]\,\cup\,[{7}/{8},1],\,$ so condition (ii) holds with $\,c={1}/{8}\,$ (recall from (i) that $\,z\in\,]{1}/{4},{3}/{4}[\,$).
(iii)
: $\,\displaystyle
\phi^{\prime\prime}(s)\,=\,
\frac{|x-y|^2}{2 N^2}\frac{1 }{(t- s)^3}\,\approx\,1.$
(iv)
: $\;\displaystyle \phi^{(j)}(s)\,=\,
\frac{|x-y|^2}{4N^2}\frac{ j!}{(t- s)^{(j+1)}} \,\approx\,1,\; j=3,4,5$, $\;\;\;\psi(s)\,=\,
(t- s)^{-\frac{n}{2}}
\,\approx\,1$,\
$\psi^{\prime}(s)\,=\,
\frac{n}{2}(t- s)^{-\frac{n}{2}-1}
\,\approx\,1$, $\qquad \psi^{\prime\prime}(s)\,=\,
\frac{n}{2}(\frac{n}{2}+1)(t- s)^{-\frac{n}{2}-2}\,\approx\,1.$
Now, applying Lemma \[stationary\] to the oscillatory integral $\,I_{N}(t,x,y)\,$ in (\[inoscints\]) yields $$\begin{aligned}
\label{noosciny0}
I_{N}(t,x,y)\,=\,
{\,\sqrt{\frac{2 \pi}{\phi_N^{\prime\prime}(z,t,x,y)}} \,\psi(z,t)\,
\frac{e^{ \frac{\pi}{4} \imath}}{N}
\,e^{\imath N^2 \phi_N(z,t,x,y)}}+
\mathcal{O}\left(\frac{1}{N^2}\right).\end{aligned}$$ Since $\,\phi_N(z,t,x,y)+t=|x-y|/N\, $ and since $\, N\left(|x-y|-|x|\right)= \mathcal{O}\left(\eta\right)\,$ whenever\
$\,(t,x)\in \Omega_{\eta,N}\,$ and $\,y\in B(\eta/N),$ we have $\,\,N^2\,\phi_N(z,t,x,y)+N^2\,t
=N\,|x|+\mathcal{O}\left(\eta\right).$ Hence $$\begin{aligned}
\label{noosciny}
e^{\imath N^2\,\phi_N(z,t,x,y)}=
e^{\imath\left(N\,|x|-N^2\,t\right)}
\,e^{\mathcal{O}\left(\eta\right)}
=e^{\imath\left(N\,|x|-N^2\,t\right)}\,
\left(1+\mathcal{O}\left(\eta\right)\right).\end{aligned}$$ Inserting (\[noosciny\]) into (\[noosciny0\]) then returning to (\[corsol\]), we discover $$\begin{aligned}
u_{l}(t,x)\:=\:&
\frac{(4\pi)^{\frac{1-n}{2}} }{\sqrt{2}}\frac{e^{ \frac{\pi}{4} \imath}}{N} \,e^{\imath\left(N\,|x|-N^2\,t\right)}\,
\int_{B({\eta}/{N})}\,{
\,\frac{\psi(z,t)}{\sqrt{\phi_N^{\prime\prime}(z,
t,x,y)}} \,
\,\left(1+\mathcal{O}\left(\eta\right)\right)}
\,d y\\&\;+
\mathcal{O}\left(\frac{1}{N^2}\right)\,
\int_{B({\eta}/{N})}\,\,
d y.\end{aligned}$$ Recalling that $\,\psi,\:
\phi^{\prime\prime}\approx 1,\,$ we immediately deduce the estimate $$\begin{aligned}
&| u_{l}(t,x)|\,\gtrsim\,\frac{|B(\eta/N)|}{N}
\,\approx\,\eta^{n}\,N^{-(1+n)},\quad
(t,x)\in \Omega_{\eta,N}.\quad \text{Thus, for all}\;\;
t\in [2,3],\\
&\hspace*{-1 cm}
||u_{l}(t,x)||_{L^{r}_{x}\left({\mathbb{R}}^{n}\right)}
\,\geq\,
\left( \int_{2(t-{3}/{4})N+\eta N^{-1}\,<\,|x|\,<\,2(t-{1}/{4})N-\eta N^{-1}}\,
| u_{l}(t,x)|^{r}\,dx\,\right)^{\frac{1}{r}}
\,\gtrsim\,\eta^{n}\,N^{-(1+n)+\frac{n}{r}}.\end{aligned}$$ Consequently $$\begin{aligned}
\label{normul}
||u_{l}||_{L^{q}_{t}\left(
\mathbb{R};L^{r}_{x}({\mathbb{R}}^{n})\right)}
\,\geq\,||u_{l}||_{L^{q}_{t}\left(
[2,3];L^{r}_{x}({\mathbb{R}}^{n})\right)}
\,\gtrsim\,\eta^{n}\,N^{-(1+n)+\frac{n}{r}}.\end{aligned}$$ Lastly, it follows from (\[normf\]) and (\[normul\]) that $$\begin{aligned}
||u_{l}||_{L^{q}_{t}\left(
\mathbb{R};L^{r}_{x}({\mathbb{R}}^{n})\right)}/
\parallel F_{l} \parallel_{L^{\tilde{q}^{\prime}}
([0,1];L^{\tilde{r}^{\prime}}(\mathbb{R}^{n}))}\;
\gtrsim\;
\eta^{\frac{n}{\tilde{r}}}\,
N^{\frac{n}{r}-\frac{n}{\tilde{r}}-1}\end{aligned}$$ which, for a fixed $\,\eta,\,$ blows up as $\,N\rightarrow +\infty\,$ if $\,\displaystyle \frac{n}{r}-\frac{n}{\tilde{r}}>1.\,$ In the light of duality this implies the necessary condition (\[necess1\]).
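To spell the duality out: the blow-up above shows that (\[est1\]) forces $\,\frac{n}{r}-\frac{n}{\widetilde{r}}\leq 1,\,$ that is $\,\frac{1}{r}-\frac{1}{\widetilde{r}}\leq\frac{1}{n}.\,$ Since (\[est1\]) is symmetric under exchanging the pairs $(q,r)$ and $(\widetilde{q},\widetilde{r})$ (by duality of the corresponding inhomogeneous operator), the same argument also gives $\,\frac{1}{\widetilde{r}}-\frac{1}{r}\leq\frac{1}{n},\,$ and the two inequalities together are precisely (\[necess1\]).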
These examples made us wonder how exactly linear oscillations differ from quadratic ones once the cancellations are measured in Lebesgue norms. One way to see this is to consider the operators $\,T^{j,k}_{N}:L^{p}(B)\,
\rightarrow\,L^{q}([0,1])\,$ defined by $$\begin{aligned}
\label{intop}
T^{j,k}_{N}f(s):=\int_{B}
\,f(x)\,e^{\imath N{|x|}^{j}s^{k}}\,dx,
\qquad (j,k)\in\{1,2\}^{2},\end{aligned}$$ where $\,B\,$ is the unit ball in ${\mathbb{R}}^{n},\,$ and compare the asymptotic behaviour as $\,N\rightarrow +\infty\,$ of their operator norms for all $\,p,\,q\in [1,+\infty].\,$ Let $\,C_{j,k,n}:[0,1]^{2}\rightarrow \mathbb{R}\,$ be the functions defined by $$\begin{aligned}
C_{j,k,n}\left(\frac{1}{p},\frac{1}{q}\right)\,:=\,
\alpha \quad \text{if}\qquad
\parallel T^{j,k}_{N}\parallel_{L^{p}\left(B\right)
\rightarrow L^{q}([0,1])}
\;\approx\; N^{ - \alpha}.\end{aligned}$$ We discover that $\,C_{j,k,n}\,$ is a continuous function with range $\,[0,{1}/{4}]\,$ when $n=1,$ $j=2$ and $\,[0,{1}/{(2k)}]\,$ otherwise (see the figure below). We actually prove that
\[mainthm\] $$C_{j,k,n}\left(\frac{1}{p},\frac{1}{q}\right)\;=\;
\left\{
\begin{array}{ll}
\frac{1}{4}\, \sigma\left(\frac{1}{p},\frac{1}{q}\right), & \hbox{$n=1,\;$ $j=2$;} \\\\
\frac{1}{2\,k}\, \sigma\left(\frac{1}{p},\frac{1}{q}\right), & \hbox{
$n\geq j$.}
\end{array}
\right.$$ where $$\label{sgmab}
\sigma(a,b):=\left\{
\begin{array}{ll}
2b , & \hbox{$\; 0\leq a \leq 1-b,\;\;
0\leq b \leq \frac{1}{2}$;} \\
2(1-a) , & \hbox{$\; \frac{1}{2}\leq a \leq 1,\;\;a+ b \geq 1$;} \\
1 , & \hbox{$\;0\leq a \leq \frac{1}{2},\;\;\frac{1}{2}\leq b \leq 1$.}
\end{array}
\right.$$
$$\begin{aligned}
\begin{tikzpicture} [scale=9]
\draw[->] (0.0, 0) -- (0.6, 0) node[below] {$\frac{1}{p}$};
\draw[->] (0,0.0) -- (0, 0.6) node[left] {$\frac{1}{q}$};
\draw (0.5, 0) node[below] {${1}$};
\draw (0, 0.5) node[left] {${1}$};
\draw (0.25, 0) node[below] {$\frac{1}{2}$};
\draw (0, 0.25) node[left] {$\frac{1}{2}$};
\draw
(0, 0) -- (0.5, 0.0) -- (0.5, 0.5) -- (0.0, 0.5) -- cycle;
\draw
(0.5, 0.0) -- (0.25, 0.25)-- (0.25, 0.5);
\draw (0.25, 0.25) -- (0.0, 0.25);
\draw [loosely dotted] (0.25, 0.0) -- (0.25, 0.25);
\draw (0.165,0.125) node{$\;\frac{1}{2}\frac{1}{q}$};
\draw (0.125,0.375) node{$\;\frac{1}{4}$};
\draw (0.375,0.3) node{$\;
\frac{1}{2}(1-\frac{1}{p})$};
\draw (0.3, -0.1) node[below] {$C_{2,k,1}$};
\end{tikzpicture}\qquad\qquad\quad
\begin{tikzpicture} [scale=9]
\draw[->] (0.0, 0) -- (0.6, 0) node[below] {$\frac{1}{p}$};
\draw[->] (0,0.0) -- (0, 0.6) node[left] {$\frac{1}{q}$};
\draw (0.5, 0) node[below] {${1}$};
\draw (0, 0.5) node[left] {${1}$};
\draw (0.25, 0) node[below] {$\frac{1}{2}$};
\draw (0, 0.25) node[left] {$\frac{1}{2}$};
\draw
(0, 0) -- (0.5, 0.0) -- (0.5, 0.5) -- (0.0, 0.5) -- cycle;
\draw
(0.5, 0.0) -- (0.25, 0.25)-- (0.25, 0.5);
\draw (0.25, 0.25) -- (0.0, 0.25);
\draw [loosely dotted] (0.25, 0.0) -- (0.25, 0.25);
\draw (0.165,0.125) node{$\;\frac{1}{k}\frac{1}{q}$};
\draw (0.125,0.375) node{$\;\frac{1}{2}\frac{1}{k}$};
\draw (0.375,0.3) node{$\;
\frac{1}{k}(1-\frac{1}{p})$};
\draw (0.3, -0.1) node[below] {$C_{j,k,n}$};
\end{tikzpicture}\end{aligned}$$
For each $\,p,q\in [1,\infty]\,$ and every dimension $\,n>1,\,$ the asymptotic behaviour of $\,\parallel T^{j,k}_{N}\parallel_{L^{p}\left(B\right)
\rightarrow L^{q}([0,1])}\,$ as $\,N\rightarrow +\infty\,$ is determined only by the linearity or quadraticity of the phase in $s$. The role of the power $j$ of $x$ appears exclusively in the dimension $n=1.$
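For instance, at $p=q=2$ we have $\sigma\left(\tfrac{1}{2},\tfrac{1}{2}\right)=1$, so Theorem \[mainthm\] asserts that $\,\parallel T^{j,k}_{N}\parallel_{L^{2}(B)\rightarrow L^{2}([0,1])}\approx N^{-\frac{1}{2k}}\,$ when $n\geq j$ and $\,\approx N^{-\frac{1}{4}}\,$ when $n=1$, $j=2$; the corresponding upper bounds are exactly the energy estimate (\[energy\]) proved in Section 3.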
There is nothing special about either the unit interval or the unit ball in the definition of the operators $T^{j,k}_{N}$. We shall only make use of Hölder inclusions of $L^{p}$ spaces on measurable sets of finite measure (see Lemma \[holder\] below), so we may replace them by any two suitable such sets provided their measures are asymptotically equivalent to a constant independent of $ N $ as $ N\rightarrow +\infty.$
Foschi [@damianorem] studied a discrete version of an operator a little simpler than the integral operator $\,T^{1,1}_{N}.\,$ He considered the operator $\,D_{N}:\ell^{p}(\mathbb{C}^{N})
\rightarrow L^{q}(-\pi,\pi)\,$ that assigns to each vector $\,a=(a_{0},a_{1},...a_{N-1})\in {\mathbb{C}}^{N}\,$ the $\,2\pi$-periodic trigonometric polynomial $\,D_{N}a(t)=\sum_{m=0}^{N-1}a_{m}\,e^{\imath\,m\,t}\,$ and described the asymptotic behaviour of $\, \displaystyle\sup_{a\in \mathbb{C}^{N}-\{0\}}
{{\parallel D_{N}a\parallel_{L^{q}([-\pi,\pi])} }/
{\parallel a\parallel_{\ell^{p}\left(\mathbb{C}^{N} \right)}}}$ as $N\rightarrow+\infty,$ for all $ 1\leq p,\,q\leq+\infty.$ The norms there are defined by $$\begin{aligned}
&\parallel a \parallel_{\ell^{p}}=
\left( \sum_{m=0}^{N-1}|a_{m}|^{p}\right)^{\frac{1}{p}},
\quad
1\leq p <\infty, \qquad
\parallel a \parallel_{\ell^{\infty}}=
\max_{0\leq m\leq N-1}|a_{m}|,\\
&
\parallel f \parallel_{L^{q}}=
\left(\frac{1}{2\pi} \int_{-\pi}^{\pi}|f(t)|^{q}dt\right)^{\frac{1}{q}},
\quad
1\leq q <\infty, \qquad
\parallel f \parallel_{L^{\infty}}=
\max_{|t|\leq \pi}|f(t)|.\end{aligned}$$ This was followed by a similar investigation (see Section 5 in [@damianorem]) of a linear integral operator with an oscillatory kernel $\, L_{N}: L^{p}([0,1])\rightarrow L^{q}([0,1])\,$ defined by $$\begin{aligned}
L_{N}f(t)\,:=\,\int_{0}^{1}\,
e^{\imath N/(1+t+s)}\,\frac{f(s)}{(1+t+s)^{\gamma}}\,ds,
\quad\text{\small for some fixed}\;\; \gamma \geq 0.\end{aligned}$$
### 2. Proof of Theorem \[mainthm\] {#proof-of-theorem-mainthm .unnumbered}
In order to show Theorem \[mainthm\], we shall go through the following steps.\
**Step 1**. Find lower bounds for $\,\parallel T^{j,k}_{N} \parallel_
{L^{p}(B)\rightarrow L^{q}([0,1])}\,$ for all $\,p,q\in [1,+\infty]\,$:\
Test the ratio $ \parallel T^{j,k}_{N}f \parallel_{L^{q}([0,1])}/
\parallel f \parallel_{L^{p}(B)} $ for functions $f \in {L^{p}(B)}$ that kill or at least slow down the oscillations in the integrals $T^{j,k}_{N}f.\,$ Of course this ratio is majorized by $\displaystyle \parallel T^{j,k}_{N} \parallel_
{L^{p}(B)\rightarrow L^{q}([0,1])}=\sup_{f\in L^{p}(B)-\{0\}}
{{\parallel T^{j,k}_{N}f\parallel_{L^{q}([0,1])} }/{\parallel f\parallel_{L^{p}\left(B\right)}}}. $ But what is really interesting is the fact that such functions likely maximize the ratio as well.\
**Step 2**. We find upper bounds for $\,\parallel T^{j,k}_{N} \parallel_
{L^{p}(B)\rightarrow L^{q}([0,1])}\,$ for all $\,p,q\in [1,+\infty].$ Thanks to interpolation and Hölder’s inequality, we merely need an upper bound for $\parallel T^{j,k}_{N} \parallel_
{L^{2}(B)\rightarrow L^{2}([0,1])}.$
\[holder\] Let $\,T^{j,k}_{N}:L^{p}(B)\,
\rightarrow\,L^{q}([0,1])\,$ be as in (\[intop\]). Assume that $$\begin{aligned}
\label{en11}
\parallel T^{j,k}_{N}f \parallel_{L^{2}([0,1])}
\,\leq\, c_{j,k,N}
\parallel f \parallel_{L^{2}(B)}.\end{aligned}$$ Then $$\begin{aligned}
\label{consigma}
\parallel T^{j,k}_{N} \parallel_{L^{p}(B)
\rightarrow L^{q}([0,1])}
\;\lesssim_{p,q,n}\; c^{\sigma\left(\frac{1}{p},\frac{1}{q}
\right)}_{j,k,N}\end{aligned}$$ where $\,\sigma:[0,1]^{2}\rightarrow [0,1]\,$ is the continuous function in (\[sgmab\]).
If we take absolute values of both sides of (\[intop\]) we get the trivial estimate\
$\;\parallel T^{j,k}_{N}f\parallel_{L^{\infty}([0,1])}
\,\leq\,\parallel f\parallel_{L^{1}\left(B\right)}.$ Interpolating this with (\[en11\]) using the Riesz-Thorin theorem ([@loukas]) implies $$\begin{aligned}
\label{int1}
\parallel T^{j,k}_{N}f \parallel_{L^{q}([0,1])}
\,\leq\, c^{2\left(1-\frac{1}{p}\right)}_{j,k,N} \parallel f \parallel_{L^{p}(B)},
\qquad \frac{1}{2}\leq\frac{1}{p}\leq 1,\;\;
\frac{1}{q}=1-\frac{1}{p}.\end{aligned}$$ Since, by Hölder’s inequality, $\; \parallel T^{j,k}_{N}f \parallel_{L^{\bar{q}}([0,1])}
\,\leq\,\parallel T^{j,k}_{N}f \parallel_{L^{q}([0,1])}\;$ whenever\
$\,1\leq \bar{q}\leq q\leq \infty,$ then $$\begin{aligned}
\label{int2}
\parallel T^{j,k}_{N}f \parallel_{L^{q}([0,1])}
\,\leq\, c^{2\left(1-\frac{1}{p}\right)}_{j,k,N} \parallel f \parallel_{L^{p}(B)},
\qquad \frac{1}{2}\leq\frac{1}{p}\leq 1,\;\;
1-\frac{1}{p}\leq\frac{1}{q}\leq 1.\end{aligned}$$ Applying Hölder’s inequality once more we find that if $\;1\leq p\leq\bar{p}\leq \infty,\,$ then $$\begin{aligned}
\nonumber&\hspace{-1 cm}\parallel f \parallel_{L^{p}(B)}
\,\leq\,|B|^{\frac{1}{p}-\frac{1}{\bar{p}}}\,
\parallel f \parallel_{L^{\bar{p}}(B)}.
\;\; \text{Therefore by}\;(\ref{int1})\;
\text{we have}\\
\label{int3}
&\hspace{-0.6 cm}\parallel T^{j,k}_{N}f \parallel_{L^{q}([0,1])}
\,\leq\, |B|^{1-\frac{1}{p}-\frac{1}{q}}\,
c^{2/q}_{j,k,N} \parallel f \parallel_{L^{p}(B)},
\quad 0\leq\frac{1}{q}\leq \frac{1}{2},\;\;
0\leq\frac{1}{p}\leq 1-\frac{1}{q}.\end{aligned}$$ Moreover, since we know from (\[int2\]) that $$\begin{aligned}
\nonumber &\hspace*{-1 cm}\parallel T^{j,k}_{N}f \parallel_{L^{q}([0,1])}
\,\leq\, c_{j,k,N} \parallel f \parallel_{L^{2}(B)},
\quad \frac{1}{2}\leq\frac{1}{q}\leq 1,\quad
\text{then}\\
&\label{int4}
\parallel T^{j,k}_{N}f \parallel_{L^{q}([0,1])}
\,\leq\, |B|^{\frac{1}{2}-\frac{1}{p}}\,
c_{j,k,N} \parallel f \parallel_{L^{p}(B)},
\quad 0\leq\frac{1}{p}\leq \frac{1}{2},\;\;
\frac{1}{2}\leq\frac{1}{q}\leq 1.\end{aligned}$$
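Summing up: the exponents of $\,c_{j,k,N}\,$ in (\[int3\]), (\[int2\]) and (\[int4\]) are $\,\frac{2}{q}$, $\,2\left(1-\frac{1}{p}\right)\,$ and $\,1\,$ respectively, which is precisely $\,\sigma\left(\frac{1}{p},\frac{1}{q}\right)\,$ on each of the three regions in (\[sgmab\]). Since these regions cover the whole square $[0,1]^{2}$ and the powers of $|B|$ depend only on $p,q$ and $n$, the estimate (\[consigma\]) follows.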
If the constants in inequalities (\[int1\]) - (\[int4\]) were sharp, they would be precisely the values of the corresponding norms $\,\parallel T^{j,k}_{N}\parallel_{L^{p}\left(B\right)
\rightarrow L^{q}([0,1])}.$ Unfortunately, we are not able to compute the optimal constant $\,c_{j,k,N}\,$ in the energy estimate (\[en11\]). Nevertheless, the constants $\,c^{\sigma\left(\frac{1}{p},\frac{1}{q}
\right)}_{j,k,N}\,$ in (\[consigma\]) would be good enough for our purpose if, for each $p,q\in [1,+\infty],$ they were asymptotically equivalent, as $ N\rightarrow +\infty$, to the corresponding lower bounds of $\,\parallel T^{j,k}_{N}\parallel_{L^{p}\left(B\right)
\rightarrow L^{q}([0,1])}\,$ that we compute in *Step 1*.\
**Step 1.**\
(i) **Focusing data**\
When $\,x\in B(\eta /N^{\frac{1}{j}})\,$ we have $\;\displaystyle e^{\imath N{|x|}^{j}s^{k}}=
e^{\mathcal{O}\left(\eta\right)}=
1+\mathcal{O}\left(\eta\right), \;\;
\text{\small for all}\;s\in [0,1].$ Thus, if we take $f_{j}$ to be the focusing function $\,f_{j}=\displaystyle \chi_{B(\eta /N^{\frac{1}{j}})}\,$ then $\; \displaystyle \parallel f_{j} \parallel_{L^{p}(B)}\,=\,
|B(\eta /N^{\frac{1}{j}})|^{\frac{1}{p}}\;$ and $$\begin{aligned}
T^{j,k}_{N}f_{j}(s)\,=\,\int_{B(\eta /N^{\frac{1}{j}})}
\,e^{\imath N{|x|}^{j}s^{k}}\,dx=
\int_{B(\eta /N^{\frac{1}{j}})}
\,\left(1+\mathcal{O}\left(\eta\right)\right)\,dx
\,\gtrsim \,|B(\eta /N^{\frac{1}{j}})|\end{aligned}$$ for all $ \;0\leq s\leq 1.\;$ Consequently, since $\eta$ is fixed, $$\begin{aligned}
\label{lb1}
\parallel T^{j,k}_{N}\parallel_{L^{p}\left(B\right)
\rightarrow L^{q}([0,1])}
\,\geq\,\frac{\parallel T^{j,k}_{N}f_{j} \parallel_{L^{q}([0,1])}}{
\parallel f_{j} \parallel_{L^{p}(B)}}
\;\gtrsim\;
N^{-\frac{n}{j}\left(1-\frac{1}{p}\right)}.\end{aligned}$$ The figure below illustrates the one dimensional case. $$\begin{aligned}
&\hspace{-1 cm} \begin{tikzpicture} [scale=10]
\fill[fill = black!10] (0,0.35)--(0.048,0.35)--(0.048,0.0)--
(0,0)--cycle;
\draw[->] (0, 0) -- (0.38, 0) node[below] {\small$x$};
\draw[->] (0,0.0) -- (0, 0.4) node[left] {\small$ f_{j}(x)$};
\draw (0.35, 0) node[below] {\small ${1}$};
\draw (0,0.35) node[left] {\small ${1}$};
\draw[thick] (0,0.35)--(0.05,0.35);
\draw[thick] (0.05,0.0001)--(0.35,0.0001);
\draw (0.07,0.0)node[below] { \footnotesize {$N^{-\frac{1}{j}}$}};
\draw[dotted] (0.35, 0) -- (0.35,0.35)--(0,0.35);
\end{tikzpicture}\quad
\begin{tikzpicture} [scale=3.5]
\draw[->] (0.0, 0) -- (1.1, 0) node[below] {\small $s$};
\draw[->] (0.0,0.0) -- (0.0,1.1);
\draw (0.9,1.2) node[left] { \footnotesize {$N^{1/j}$\text{Re}$\left\{T^{j,k}_{N}f_{j}(s)\right\}$}};
\draw (1,0) node[below] {\small${1}$};
\draw (1,-0.001) --(1,0.003);
\draw (0.0,0.95) node[left] {\small${1}$};
\draw [blue,samples=500,domain=0.0:1] plot
(\x, {cos(0.65*\x r)});
\draw [red,samples=500,domain=0.0:1] plot
(\x, {cos(0.65*\x*\x r)});
\draw(0.7,1) node[right,thick] {\tiny {$k=2$}};
\draw(0.6,0.8) node[right,thick] {\tiny {$k=1$}};
\draw(0.0,-0.15) node {};
\end{tikzpicture}\qquad
\begin{tikzpicture} [scale=3.5]
\draw[->] (0.0, 0) -- (1.1, 0) node[below] {\small $s$};
\draw[->] (0.0,0.0) -- (0.0,1.1);
\draw (0.9,1.2) node[left] { \footnotesize {$N^{1/j}$\text{Im}$\left\{T^{j,k}_{N}f_{j}(s)\right\}$}};
\draw (1,0) node[below] {\small${1}$};
\draw (1,-0.001) --(1,0.003);
\draw (-0.001,1) --(0.003,1);
\draw (0.0,0.95) node[left] {\small${1}$};
\draw [blue,samples=500,domain=0.0:1] plot
(\x, {sin(0.65*\x r)});
\draw [red,samples=500,domain=0.0:1] plot
(\x, {sin(0.65*\x*\x r)});
\draw(0.7,0.3) node[right,thick] {\tiny {$k=2$}};
\draw(0.6,0.6) node[right,thick] {\tiny {$k=1$}};
\draw(0.0,-0.15) node {};
\end{tikzpicture}\\
&\text{\small \emph{Both real and imaginary parts of the functions}}
\;T^{1,k}_{N}f_{1}\; \text{\small\emph{and}}\;
T^{2,k}_{N}f_{2}\; \text{\small \emph{have the same profile}}.\end{aligned}$$ (ii) **Constant data**\
Let $\,g(x)=1.\,$ Whenever $\,\displaystyle s \in [0,\eta/N^{\frac{1}{k}}]\,$ we have $\;\imath N{|x|}^{j}s^{k}\,=\,\mathcal{O}\left(\eta\right)\;$ for all $\;x\in B\;$ and it follows that $\;\displaystyle e^{\imath N{|x|}^{j}s^{k}}=
1+\mathcal{O}\left(\eta\right).\;$ Hence, when $\,\displaystyle s \in [0,\eta/N^{\frac{1}{k}}],\,$ $$\begin{aligned}
T^{j,k}_{N}g(s)\,=\,
\int_{B}\,e^{\imath N{|x|}^{j}s^{k}}\,dx\,=\,
\int_{B}\,\left(1+\mathcal{O}\left(\eta\right)\right)\,dx
\,\gtrsim 1.\end{aligned}$$ Therefore, recalling that $\eta$ is fixed, $$\label{lb20}
\int_{0}^{1}|T^{j,k}_{N}g(s)|^{q}\,ds
\,\geq\,
\int_{0}^{\eta/N^{\frac{1}{k}}}|T^{j,k}_{N}g(s)|^{q}\,ds
\,\gtrsim\,
\int_{0}^{\eta/N^{\frac{1}{k}}}\,ds
\,\approx\,N^{-\frac{1}{k}}.$$ In view of (\[lb20\]), we deduce that $$\begin{aligned}
\label{lb2}
\parallel T^{j,k}_{N}\parallel_{L^{p}\left(B\right)
\rightarrow L^{q}([0,1])}
\,\geq\,\frac{\parallel T^{j,k}_{N} g \parallel_{L^{q}([0,1])}}{
\parallel g \parallel_{L^{p}(B)}}
\;\gtrsim\;
N^{-\frac{1}{k}\frac{1}{q}}.\end{aligned}$$ By rescaling, it is easy to verify that the estimate (\[lb2\]) follows for any complex-valued constant function $g$. The figure below shows the behaviour of $ T_{N}^{j,k}g $ on $[0,1]$ in the dimension $n=1.$ $$\begin{aligned}
\hspace*{-0.5 cm}
\begin{tikzpicture} [scale=10]
\draw[->] (0.0, 0.0) -- (0.48,0.0) node[below right] {$s$};
\draw[->] (0.0,0.0) -- (0.0,0.52);
\draw(0.25,0.44) node[above] {\scriptsize $
\text{Re}\left\{T^{1,k}_{N}g(s)\right\}=2$};
\draw(0.45,0.44) node[above] {\large $ \frac{\sin{(Ns^{k})}}{N\,s^{k}}$};
\draw (0.0,0.49) node[left] {\scriptsize ${2}$};
\draw (0.1,0.2) node[left] {\scriptsize $k=1$};
\draw (0.175,0.3) node[left] {\scriptsize $k=2$};
\draw [very thin,blue,samples=500,domain=0.001:0.45] plot
(\x, {2*sin((250*\x) r)/((1000*\x) )});
\draw [very thin,red,samples=500,domain=0.0012:0.45] plot
(\x, {2*sin((250*\x*\x) r)/((1000*\x*\x) )});
\end{tikzpicture}\qquad\qquad\qquad
\begin{tikzpicture} [scale=10]
\draw(0.23,0.44) node[above] {\scriptsize $
\text{Im}\left\{T^{1,k}_{N}g(s)\right\}=4$};
\draw(0.46,0.44) node[above] {\large $ \frac{\sin^{2}{(Ns^{k}/2)}}{N\,s^{k}}$};
\draw (0.0,0.49) node[left] {\scriptsize ${2}$};
\draw (0.0,0.49) --(0.001,0.49);
\draw (0.09,0.38) node[left] {\scriptsize $k=1$};
\draw (0.2,0.38) node[left] {\scriptsize $k=2$};
\draw [very thin,red,samples=500,domain=0.001:0.45] plot
(\x, {4*sin((100*\x*\x) r)*sin((100*\x*\x) r)/((800*\x*\x) )});
\draw [very thin,blue,samples=500,domain=0.0001:0.45] plot
(\x, {4*sin((50*\x) r)*sin((50*\x) r)/((400*\x) )});
\draw (-0.084, -0.084) node[below] {};
\draw[->] (0.0, 0.0) -- (0.48,0.0) node[below right] {$s$};
\draw[->] (0.0,0.0) -- (0.0,0.52);
\end{tikzpicture}\end{aligned}$$ $$\begin{aligned}
&\hspace*{-1.58 cm}\begin{tabular}{c c}
\hspace{-4.75 cm}\vspace{0.25 cm} \scriptsize $2$& \\
\hspace{-1.7 cm} \scriptsize $ k=2$&\\
\hspace{-3.3 cm} \scriptsize $ k=1$&\\
&\hspace{-1 cm} \scriptsize $ k=1\quad k=2$
\vspace{-2.5 cm}\\
\hspace{0.5 cm}\scriptsize{ \text{Re}$\left\{T^{2,k}_{N}g(s)\right\}$}
&\hspace{2 cm}\scriptsize
{\text{Im}$\left\{T^{2,k}_{N}g(s)
\right\}$} \vspace{-1.5 cm}\\
\includegraphics[scale=0.32]{t1001.pdf}
&\qquad\quad\;
\includegraphics[scale=0.32]{t2001.pdf}\vspace{-1 cm}\\
\hspace{5.5 cm} \small$ s$& \hspace{7 cm}
\small $ s$
\end{tabular}\\
&\hspace{-2.3 cm} \text{\small\emph{Functions}}\; \text{\small \emph{Re}}\small \{ T^{1,k}_{N}g(s)\}\;
\text{\small\emph{vanish and}}\;\text{\small \emph{Re}}\small \{T^{2,k}_{N}g(s)\}\;
\text{\small \emph{change monotonicity, for the first time, when }}\;s=\sqrt[k]{\pi/N}\end{aligned}$$ (iii) **Oscillatory data**\
Consider the oscillatory function $\,h(x)=e^{2\imath N \left(|x|^2-|x|\right)}.\,$ Using polar coordinates we can write $$\begin{aligned}
T^{j,k}_{N}h(s)\,=\,
\int_{S^{n-1}}
\int_{0}^{1}\,e^{\imath N
\,\left({\rho}^{j}s^{k}+2\rho^2-2\rho\right)}\,
\rho^{n-1}\,d\rho \,d\omega\,=\,
\omega_{n-1}
\:I^{j,k}_{N}(s)\end{aligned}$$ where $I^{j,k}_{N}(s) $ is the oscillatory integral given by $$\begin{aligned}
\label{inoscint}
I^{j,k}_{N}(s) =
\int_{0}^{1}\,e^{\imath N
\,\phi_{j,k}(\rho;s)}\,
\rho^{n-1}\,d\rho\end{aligned}$$ with the phase $\displaystyle \phi_{j,k}(\rho;s) ={\rho}^{j}s^{k}+2\rho^2-2\rho. $\
The quadratic function $\displaystyle \rho\rightarrow\phi_{j,k}(\rho;s)$, after a suitable translation along the vertical axis, has a single nondegenerate stationary point that happens to lie well inside $]\frac{1}{5},\frac{4}{5}[.$ Indeed, one can simply write $$\begin{aligned}
\phi_{j,k}(\rho;s)=\left\{
\begin{array}{ll}
2\left(\rho-\frac{2-s^{k}}{4}\right)^{2}-
\frac{\left(2-s^{k}\right)^{2}}{8}, & \hbox{$j=1$;} \\
\left(2+s^{k}\right)\left(\rho-\frac{1}{2+s^{k}}\right)^{2}-
\frac{1}{\left(2+s^{k}\right)^{2}}, & \hbox{$j=2$.}
\end{array}
\right.\end{aligned}$$ Notice also that $\, \left(2-s^{k}\right)/4\in[\frac{1}{4},\frac{1}{2}]\,$ and $\,\left(2+s^{k}\right)^{-1}\in[\frac{1}{3},
\frac{1}{2}]\,$ when $\,s\in [0,1].$ In fact, this is what we were after when we used the oscillatory function $h$ with its particular quadratic phase. Let us see how we benefit from this. We shall work on the integral $\,I^{1,k}_{N}(s)\,$ and the applicability of the same procedure to the integral $\,I^{2,k}_{N}(s)\,$ will be obvious. For simplicity, let $z$ denote $\,\left(2-s^{k}\right)/4.\,$ Then $$\begin{aligned}
\nonumber
e^{2\imath N\,z^2}\,I^{1,k}_{N}(s) =&
\,\int_{0}^{1}\,e^{2\imath N
\,\left(\rho-z\right)^{2}}\,
\rho^{n-1}\,d\rho\\
\label{feq}=&\,z^{n-1}\,\int_{0}^{1}\,e^{2\imath N
\,\left(\rho-z\right)^{2}}\,d\rho+
\int_{0}^{1}\,e^{2\imath N
\,\left(\rho-z\right)^{2}}\,
\left(\rho^{n-1}-z^{n-1}\right)\,d\rho.\end{aligned}$$ We compute $$\begin{aligned}
\label{t1}
\hspace{-1 cm}
\int_{0}^{1}\,e^{2\imath N
\,\left(\rho-z\right)^{2}}\,d\rho=
\int_{-\infty}^{+\infty}\,e^{2\imath N
\,\left(\rho-z\right)^{2}}\,d\rho-
\int_{-\infty}^{0}\,e^{2\imath N
\,\left(\rho-z\right)^{2}}\,d\rho-
\int_{1}^{+\infty}\,e^{2\imath N
\,\left(\rho-z\right)^{2}}\,d\rho.\end{aligned}$$ Using the identity (See Exercise 2.26 in [@taobook]) $$\begin{aligned}
\int_{-\infty}^{+\infty}
\,e^{-ax^2}\,e^{bx}\,dx=\sqrt{\frac{\pi}{a}}
\,e^{b^2/4a},\quad a,b \in \mathbb{C},\;
\textrm{Re}(a) >0 \quad \text{(which extends to }\textrm{Re}(a)=0,\;a\neq 0\text{ by a limiting argument)}, \qquad \text{we get}\end{aligned}$$
\label{11}
\hspace{-0.5 cm}\int_{-\infty}^{+\infty}\,e^{2\imath N
\,\left(\rho-z\right)^{2}}\,d\rho
=\sqrt{\frac{\pi}{2N}}\,e^{\frac{\pi}{4}\imath}.\end{aligned}$$ And since $$\begin{aligned}
\hspace*{-1 cm}
\left|\;\int_{-\infty}^{0}\,e^{2\imath N
\,\left(\rho-z\right)^{2}}\,\partial_{\rho}
\left(\rho-z\right)^{-1}\,d\rho \right|\,\leq\,
\frac{1}{z},\quad \left|\;
\int_{1}^{+\infty}\,e^{2\imath N
\,\left(\rho-z\right)^{2}}\,\partial_{\rho}
\left(\rho-z\right)^{-1}\,d\rho \right|\,\leq\,
\frac{1}{1-z},\end{aligned}$$ then integration by parts implies $$\begin{aligned}
\label{22}&\int_{-\infty}^{0}\,e^{2\imath N
\,\left(\rho-z\right)^{2}}\,d\rho=
\frac{\imath\, e^{2\imath N z^{2}}}{4 N z}
+\mathcal{O}\left(\frac{1}{Nz}\right),\\
\label{33}&\int_{1}^{+\infty}\,e^{2\imath N
\,\left(\rho-z\right)^{2}}\,d\rho=
\frac{\imath\, e^{2\imath N\left(1-z\right)^{2}}}{4 N\left(1-z\right)}
+\mathcal{O}\left(\frac{1}{N\left(1-z\right)}\right).\end{aligned}$$ Recalling that $\,\frac{1}{4}\leq z\leq \frac{1}{2}\;$ and using (\[11\]), (\[22\]), (\[33\]) in (\[t1\]) we obtain $$\begin{aligned}
\label{44}
\int_{0}^{1}\,e^{2\imath N
\,\left(\rho-z\right)^{2}}\,d\rho\,=\,
\sqrt{\frac{\pi}{2N}}\,e^{\frac{\pi}{4}\imath}
+\mathcal{O}\left(\frac{1}{N}\right).\end{aligned}$$ This gives us an estimate for the first integral on the right hand side of (\[feq\]). The second integral is $\;\mathcal{O}\left({1}/{N}\right).\;$ This follows from integration by parts and the smoothness of the polynomial $\;P(\rho;z):={\left(\rho^{n-1}-z^{n-1}\right)}/{\left(\rho-z\right)}=
\sum_{\ell=0}^{n-2}\,\rho^{n-2-\ell}\,z^{\ell}\;$ as we can write $$\begin{aligned}
\int_{0}^{1}\,e^{2\imath N
\,\left(\rho-z\right)^{2}}\,
\left(\rho^{n-1}-z^{n-1}\right)\,d\rho\,=\,
\frac{1}{4\imath N}
\int_{0}^{1}\,
P(\rho;z)\,
\partial_{\rho}\,e^{2\imath N
\,\left(\rho-z\right)^{2}}\,d\rho.\end{aligned}$$ Plugging (\[44\]) together with the latter estimate into (\[feq\]) we get that $$\begin{aligned}
\label{sm}
e^{2\imath N\,z^2}\,I^{1,k}_{N}(s)
\,=\,z^{n-1}\,
\sqrt{\frac{\pi}{2N}}\,e^{\frac{\pi}{4}\imath}
+\mathcal{O}\left(\frac{1}{N}\right).\end{aligned}$$ From (\[sm\]) follows the estimate $$\begin{aligned}
\left|I^{1,k}_{N}(s)\right|
\,\gtrsim\,N^{-1/2}.\end{aligned}$$ An explanation for the estimate above comes from the fact that the function $\,\lambda_{N}(\rho;z)=
\cos{\left(2N\,\left(\rho-z\right)^{2}\right)}\,$ remains positive for $\;|\rho-z|<\sqrt{\pi/4N}\;$ and, away from the stationary point $\rho=z$, it oscillates rapidly for large $N$ (unlike the slowly varying factor $\rho^{n-1}$), so that, when integrating over $\rho$, the contributions of neighbouring half-waves where $\lambda_{N}$ changes sign almost cancel. See the figure below. An identical estimate for $\,I^{2,k}_{N}(s)\,$ follows by applying the same argument. The approach adopted here is standard; it is the key idea behind the proof of the stationary phase method illustrated by Lemma \[stationary\]. $$\begin{aligned}
\begin{tikzpicture}[yscale=1.5]
\fill[fill = black!50] (3*pi/8,0) -- plot [domain=3*pi/8:11*pi/13] (\x,{cos(64*\x*\x )}) -- (11*pi/13,0) -- cycle;
\fill[fill = black!50] (-3*pi/8,0) -- plot [domain=-3*pi/8:-11*pi/13] (\x,{cos(64*\x*\x )}) -- (-11*pi/13,0) -- cycle;
\fill[fill = black!25] (11*pi/13,0) -- plot [domain=11*pi/13:235*pi/208] (\x,{cos(64*\x*\x )}) -- (235*pi/208,0) -- cycle;
\fill[fill = black!25] (-11*pi/13,0) -- plot [domain=-11*pi/13:-235*pi/208] (\x,{cos(64*\x*\x )}) -- (-235*pi/208,0) -- cycle;
\fill[fill = black!5] (235*pi/208,0) -- plot [domain=235*pi/208:141*pi/104] (\x,{cos(64*\x*\x )}) -- (141*pi/104,0) -- cycle;
\fill[fill = black!5] (-235*pi/208,0) -- plot [domain=-235*pi/208:-141*pi/104] (\x,{cos(64*\x*\x )}) -- (-141*pi/104,0) -- cycle;
\draw [ <->] (-6.5,0) -- (6.5,0);
\draw [help lines,dashed,<-] (0,1.3) -- (0,0);
\draw (0,0) node[below] {$\rho=z$};
\draw (-5,1.3) node[above]{$ \cos{\left(N\,\left(\rho-z\right)^{2}\right)}$}; ;
\draw [thick,samples=500,domain=-2*pi:2*pi] plot
(\x, {cos(64*\x*\x )});
\draw [ <-](-3*pi/8+0.01,-0.7)
--(-2*pi/8+0.18,-0.7);
\draw [ ->](2*pi/8-0.1,-0.7)
--(3*pi/8-0.01,-0.7);
\draw (6.5,0) node[right] {$\rho$};
\draw (0,-0.7) node {$\sqrt{{\pi}/{2N}}$};
\draw [help lines,dashed] (-3*pi/8,1) -- (-3*pi/8,-1);
\draw [help lines,dashed] (3*pi/8,1) -- (3*pi/8,0-1);
\end{tikzpicture}\end{aligned}$$ Finally, since $\;\displaystyle \parallel h \parallel_{L^{p}(B)}\,=\, |B|^{{1}/{p}}\,\approx\,1,\;$ then $$\begin{aligned}
\label{lb3}
\parallel T^{j,k}_{N}\parallel_{L^{p}\left(B\right)
\rightarrow L^{q}([0,1])}
\,\geq\,\frac{\parallel T^{j,k}_{N} h \parallel_{L^{q}([0,1])}}{
\parallel h \parallel_{L^{p}(B)}}
\;\gtrsim\;
N^{-\frac{1}{2}}.\end{aligned}$$ Putting (\[lb1\]), (\[lb2\]) and (\[lb3\]) together we deduce $$\begin{aligned}
\parallel T^{j,k}_{N} \parallel_{L^{p}(B)
\rightarrow L^{q}([0,1])}
\;\;\gtrsim\;
N^{-\min\left\{\frac{n}{j}\left(1-\frac{1}{p}\right),
\,\frac{1}{k}\frac{1}{q},\,\frac{1}{2}\right\}}
\,=\,N^{- C_{j,k,n}\left(\frac{1}{p},\frac{1}{q}\right)}.\end{aligned}$$ **Step 2.** The $\,L^{2} - L^{2}\,$ estimate takes the form: $$\begin{aligned}
\label{energy}
\left.
\begin{array}{ll}
\vspace{0.3 cm}
\parallel T^{j,k}_{N}f\parallel_{L^{2}([0,1])}
\;\lesssim \; N^{-1/2k}\,\parallel f \parallel_{L^{2}\left(B\right)}, & \hbox{$n\geq j$,} \\
\parallel T^{2,k}_{N}f\parallel_{L^{2}([0,1])}
\;\lesssim \; N^{-n/2j}\,\parallel f \parallel_{L^{2}\left(B\right)}, & \hbox{$n=1$.}
\end{array}
\right \}\end{aligned}$$ Besides (\[lb2\]), the estimate (\[energy\]) demonstrates the difference between linear ($k=1$) and quadratic ($k=2$) oscillations. Let $\,x\in {\mathbb{R}}^{n}-\{0\}.\,$ The phase $\;s \longrightarrow {|x|}^{j}\,s^{k}\;$ of the oscillatory factor in (\[intop\]) is non-stationary when $\,k=1,\,$ while in the case $\,k=2\,$ it is stationary, with the nondegenerate critical point $s=0.\,$ This is where non-stationary and stationary phase methods (see lemmas \[nonstationary\] and \[stationary0\] below) for estimating oscillatory integrals come into play. As expected from (\[lb1\]), the role of $j$ appears only in the dimension $n=1.$ Using the estimate (\[energy\]) in Lemma \[holder\] we infer $$\begin{aligned}
\parallel T^{j,k}_{N} \parallel_{L^{p}(B)
\rightarrow L^{q}([0,1])}
\;\;\lesssim\; N^{- C_{j,k,n}\left(\frac{1}{p},\frac{1}{q}\right)}.\end{aligned}$$
### 3. Proof of the energy estimate (\[energy\]) {#proof-of-the-energy-estimate-energy .unnumbered}
To prove the estimate (\[energy\]) we need lemmas \[kernelsk\], \[kernelsq\] and \[even\] that we give below. Lemma \[kernelsk\] is based on the assertions of lemmas \[nonstationary\] and \[stationary0\].
\[nonstationary\] ([@stein], Proposition 1 Chapter VIII) Let $\,\psi \in C^{\infty}_{c}\left(\mathbb{R}
\right)\,$ and let $\displaystyle\; I(\lambda)=
\int_{\mathbb{R}}\,\psi(s)\,e^{\imath \,\lambda\,s}\,ds. \,$ Then $\;\displaystyle |I(\lambda)|\;\lesssim\;
\min{\left\{
\frac{1}{1+|\lambda|},
\frac{1}{1+\lambda^{2}}\right\}}.$
Observing that $\;\displaystyle \int_{0}^{1}\,e^{\imath \,\lambda\,s^{2}}\,ds\,=\,
\frac{1}{2}\int_{-1}^{1}\,e^{\imath \,\lambda\,s^{2}}\,ds\;$ and arguing as in (\[t1\])-(\[44\]) implies the estimate in Lemma \[stationary0\].
\[stationary0\] $$\begin{aligned}
\left|\int_{0}^{1}\,e^{\imath \,\lambda\,s^{2}}\,ds\right|\;\lesssim \; \max{\left\{\frac{1}{1+\sqrt{|\lambda|}},
\frac{1}{1+|\lambda|}\right\}}.\end{aligned}$$
\[kernelsk\] Let $\,\psi \in C^{\infty}_{c}\left(\mathbb{R}
\right)\,$ and let $\;K_{N}^{j,k}:{\mathbb{R}}^{n}\times{\mathbb{R}}^{n}
\longrightarrow {\mathbb{C}}\;$ be defined by $$\begin{aligned}
K_{N}^{j,k}(x,y):=
\left\{
\begin{array}{ll}
\displaystyle \int_{\mathbb{R}}\,\psi(s)\,e^{\imath N \left({|x|}^{j}-{|y|}^{j}\right)s}\,ds, & \hbox{$k=1$;} \\\\
\displaystyle \int_{0}^{1}\,\,e^{\imath N \left({|x|}^{j}-{|y|}^{j}\right)s^{2}}\,ds , & \hbox{$k=2$.}
\end{array}
\right.\end{aligned}$$ Then $$\begin{aligned}
\label{kernelsk1}
&\hspace*{-1 cm}
|K_{N}^{j,1}(x,y)|\;\lesssim\;
\min{\left\{
\left(1+N\,\left|{|x|}^{j}-{|y|}^{j}\right|\right)^{-1},
\left(1+N^{2} \, \left({|x|}^{j}-{|y|}^{j}\right)^{2}\right)^{-1}
\right\}},
\\ \label{kernelsk2}
&\hspace*{-1 cm} |K_{N}^{j,2}(x,y)|\;\lesssim\;
\max{\left\{\left( 1+\sqrt{N}\, \sqrt{\left|{|x|}^{j}-{|y|}^{j}\right|}\right)^{-1},
\left(1+N\, \left|{|x|}^{j}-{|y|}^{j}\right|\right)^{-1}\right\}}.\end{aligned}$$
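Indeed, with $\,\lambda = N\left({|x|}^{j}-{|y|}^{j}\right)\,$ one has $$\begin{aligned}
K_{N}^{j,1}(x,y)\,=\,I\left(\lambda\right)
\qquad\text{and}\qquad
K_{N}^{j,2}(x,y)\,=\,\int_{0}^{1}\,e^{\imath\,\lambda\,s^{2}}\,ds,\end{aligned}$$ so (\[kernelsk1\]) and (\[kernelsk2\]) are exactly the bounds of Lemma \[nonstationary\] and Lemma \[stationary0\] evaluated at this value of $\lambda$.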
The next lemma is mainly a consequence of Young’s inequality.
\[kernelsq\] Let $\,p,q,r\geq 1\,$ and $\,1/p+1/q+1/r =2.\,$ Let $\,f\in L^{p}(B),\,$ $\,g\in L^{q}(B)\,$ and $\,h\in L^{r}([0,1]).\,$ Then $$\begin{aligned}
\left|\,\int_{{B}}\,\int_{{B}}\,
f(x)\,g(y)\,h(|x|^{m}-|y|^{m})\,dx\,dy\,\right|\;\lesssim
\;\parallel f \parallel_{L^{p}(B)}\,
\parallel g \parallel_{L^{q}(B)}\,
\parallel h \parallel_{L^{r}([0,1])}\end{aligned}$$ provided $\,m\leq n$.
Switching to polar coordinates by setting $\,x=r_{1}\theta_{1}\,$ and $\,y=r_{2}\theta_{2}\,$ then applying Fubini’s theorem gives $$\begin{aligned}
\label{newlemma1}
\left|\,\int_{{B}}\,\int_{{B}}\,
f(x)\,g(y)\,h(|x|^{m}-|y|^{m})\,dx\,dy\,\right|
\,\leq\,\int_{S^{n-1}}\,\int_{S^{n-1}}\,
|Q(\theta_{1},\theta_{2})|
\,d\theta_{1}\,d\theta_{2}\end{aligned}$$ where $$\begin{aligned}
Q(\theta_{1},\theta_{2})\,=\,
\int_{0}^{1}\,\int_{0}^{1}\,
f(r_{1}\theta_{1})\,g(r_{2}\theta_{2})
\,h\left({r_{1}}^{m}-{r_{2}}^{m}\right)
\,r_{1}^{n-1}\,r_{2}^{n-1}\,dr_{1}\,dr_{2}.\end{aligned}$$ Changing variables $\:r_{i}^{m}\,\longrightarrow\, \rho_{i}\:$ then using Young’s inequality we get $$\begin{aligned}
\hspace*{-1 cm}
|Q(\theta_{1},\theta_{2})|\,\lesssim\,
\left(\int_{0}^{1}\left|f(\sqrt[m]{\rho_{1}}\,\theta_{1})
\right|^{p}\,\rho_{1}^{p\frac{n-m}{m}}\,d\rho_{1}\right)
^{\frac{1}{p}}
\left(\int_{0}^{1}\left|g(\sqrt[m]{\rho_{2}}\,\theta_{2})
\right|^{q}\,\rho_{2}^{q\frac{n-m}{m}}\,d\rho_{2}\right)
^{\frac{1}{q}}
\parallel h \parallel_{L^{r}([0,1])}.\end{aligned}$$ Reversing the change of variables in the first two integrals on the right-hand side of the latter estimate, we obtain $$\begin{aligned}
\label{newlemma2}
\hspace*{-1 cm}
\nonumber |Q(\theta_{1},\theta_{2})|\,\lesssim&\,
\left(\int_{0}^{1}\left|f({r_{1}}\,\theta_{1})
\right|^{p}\,r_{1}^{(p-1)(n-m)}\,
r_{1}^{n-1}\,dr_{1}\right)
^{\frac{1}{p}}\\&\;\nonumber
\left(\int_{0}^{1}\left|g({r_{2}}\,\theta_{2})
\right|^{q}\,r_{2}^{(q-1)(n-m)}\,
r_{2}^{n-1}\,dr_{2}\right)
^{\frac{1}{q}}\,
\parallel h \parallel_{L^{r}([0,1])}\\
\leq&
\left(\int_{0}^{1}\left|f({r_{1}}\,\theta_{1})
\right|^{p}\,r_{1}^{n-1}\,dr_{1}\right)
^{\frac{1}{p}}
\left(\int_{0}^{1}\left|g({r_{2}}\,\theta_{2})
\right|^{q}\,r_{2}^{n-1}\,dr_{2}\right)
^{\frac{1}{q}}
\,\parallel h \parallel_{L^{r}([0,1])}\end{aligned}$$ as long as $\,m\leq n.$ Invoking Hölder’s inequality it follows that $$\begin{aligned}
\nonumber &\int_{S^{n-1}}\,
\left(\int_{0}^{1}\left|f({r_{1}}\,\theta_{1})
\right|^{p}\,
r_{1}^{n-1}\,dr_{1}\right)^{\frac{1}{p}}\,d\theta_{1}\\
\label{newlemma3} &\hspace{0.8 cm}\leq\;\omega_{n-1}^{1-\frac{1}{p}}\;
\left( \int_{S^{n-1}}\,\int_{0}^{1}
\left|f({r_{1}}\,\theta_{1})
\right|^{p}\,r_{1}^{n-1}\,dr_{1}\,d
\theta_{1}\right)^{\frac{1}{p}}\;=
\;\omega_{n-1}^{1-\frac{1}{p}}\;
\parallel f \parallel_{L^{p}(B)},\\
\nonumber &
\int_{S^{n-1}}\,
\left(\int_{0}^{1}\left|g({r_{2}}\,\theta_{2})
\right|^{q}\,
r_{2}^{n-1}\,dr_{2}\right)^{\frac{1}{q}}\,d\theta_{2}\\
\label{newlemma4} &
\hspace{0.8 cm}\leq\;\omega_{n-1}^{1-\frac{1}{q}}\;
\left( \int_{S^{n-1}}\,\int_{0}^{1}
\left|g({r_{2}}\,\theta_{2})
\right|^{q}\,r_{2}^{n-1}\,dr_{2}\,d
\theta_{2}\right)^{\frac{1}{q}}\;=
\;\omega_{n-1}^{1-\frac{1}{q}}\;
\parallel g \parallel_{L^{q}(B)}.\end{aligned}$$ Returning to (\[newlemma1\]) with the estimates (\[newlemma2\]), (\[newlemma3\]) and (\[newlemma4\]) concludes the proof.
Remark \[even0\] and Lemma \[homogeneous\] are needed to show Lemma \[even\].
\[even0\] Suppose that the integral $$\begin{aligned}
J\,=\,
\int_{-b_{1}}^{b_{1}}...\int_{-b_{m}}^{b_{m}}
\,K(t_{1},...,t_{m})\,
f_{1}(t_{1})...f_{m}(t_{m})\,dt_{1}...dt_{m}\end{aligned}$$ exists. If $\,K\,$ is even in all its variables then $$\begin{aligned}
J\,=\,
\int_{0}^{b_{1}}...
\int_{0}^{b_{m}}
\,K(t_{1},...,t_{m})\,
\prod_{i=1}^{m}\left(f_{i}(t_{i})+f_{i}(-t_{i})\right)
\,dt_{1}...dt_{m}.\end{aligned}$$ This follows by splitting each integral at the origin, substituting $\,t_{i}\rightarrow -t_{i}\,$ on the negative half-ranges, and using that $\,K\,$ is even in each variable.
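For instance, in one variable, splitting at the origin and substituting $\,t\rightarrow -t\,$ on the negative half-range gives, for $\,K\,$ even, $$\begin{aligned}
\int_{-b}^{b}\,K(t)\,f(t)\,dt\,=\,\int_{0}^{b}\,K(t)\,f(t)\,dt+\int_{0}^{b}\,K(-t)\,f(-t)\,dt
\,=\,\int_{0}^{b}\,K(t)\left(f(t)+f(-t)\right)dt,\end{aligned}$$ and iterating this in each variable yields the expression above.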
Lemma \[homogeneous\] discusses the boundedness of a bilinear form with a homogeneous kernel.
\[homogeneous\] Let $\,f\in L^{p}([0,1])\,$ and $\,g\in L^{q}([0,1])\,$ with $\,1\leq p \leq +\infty\,$ and $\,1/p\, +\, 1/q=1.\,$ Assume that $\,K:{[0,1]}\times
{[0,1]}\longrightarrow {\mathbb{R}}\,$ is homogeneous of degree $-1,\,$ that is, $\,K(\lambda x, \lambda y)=
\lambda^{-1} K(x,y),\,$ for $\,\lambda>0.\,$ Assume also that $$\begin{aligned}
\int_{0}^{+\infty}
\,\left|K(x,1)\right|\,{x}^{-\frac{1}{p}}\,dx
\,\lesssim\,1 \qquad \text{or } \qquad
\int_{0}^{+\infty}
\,\left|K(1,y)\right|\,{y}^{-\frac{1}{q}}\,dy
\,\lesssim\,1.\end{aligned}$$ Then $$\begin{aligned}
\left|\int_{0}^{1}\int_{0}^{1}\,
K(x,y)\,f(x)\,g(y)\,dx\,dy\right|\;\lesssim\;
\parallel f \parallel_{L^{p}([0,1])}\,
\parallel g \parallel_{L^{q}([0,1])}.\end{aligned}$$
In [@hardy], one can find a proof for the case when the integrals that define the bilinear form are taken over $\,[0,+\infty[.\,$ We treat this slightly trickier case of finite range without using the result in [@hardy].
Let $\,\displaystyle Q(f,g)\,=\,
\int_{0}^{1}\int_{0}^{1}\,
K(x,y)\,f(x)\,g(y)\,dx\,dy.\,$ Using a change of variables, $\,x\rightarrow y.u,\,$ and exploiting the homogeneity of the kernel we have $$\begin{aligned}
\hspace*{-0.8 cm} Q(f,g)\,=\,
\int_{0}^{1}y\,g(y)\int_{0}^{\frac{1}{y}}
K(y.u,y)\,f(y.u)\,du\,dy\,=\,
\int_{0}^{1}g(y)\int_{0}^{\frac{1}{y}}
K(u,1)\,f(y.u)\,du\,dy.\end{aligned}$$ By Fubini’s theorem we may write $$\begin{aligned}
\label{qfg}
\hspace*{-1 cm}
Q(f,g)=\int_{0}^{1} K(u,1) \int_{0}^{1} f(y.u)\,g(y)\,dy\,du+
\int_{1}^{+\infty} K(u,1) \int_{0}^{\frac{1}{u}} f(y.u)\,g(y)\,dy\,du.\end{aligned}$$ But by Hölder’s inequality we have $$\begin{aligned}
\hspace*{-0.8 cm}
\left|\int_{0}^{1}\,f(y.u)\,g(y)\,dy\right|
\;\leq&\;\left(\int_{0}^{1}\,|f(y.u)|^{p}\,dy\right)
^{\frac{1}{p}}
\left(\int_{0}^{1}\,|g(y)|^{q}\,dy\right)^{\frac{1}{q}}\\
\,&\hspace*{-2 cm}=u^{-\frac{1}{p}}\,
\left(\int_{0}^{u}\,|f(x)|^{p}\,dx\right)
^{\frac{1}{p}}\,
\parallel g \parallel_{L^{q}([0,1])}
\;\leq\;u^{-\frac{1}{p}}\,\parallel f \parallel_{L^{p}([0,1])}\,
\parallel g \parallel_{L^{q}([0,1])}\end{aligned}$$ for all $\,0 < u < 1.\,$ Similarly $$\begin{aligned}
\hspace*{-0.8 cm}
\left|\int_{0}^{\frac{1}{u}}\,f(y.u)\,g(y)\,dy\right|
\;\leq&\;\left(\int_{0}^{\frac{1}{u}}
\,|f(y.u)|^{p}\,dy\right)
^{\frac{1}{p}}\,
\left(\int_{0}^{\frac{1}{u}}\,|g(y)|^{q}
\,dy\right)^{\frac{1}{q}}\\
\,&\hspace*{-3.4 cm}=u^{-\frac{1}{p}}\,
\left(\int_{0}^{1}\,|f(x)|^{p}\,dx\right)
^{\frac{1}{p}}\,
\left(\int_{0}^{\frac{1}{u}}\,|g(y)|^{q}
\,dy\right)^{\frac{1}{q}}
\;\leq\;u^{-\frac{1}{p}}\,\parallel f \parallel_{L^{p}([0,1])}\,
\parallel g \parallel_{L^{q}([0,1])}\end{aligned}$$ for all $\,1< u < +\infty.\,$ Using the last two inequalities together with the triangle inequality in (\[qfg\]) we get $$\begin{aligned}
\hspace*{-1 cm}
|Q(f,g)|\leq& \,
\,\parallel f \parallel_{L^{p}([0,1])}\,
\parallel g \parallel_{L^{q}([0,1])}\,
\left(\int_{0}^{1} |K(u,1)| \,u^{-\frac{1}{p}}\,du+
\int_{1}^{+\infty} |K(u,1)|\,u^{-\frac{1}{p}}\,du
\right)\\
\lesssim&\;
\parallel f \parallel_{L^{p}([0,1])}\,
\parallel g \parallel_{L^{q}([0,1])},
\qquad \text{when} \quad
\int_{0}^{+\infty} |K(x,1)|\,x^{-\frac{1}{p}}\,dx
\,\lesssim\,1.\end{aligned}$$ When $\;\displaystyle \int_{0}^{+\infty}\left|K(1,y)\right|
{y}^{-\frac{1}{q}}dy\lesssim 1\;$ the assertion follows analogously.
If $\,K(x,y)=\left( x + y \right)^{-1}\,$ in Lemma \[homogeneous\] we get Hilbert’s inequality.
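For this kernel $\,K(x,1)=\left(x+1\right)^{-1}\,$ and the hypothesis of Lemma \[homogeneous\] holds for every $\,1<p<+\infty,\,$ since a standard Beta function computation gives $$\begin{aligned}
\int_{0}^{+\infty}\,\frac{{x}^{-\frac{1}{p}}}{x+1}\,dx\,=\,\frac{\pi}{\sin\left(\frac{\pi}{p}\right)}\,<\,+\infty.\end{aligned}$$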
\[even\] Let $\,f,g \in L^2([-1,1]).$ Then $$\begin{aligned}
\label{even1}&\int_{-1}^{1}\,\int_{-1}^{1}\,
\frac{|f(x)||g(y)|}{1+N\,
\left|x^2-y^2\right|}
\,dx\,dy \;\lesssim\; \frac{1}{\sqrt{N}}\,
\parallel f \parallel_{L^{2}([-1,1])}\,\parallel g \parallel_{L^{2}([-1,1])}, \\
\label{even2} &\int_{-1}^{1}\,\int_{-1}^{1}\,
\frac{|f(x)|\,|g(y)|}{
\sqrt{\left|{x}^{2}-{y}^{2}\right|}}
\,dx\,dy \;\lesssim\; \parallel f \parallel_{L^{2}([-1,1])}\,\parallel g \parallel_{L^{2}([-1,1])}.\end{aligned}$$
Beginning with the estimate (\[even1\]), Remark \[even0\] suggests estimating\
\
$\; \displaystyle \int_{0}^{1}\,\int_{0}^{1}\,
\frac{|f(\pm x)||g(\pm y)|}{1+N\,
\left|x^2-y^2\right|}
\,dx\,dy.\;$ Let $\;\displaystyle W_{N}(f,g):=
\int_{0}^{1}\,\int_{0}^{1}\,
\frac{|f(x)||g(y)|}{1+N\,
\left|x^2-y^2\right|}
\,dx\,dy.$\
\
If $\,x,y \geq 0\,$ and $\,|x-y|>>1/\sqrt{N}\,$ then we also have $\,x+y>>1/\sqrt{N}\,$ and consequently $\,N\left|x^2-y^2\right|>>1.\,$ Therefore $$\begin{aligned}
\hspace{-0.8 cm}
\nonumber W_{N}(f,g)&\approx
\int\,\int_{
\substack{0\leq x,y\leq1,\\
|x-y|\lesssim\; 1/\sqrt{N}}}
\frac{|f(x)||g(y)|}{1+N\,
\left|x^2-y^2\right|}
\,dx\,dy+
\int\,\int_{
\substack{0\leq x,y\leq1,\\
|x-y|>> 1/\sqrt{N}}}
\frac{|f(x)||g(y)|}{1+N\,
\left|x^2-y^2\right|}
\,dx\,dy\\
\nonumber &\lesssim
\int\,\int_{
\substack{0\leq x,y\leq1,\\
|x-y|\lesssim\; 1/\sqrt{N}}}
{|f(x)||g(y)|}\,dx\,dy+
\frac{1}{N}\,\int\,\int_{
\substack{0\leq x,y\leq1,\\
|x-y|>> 1/\sqrt{N}}}
\frac{|f(x)||g(y)|}{\left|x^2-y^2\right|}
\,dx\,dy\\
\label{h1} &\lesssim
\int_{0}^{1}\,\int_{0}^{1}\,
\chi_{N}{\left(|x-y|\right)}{|f(x)||g(y)|}\,dx\,dy+
\frac{1}{\sqrt{N}}\,\int_{0}^{1}\,\int_{0}^{1}\,
\frac{|f(x)||g(y)|}{x+y}
\,dx\,dy\end{aligned}$$ where $\,\chi_{N}\,$ is the characteristic function of the interval $\,[0,1/\sqrt{N}\,].$ By Young’s inequality we have $$\begin{aligned}
\label{h2}
\int_{0}^{1}\,\int_{0}^{1}\,
\chi_{N}{\left(|x-y|\right)}{|f(x)||g(y)|}
\,dx\,dy\,\leq\,
\frac{1}{\sqrt{N}}\,
\parallel f \parallel_{L^{2}([0,1])}\,\parallel g \parallel_{L^{2}([0,1])}.\end{aligned}$$ And by Hilbert’s inequality $$\begin{aligned}
\label{h3}
\int_{0}^{1}\,\int_{0}^{1}\,
\frac{|f(x)||g(y)|}{x+y}
\,dx\,dy\,\lesssim \;\parallel f \parallel_{L^{2}([0,1])}\,\parallel g \parallel_{L^{2}([0,1])}.\end{aligned}$$ Using (\[h2\]) together with (\[h3\]) in (\[h1\]) we obtain $$\begin{aligned}
\int_{0}^{1}\,\int_{0}^{1}\,
\frac{|f(x)||g(y)|}{1+N\,
\left|x^2-y^2\right|}
\,dx\,dy\,\lesssim\,\frac{1}{\sqrt{N}}\,
\parallel f \parallel_{L^{2}([0,1])}\,\parallel g \parallel_{L^{2}([0,1])}.\end{aligned}$$ In obtaining (\[h1\]), we worked only on the kernel of $W_{N}.$ It is therefore easy to see that replacing the function $\,x\rightarrow f(x)\,$ by the function $\,x\rightarrow f(-x)\,$ or $\,y\rightarrow g(y)\,$ by $\,y\rightarrow g(-y)\,$ then repeating the routine above eventually leads to the estimate $$\begin{aligned}
\int_{0}^{1}\,\int_{0}^{1}\,
\frac{|f(\pm x)||g(\pm y)|}{1+N\,
\left|x^2-y^2\right|}
\,dx\,dy\,\lesssim\,\frac{1}{\sqrt{N}}\,
\parallel f \parallel_{L^{2}([-1,1])}\,\parallel g \parallel_{L^{2}([-1,1])}.\end{aligned}$$ This proves (\[even1\]). Taking advantage of Remark \[even0\] again and arguing like before, it suffices to\
\
estimate $\displaystyle
V(f,g)=\int_{0}^{1}\,\int_{0}^{1}\,
\frac{|f(x)|\,|g(y)|}{\sqrt{\left|{x}^{2}-{y}^{2}\right|}}
\,dx\,dy.\;$ Since $\; \displaystyle \int_{0}^{+\infty}\frac{dz}{\sqrt{z}\,\sqrt{|1-z^2|}}
\,\approx\,1,$\
\
a direct application of Lemma \[homogeneous\] then gives $\,V(f,g)\,\lesssim\,\parallel f \parallel_{L^{2}([0,1])}\,\parallel g \parallel_{L^{2}([0,1])}.$
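The finiteness of the last integral can be seen by splitting the domain of integration: $$\begin{aligned}
\int_{0}^{+\infty}\frac{dz}{\sqrt{z}\,\sqrt{|1-z^{2}|}}
\;\lesssim\;
\int_{0}^{\frac{1}{2}}\frac{dz}{\sqrt{z}}
+\int_{\frac{1}{2}}^{2}\frac{dz}{\sqrt{|1-z|}}
+\int_{2}^{+\infty}\frac{dz}{z^{3/2}}
\;<\;+\infty.\end{aligned}$$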
We are now ready to prove (\[energy\]). We do this for each of the cases $k=1$ and $k=2$ separately.\
**The phase is linear in $\textbf{s}\,$ $\,(k=1)$**:\
Let $f\in L^{2}(B)$ and let $\psi$ be a nonnegative smooth cutoff function such that $\,{supp}\:\psi \subset\;]-1,2[\,$ and $\,\psi(s)=1\,$ on $\,[0,1]$. Since $\,|T^{j,1}_{N} f |^2 \,=\, T^{j,1}_{N} f\;\;\overline{T^{j,1}_{N} f},\,$ we have $$\begin{aligned}
&\hspace{-1 cm}\parallel T^{j,1}_{N} f \parallel^{2}_{L^{2}([0,1])}
\,=
\int_{0}^{1}\,|T^{j,1}_{N} f(s)|^{2}\,ds
\,\leq\,\int_{\mathbb{R}}\psi(s)\,|T^{j,1}_{N} f(s)|^{2}\,ds\\
&\hspace{-1 cm}=\;\int_{\mathbb{R}}\psi(s)\, T^{j,1}_{N} f(s)\;\overline{T^{j,1}_{N} f(s)}\,ds\,
=\;
\int_{\mathbb{R}}\psi(s)\, \int_{{B}}\,\int_{{B}}\,e^{\imath N \left({|x|}^{j}-{|y|}^{j}\right)s}\,
f(x)\,\overline{f(y)}\,dx\,dy\,ds.\end{aligned}$$ Applying Fubini’s theorem we get $$\begin{aligned}
\label{energy01}
\parallel T^{j,1}_{N} f \parallel^{2}_{L^{2}([0,1])}\;\leq\;
\int_{{B}}\,\int_{{B}}\,K_{N}^{j,1}(x,y)\,
f(x)\,\overline{f(y)}\,dx\,dy.\end{aligned}$$ In the light of the estimate (\[kernelsk1\]) of Lemma \[kernelsk\], it follows that $$\begin{aligned}
\label{energy11}
\parallel T^{j,1}_{N} f \parallel^{2}_{L^{2}([0,1])}\;\lesssim\;
\int_{{B}}\,\int_{{B}}\,
\frac{|f(x)|\,|f(y)|}{1+N^{2} \, \left({|x|}^{j}-{|y|}^{j}\right)^{2}}
\,dx\,dy.\end{aligned}$$ Since $\displaystyle
\int_{0}^{1}\frac{dz}{1+N^2 z^2}\approx \frac{1}{N},\,$ applying Lemma \[kernelsq\] with $\,h(z)=\left(1+N^2 z^2\right)^{-1}\,$ to the\
\
estimate (\[energy11\]), we obtain $$\begin{aligned}
\label{e1}
\parallel T^{j,1}_{N} f \parallel_{L^{2}([0,1])}\;\lesssim\;
\frac{1}{\sqrt{N}}\,\parallel f \parallel_{L^{2}(B)},
\qquad\text{for all dimensions}\;\;n\geq j.\end{aligned}$$ To finish this case, it remains to estimate $\,T^{2,1}f\,$ in the dimension $\,n=1.$ In view of (\[kernelsk1\]) and (\[energy01\]), we have $$\begin{aligned}
\hspace{-1 cm}
\parallel T^{2,1}_{N} f \parallel^{2}_{L^{2}([0,1])}\:\lesssim\,
\int_{-1}^{1}\,\int_{-1}^{1}\,
\frac{|f(x)|\,|f(y)|}{1+N\,
\left|x^2-y^2\right|}
\,dx\,dy.\end{aligned}$$ Hence, by (\[even1\]) of Lemma \[even\], $$\begin{aligned}
\label{e2}
\parallel T^{2,1}_{N} f \parallel_{L^{2}([0,1])}\:
\lesssim\,\frac{1}{N^{1/4}}\,
\parallel f \parallel_{L^{2}([-1,1])}.\end{aligned}$$ **The phase is quadratic in $\textbf{s}\,$ $\,(k=2)$**:\
For $f\in L^{2}(B)$, using Fubini’s theorem and then employing the estimate (\[kernelsk2\]) implies $$\begin{aligned}
\label{energy21}
\hspace{-0.6 cm}
\parallel T^{j,2}_{N} f \parallel^{2}_{L^{2}([0,1])}\:=\,
\int_{{B}}\,\int_{{B}}\,K_{N}^{j,2}(x,y)\,
f(x)\,\overline{f(y)}\,dx\,dy
\,\lesssim\, G^{j}_{N}(f)+H^{j}_{N}(f)\end{aligned}$$ where $$\begin{aligned}
G^{j}_{N}(f)\,=&\, \int_{{B}}\,\int_{{B}}\,
\frac{|f(x)|\,|f(y)|}{1+\sqrt{N}\, \sqrt{\left|{|x|}^{j}-{|y|}^{j}\right|}}
\,dx\,dy,\\
H^{j}_{N}(f)\,=&\, \int_{{B}}\,\int_{{B}}\,
\frac{|f(x)|\,|f(y)|}{1+N\, \left|{|x|}^{j}-{|y|}^{j}\right|}
\,dx\,dy.\end{aligned}$$ Since $\;\displaystyle \int_{0}^{1}\,\frac{dz}{1+\sqrt{N}\,\sqrt{z}}
\,\approx\, \frac{1}{\sqrt{N}},\quad
\int_{0}^{1}\,\frac{dz}{1+N\,z}
\,=\,
\text{\large o}\left(\frac{1}{\sqrt{N}}\right),
\quad \text{as}\;\;\; N\longrightarrow+\infty,
$\
\
applying Lemma \[kernelsq\] to both $\,G^{j}_{N}(f)\,$ and $\,H^{j}_{N}(f)\,$ gives the estimate $$\begin{aligned}
\label{energy22}
G^{j}_{N}(f)+
H^{j}_{N}(f)\;\lesssim\; \frac{1}{\sqrt{N}}
\parallel f \parallel^{2}_{L^{2}(B)},
\qquad n\geq j.\end{aligned}$$ It remains to control $\:G^{2}_{N}(f)\,$ and $\,
H^{2}_{N}(f)\:$ in the dimension $\,n=1.\,$ But when $\,n=1,$ $$\begin{aligned}
\hspace*{-0.4 cm}
G^{2}_{N}(f)\,=&\, \int_{-1}^{1}\,\int_{-1}^{1}\,
\frac{|f(x)|\,|f(y)|}{1+\sqrt{N}\, \sqrt{\left|{x}^{2}-{y}^{2}\right|}}
\,dx\,dy\\ \leq&\,\frac{1}{\sqrt{N}}\,
\int_{-1}^{1}\,\int_{-1}^{1}\,
\frac{|f(x)|\,|f(y)|}{
\sqrt{\left|{x}^{2}-{y}^{2}\right|}}
\,dx\,dy
\,\lesssim\,
\frac{1}{\sqrt{N}}\,\parallel f \parallel^{2}_{L^{2}([-1,1])}\quad
\text{by}\;\; (\ref{even2})\; \text{of}\;
\text{Lemma} \;\ref{even}.\end{aligned}$$ An identical estimate holds for $H^{2}_{N}(f)$ in the dimension $n=1$ because of (\[even1\]). Combining this with (\[energy22\]) and using them in (\[energy21\]) yields $$\begin{aligned}
\label{e3}
\parallel T^{j,2}_{N} f \parallel_{L^{2}([0,1])}
\;\lesssim\;\frac{1}{{N}^{1/4}}
\parallel f \parallel_{L^{2}(B)}.\end{aligned}$$ Finally, bringing the estimates (\[e1\]), (\[e2\]) and (\[e3\]) together results in (\[energy\]).
References {#references .unnumbered}
==========
Ahmed A. Abdelhakim, A counter example to Strichartz estimates for the inhomogeneous Schrödinger equation, Journal of Mathematical Analysis and Applications, 414 (2014), 767–772.

Damiano Foschi, Some remarks on the $L^{p}-L^{q}$ boundedness of trigonometric sums and oscillatory integrals, Communications on Pure and Applied Analysis, 4 (2005), 569–588.

Damiano Foschi, Inhomogeneous Strichartz estimates, Journal of Hyperbolic Differential Equations, 2 (2005), 1–24.

Loukas Grafakos, Classical Fourier Analysis, 2nd ed., Springer, 2008.

G. H. Hardy, J. E. Littlewood, and G. Pólya, Inequalities, 2nd ed., Cambridge University Press, Cambridge, UK, 1952.

T. Kato, An $L^{q,r}$-theory for nonlinear Schrödinger equations, Spectral and scattering theory and applications, Adv. Stud. Pure Math., Math. Soc. Japan, Tokyo, 23 (1994), 223–238.

M. Keel and T. Tao, Endpoint Strichartz estimates, American Journal of Mathematics, 120 (1998), 955–980.

E. M. Stein, Harmonic Analysis: Real-Variable Methods, Orthogonality, and Oscillatory Integrals, Princeton Mathematical Series, 43, Princeton University Press, Princeton, NJ, 1993.

T. Tao, Nonlinear Dispersive Equations: Local and Global Analysis, CBMS Regional Conference Series in Mathematics, 2006.

M. C. Vilela, Strichartz estimates for the nonhomogeneous Schrödinger equation, Transactions of the American Mathematical Society, 359 (2007), 2123–2136.

Youngwoo Koh, Improved inhomogeneous Strichartz estimates for the Schrödinger equation, Journal of Mathematical Analysis and Applications, 373 (2011), 147–160.
Mathematics Department, Faculty of Science\
Assiut University, Assiut,71516, Egypt\
ahmed.abdelhakim@aun.edu.eg
---
author:
- 'Vahid Fazel-Rezai'
title: 'Equivalence Classes of Permutations Modulo Replacements Between 123 and Two-Integer Patterns'
---
Introduction {#sec:introduction}
============
A permutation is said to contain a pattern if it has a subpermutation order-isomorphic to the pattern. Modern study of permutation patterns was prompted by Donald Knuth in the form of stack-sortable permutations in [@knuth68], but has since evolved into an active combinatorial field. (For various other applications and motivations see Chapters 2 and 3 in the book by Sergey Kitaev [@kitaev11].) Much of the work on permutation patterns has dealt with counting permutations that contain or avoid certain patterns.
The notion of replacing a pattern in a permutation with a new pattern was first mentioned in different forms, as the plactic and Chinese monoids, by Alain Lascoux and Marcel P. Schützenberger in [@LS81], Gérard Duchamp and Daniel Krob in [@DK92], and Julien Cassaigne et al. in [@CEKNH01]. These have since been translated into the language of pattern replacements and further studied. Steven Linton et al. consider in [@LPRW12] several bi-directional pattern replacements between $123$ and another pattern of length 3, as well as cases where multiple such replacements are allowed at the same time. They inspect these replacements when applied to elements in general position, elements with adjacent positions, and elements with adjacent positions and adjacent values. A couple of papers, [@PRW11], by James Propp et al., and [@kuszmaul13], by William Kuszmaul, follow up on replacements on elements with adjacent positions. Together these three papers enumerate the equivalence classes in a general $S_n$ and count the size of the class containing the identity for almost all cases. In addition, William Kuszmaul and Ziling Zhou examine in [@KZ13] equivalence classes under more general families of replacements. Throughout all of this work, permutation length was preserved under replacements.
James Propp has suggested considering pattern replacements that do not preserve permutation length; that is, when a pattern is replaced with another pattern of different length. This paper takes the first step in this new direction by examining a group of replacements between patterns with three integer elements and two integer elements. We choose to use the classical type of replacement in which replaced elements need not be adjacent in position or value. To accommodate patterns of different lengths, we use a modified definition of patterns that includes the character ${{*}}$ in place of certain integer elements, acting as a placeholder in the replacement procedure. Like previous works, we define equivalence as reachability through a series of bi-directional replacements, using which we can partition the set of permutations of all lengths into equivalence classes.
In particular, in this paper we investigate the equivalence classes for the 18 replacements of the form $123 {\leftrightarrow}\beta$, where $\beta$ contains exactly one ${{*}}$ and two integers. First, we provide an overview of relevant definitions and notations in Section \[sec:definitions\]. Then, we break the 18 replacements into four categories and spend each of Sections \[sec:betaDecreasing\], \[sec:dropOnly\], \[sec:shiftRightShiftLeft\], and \[sec:switchNeighborDrop\] dealing with one of these categories. We fully characterize all equivalence classes for each of the considered replacements.
Definitions {#sec:definitions}
===========
We will use the standard definition of a permutation.
\[def:permutation\] A **permutation** $\pi$ is a finite, possibly empty string consisting of each of the first $n$ positive integers exactly once. We refer to $n$ as the **length** of the permutation, denoted by $|\pi|$.
The permutation of length $0$ is the empty permutation, denoted by $\emptyset$. We will refer to the identity permutation of length $n$, $123\dots n$ (or $\emptyset$ if $n=0$), by ${\text{\textnormal{id}}(n)}$ and the reverse identity permutation of length $n$, $n(n-1)(n-2)\dots 1$ (or $\emptyset$ if $n=0$), by ${\text{\textnormal{rid}}(n)}$.
We will also mention here a term that will emerge in Section \[sec:switchNeighborDrop\]:
\[def:leftToRightMinimum\] An element of a permutation is a **left-to-right minimum** if it has a value less than every element to its left.
The terms left-to-right maximum, right-to-left minimum, and right-to-left maximum are defined similarly.
We now introduce the classical notion of a pattern.
\[def:pattern\] Let $\pi$ and $\mu$ be permutations. A substring $p$ of $\pi$ forms a **copy** of the **pattern** $\mu$ if it is order-isomorphic to $\mu$. If such a substring exists, $\pi$ **contains** $\mu$. Otherwise, $\pi$ **avoids** $\mu$.
The definition of patterns must be extended for our purposes to accommodate patterns that may contain ${{*}}$.
\[def:starPattern\] Let $\rho$ and $\delta$ be strings each consisting of distinct positive integers and ${{*}}$’s. A substring $r$ of $\rho$ forms a **copy** of the **${{*}}$-pattern** $\delta$ if the following conditions are met:
- $r$ and $\delta$ have stars in the same positions, and
- $r$ and $\delta$, when ignoring all stars, are order-isomorphic to one another.
If a copy of $\delta$ in $\rho$ exists, $\rho$ **contains** $\delta$. Otherwise, $\rho$ **avoids** $\delta$.
We take interest in replacements of patterns in permutations, whose elements are not necessarily adjacent, to form new permutations of possibly different lengths. We define replacements using ${{*}}$-patterns to be able to work with changes in length:
\[def:replacement\] Let $\alpha$ and $\beta$ be ${{*}}$-patterns of equal length. Given a permutation $\pi$, we say another permutation $\sigma$ is a result of the **replacement** $\alpha \to \beta$ on $\pi$ if $\sigma$ can be obtained from the following steps on $\pi$:
1. As necessary, renumber the integers in $\pi$, while preserving relative order, and add instances of ${{*}}$ anywhere. Call this result $\rho^{(1)}$.
2. Choose some substring $a$ of $\rho^{(1)}$ that forms a copy of $\alpha$. Also, choose a string $b$ of distinct positive integers and ${{*}}$’s such that the following are true:
- $b$ is itself a copy of $\beta$,
- all elements common to both $b$ and $\rho^{(1)}$ are contained in $a$, and
- for all $x\in \mathbb{N}$ contained in both $\alpha$ and $\beta$, if $y\in \mathbb{N}$ in $a$ is at the same position as $x$ in $\alpha$, then $y$ is in $b$ at the same position as $x$ in $\beta$.
Replace $a$ in $\rho^{(1)}$ with $b$ and call the result $\rho^{(2)}$.
3. Drop all instances of ${{*}}$ in $\rho^{(2)}$ and renumber, while preserving relative order, so that the final result is a permutation $\rho^{(3)}=\sigma$.
For example, the intermediate results in applying the replacement $123 \to 3{{*}}2$ to the pattern $125$ in $14253$ would be $\rho^{(1)}=14253$, $\rho^{(2)}=54{{*}}23$, and $\rho^{(3)}=4312$, respectively. Note that both $\alpha$ and $\beta$ can contain ${{*}}$, so, for example, applying $12{{*}}\to 3{{*}}2$ to $12$ in $14253$ could have intermediate steps $\rho^{(1)} = 1526{{*}}3$, $\rho^{(2)} = 45{{*}}623$, and $\rho^{(3)}=34512$. Finally, also note that, in some cases, applying a specific replacement to a certain substring of a given permutation can give different results depending on the choice of $b$ in the second step.
For clarity, in this paper we will show alongside replacements the involved substrings in the original and resulting permutations with square brackets. In our previous example, this would be $[125 \to 41]$. Note that in the above definition it is possible that $|\pi| \not= |\sigma|$; that is, replacements do not necessarily preserve length.
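To make the replacement procedure concrete, the following Python sketch (function and variable names are ours) applies the forward direction $123 \to \beta$ for a pattern $\beta$ consisting of two integers of $\{1,2,3\}$ and one ${{*}}$, to a chosen copy of $123$; for such $\beta$ the string $b$ of Definition \[def:replacement\] is completely determined by the chosen copy, so no further choices arise. The reverse direction $\beta \to 123$, which may use the renumbering and ${{*}}$-insertion freedom of the first step, is not covered by this sketch.

``` python
def apply_forward(pi, beta, copy):
    """Apply 123 -> beta (beta: two integers of {1,2,3} and one '*', e.g. '3*2')
    to the copy of 123 at the 0-based positions copy = (i, j, k) of pi."""
    i, j, k = copy
    a = (pi[i], pi[j], pi[k])
    assert a[0] < a[1] < a[2], "the chosen positions must form a copy of 123"
    # The integer x of beta receives the element of a sitting at position x of 123;
    # the '*' position becomes a placeholder that is dropped afterwards.
    b = [a[int(c) - 1] if c != '*' else None for c in beta]
    out = list(pi)
    for pos, val in zip((i, j, k), b):
        out[pos] = val
    out = [v for v in out if v is not None]                 # drop the '*'
    ranks = {v: r + 1 for r, v in enumerate(sorted(out))}   # renumber
    return [ranks[v] for v in out]

# The example above: 123 -> 3*2 applied to the copy 125 of 14253 gives 4312.
print(apply_forward([1, 4, 2, 5, 3], '3*2', (0, 2, 3)))     # [4, 3, 1, 2]
```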
We now introduce the notion of equivalence between permutations of possibly different lengths using two directions of a replacement.
\[def:equivalent\] We call two permutations $\pi$ and $\sigma$ **equivalent**, written $\pi \equiv \sigma$, under the bi-directional replacement $\alpha {\leftrightarrow}\beta$ if $\sigma$ can be attained through a sequence of $\alpha \to \beta$ and $\beta \to \alpha$ replacements on $\pi$.
We use this definition of equivalence to partition the set of all permutations, $S_0 \cup S_1 \cup S_2 \cup \cdots$, into equivalence classes. Our aim is to eventually characterize these classes.
Sometimes we find that it is impossible to apply a given replacement to a permutation, so that it is in its own class, which we will refer to by the following term.
\[def:isolated\] The permutation $\pi$ is **isolated** under a replacement $\alpha {\leftrightarrow}\beta$ if the equivalence class containing it has no other permutations.
The following property, which arises in particular in Section \[sec:betaDecreasing\], if established gives great insight into the structure of equivalence classes.
\[def:unraveling\] We say a replacement $\alpha {\leftrightarrow}\beta$ has the **unraveling property** if any given permutation is equivalent under $\alpha {\leftrightarrow}\beta$ to an identity permutation.
It is notable that if a replacement has the unraveling property, then every equivalence class contains at least one identity permutation, so there are at most as many classes as identity permutations.
It will be helpful in Sections \[sec:dropOnly\] and \[sec:switchNeighborDrop\] to talk about the shortest permutation equivalent to some given permutation under a replacement, for which we have the following definition.
\[def:primitive\] The **primitive permutation** $\tau$ of $\pi$ under a replacement $\alpha {\leftrightarrow}\beta$ is the unique permutation of shortest length equivalent to $\pi$, if it exists.
Note that for some permutations and replacements, a shortest equivalent permutation might not be unique, so in such cases we say that a primitive permutation does not exist.
Finally, we briefly note a symmetry that effectively cuts the number of distinct cases in half: if $\beta$ and $\gamma$ are reverse complements of one another, then $\pi$ and $\sigma$ are equivalent under $123 {\leftrightarrow}\beta$ if and only if their reverse complements are equivalent under $123 {\leftrightarrow}\gamma$. (Here reverse means flipped order of elements and complement means flipped value of elements.) For example, because $2314 \equiv 231$ under $123 {\leftrightarrow}13{{*}}$, we have $1423 \equiv 312$ under $123 {\leftrightarrow}{{*}}13$. Note that this symmetry is due to the fact that $123$ is its own reverse complement.
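The reverse complement is mechanical to compute; a minimal Python helper (the function name is ours), checked against the example above:

``` python
def reverse_complement(pi):
    """Reverse the order of the elements and replace each value v by n + 1 - v."""
    n = len(pi)
    return [n + 1 - v for v in reversed(pi)]

print(reverse_complement([2, 3, 1, 4]))   # [1, 4, 2, 3], i.e. 1423
print(reverse_complement([2, 3, 1]))      # [3, 1, 2], i.e. 312
```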
In the remainder of this paper we examine equivalence classes of replacements of the form $123 {\leftrightarrow}\beta$, where $\beta$ contains two of $\{ 1,2,3 \}$ and one ${{*}}$ in some order. We cover the cases in which the integer elements of $\beta$ are in decreasing order in Section \[sec:betaDecreasing\]. Then, in Section \[sec:dropOnly\] we analyze cases where the two integer elements in $\beta$ are in the same positions as in 123. In Section \[sec:shiftRightShiftLeft\] we consider when the two integer elements of $\beta$ are both shifted left or right one from their positions in 123. We deal with the four remaining cases in Section \[sec:switchNeighborDrop\].
$\beta$ Decreasing {#sec:betaDecreasing}
==================
In this section we characterize the classes of the nine replacements in which $\beta$ has integer elements in decreasing order. We will use $123 {\leftrightarrow}31$ to represent an arbitrary replacement out of the three replacements $123 {\leftrightarrow}{{*}}31$, $123 {\leftrightarrow}3{{*}}1$, and $123 {\leftrightarrow}31{{*}}$. Similarly, we use $123 {\leftrightarrow}32 $ to simultaneously discuss all three of $123 {\leftrightarrow}{{*}}32 $, $123 {\leftrightarrow}3{{*}}2 $, and $123 {\leftrightarrow}32{{*}}$. For analyzing $123 {\leftrightarrow}21$, which denotes $123 {\leftrightarrow}{{*}}21$, $123 {\leftrightarrow}2{{*}}1$, and $123 {\leftrightarrow}21{{*}}$, we will make use of reverse complement symmetries with $123 {\leftrightarrow}32$.
Under all nine replacements, descents are allowed to be rearranged into increasing order, which naturally suggests that they have the unraveling property. This is indeed the case:
\[lem:unraveling\] If $\beta$ is decreasing, then $123 {\leftrightarrow}\beta$ has the unraveling property.
The following proof is valid for any of $123 {\leftrightarrow}31$ or $123 {\leftrightarrow}32$. This will then cover $123 {\leftrightarrow}21$ by the reverse complement symmetry.
We proceed by inducting on the length of the permutation over the nonnegative integers. For the base case of length zero, we note that the only such permutation, $\emptyset$, is itself an identity permutation. Assume for the inductive step that any permutation of length $n$ is equivalent to some identity permutation and consider any permutation $\pi$ of length $n+1$. By the inductive hypothesis, we may apply replacements on the last $n$ integers of $\pi$ so that they become an increasing string of $m$ integers. Suppose the first element in this result is $k$. Then, we have $$\begin{aligned}
\pi & \equiv k 123\dots(k-1)(k+1)\dots (m+1) \\
& \equiv 12(k+1)34\dots(k)(k+2)\dots (m+2) \tag*{[$k1 \to 12(k+1)$]} \\
& \equiv 1234(k+2)5\dots(k+1)(k+3)\dots (m+3) \tag*{[$(k+1)3 \to 34(k+2)$]} \\
& \qquad \vdots \tag{a total of $k-1$ replacements} \\
& \equiv 12345\dots(2k-2)(2k-1)(2k)\dots (m+k),\end{aligned}$$ or $\pi \equiv 123\dots(m+k) \equiv {\text{\textnormal{id}}(m+k)}$, as desired.
Now we turn our attention to only the identity permutations. Here we must deal with $123 {\leftrightarrow}31$ separately:
\[lem:id31\] Under $123{\leftrightarrow}31$, all identity permutations of length 4 or greater are equivalent to one another.
First, we show that ${\text{\textnormal{id}}(5)} \equiv {\text{\textnormal{id}}(6)}$: $$\begin{aligned}
12345 & \equiv 2134 \tag*{[$123 \to 21$]}\\
& \equiv 231 \tag*{[$134 \to 31$]}\\
& \equiv 3124 \tag*{[$31 \to 124$]} \\
& \equiv 12435 \tag*{[$31 \to 124$]}\\
& \equiv 123456. \tag*{[$43 \to 345$]}\end{aligned}$$ Thus, in general for $n \ge 6$ we can apply the above replacements to the first five elements of ${\text{\textnormal{id}}(n)}$ to obtain ${\text{\textnormal{id}}(n)} \equiv {\text{\textnormal{id}}(n+1)}$, so that ${\text{\textnormal{id}}(5)} \equiv {\text{\textnormal{id}}(6)} \equiv {\text{\textnormal{id}}(7)} \equiv \dots$, which was to be shown.
However, this misses ${\text{\textnormal{id}}(4)}$. We now show ${\text{\textnormal{id}}(4)} \equiv {\text{\textnormal{id}}(7)}$, which will complete the proof. Under $123 {\leftrightarrow}{{*}}31$ (and similarly under $123 {\leftrightarrow}3{{*}}1$) we have $1234 \equiv 321$, via the intermediate string $431$. Under $123 {\leftrightarrow}31{{*}}$, we have $1234 \equiv 321$, via the intermediate string $421$. In all three cases, we have $1234 \equiv 321$, from which we continue, $$\begin{aligned}
1234 & \equiv 321 \\
& \equiv 2341 \tag*{[$32 \to 234$]}\\
& \equiv 23145 \tag*{[$41 \to 145$]}\\
& \equiv 213456 \tag*{[$31 \to 134$]}\\
& \equiv 1234567, \tag*{[$21 \to 123$]}\\\end{aligned}$$ as desired.
Now we prove the same thing for the other replacements:
\[lem:id32and21\] Under $123{\leftrightarrow}32$ and $123 {\leftrightarrow}21$, all identity permutations of length 4 or greater are equivalent to one another.
We show this for $123 {\leftrightarrow}32$ and the result for $123 {\leftrightarrow}21$ will follow.
First, we have that ${\text{\textnormal{id}}(4)} \equiv {\text{\textnormal{id}}(5)}$: $$\begin{aligned}
1234 & \equiv 132 \tag*{[$234 \to 43$]}\\
& \equiv 2134 \tag*{[$32 \to 134$]} \\
& \equiv 12345. \tag*{[$21 \to 123$]}\end{aligned}$$ For $n \ge 5$ we can apply the above replacements to the first four elements of ${\text{\textnormal{id}}(n)}$ to obtain ${\text{\textnormal{id}}(n)} \equiv {\text{\textnormal{id}}(n+1)}$, so that ${\text{\textnormal{id}}(4)} \equiv {\text{\textnormal{id}}(5)} \equiv {\text{\textnormal{id}}(6)} \equiv \dots$.
Combining the above lemmas, we can explicitly find all the equivalence classes:
\[thm:betaDecreasingClasses\] If $\beta$ is decreasing, there are only five equivalence classes under $123 {\leftrightarrow}\beta$. They are $\{ \emptyset \}, \{ 1 \}$, $\{ 12 \}$, $\{ 123, 21 \}$, and a fifth class containing all other permutations.
It can easily be verified that each permutation in the first four classes is equivalent to every other permutation in that class, and that applying either direction of $123 {\leftrightarrow}\beta$ to a permutation in the first four classes produces a permutation already in that class. Thus, the first four listed classes contain no other permutations. Also, by Lemma \[lem:unraveling\] every permutation not in those four classes must be equivalent to an identity of length at least 4. Then by Lemmas \[lem:id31\] and \[lem:id32and21\], all identities of length at least 4 are equivalent, so all remaining permutations are equivalent to one another, forming the fifth class.
Drop Only: $123 {\leftrightarrow}{{*}}23$, $123 {\leftrightarrow}1{{*}}3$, and $123 {\leftrightarrow}12{{*}}$ {#sec:dropOnly}
=============================================================================================================
The replacements $123 {\leftrightarrow}{{*}}23$, $123 {\leftrightarrow}1{{*}}3$, and $123 {\leftrightarrow}12{{*}}$ simply drop or add an element in a 123 pattern. In the remainder of this section we will proceed simultaneously with $12{{*}}$ and $1{{*}}3$ by using $\gamma$ to denote an arbitrary selection from the two, and later use the reverse complement symmetry to state the characterization of the equivalence classes for $123 {\leftrightarrow}{{*}}23$.
We begin by defining a function that will take any given permutation to what we will show to be its primitive permutation.
\[def:pFunction\] We define a function $p_\gamma(\pi)$ for a given permutation $\pi$ and replacement $123{\leftrightarrow}\gamma$ as follows:
1. Begin with the string $\pi$.
2. If the current string avoids 123, skip to step 4. Otherwise, find the leftmost copy of 123 in the current string first by comparing the smallest elements, then the middle elements (if necessary), and finally the largest elements (if necessary). Apply $123 \to \gamma$ to this copy of 123.
3. Repeat step 2 on the resulting string.
4. Define $p_\gamma(\pi)$ to be the permutation order-isomorphic to the current string.
For example, if $\pi = 152364$ and $\gamma = 12{{*}}$, the results of the iterations of step 2 are 15234 (using 156), 1524 (using 123), and 152 (using 124), so that $p_\gamma(\pi) = 132$.
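A short Python sketch of this procedure (names ours); we read the tie-breaking rule for the leftmost copy of $123$ as lexicographic order on the positions of its three elements, which reproduces the example above.

``` python
def leftmost_123(seq):
    """Lexicographically least positions (i, j, k) with seq[i] < seq[j] < seq[k], or None."""
    n = len(seq)
    for i in range(n):
        for j in range(i + 1, n):
            if seq[j] <= seq[i]:
                continue
            for k in range(j + 1, n):
                if seq[k] > seq[j]:
                    return (i, j, k)
    return None

def flatten(seq):
    """Renumber a string of distinct integers to the order-isomorphic permutation."""
    ranks = {v: r + 1 for r, v in enumerate(sorted(seq))}
    return [ranks[v] for v in seq]

def p_gamma(pi, gamma):
    """Compute p_gamma(pi) for gamma in {'12*', '1*3'}."""
    seq = list(pi)
    while True:
        copy = leftmost_123(seq)
        if copy is None:
            break
        i, j, k = copy
        drop = k if gamma == '12*' else j   # 12*: drop the largest; 1*3: drop the middle
        seq = seq[:drop] + seq[drop + 1:]
    return flatten(seq)

# The example above: p_{12*}(152364) = 132.
print(p_gamma([1, 5, 2, 3, 6, 4], '12*'))   # [1, 3, 2]
```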
The facts below follow immediately from the definition:
- For every permutation $\pi$, $|p_\gamma(\pi)| \le |\pi|$ with $|p_\gamma(\pi)| = |\pi|$ only if $p_\gamma(\pi)=\pi$.
- For every permutation $\pi$, $p_\gamma(\pi)$ avoids 123.
- If a permutation $\pi$ avoids $123$, then $p_\gamma(\pi) = \pi$.
- For every permutation $\pi$, $p_\gamma(\pi) \equiv \pi$.
First we show that $p_\gamma$ is preserved under one direction of the replacement:
\[lem:singleStep\] If $\sigma$ is the result of $123 \to \gamma$ applied to $\pi$, then $p_\gamma(\pi) = p_\gamma(\sigma)$.
When written out in terms of their elements, let $\pi = \pi_1 \dots \pi_{k-1} \pi_k \pi_{k+1} \dots \pi_{n}$ and $\sigma = \sigma_1 \dots \sigma_{k-1} \sigma_{k+1} \dots \sigma_n$, so that $n = |\pi| = |\sigma| + 1$ and if $\pi_k$ is dropped from $\pi$ the remaining elements are order-isomorphic to $\sigma$. We now proceed with the proof separately for $\gamma = 12{{*}}$ and $\gamma = 1{{*}}3$.
First consider $\gamma = 12{{*}}$. We will simultaneously compare the processes of calculating $p_\gamma(\pi)$ and $p_\gamma(\sigma)$. Each iteration of step 2 in Definition \[def:pFunction\] will be performed on copies of 123 at the same positions for computing $p_\gamma(\pi)$ and $p_\gamma(\sigma)$ when the entire copy of 123 is in the first $k-1$ elements. The first iteration of step 2 in $p_\gamma(\pi)$ for which this is not true must be performed on a copy of 123 in which $\pi_k$ is the third element, because at least one such copy exists (the one on which $123 {\leftrightarrow}12{{*}}$ was applied to form $\sigma$). All iterations after this must again be performed on the same positions for $p_\gamma(\pi)$ and $p_\gamma(\sigma)$. Furthermore, the resulting strings of each iteration will be order-isomorphic for the two processes, so that the end results will be equal.
Now suppose $\gamma = 1{{*}}3$. Again, each iteration of step 2 performed completely in the first $k-1$ elements will be on the same positions for $p_\gamma(\pi)$ and $p_\gamma(\sigma)$. However, on the iterations for $p_\gamma(\pi)$ in which $\pi_k$ is the third element of a 123 pattern, a copy of 123 will be chosen for $p_\gamma(\sigma)$ in which the first two elements are at the same positions as those for $p_\gamma(\pi)$, but the third element will be to the right of $\sigma_{k-1}$. (We know at least one such third element exists: the third element of the 123 copy on which $123 {\leftrightarrow}1{{*}}3$ was applied to form $\sigma$.) Even though the copy of 123 chosen for $p_\gamma(\pi)$ and $p_\gamma(\sigma)$ are different, the middle elements that are dropped will be in the same positions. Finally, on the iteration for $p_\gamma(\pi)$ in which $\pi_k$ is the middle element (this iteration must take place because an appropriate 123 copy must exist), $\pi_k$ will be dropped. The resulting strings at this point for $p_\gamma(\pi)$ and $p_\gamma(\sigma)$ will be order-isomorphic, so the final permutations $p_\gamma(\pi)$ and $p_\gamma(\sigma)$ will be equal.
Now, we can put a condition on equivalency involving $p_\gamma$:
\[lem:equivalenceCondition\] Under $123 {\leftrightarrow}\gamma$, we have $\pi \equiv \sigma$ if and only if $p_\gamma(\pi) = p_\gamma(\sigma)$.
First, we prove the if direction. Suppose $p_\gamma(\pi) = p_\gamma(\sigma)$. Then we have $\pi \equiv p_\gamma(\pi) = p_\gamma(\sigma) \equiv \sigma$, so that $\pi \equiv \sigma$ as desired.
For the only if direction assume $\pi \equiv \sigma$. By definition of equivalence, there must exist some sequence of permutations $\pi^{(0)}=\pi, \pi^{(1)}, \pi^{(2)}, \dots, \pi^{(k)}=\sigma$ where $\pi^{(i+1)}$ is the result of performing a $123\to \gamma$ or a $\gamma \to 123$ replacement on $\pi^{(i)}$. We claim that $p_\gamma\left(\pi^{(i)}\right) = p_\gamma\left(\pi^{(i+1)}\right)$ for all $0 \le i \le k-1$.
Suppose $\pi^{(i+1)}$ is the result of a $123 \to \gamma$ replacement on $\pi^{(i)}$. Then by Lemma \[lem:singleStep\], $p_\gamma\left(\pi^{(i)}\right) = p_\gamma\left(\pi^{(i+1)}\right)$. On the other hand, if $\pi^{(i+1)}$ is the result of a $\gamma \to 123$ replacement, then $\pi^{(i)}$ is the result of a $123 \to \gamma$ on $\pi^{(i+1)}$. Thus, by Lemma \[lem:singleStep\] again we have $p_\gamma\left(\pi^{(i)}\right) = p_\gamma\left(\pi^{(i+1)}\right)$.
Therefore, $p_\gamma(\pi) = p_\gamma\left(\pi^{(0)}\right) = p_\gamma\left(\pi^{(1)}\right) = \dots = p_\gamma\left(\pi^{(k)}\right) = p_\gamma(\sigma)$, as desired.
Now we have enough to show that $p_\gamma(\pi)$ is the primitive permutation of $\pi$:
\[lem:dropOnlyPrimitive\] Under $123 {\leftrightarrow}\gamma$, $p_\gamma(\pi)$ is the primitive permutation of $\pi$.
By Lemma \[lem:equivalenceCondition\], we have $\pi \equiv p_\gamma(\pi)$, so it remains to show that there does not exist a permutation $\sigma$ such that $\sigma \equiv \pi$ with $\sigma$ not order-isomorphic to $p_\gamma(\pi)$ and $|\sigma| \le |p_\gamma(\pi)|$.
For sake of contradiction, assume that some $\sigma \equiv \pi$ exists that is not order-isomorphic to and no longer than $p_\gamma(\pi)$. If $|\sigma|<|p_\gamma(\pi)|$, then $|p_\gamma(\sigma)| \le |\sigma| < |p_\gamma(\pi)|$, so $p_\gamma(\pi) \not= p_\gamma(\sigma) \implies \pi \not \equiv \sigma$, contradiction. Otherwise, $|\sigma|=|p_\gamma(\pi)|$, and we must have $p_\gamma(\pi) = p_\gamma(\sigma)$, so $|\sigma| = |p_\gamma(\sigma)|$. Thus $\sigma$ is order-isomorphic to $p_\gamma(\sigma) = p_\gamma(\pi)$, contradiction.
We restate the description of the primitive permutation without using $p_\gamma(\pi)$, so that we can include $123 {\leftrightarrow}{{*}}23$.
\[thm:dropOnlyPrimitive\] For $\beta=12{{*}}, 1{{*}}3, {{*}}23$, the primitive permutation of $\pi$ under $123 {\leftrightarrow}\beta$ is the result of repeatedly applying to $\pi$ the replacement $123 \to \beta$ on any choice of a 123 pattern until none exist.
We first show this for $\beta = 12{{*}}$ and $\beta = 1{{*}}3$. Suppose the result of applying to $\pi$ the replacement $123 \to \beta$ repeatedly to an arbitrary set of choices of copies of 123 is $\sigma$. If $\sigma = p_\beta(\pi)$, the result is true. Otherwise, because $\sigma$ avoids 123 we have $p_\beta(\sigma) = \sigma \not= p_\beta(\pi)$, so by Lemma \[lem:equivalenceCondition\] $\pi \not \equiv \sigma$, contradiction.
For $\beta = {{*}}23$, we use the reverse complement symmetry: the theorem statement is true for $\beta = 12{{*}}$, and the reverse complement of the statement is the statement itself, so it is true for $\beta = {{*}}23$.
As a result, the primitive permutations characterize the equivalence classes:
\[thm:dropOnlyClasses\] For $\beta=12{{*}}, 1{{*}}3, {{*}}23$, under $123 {\leftrightarrow}\beta$, for each $\tau$ avoiding $123$, there exists a distinct class consisting of all $\pi$ whose primitive permutation (as defined in Theorem \[thm:dropOnlyPrimitive\]) is $\tau$.
Note that for each $\tau$ avoiding 123, $\tau$ itself along with all other permutations whose primitive permutation is $\tau$ will be equivalent by Definition \[def:primitive\], and thus are in the same class.
Suppose now there exists another permutation $\sigma$ that is in the same class as $\pi$, but has primitive permutation $\omega$ different from $\tau$. This is a contradiction: since $\sigma \equiv \pi$, both $\omega$ and $\tau$ are the unique permutation of shortest length equivalent to $\pi$.
Note that while the above statement of the equivalence classes is the same for all three possible replacements, the classes themselves are different. This is because the primitive permutations can be different for different $\beta$.
Shift Right and Shift Left: $123 {\leftrightarrow}{{*}}12$ and $123 {\leftrightarrow}23{{*}}$ {#sec:shiftRightShiftLeft}
=============================================================================================
We now deal with the replacements that shift two elements of a $123$ pattern to the left or right and drop the third. We may immediately characterize the classes with the following theorem. In the proof, we draw inspiration from the stooge sort, in a manner similar to the proof of Proposition 2.17 in [@kuszmaul13].
\[thm:shiftRightShiftLeftClasses\] Under $123 {\leftrightarrow}{{*}}12$ (and similarly under $123 {\leftrightarrow}23{{*}}$), each reverse identity is isolated and all other permutations are in the same class.
Note that the two replacements are reverse complements of one another, and the reverse complement version of the theorem’s statement is the same as the statement, so we only work with $123 {\leftrightarrow}{{*}}12$.
It is not possible to apply either direction of the replacement $123 {\leftrightarrow}{{*}}12$ to a reverse identity, so each reverse identity must not be equivalent to any other permutation and is thus isolated.
On the other hand, we claim that the permutations that are not reverse identities are equivalent. Note that immediately we have $12 \equiv 123$. Therefore, for $n \ge 2$ we may transform the first two elements of ${\text{\textnormal{id}}(n)}$ into $123$ so that ${\text{\textnormal{id}}(n)} \equiv {\text{\textnormal{id}}(n+1)}$. Thus, ${\text{\textnormal{id}}(2)} \equiv {\text{\textnormal{id}}(3)} \equiv {\text{\textnormal{id}}(4)} \equiv \dots$.
Now, we will prove that all non-reverse identity permutations of length $n$ are equivalent to ${\text{\textnormal{id}}(n)}$ by inducting on $n \ge 3$. (The cases for $n=0,1,2$ are trivial.) The base case of $n=3$ may be checked computationally. Now, assume the statement is true for permutations of length $n-1$, and suppose $\pi \not= {\text{\textnormal{rid}}(n)}$ is some given permutation of length $n$. If the first $n-1$ elements of $\pi$ are not order-isomorphic to ${\text{\textnormal{rid}}(n-1)}$, we apply the inductive hypothesis to the first $n-1$ elements, then the last $n-1$ elements, and finally the first $n-1$ elements again; the result is ${\text{\textnormal{id}}(n)}$. If the first $n-1$ elements of $\pi$ are order-isomorphic to ${\text{\textnormal{rid}}(n-1)}$, then we instead apply the inductive hypothesis on the last $n-1$ elements, the first $n-1$ elements, and finally the last $n-1$ elements. (We cannot have both the first $n-1$ elements and the last $n-1$ elements of $\pi$ order-isomorphic to ${\text{\textnormal{rid}}(n-1)}$, because then $\pi={\text{\textnormal{rid}}(n)}$.) The result of this procedure is again ${\text{\textnormal{id}}(n)}$, so that $\pi \equiv {\text{\textnormal{id}}(n)}$, as desired.
Because all permutations that are not reverse identities are equivalent to the identity of the same size, and all identities that are not also reverse identities are equivalent, we have that all non-reverse identity permutations are equivalent, completing the proof for $123 {\leftrightarrow}{{*}}12$.
Note that taking the reverse complements of each permutation in the classes described above results in exactly the same classes, so the result for $123 {\leftrightarrow}23{{*}}$ is the same.
Switch with Neighbor and Drop: $123 {\leftrightarrow}2{{*}}3$, $123 {\leftrightarrow}{{*}}13$, $123 {\leftrightarrow}13{{*}}$, and $123 {\leftrightarrow}1{{*}}2$ {#sec:switchNeighborDrop}
=================================================================================================================================================================
We first consider $123 {\leftrightarrow}13{{*}}$, whose reverse complement is $123 {\leftrightarrow}{{*}}13$:
\[lem:equivalentReplacement\] Two permutations $\pi$ and $\sigma$ of equal length are equivalent under $123 {\leftrightarrow}13{{*}}$ if they are equivalent under $123 {\leftrightarrow}132$.
It suffices to show both directions of $123 {\leftrightarrow}132$ can be performed through a series of $123 {\leftrightarrow}13{{*}}$ replacements. Suppose $\pi_1 < \pi_2 < \pi_3$ are three elements of $\pi$ that form a copy of 123. We show that we may transform $\pi_1\pi_2\pi_3$ into $\pi_1\pi_3\pi_2$: $$\begin{aligned}
\pi &\equiv \dots \pi_1\dots \pi_2 \dots \pi_3 \dots \\
&\equiv \dots \pi_1\dots \pi_2 \dots (\pi_2+1)(\pi_3+1) \dots \tag*{[$\pi_2\pi_3 \to \pi_2(\pi_2+1)(\pi_3+1)$]}\\
&\equiv \dots \pi_1\dots \pi_3 \dots \pi_2 \dots \tag*{[$\pi_1\pi_2(\pi_3+1) \to \pi_1\pi_3$]}\end{aligned}$$ Thus, we may perform $123 \to 132$ replacements by a series of $123 {\leftrightarrow}13{{*}}$ replacements. For the other direction, $132 \to 123$, we simply reverse the above process.
By using this mechanism we may reorder any two elements that are not left-to-right minima and drop one of them:
\[lem:swapNonLRMin\] Suppose a permutation $\pi$ and two of its non-left-to-right minimum elements are given. Order these two elements in decreasing order then drop the rightmost one, and call the result $\pi'$. Under $123{\leftrightarrow}13{{*}}$, $\pi \equiv \pi'$.
Let $\pi = \dots \pi_1 \dots \pi_2 \dots \pi_3 \dots $ and its two non-left-to-right minima be $\pi_2$ and $\pi_3$ (one is not necessarily larger than the other), where $\pi_1$ is the rightmost left-to-right minimum to the left of $\pi_2$. It suffices to show $\pi'$ can be produced with $123{\leftrightarrow}13{{*}}$ replacements on $\pi$.
If $\pi_2 < \pi_1$, then $\pi_2$ itself is a left-to-right minimum; therefore, we must have $\pi_2 > \pi_1$.
Now, if $\pi_3 > \pi_1$, then $\pi_1\pi_2\pi_3$ form a copy of either 123 or 132, and thus $\pi_1$ and $\pi_2$ can be swapped if necessary via Lemma \[lem:equivalentReplacement\] so that they are in increasing order. Then, applying $123 \to 13{{*}}$ produces $\pi'$.
Otherwise, $\pi_3 < \pi_1$. In this case, we look to a fourth element for use in replacements: a left-to-right minimum between $\pi_2$ and $\pi_3$ called $\pi_4$. We can find $\pi_4$ because a left-to-right minimum must exist between $\pi_1$ and $\pi_3$, or else $\pi_3$ is itself a left-to-right minimum. Furthermore, $\pi_4$ must be to the right of $\pi_2$, or else $\pi_1$ was chosen incorrectly. We note that we must have $\pi_4 < \pi_3 < \pi_1 < \pi_2$, so we can perform the following operations. Note that as elements are dropped or added with value less than another element, the latter’s value will change by one. $$\begin{aligned}
\pi &\equiv \dots \pi_1\dots \pi_2 \dots \pi_4 \dots \pi_3 \dots \\
&\equiv \dots \pi_1\dots \pi_2 \dots \pi_4 \dots \pi_3(\pi_2+1) \dots \tag*{[$\pi_1\pi_2 \to \pi_1\pi_2(\pi_2+1)$]}\\
&\equiv \dots (\pi_1-1) \dots (\pi_2-1) \dots \pi_4 \dots \pi_2 \dots \tag*{[$\pi_4\pi_3(\pi_2+1) \to \pi_4\pi_2$]}\\
&\equiv \dots (\pi_1-1) \dots (\pi_2-1) \dots \pi_4 \dots \tag*{[$\pi_1(\pi_2-1)\pi_2 \to \pi_1(\pi_2-1)$]}\end{aligned}$$ This is indeed $\pi'$.
While non-left-to-right minima can be manipulated as shown, there are two properties of the set of non-left-to-right minima that must remain unchanged, in addition to the set of left-to-right minima:
\[lem:invariantProperties\] Under $123 {\leftrightarrow}13{{*}}$, two permutations $\pi$ and $\sigma$ are equivalent only if they have the following equal:
- the number of left-to-right minima,
- the position of the leftmost non-left-to-right minimum, and
- the largest value (relative to the left-to-right minima) of non-left-to-right minima.
To show that $\pi \equiv \sigma$ only if they share the three above properties, it suffices to show that $123 {\leftrightarrow}13{{*}}$ preserves these properties; moreover, only one direction is necessary to prove because then the reverse must also preserve them. Thus, we consider the $123 \to 13{{*}}$ direction applied to the elements $\phi_1 < \phi_2 < \phi_3$ (from left to right) of an arbitrary permutation $\phi$ to produce $\phi'_1$ and $\phi'_3$ in the result $\phi'$ (i.e. $[\phi_1\phi_2\phi_3 \to \phi'_1\phi'_3]$). Also, call $k_1$ the position of $\phi_1$ and $k_2$ that of $\phi_2$ when counting from the left.
We will address the first property by breaking the permutations at the $k_1$-th position. The substrings of the first $k_1$ elements of $\phi$ and $\phi'$ are order-isomorphic, so the two substrings contain the same number of left-to-right minima. On the other hand, the left-to-right minima to the right of $\phi_1$ in $\phi$ and to the right of $\phi'_1$ in $\phi'$ have values less than $\phi_1$ and $\phi'_1$, respectively. Furthermore, the substring of $\phi$ consisting of all elements except those to the right of and greater than $\phi_1$ is order-isomorphic to the corresponding substring of elements of $\phi'$ that are not both to the right of and greater than $\phi'_1$. Therefore, the numbers of left-to-right minima to the right of the $k_1$-th element in each of $\phi$ and $\phi'$ are the same. We conclude that $123 \to 13{{*}}$ preserves the number of left-to-right minima.
To prove the second property we will use the fact that the first $k_2-1$ elements of $\phi$ and $\phi'$ are order-isomorphic. The leftmost non-left-to-right minimum of $\phi$ is at a position of at most $k_2$, because $\phi_2$ is a non-left-to-right minimum. Similarly, the leftmost non-left-to-right minimum of $\phi'$ has position at most $k_2$. If $\phi_2$ is indeed the leftmost non-left-to-right minimum, then $\phi'_3$ will be the leftmost non-left-to-right minimum in $\phi'$ at the same position. Otherwise, the leftmost non-left-to-right minimum is in the first $k_2-1$ elements and thus in the same position in $\phi$ and $\phi'$.
For the third property, it should be noted that when discussing a value relative to those of left-to-right minima we are discussing the number of left-to-right minima less than (or greater than) that value; such a notion is only valid when the number of left-to-right minima is constant (which was shown above). Consider the greatest non-left-to-right minimum in $\phi$ and $\phi'$, which we will call $\phi_i$ and $\phi'_i$ respectively. Note that $\phi_i$ can not be $\phi_2$ because $\phi_3$ is a greater non-left-to-right minimum. Then, the relative order of values of the set of elements from $\phi$ consisting of all left-to-right minima and $\phi_i$ is the same as that of the set of elements from $\phi'$ consisting of all of its left-to-right minima and $\phi'_i$, as desired.
In terms of these three properties we may exactly characterize the equivalence classes:
\[thm:switchNeighborDropClassesA\] Under $123 {\leftrightarrow}13{{*}}$, there exists a distinct equivalence class for every triple of integers $(m, p, v)$, with $1 \le p,v \le m$, consisting of all permutations $\pi$ with the following properties:
- $\pi$ has $m$ left-to-right minima,
- the position (from the left) of the leftmost non-left-to-right minimum is $p+1$, and
- the value of the largest non-left-to-right minimum is greater than those of $v$ left-to-right minima.
In addition, each reverse identity permutation is in a class only containing itself. There are no other classes.
Note that if $\pi$ is a reverse identity permutation, then it can not undergo either direction of $123 {\leftrightarrow}13{{*}}$, so it must be isolated.
For the remainder of the theorem it suffices to show that, given two non-reverse-identity permutations $\pi$ and $\sigma$, $\pi \equiv \sigma$ if and only if they have the same triple $(m,p,v)$. The only if direction was shown in Lemma \[lem:invariantProperties\]. We will prove the other direction through the use of a primitive permutation.
Suppose both $\pi$ and $\sigma$ have triple $(m,p,v)$. From Lemma \[lem:invariantProperties\], any permutation equivalent to $\pi$ must have the same number of left-to-right minima. Also, it must have at least one non-left-to-right minimum. Thus, the shortest permutation equivalent to $\pi$ must have at least $m+1$ elements. In fact, there is exactly one permutation of this length: the permutation of length $m+1$ whose $(p+1)$-th element has value $v+1$ and remaining elements are in decreasing order. We can indeed construct this permutation by applying Lemma \[lem:swapNonLRMin\] repeatedly to any pair of non-left-to-right minima until the resulting permutation $\tau$ has length $m + 1$.
In a similar manner, we may construct the primitive permutation of $\sigma$, which must also be $\tau$ because it has the same $(m,p,v)$ triple. Thus, $\pi$ and $\sigma$ have the same primitive permutation and must be equivalent.
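For concreteness, a small Python sketch (names ours) that computes the triple $(m, p, v)$ of a given permutation, returning `None` for the isolated reverse identities; here $v$ counts the left-to-right minima whose values lie below the largest non-minimum, matching the primitive permutation used in the proof, whose single non-minimum has value $v+1$.

``` python
def lr_minima_flags(pi):
    """flags[i] is True iff pi[i] is a left-to-right minimum."""
    flags, current_min = [], float('inf')
    for value in pi:
        flags.append(value < current_min)
        current_min = min(current_min, value)
    return flags

def class_triple(pi):
    """The triple (m, p, v) of Theorem thm:switchNeighborDropClassesA,
    or None when pi is a reverse identity (and hence isolated)."""
    flags = lr_minima_flags(pi)
    minima = [val for val, is_min in zip(pi, flags) if is_min]
    others = [val for val, is_min in zip(pi, flags) if not is_min]
    if not others:
        return None
    m = len(minima)                                  # number of left-to-right minima
    p = flags.index(False)                           # leftmost non-minimum sits at position p + 1
    v = sum(1 for w in minima if w < max(others))    # minima below the largest non-minimum
    return (m, p, v)

print(class_triple([3, 5, 1, 4, 2]))   # (2, 1, 2); its primitive permutation is 231
print(class_triple([3, 2, 1]))         # None: reverse identity, isolated
```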
The remaining replacements $123 {\leftrightarrow}1{{*}}2$ and $123{\leftrightarrow}2{{*}}3$ (which are reverse complements) have similar equivalence classes. Following logic analogous to Lemma \[lem:equivalentReplacement\], Lemma \[lem:swapNonLRMin\], Lemma \[lem:invariantProperties\], and Theorem \[thm:switchNeighborDropClassesA\], we may find that equivalence under $123 {\leftrightarrow}1{{*}}2$ implies and is implied by equivalence under $123 {\leftrightarrow}132$, with which we can identify three properties that are necessary and sufficient to infer equivalence. The result classifying the equivalence classes under $123 {\leftrightarrow}1{{*}}2$ is stated below:
\[thm:switchNeighborDropClassesB\] Under $123 {\leftrightarrow}1{{*}}2$, there exists a distinct equivalence class for every triple $(m, p, v)$, with $1 \le p,v \le m$, consisting of all permutations $\pi$ with the following properties:
- $\pi$ has $m$ left-to-right minima,
- the value of the smallest non-left-to-right minimum is greater than the values of $v$ left-to-right minima, and
- the position (from the right) of the rightmost non-left-to-right minimum is $p+1$.
In addition, each reverse identity permutation is isolated. There are no other classes.
The results for $123 {\leftrightarrow}{{*}}13$ and $123 {\leftrightarrow}2{{*}}3$ can be found by taking the reverse complement of each statement in Theorems \[thm:switchNeighborDropClassesA\] and \[thm:switchNeighborDropClassesB\], respectively. In particular, all instances of left-to-right minima become right-to-left maxima.
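As a small companion sketch (again our own illustration, not code from the paper), the reverse complement that transfers Theorems \[thm:switchNeighborDropClassesA\] and \[thm:switchNeighborDropClassesB\] to ${{*}}13$ and $2{{*}}3$ can be computed directly; under it, left-to-right minima indeed become right-to-left maxima.

```python
def reverse_complement(pi):
    """Reverse complement of a permutation in one-line notation with values 1..n."""
    n = len(pi)
    return [n + 1 - x for x in reversed(pi)]

print(reverse_complement([3, 1, 2]))   # [2, 3, 1]
```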
Summary {#subsec:summary .unnumbered}
-------
Table \[tab:summary\] summarizes the characterization of the equivalence classes for each of the 18 considered replacements. For replacements whose classes are described in the form $\{ \pi \mid \pi \equiv \tau\}$ for a certain $\tau$, refer to their respective sections for algorithms that produce the $\tau$ corresponding to a given $\pi$.
For the sake of abbreviation, we use LR and RL for left-to-right and right-to-left, respectively. In addition, minima and maxima are written min and max, respectively.
Category $\beta$ \# Classes Equivalence Classes
------------ ----------- ------------ -------------------------------------------------------------------------------------------
${{*}}32$ 5 $\{ \emptyset \}, \{ 1 \}, \{ 12 \}, \{ 123, 21 \}, \{ \text{all else} \} $
$21{{*}}$ 5 $\{ \emptyset \}, \{ 1 \}, \{ 12 \}, \{ 123, 21 \}, \{ \text{all else} \} $
${{*}}31$ 5 $\{ \emptyset \}, \{ 1 \}, \{ 12 \}, \{ 123, 21 \}, \{ \text{all else} \} $
$2{{*}}1$ 5 $\{ \emptyset \}, \{ 1 \}, \{ 12 \}, \{ 123, 21 \}, \{ \text{all else} \} $
$3{{*}}2$ 5 $\{ \emptyset \}, \{ 1 \}, \{ 12 \}, \{ 123, 21 \}, \{ \text{all else} \} $
$31{{*}}$ 5 $\{ \emptyset \}, \{ 1 \}, \{ 12 \}, \{ 123, 21 \}, \{ \text{all else} \} $
${{*}}21$ 5 $\{ \emptyset \}, \{ 1 \}, \{ 12 \}, \{ 123, 21 \}, \{ \text{all else} \} $
$3{{*}}1$ 5 $\{ \emptyset \}, \{ 1 \}, \{ 12 \}, \{ 123, 21 \}, \{ \text{all else} \} $
$32{{*}}$ 5 $\{ \emptyset \}, \{ 1 \}, \{ 12 \}, \{ 123, 21 \}, \{ \text{all else} \} $
${{*}}23$ $\infty$ $\{ \pi \mid \pi \equiv \tau \} \mid \tau \text{ avoids 123}$
$1{{*}}3$ $\infty$ $\{ \pi \mid \pi \equiv \tau \} \mid \tau \text{ avoids 123}$
$12{{*}}$ $\infty$ $\{ \pi \mid \pi \equiv \tau \} \mid \tau \text{ avoids 123}$
${{*}}12$ $\infty$ $\{ {\text{\textnormal{rid}}(n)} \} \mid n \in \mathbb{Z}_{\ge 0}, \{\text{all else}\} $
$23{{*}}$ $\infty$ $\{ {\text{\textnormal{rid}}(n)} \} \mid n \in \mathbb{Z}_{\ge 0}, \{\text{all else}\} $
$1{{*}}2$ $\infty$ $ \{ \pi \mid \pi \equiv \tau\} \mid \tau \text{ has 0 or 1 non-LR min}$
$13{{*}}$ $\infty$ $ \{ \pi \mid \pi \equiv \tau\} \mid \tau \text{ has 0 or 1 non-LR min}$
${{*}}13$ $\infty$ $ \{ \pi \mid \pi \equiv \tau\} \mid \tau \text{ has 0 or 1 non-RL max}$
$2{{*}}3$ $\infty$ $ \{ \pi \mid \pi \equiv \tau\} \mid \tau \text{ has 0 or 1 non-RL max}$
: Summary of Classes Under Replacements of the Form $123 {\leftrightarrow}\beta$[]{data-label="tab:summary"}
Acknowledgments {#subsec:acknowledgements .unnumbered}
---------------
The author would like to thank his mentor Dr. Tanya Khovanova for invaluable guidance and greatly appreciates the time she has invested in the project. He would also like to thank Prof. James Propp for originally suggesting the project and providing useful feedback on the topic of the paper. Finally, the author is grateful to MIT PRIMES for its support and for arranging the conditions that made this research possible.
---
abstract: 'The effects of $n$-type carrier doping by Li intercalation on magnetism in undoped and Co-doped anatase TiO$_2$ are investigated. We have found that doped $n$-type carriers in TiO$_2$ are localized mainly at Ti sites near the intercalated Li. With increasing intercalation, local spins develop at the Ti sites. In the case of Co-doped TiO$_2$, most of the added $n$-type carriers fill the Co 3$d$ bands and the rest are localized at Ti. Therefore, the Co magnetic moment vanishes upon Li intercalation, yielding a nonmagnetic ground state.'
address: 'Department of Physics, Pohang University of Science and Technology, Pohang 790-784, Korea'
author:
- 'Min Sik Park, S. K. Kwon, and B. I. Min'
title: ' Li intercalation effects on magnetism in undoped and Co-doped anatase TiO$_2$'
---
Anatase TiO$_2$ is a wide band gap (3.2 - 3.7 eV) semiconductor. This wide band gap enables a broad range of applications, such as air and water purification by photocatalysis, which converts solar energy into electrochemical energy[@Asahi], as well as batteries and electrochromic devices based on lithium intercalation. A theoretical study[@Stashans] shows that Li intercalation is easier in anatase TiO$_2$ than in rutile TiO$_2$, owing to the open structure of the anatase phase. Recently, magnetic features have been observed in Li-intercalated anatase TiO$_2$[@Luca]. It is possible to intercalate Li atoms up to a Li/Ti ratio of $\sim$ 0.7. With increasing Li intercalation, a structural transition occurs from the tetragonal to an orthorhombic phase. An insulator-to-metal transition is also observed for $x > 0.3$. Localized moments of 0.003 $\sim$ 0.004$\mu_B$ per Ti were measured upon Li intercalation.
On the other hand, room temperature ferromagnetism has been observed in Co-doped anatase TiO$_2$ thin films made by the combinatorial pulsed-laser-deposition molecular-beam epitaxy method[@Matsumoto]. A sizable amount of Co, up to 8$\%$, is soluble in anatase TiO$_2$. The measured saturated magnetic moment per Co ion was 0.32$\mu_B$, apparently corresponding to a low-spin state, and $T_C$ was estimated to be higher than 400 K. It is recognized that carriers play the role of inducing ferromagnetism in dilute magnetic semiconductors[@Dietl].
To explore the carrier doping effects on magnetism, we have investigated the electronic and magnetic properties of Li-intercalated anatase TiO$_2$ and Ti$_{1-x}$Co$_x$O$_2$ ($x$=0.0625). Li/Ti intercalation ratios of 0.0625, 0.125, and 0.25 are considered for the undoped case, and 0.067 and 0.133 for the Co-doped case. We consider only the tetragonal anatase structure. We have used the linearized muffin-tin orbital (LMTO) band method in the local-spin-density approximation (LSDA). We have considered a supercell containing 16 f.u. ($a$=$b$=7.570, $c$=9.514 $\rm \AA$), in which one Ti is replaced by Co or several Li ions are intercalated. Sixteen empty spheres are employed at the interstitial sites to enhance the packing ratio in the LMTO band calculation.
We have first calculated the electronic structure of Li-intercalated TiO$_2$. For all Li/Ti ratios, we have obtained metallic ground states (Fig. \[undoped\]). For Li/Ti=0.0625, a paramagnetic ground state is obtained, while in the other cases the total energies of the paramagnetic and ferromagnetic states are almost the same. Doped $n$-type carriers fill the Ti 3$d$ conduction band. The maximum localized magnetic moments at Ti are 0.029 and 0.027 $\mu_B$ for Li/Ti=0.125 and 0.25, respectively. Thus the magnetic moment is not necessarily proportional to the number of localized electrons at Ti. The exchange splitting is clearly seen at the valence band top, which has mainly O 2$p$ character (Fig. \[undoped\]).
We have also performed band calculations for the Co-doped case. For Li/Ti=0.067, we obtain a paramagnetic and insulating ground state (Fig. \[doped\]). The insulating ground state results from the filling of the Co 3$d$ band by the $n$-type carriers. The Co $t_{2g}$ band is therefore fully occupied, and the total occupancy of the $d$ states amounts to $d^7$($t_{2g}^6e_g^1$). As seen in Fig. \[doped\], the position of the occupied $t_{2g}$ band differs from that in the non-intercalated Co-doped case. In the latter case, the occupied Co $t_{2g}$ band is located near the valence band top[@Mspark], while in the former case the fully occupied Co $t_{2g}$ band is located near the conduction band bottom. The unoccupied Co $e_g$ band is hybridized with the Ti 3$d$ conduction band.
For the Li/Ti=0.133 case, a nearly paramagnetic, metallic ground state is obtained (Fig. \[doped\]). The carriers are mainly of Ti 3$d$ character. The extra electrons left after filling the Co $t_{2g}$ band occupy not the Co $e_g$ band but the Ti 3$d$ conduction band, because the carriers are localized at Ti sites near the intercalated Li. From these results, one can expect that an $n$-type carrier-induced ferromagnet can be fabricated in TiO$_2$ by simultaneously intercalating Li and doping 3$d$ transition metals with a high-spin magnetic ground state, such as Mn or Fe[@Mspark].
In conclusion, we have found that, upon intercalating Li into TiO$_2$, the Ti atoms acquire localized magnetic moments of 0.029 and 0.027$\mu_B$ for Li/Ti=0.125 and 0.25, respectively. In the case of Co-doped TiO$_2$, nonmagnetic ground states are obtained for Li/Ti=0.067 and 0.133.
Acknowledgments$-$ This work was supported by the KOSEF through the eSSC at POSTECH and in part by the BK21 Project.
R. Asahi [*et al.*]{}, Science 293 (2001) 269.
A. Stashans [*et al.*]{}, Phys. Rev. B 53 (1996) 159.
V. Luca [*et al.*]{}, Chem. Mater. 13 (2001) 796.
Y. Matsumoto [*et al.*]{}, Science 291 (2001) 854.
T. Dietl [*et al.*]{}, Science 287 (2000) 1019.
M. S. Park [*et al.*]{}, Phys. Rev. B 65 (2002) 161201.
---
abstract: |
The authors study the spectral theory of self-adjoint operators that are subject to certain types of perturbations.
An iterative introduction of infinitely many randomly coupled rank-one perturbations is one of our settings. Spectral theoretic tools are developed to estimate the remaining absolutely continuous spectrum of the resulting random operators. Curious choices of the perturbation directions that depend on the previous realizations of the coupling parameters are assumed, and unitary intertwining operators are used. An application of our analysis shows localization of the random operator associated to the Rademacher potential.
Obtaining fundamental bounds on the types of spectrum under rank-one perturbation, without restriction on its direction, is another main objective. This is accomplished by analyzing Borel/Cauchy transforms centrally associated with rank-one perturbation problems.
address:
- 'Department of Mathematics, Stockholm University, Kräftriket 6, 106 91 Stockholm, Sweden'
- 'Department of Mathematical Sciences, University of Delaware, 501 Ewing Hall, Newark, DE 19716, USA; and CASPER, Baylor University, One Bear Place \#97328, Waco, TX 76798, USA.'
author:
- Dale Frymark
- Constanze Liaw
title: 'Spectral Analysis of Iterated Rank-One Perturbations'
---
Introduction
============
An important branch of perturbation theory is the study of spectral properties of the sum $H+V$ of self-adjoint operators $H$ and $V$, under the assumption that the spectrum of $H$ is known and $V$ is from some operator class. The operator $V$ is often thought of as a potential. Our focus is on two classes for $V$, rank-one and certain probabilistic potentials.
Rank-one perturbations are considered a very simple type of perturbation as their range is one-dimensional. Let $T$ be a self-adjoint operator on a separable Hilbert space ${\mathcal{H}}$, and consider the family of self-adjoint rank-one perturbations in the direction of a cyclic vector ${{\varphi}}\in{\mathcal{H}}$: $$T_\alpha = T+\alpha \langle{\,\cdot\,}, {{\varphi}}\rangle{_{ {}_{\scriptstyle {\mathcal{H}}}}}{{\varphi}},\qquad\alpha \in {{\mathbb R}}.$$ For details concerning this definition, see equation \[e-rk1\] below.
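As a finite-dimensional toy illustration (ours, not the setting of the paper), the family $T_\alpha$ can be explored numerically for a symmetric matrix $T$ and a unit vector ${{\varphi}}$; the matrix, the direction vector, and the coupling values below are arbitrary choices made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
T = np.diag(np.linspace(-1.0, 1.0, n))                 # toy self-adjoint operator
phi = rng.normal(size=n)
phi /= np.linalg.norm(phi)                             # unit perturbation direction

for alpha in (0.0, 0.5, 2.0):
    T_alpha = T + alpha * np.outer(phi, phi)           # T_alpha = T + alpha <., phi> phi
    print(alpha, np.round(np.linalg.eigvalsh(T_alpha), 4))
```

For $\alpha>0$ the eigenvalues generically move to the right and interlace with those of $T$, a finite-dimensional shadow of the Aronszajn–Donoghue picture recalled in Subsection \[ss-ROT\].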
More general rank-one perturbations can be defined when $H$ is unbounded, for instance through the theory of quadratic forms (see e.g. [@LiawTreil1; @SIMREV; @Simon] and the references therein). Unbounded perturbations are outside the scope of this paper, as our focus lies either on the iterative introduction of infinitely many rank-one perturbations, or on obtaining general bounds for single ones.
A rank-one perturbation with ${{\varphi}}\in {\mathcal{H}}$ is not only a compact operator; it is also Hilbert–Schmidt, trace class and even of finite rank (rank-one). Yet the study of such perturbations has revealed an extremely subtle nature. For example, a description of the singular continuous spectrum of the perturbed operator $T_\alpha$ in terms of properties of the unperturbed operator $T$ is unknown, see e.g. [@SIMREV] and the references within. Moreover, beyond the realms of mathematical physics and spectral analysis of self-adjoint operators, the problem of rank-one perturbations is connected to many interesting topics in analysis, see e.g. [@CimaMathesonRoss; @t-KR; @LiawTreil1; @poltsara2006] and the references therein. The results in Section \[s-northo\] contribute bounds of two types: a bound for how much absolutely continuous spectrum can be transferred to discrete spectrum, and a bound for how much mass from the discrete spectrum can be transferred to absolutely continuous spectrum via a rank-one perturbation.
As an object of great interest to mathematical physicists, Anderson-type Hamiltonians $H_\omega = H+V_\omega$ are a generalization of the discrete random Schrödinger operator with a probabilistic potential $V_\omega = \sum \omega_n ({\,\cdot\,}, {{\varphi}}_n){_{ {}_{\scriptstyle {\mathcal{H}}}}} {{\varphi}}_n$. The perturbation problem is defined rigorously below in Subsection \[ss-01\]. In this setup, the perturbation $V_\omega$ is almost surely a non-compact operator. As a result, none of classical perturbation theory applies. In 1958 P.W. Anderson (Subsection \[ss-01\], [@Anderson]) suggested that sufficiently large impurities in a semiconductor could lead to spatial localization of electrons, called Anderson localization. The field has grown into a rich theory and is studied by both the physics and the mathematics community. There are many well-studied and famous open problems in this research area, one of which is the Anderson localization conjecture for the discrete random Schrödinger operator at weak disorder in two spatial dimensions [@Anderson; @CFKS; @Banff] (or delocalization conjecture [@Liaw2]). There are numerous ways to interpret the meaning of extended states throughout the literature. The current work in Section \[s-infinite\] is related to the localization conjectures when the existence of *extended states* is defined as almost surely non-trivial absolutely continuous spectrum in the Anderson-type Hamiltonian. This is sometimes referred to as spectral delocalization. Spectral localization thus refers to an Anderson-type Hamiltonian with trivial absolutely continuous part.
Although rank-one perturbations and Anderson-type Hamiltonians are opposite in a perturbation theoretic sense, they have been found to be intimately connected [@AbukamovPoltoLiaw; @JakLast2000; @JakLast2006; @KingKirbyLiaw; @Liaw; @Sim1994; @SimonWolff]. Here we present further results linking these perturbation problems. Countably many rank-one perturbations are successively applied to a self-adjoint operator, $T$, with only absolutely continuous spectrum on a separable Hilbert space ${\mathcal{H}}$. Specifically, we utilize the Aronszajn–Donoghue theory to determine the amount by which the absolutely continuous spectrum decays with each perturbation, and explicitly compute formulas describing how the initial spectrum changes after an infinite number of such perturbations. This construction involves a curious choice of the perturbation vector at each step in order to control properties of the perturbed operators in terms of the initial operator. In the limiting case, the infinitely perturbed operator is somewhat similar to an Anderson-type Hamiltonian and can be compared to the discrete random Schrödinger operator.
To avoid any possible confusion, we list differences between the construction in the current project and classical Anderson-type Hamiltonians:
1. The main distinction is that the iterative construction requires knowledge of the previous perturbation vectors, ${{\varphi}}_{n}$ for $n=1,\hdots, k-1$, as well as all *previous* realizations of the (random) coupling parameter in order to choose the next perturbation vector, ${{\varphi}}_k$. This is very different from Anderson-type Hamiltonians, where the vectors ${{\varphi}}_k$’s form a sequence of orthonormal vectors and are given a priori, independently from the particular realization of the Anderson-type Hamiltonian. While our limiting operator is similar to an Anderson-type Hamiltonian, it is mainly due to the specific choice of vectors ${{\varphi}}_k$ that it cannot be classified as such.
2. The construction yields an operator of spectral multiplicity one. The spectral multiplicity of a general Anderson-type Hamiltonian may not necessarily equal one. In fact, not even the spectral multiplicity of the discrete random Schrödinger operator is known, though it is suspected to be infinite (see e.g. [@JakLast2000]).
3. We start on the spectral representation side of the operator. Hence, all of Lebesgue theory can be utilized as a tool, and rank-one perturbation theory becomes more concrete. This is not a serious distinction, as a unitary transformation takes any cyclic operator to its spectral representation.
A primary development is the explicit calculation of the remaining absolutely continuous spectrum after an infinite number of rank-one perturbations. As suggested by Example \[example\] below, precise control of even rank-two perturbations is challenging to achieve, and such results tend to be less explicit than those for rank-one perturbations. Recently, some progress was made for finite rank perturbations [@LiawTreil_arXiv] using matrix-valued measures. But the nature of our construction has a different focus.
The probability measures that can be handled include the case of Rademacher potentials; see Subsection \[ss-rademacher\], where we provide results for the Rademacher potential, which represents a worst-case scenario for our construction.
Acknowledgments. {#acknowledgments. .unnumbered}
----------------
The authors thank Alexei Poltoratski for suggesting some questions which led to this paper as well as for many insightful discussions and comments.
Outline
-------
The main tools of perturbation theory from Section \[ss-Pert\] are utilized in Section \[s-firstpert\], where the majority of preparatory calculations take place, including applying Aronszajn–Donoghue theory to the first perturbation. Beginning with a measure that is constant over the interval $[-1,1]$, Aronszajn–Donoghue theory says that a perturbation creates a point mass outside of $[-1,1]$ and the remaining absolutely continuous spectrum is reduced accordingly. The precise strength of the point mass is calculated, and although it is possible to explicitly find a formula for the absolutely continuous spectral measure, we avoid doing so here for simplicity. Section \[example\] illustrates, by comparison, the level of difficulty involved in computing even a rank-two perturbation. Recent developments in finite rank perturbations can be found, for example, in [@KapPol; @LiawTreil0].
Section \[example\] contains a simple motivating example, which also serves as a reference point when the direction vectors ${{\varphi}}_k$ are chosen in Subsection \[ss-fk\]. An overview of the constructive process is given in Section \[ss-diagram\]. And in Section \[s-firstpert\] we compute the location and mass of the eigenvalue generated by the perturbation.
Section \[s-startiterate\] explains the techniques involved in the iterative construction. Specifically, we fix the first perturbation parameter ${{\alpha}}_1$ and choose the second perturbation vector $\widetilde{{{\varphi}}}_2$ so that we can pass, via unitary equivalence, from the often byzantine a.c. spectral measure on $[-1,1]$ to an auxiliary measure that again has constant weight on $[-1,1]$. We are mainly concerned with the total mass (or total variation) of the a.c. part of this auxiliary measure. This auxiliary measure is comparable to the starting measure and is unchanged through the spectral theorem and the unitary operator. This unitary operator and choice of the vector $\widetilde{{{\varphi}}}_2$ return us to the situation at the beginning of Section \[s-firstpert\], with a constant measure on $[-1,1]$.
Section \[s-infinite\] iterates this utilization of vector choices and unitary operators along with the perturbations. New perturbation directions are orthogonal to the point masses created from previous ones and therefore remain unchanged; this allows us to focus on the absolutely continuous spectrum. Results similar to those from Section \[s-startiterate\] are produced and the process can effectively be iterated. The final formulas obtained from iteration are found in Subsections \[ss-fk\] and \[ss-tau\]. Subsection \[ss-rademacher\] contains an application of the analysis to the constructed operator with Rademacher potential, where the ${{\alpha}}$’s are chosen to be the endpoints of the given interval, each occurring with equal probability. This operator is found to have spectral localization. These formulas are quite simple and shed further light onto the recursive nature of the process.
Section \[s-northo\] attempts to remove the requirement, present in the previous construction, that the perturbation vectors be orthogonal, and obtains results for how much absolutely continuous spectrum can be destroyed by a single rank-one perturbation. No restriction on the direction of the perturbation is made whatsoever. The estimates obtained are upper bounds and require knowledge of how the perturbation vector interacts with the spectral measure. The goal is to bring the constructed operator closer to being an Anderson-type Hamiltonian by allowing more freedom in the choices of the vectors. Unfortunately, the estimates obtained are not sharp enough to iterate using the devised methods, and further refinement is still required. However, the estimates are the first known of their kind for general perturbation theory and are of interest for those purposes as well. The methods used rely on an intimate knowledge of Aronszajn–Donoghue theory and the integral transforms involved within.
Fundamentals of Perturbation Theory {#s-B}
===================================
Classical Perturbation Theory {#ss-Pert}
-----------------------------
In perturbation theory one seeks to answer the question: Given some information about the spectrum of an operator $A$, what can be said about the spectrum of the operator $A+B$ when $B$ is in some operator class? Often, the attention is restricted to which properties of the spectrum are preserved. The answer, of course, varies wildly depending on the class of operators the perturbation $B$ is taken from. The answer may also be influenced by the choice of unperturbed operator $A$. Here, we focus on self-adjoint operators $A$ and $B$.
To formulate some partial answers, we use the notation $$A\sim B (\operatorname{mod}\text{\em Class }X)$$ if there exists a unitary operator $U$ such that $UAU^{-1}-B$ is an element of $\text{\em Class }X$. The $\text{\em Class }X$ can be any class of operators, e.g. compact, trace class, or finite rank operators.
\[t-weylvn\] The essential spectra of two self-adjoint operators $A$ and $B$ satisfy $$\begin{aligned}
\sigma{_{\scriptstyle \text{\rm ess}}}(A)=\sigma{_{\scriptstyle \text{\rm ess}}}(B) \text{ if and only if } A\sim B~(\operatorname{mod}\text{compact operators}).\end{aligned}$$
\[t-KR\] If two self-adjoint operators satisfy $A\sim B (\operatorname{mod}\text{trace class})$ then their absolutely continuous parts are unitarily equivalent: $A{_{\scriptstyle \text{\rm ac}}}\sim B{_{\scriptstyle \text{\rm ac}}}$.
For self-adjoint $A$ and $B$, Carey and Pincus [@CP] found a complete characterization of when $A\sim B~(\operatorname{mod}\text{\em trace class})$ in terms of the operators’ spectrum. Of course, they must have unitarily equivalent absolutely continuous parts. Outside the continuous spectrum, they are only allowed discrete parts. And the discrete eigenvalues of $A$ and $B$ (counting multiplicity) must fall into three categories: (i) those eigenvalues of $A$ with distances from the joint continuous spectrum having finite sum (i.e. are trace class), (ii) those eigenvalues of $B$ with distances from the joint continuous spectrum having finite sum, and (iii) eigenvalues of $A$ and $B$ that can be matched up so that their differences have finite sum.
Introducing Rank-One Perturbations and the Spectral Theorem {#ss-ROST}
-----------------------------------------------------------
We focus our attention on when the perturbation class $\text{\em Class }X$ consists of self-adjoint operators with one-dimensional range (rank-one). Let $T$ be a self-adjoint operator (bounded or unbounded) on a separable Hilbert space ${\mathcal{H}}$. The operator $T$ will be called cyclic when it possesses a vector ${{\varphi}}$ such that $$\begin{aligned}
\label{e-cyclicity}
{\mathcal{H}}= \overline{\operatorname{span}\{(T-\lambda{\bf I})^{-1}{{\varphi}}: \lambda\in\CC\backslash\RR\}},\end{aligned}$$ where the closure is taken with respect to the Hilbert space norm. In this case, the vector ${{\varphi}}$ is also called *cyclic*. Here we take ${{\varphi}}\in {\mathcal{H}}$. All [*rank-one perturbations of a self-adjoint operator $T$ by the cyclic vector ${{\varphi}}$*]{} are given by $$\begin{aligned}
\label{e-rk1}
T_{\alpha} = T + \alpha \langle {\,\cdot\,}, {{\varphi}}\rangle{_{ {}_{\scriptstyle {\mathcal{H}}}}}{{\varphi}}\qquad \text{for}\qquad
\alpha \in \RR.\end{aligned}$$
The supposition that $T$ is cyclic is not a restriction, as otherwise we simply decompose ${\mathcal{H}}= {\mathcal{H}}_1\oplus {\mathcal{H}}_2$ such that ${{\varphi}}$ is cyclic for $T$ on ${\mathcal{H}}_1$ and $T$ is left unchanged by the perturbation when restricted to ${\mathcal{H}}_2$.
It is worth emphasizing that Theorems \[t-KR\] and \[t-weylvn\] can be applied to rank-one perturbations, as such perturbations are both trace class and compact.
As a simple consequence of the resolvent formula, one can see that ${{\varphi}}$ is also a cyclic vector of the operator $T_{\alpha}$ for all $\alpha\in {{\mathbb R}}$; see [@AbukamovPoltoLiaw; @LiawTreil0] for more about cyclicity. The spectral measure of $T_{\alpha}$ with respect to the cyclic vector ${{\varphi}}$ will be denoted by $\mu_{\alpha}$. Explicitly, the spectral theorem defines $\mu_\alpha$ via $$\langle(T_{\alpha}-z{{\mathbf{I}}})^{-1}{{\varphi}},{{\varphi}}\rangle{_{ {}_{\scriptstyle {\mathcal{H}}}}}=\int_{{\mathbb R}}\frac{d\mu_\alpha(t)}{t-z}\qquad\text{for all }z\in {{\mathbb C}}\backslash{{\mathbb R}}.$$ In other words, $T_\alpha$ is unitarily equivalent to multiplication by the independent variable on an $L^2(\mu_{{{\alpha}}})$ space with non-negative Radon measure $\mu_{{{\alpha}}}$, the *spectral measure*, supported on ${{\mathbb R}}$. The spectral measure of the unperturbed operator $T$, $\mu_0$, is often used as a comparison to the spectral measure $\mu_{\alpha}$. Therefore, we use the convention that $\mu_0=\mu$ for simplicity. This means that $T$ can be written as $M_t$, multiplication by the independent variable on $L^2(\mu)$. The vector ${{\varphi}}$ is then represented by the function that is identically equal to one on $L^2(\mu)$.
The spectral theorem now translates the rank-one perturbation problem to $$\label{d-rk1}
\widetilde{T}_{\alpha}=M_t+{{\alpha}}\langle{\,\cdot\,}, {\bf 1}\rangle{_{ {}_{\scriptstyle L^2({\mu})}}}{\bf 1}.$$ Therefore, we identify ${\mathcal{H}}=L^2(\mu)$ and use $\widetilde{T}_{{{\alpha}}}=T_{{{\alpha}}}$ for brevity of notation. The presence of a different unitary intertwining operator relating the operators $T_{{{\alpha}}}$ and $M_s$, on their respective spaces ${\mathcal{H}}=L^2(\mu)$ and $L^2(\mu_{{{\alpha}}})$, raises the question of whether this unitary intertwining operator can be captured explicitly. This question was answered in a paper of Liaw and Treil [@LiawTreil1 Theorem 2.1]. The theorem extends to all of $L^2(\mu)$, see [@LiawTreil1 Theorem 3.2], but a simpler version is presented here.
\[t-repthm\] The spectral representation $V_{{{\alpha}}}:L^2(\mu)\to L^2(\mu_{{{\alpha}}})$ of $T_{{{\alpha}}}$ acts by $
V_{{{\alpha}}}f(s)=f(s)-{{\alpha}}\int{_{ {}_{\scriptstyle {{\mathbb R}}}}}\dfrac{f(s)-f(t)}{s-t}d\mu(t)
$ for compactly supported $C^1$ functions $f$.
The Borel Transform and Rank-One Perturbation Theory {#ss-ROT}
----------------------------------------------------
A review of rank-one perturbation theory requires a subtle description of spectral measures and their various decompositions. A study of the integral operators involved will be central to the analysis of Aronszajn–Donoghue theory in Section \[s-northo\]. Let $\mu$ be a positive measure on $[a,\infty)$ for some $a>-\infty$ with $$\begin{aligned}
\label{d-preborel}
\int\dfrac{d\mu(\lambda)}{|\lambda|+1}<\infty.\end{aligned}$$ This assumption is somewhat restrictive, but is necessary for the study of Borel transforms. The condition that the support of $\mu$ is bounded below can be relaxed somewhat, but does hold in the current applications, and simplifies further details slightly.
Adherence to allows us to define the *Borel transform* of $\mu$ as $$\begin{aligned}
F(z):=\int_{\RR}\dfrac{d{\mu}({{\lambda}})}{{{\lambda}}-z}\qquad (z\in {{\mathbb C}}\backslash (\operatorname{supp}\mu)).\end{aligned}$$ Indeed, boundary values of $F(z)$, as $z=x+i\epsilon$ approaches points $x$ in the support of $\mu$, are the primary instrument to discern spectral properties of $\mu$. See [@CimaMathesonRoss; @LiawTreil1; @poltsara2006; @Simon] for a more detailed discussion (the Cauchy transform, a close relative, is often used).
The auxiliary transform $$\begin{aligned}
G(x):=\int_{\RR}\dfrac{d\mu(y)}{(y-x)^2}\qquad (x\in {{\mathbb R}}\backslash (\operatorname{supp}\mu)),\end{aligned}$$ captures some properties of the derivative (with respect to $z$) of the Borel transform as $z$ approaches the real axis, whereby it also plays a central role.
Let $w\in L^1{_{\scriptstyle \text{\rm loc}}}(\RR)$ denote the Radon–Nikodym derivative of $\mu$ with respect to Lebesgue measure. With this, the Lebesgue/Radon–Nikodym decomposition is given by $d\mu = w(x) dx + d\mu{_{\scriptstyle \text{\rm s}}}$. The unitary equivalence between $T$ on ${\mathcal{H}}$ and $M_t$ on $L^2(\mu)$ involves a unitary intertwining operator, which gives rise to the corresponding orthogonal components of the operator $T = T{_{\scriptstyle \text{\rm ac}}}\oplus T{_{\scriptstyle \text{\rm s}}}$. The singular part can be further decomposed into singular continuous $\mu{_{\scriptstyle \text{\rm sc}}}$ and pure point $\mu{_{\scriptstyle \text{\rm pp}}}$ parts. Here, $\mu{_{\scriptstyle \text{\rm pp}}}$ consists of point masses at the eigenvalues of $T$ and $\mu{_{\scriptstyle \text{\rm sc}}}=\mu{_{\scriptstyle \text{\rm s}}}-\mu{_{\scriptstyle \text{\rm pp}}}$. The spectrum is denoted by $\sigma(T)$ and is the (closed) $\operatorname{supp}(\mu)$. The set of all real numbers $x$ that are isolated eigenvalues of finite multiplicity for $T$ is defined to be the discrete spectrum, denoted $\sigma{_{\scriptstyle \text{\rm d}}}(T)$. The essential spectrum of $T$ is the complement of the discrete spectrum, denoted $\sigma{_{\scriptstyle \text{\rm ess}}}(T)=\sigma(T)\backslash\sigma{_{\scriptstyle \text{\rm d}}}(T)$.
Historically, the following theorem emerged from the question of changing boundary conditions in a Sturm–Liouville operator [@Aron; @Dono]. From a theoretical perspective, the theorem characterizes the perturbed operator’s pure point and absolutely continuous spectra. The result will be heavily used in later sections.
\[t-AD\] For ${{\alpha}}\neq 0$, define $$S_{{\alpha}}=\left\{x\in\RR ~|~ F(x+i0)=-{{\alpha}}^{-1}; G(x)=\infty\right\},$$ $$P_{{\alpha}}=\left\{x\in\RR ~|~ F(x+i0)=-{{\alpha}}^{-1}; G(x)<\infty\right\},$$ $$L=\left\{x\in\RR ~|~ {{\operatorname{Im}}}~F(x+i0)\neq 0\right\}.$$ Then we have
1. $\left\{S_{{{\alpha}}}\right\}_{{{\alpha}}\neq 0}$, $\left\{P_{{{\alpha}}}\right\}_{{{\alpha}}\neq 0}$ and $L$ are mutually disjoint.
2. $P_{{{\alpha}}}$ is the set of eigenvalues of $T_{{{\alpha}}}$. In fact, $$\left(d\mu_{{{\alpha}}}\right){_{\scriptstyle \text{\rm pp}}}(x)=\sum_{x_n\in P_{{{\alpha}}}}\dfrac{1}{{{\alpha}}^2 G(x_n)}\delta(x-x_n).$$
3. $\left(d\mu_{{{\alpha}}}\right){_{\scriptstyle \text{\rm ac}}}$ is supported on $L$, $\left(d\mu_{{{\alpha}}}\right){_{\scriptstyle \text{\rm sc}}}$ is supported on $S_{{{\alpha}}}$.
4. For ${{\alpha}}\neq\beta$, $\left(d\mu_{{{\alpha}}}\right){_{\scriptstyle \text{\rm s}}}$ and $\left(d\mu_{\beta}\right){_{\scriptstyle \text{\rm s}}}$ are mutually singular.
The case ${{\alpha}}=\infty$ is known as infinite coupling, and was treated by Gesztesy and Simon, see e.g. [@GS; @Simon]. The last part of the result says that the singular part of rank-one perturbations must move when the perturbation parameter $\alpha$ is changed. A description of the singular continuous spectrum is still outstanding. In fact, the ‘minimal’ support of $\left(\mu_{{{\alpha}}}\right){_{\scriptstyle \text{\rm sc}}}$ is not known, see e.g. [@CimaMathesonRoss; @LiawTreil1; @Simon], let alone a characterization of $\left(\mu_{{{\alpha}}}\right){_{\scriptstyle \text{\rm sc}}}$.
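For orientation, here is a minimal worked example (ours, not taken from the text above). Let $\mu=\delta_a$ be a unit point mass at $a\in{{\mathbb R}}$, so that $T$ acts as multiplication by $a$ on the one-dimensional space $L^2(\delta_a)$ and ${{\varphi}}\equiv 1$. Then $$F(z)=\frac{1}{a-z},\qquad G(x)=\frac{1}{(a-x)^2},$$ and the condition $F(x)=-1/{{\alpha}}$ of Theorem \[t-AD\] is solved by $x_{{{\alpha}}}=a+{{\alpha}}$, with mass $\big({{\alpha}}^2G(x_{{{\alpha}}})\big)^{-1}=1$: the single eigenvalue at $a$ is shifted to $a+{{\alpha}}$ and keeps its full weight, exactly as expected for the scalar operator $T_{{{\alpha}}}=a+{{\alpha}}$.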
The absolutely continuous part of the perturbed operator, $(\mu_{{{\alpha}}}){_{\scriptstyle \text{\rm ac}}}$, can be explicitly computed using the following Lemma.
\[l-SIM\] Let $F(z)$ be the Borel transform of a measure $\mu$ obeying . Let $x\in\RR$ and $\displaystyle x+i0=\lim_{\beta\downarrow 0}(x+i\beta)$. Then we have $${{\operatorname{Im}}}~F{_{ {}_{\scriptstyle \alpha}}}(z)=\dfrac{{{\operatorname{Im}}}~F(z)}{|1+{{\alpha}}F(z)|^2} \qquad\text{and} \qquad (d\mu_{{{\alpha}}}){_{\scriptstyle \text{\rm ac}}}(x)=\pi^{-1}{{\operatorname{Im}}}~F_{{{\alpha}}}(x+i0)dx.$$
In the case of purely singular measures the following theorem resembles a characterization for $A\sim B (\operatorname{mod}\text{\em rank-one})$.
\[t-Polt\] Let $X\subset{{\mathbb R}}$ be closed. By $I_1=(x_1;y_1), I_2=(x_2;y_2), \hdots$ denote disjoint open intervals such that $X={{\mathbb R}}\backslash \bigcup I_n$. Let $A$ and $B$ be two cyclic self-adjoint completely non-equivalent operators with purely singular spectrum. Suppose $\sigma(A)=\sigma(B)=X$ and assume $\sigma{_{\scriptstyle \text{\rm pp}}}(A)\cap\{x_1,y_1, x_2, y_2, \hdots\}=\sigma{_{\scriptstyle \text{\rm pp}}}(B)\cap\{x_1,y_1, x_2, y_2, \hdots\}=\varnothing.$ Then we have $$A\sim B (\operatorname{mod}\text{\em rank-one}).$$
Anderson-type Hamiltonians {#ss-01}
--------------------------
Let $(\Omega, \mathcal{A}, {\mathbb{P}})$ be a probability space, and consider the sequence of independent random complex variables $X_n(w)$, $w\in \Omega$. We assume that $\Omega=\prod_{n=0}^\infty \Omega_n$, where $\Omega_n$ are different probability spaces, $w=(w_1,w_2,...)$, $w_n\in\Omega_n$ and the probability measure on $\Omega$ is introduced as the product measure of the corresponding measures on $\Omega_n$. Each of the independent random variables $X_n(w)$ depends only on the $n$-th coordinate, $w_n$.
Now, consider a self-adjoint operator $H$ on a separable Hilbert space ${\mathcal{H}}$ and a sequence $\{{{\varphi}}_n\}\subset{\mathcal{H}}$ of linearly independent unit vectors. Let $\omega=(\omega_1, \omega_2, \hdots)$ be a random variable corresponding to a probability measure ${\mathbb{P}}$ on ${{\mathbb R}}^\infty$. In particular, let the parameters $\omega_n$ be chosen i.i.d. (independent, identically distributed) with respect to ${\mathbb{P}}$. An *Anderson-type Hamiltonian* [@JakLast2000] is an almost surely self-adjoint operator associated with the formal expression $$\label{Model}
H_\omega = H + V_\omega \qquad\text{on }{\mathcal{H}}, \qquad V_\omega = \sum\limits_n \omega_n \langle{\,\cdot\,}, {{\varphi}}_n\rangle{{\varphi}}_n.$$ As is customary, assume that the vectors ${{\varphi}}_n$ are orthogonal. However, many properties readily extend to the case of non-orthogonal ${{\varphi}}_n$ so long as almost surely defines a self-adjoint operator.
The archetype Anderson-type Hamiltonian is the discrete Schrödinger operator with random potential on $l^2({{\mathbb Z}}^d)$, given by $$\label{e-dso}
Hf(x)=-\bigtriangleup f (x) = - \sum\limits_{|n|=1} (f(x+n)-f(x)), \quad {{\varphi}}_n(x)=\delta_n(x)=
\left\{\begin{array}{ll}1&x=n,\\ 0&\text{else.}\end{array}\right.$$ The corresponding operator $H_\omega$ models quantum mechanical phenomena in a crystalline structure with random on-site potentials, and appears in many fields of mathematics. We note that Kolmogorov’s 0-1 Law can be applied to Anderson-type Hamiltonians using the standard probabilistic set up described above, see [@AbaPolt; @K] where the reader can find all the necessary definitions and basic properties. Many interesting properties that are studied (i.e. cyclicity) are events, and the 0-1 Law hence states that the probability of such events are either 0 or 1.
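The following is a rough finite-volume sketch (our own, with an assumed box size and coupling strength, and a Dirichlet-type truncation rather than the operator on $l^2({{\mathbb Z}}^d)$ itself) of the one-dimensional discrete random Schrödinger operator with a Rademacher potential.

```python
import numpy as np

rng = np.random.default_rng(1)
N, coupling = 50, 1.0                                # box {0,...,N-1}; coupling strength is an assumption

# (-Delta f)(x) = 2 f(x) - f(x+1) - f(x-1), truncated at the boundary of the box
H = 2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)

# V_omega = sum_n omega_n <., delta_n> delta_n with omega_n = +-coupling, each with probability 1/2
omega = coupling * rng.choice([-1.0, 1.0], size=N)
H_omega = H + np.diag(omega)

print(np.round(np.sort(np.linalg.eigvalsh(H_omega))[:5], 4))   # a few of the lowest eigenvalues
```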
Motivating Example {#example}
==================
We begin our endeavors by offering a simple example, which serves two purposes. First, it helps demonstrate the difficulties of computing the absolutely continuous part using a scalar spectral measure for even a simple rank–two perturbation, without using the construction introduced in Sections \[s-startiterate\] and \[s-infinite\]. Second, we will hook back into this example when we explain the choice of the direction vectors (later called ${{\varphi}}_k$) in the iteration process in Subsection \[ss-fk\].
We consider the following rank-two problem in the spectral representation with respect to one of the vectors and assume that the spectral measure of the unperturbed operator is $d\mu(x):=\frac{1}{2}\chi{_{ {}_{\scriptstyle [-1,1]}}}(x)dx$. Consider the normalized vectors ${{\varphi}}_1,{{\varphi}}_2\in L^2(\mu)$, where ${{\varphi}}_1$ is the constant function that is equal to ${{\mathbf{1}}}$ $\mu$-almost everywhere.
We introduce the rank-two perturbation $$H_{\alpha,\beta}=M_t+\alpha\langle{\,\cdot\,},{{\varphi}}_1\rangle_{L^2(\mu)}{{\varphi}}_1+\beta\langle{\,\cdot\,},{{\varphi}}_2\rangle_{L^2(\mu)}{{\varphi}}_2
\qquad
\text{on}
\qquad
L^2(\mu).$$ As an application of Aronszajn–Donoghue theory we can compute the absolutely continuous part of the rank-one perturbation $H_{\alpha,0}$ for $\beta = 0 $: $$\begin{aligned}
\label{e-r1}
d[(\mu_{\alpha,0})_{ac}](x)= \frac{1}{2}\left\{1+\alpha^2+\alpha\ln\Big(\dfrac{x+1}{-x+1}\Big)+\left(\dfrac{\alpha}{4}\right)^2\Big[\ln\Big(\dfrac{x+1}{-x+1}\Big)\Big]^2\right\}^{-1}dx\end{aligned}$$ for $x\in [-1,1]$, and $(\mu_{\alpha,0})_{ac}\equiv 0$ outside $[-1,1]$. In general, the introduction of a second rank-one perturbation causes the problem to be too expensive to merit computation. Indeed, Aronszajn–Donoghue theory will require integration with respect to $\mu_{\alpha, 0}$. As a consequence, computing the spectral measure under such an iterative rank-two perturbation, or even just its eigenvalues, seems practically unfeasible.
Overview of Iterative Process {#ss-diagram}
=============================
The general construction described in Sections \[s-firstpert\] through \[s-infinite\] is somewhat complicated, so it may be beneficial to the reader to have an overview of the string of processes before diving in.
We focus on the three main operations and specifically how they change the space we are analyzing. The process begins with a perturbed operator that acts on a space $L^2(\widetilde{\mu}_{{{\alpha}}_{k-1}})$, where $d\widetilde{\mu}_{{{\alpha}}_{k-1}}=\tau_{k-1}\chi{_{ {}_{\scriptstyle [-1,1]}}}(x)dx$ for some constant $0<\tau_{k-1}\leq 1$. These $L^2$ spaces, given by a measure with a tilde, are starting points because the total *strength* of the absolutely continuous spectrum (within $[-1,1]$) is easily calculated. They are referred to as auxiliary spaces throughout the manuscript. A broad description of each operation follows, see the referenced sections for more on each step.
1. The spectral theorem is used on the operator that acts on $L^2(\widetilde{\mu}_{{{\alpha}}_{k-1}})$ to yield the operator that is multiplication by the independent variable $M_t$ on a new space denoted $L^2(\mu_{{{\alpha}}_{k}})$. The explicit transform is given by the spectral representation $V_{{{\alpha}}_k}$ (the unitary operator realizing the spectral theorem) and described in Subsection \[ss-5.1\].
2. The new operator is perturbed by a parameter ${{\alpha}}_{k+1}$ and specific vector ${{\varphi}}_{k+1}$ given in Corollary \[c-morevectors\], that depends on the previous perturbation. Vector ${{\varphi}}_{k+1}$ is orthogonal to previous perturbation vectors, allowing us to focus on just the absolutely continuous part of the measure $L^2[(\mu_{{{\alpha}}_k}){_{\scriptstyle \text{\rm ac}}}]$. For convenience, we denote this operation as “$+{{\varphi}}_{k+1}$” over a squiggly arrow in the diagram below.
3. Finally, a unitary transform $U_k$ is applied to the operator which translates it to the space $L^2(\widetilde{\mu}_{{{\alpha}}_{k}})$, see Corollary \[c-morevectors\]. This space is comparable to $L^2(\widetilde{\mu}_{{{\alpha}}_{k-1}})$, and our operations may be repeated again.
The following schematic details how the $k+1$ and $k+2$ perturbations are performed by identifying the spaces of interest.
$$\begin{aligned}
&L^2(\widetilde{\mu}_{{{\alpha}}_{k-1}}) \;\xrightarrow{\;V_{{{\alpha}}_k}\;}\; L^2({\mu}_{{{\alpha}}_{k}}) \;\overset{+{{\varphi}}_{k+1}}{\rightsquigarrow}\; L^2[(\mu_{{{\alpha}}_k}){_{\scriptstyle \text{\rm ac}}}] \;\xrightarrow{\;U_k\;}\;\\
&L^2(\widetilde{\mu}_{{{\alpha}}_{k}}) \;\xrightarrow{\;V_{{{\alpha}}_{k+1}}\;}\; L^2({\mu}_{{{\alpha}}_{k+1}}) \;\overset{+{{\varphi}}_{k+2}}{\rightsquigarrow}\; L^2[(\mu_{{{\alpha}}_{k+1}}){_{\scriptstyle \text{\rm ac}}}] \;\xrightarrow{\;U_{k+1}\;}\;\cdots\end{aligned}$$
Please see the beginning of Section \[s-infinite\] for a more detailed procedure.
A First Perturbation {#s-firstpert}
====================
We begin with constructing a rank-one perturbation in the spectral representation. Namely, consider the spectral measure $$\begin{aligned}
\label{e-firstmeasure}
d\widetilde{\mu}_0(x):=\dfrac{1}{2}~\chi{_{ {}_{\scriptstyle [-1,1]}}}(x)dx\end{aligned}$$ and let vector $\widetilde{{\varphi}}_1$ be the constant function of $L^2(\widetilde\mu_0)$ that is identically equal to 1. Note that $\widetilde{{\varphi}}_1$ has unit norm. Now, consider the family of (bounded) rank-one perturbations: $$\begin{aligned}
\label{e-firstpert}
\widetilde{H}{_{ {}_{\scriptstyle \alpha_1}}}=M_t+\alpha_1\langle{\,\cdot\,},\widetilde{{{\varphi}}}_1\rangle_{L^2(\widetilde{\mu}_0)}\widetilde{{{\varphi}}}_1
\qquad
\text{on }
L^2(\widetilde{\mu}_0)\text{, where }
\alpha_1\in{{\mathbb R}}.\end{aligned}$$
The total mass of a measure $\eta$, $$\begin{aligned}
\|{\eta}\|:=\int_{\RR}d{\eta}(t),\end{aligned}$$ will play a key role in comparing the remaining mass of the absolutely continuous parts of the spectral measures we produce within the iteration process.
By construction we have $\|\widetilde{\mu}_0\|=1$. The properties of the perturbed operator are captured by Aronszajn–Donoghue theory (Theorem \[t-AD\]). Applied to the current situation, this result yields the following observation, which we will utilize at each step of our construction.
\[l-compute\] Let $\widetilde{\mu}_0$ and $\widetilde{H}_{{{\alpha}}_1}$ respectively be given by and . Let $\mu{_{ {}_{\scriptstyle \alpha_1}}}$ be the spectral measure of $\widetilde{H}{_{ {}_{\scriptstyle \alpha_1}}}$. If $\alpha_1 \neq 0$, then
1. the perturbed operator $\widetilde{H}{_{ {}_{\scriptstyle \alpha_1}}}$ has exactly one eigenvalue, $x{_{ {}_{\scriptstyle \alpha_1}}}$, and $$\begin{aligned}
x{_{ {}_{\scriptstyle \alpha_1}}}:=\dfrac{-1-e^{2/{{\alpha}}_1}}{1-e^{2/{{\alpha}}_1}}\,\in {{\mathbb R}}\backslash [-1,1].\end{aligned}$$
2. The created eigenvalue has weight/mass: $$\begin{aligned}
\mu{_{ {}_{\scriptstyle \alpha_1}}}\{x{_{ {}_{\scriptstyle \alpha_1}}}\}=\dfrac{4 e^{2/{{\alpha}}_1}}{{{\alpha}}_1^2(e^{2/{{\alpha}}_1}-1)^2}\,.\end{aligned}$$
$(1)$ On $[-1,1]$, the imaginary part of $F(z)=\int{_{ {}_{\scriptstyle \mathbb{R}}}}\frac{d\widetilde{\mu}_0(\lambda)}{\lambda -z}$ is strictly positive. So, $
\widetilde{H}{_{ {}_{\scriptstyle \alpha_1}}}$ does not have any eigenvalues on $[-1,1]$ for all $\alpha_1$.
Further, the assumptions of the lemma imply $$\begin{aligned}
G(x)=\int_{-1}^{1}\dfrac{d\widetilde{\mu}_0(\lambda)}{(\lambda -x)^2}<\infty
\qquad
\text{for all }x\in{{\mathbb R}}\backslash [-1,1].\end{aligned}$$ By Theorem \[t-AD\], $\widetilde{H}{_{ {}_{\scriptstyle \alpha_1}}}$ has an eigenvalue at such $x$ if and only if $-\dfrac{1}{\alpha_1}=\text{F}(x+i0)$. Hence, eigenvalues occur at $x\in{{\mathbb R}}\backslash [-1,1]$ that satisfy $$\begin{aligned}
-\dfrac{1}{\alpha_1}=\int{_{ {}_{\scriptstyle \mathbb{R}}}}\dfrac{d\widetilde{\mu}_0(\lambda)}{\lambda - x}=\dfrac{1}{2}\int_{-1}^{1}\dfrac{d\lambda}{\lambda - x}=\dfrac{1}{2}\text{ln}\left(\dfrac{1-x}{-1-x}\right).\end{aligned}$$ The solution to the previous equation for $x$ depends on ${{\alpha}}_1$ and will be denoted as $$\begin{aligned}
x_{\alpha_1}:=\dfrac{-1-e^{2/{{\alpha}}_1}}{1-e^{2/\alpha_1}}\,.\end{aligned}$$ In particular, $x_{\alpha_1}<-1$ for $\alpha_1<0$, while $x_{\alpha_1}>1$ for $\alpha_1>0$.
$(2)$ By Theorem \[t-AD\], the mass of the created eigenvalue is $
\mu{_{ {}_{\scriptstyle \alpha_1}}}\{x{_{ {}_{\scriptstyle \alpha_1}}}\}=\dfrac{1}{{{\alpha}}_1^2 G(x{_{ {}_{\scriptstyle \alpha_1}}})},
$ where $$\begin{aligned}
G(x{_{ {}_{\scriptstyle \alpha_1}}})=\dfrac{1}{2}\int_{-1}^{1}\dfrac{d{{\lambda}}}{({{\lambda}}-x{_{ {}_{\scriptstyle \alpha_1}}})^2}=-\dfrac{1}{2}\left[\dfrac{1}{1-x{_{ {}_{\scriptstyle \alpha_1}}}}+\dfrac{1}{1+x{_{ {}_{\scriptstyle \alpha_1}}}}\right].\end{aligned}$$ Inserting the value of $x{_{ {}_{\scriptstyle \alpha_1}}}$ calculated in part $(1)$ yields $1/G(x{_{ {}_{\scriptstyle \alpha_1}}})=\dfrac{4 e^{2/{{\alpha}}_1}}{(e^{2/{{\alpha}}_1}-1)^2}$. The second statement of the lemma thus follows.
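As a quick sanity check (ours, not part of the proof), the closed forms of Lemma \[l-compute\] can be verified numerically against the Aronszajn–Donoghue conditions $F(x_{{{\alpha}}_1})=-1/{{\alpha}}_1$ and $\mu_{{{\alpha}}_1}\{x_{{{\alpha}}_1}\}=1/({{\alpha}}_1^2G(x_{{{\alpha}}_1}))$ for the uniform measure $\widetilde{\mu}_0$; the sample couplings below are arbitrary.

```python
import numpy as np

def F(x):
    # Borel transform of (1/2) chi_[-1,1](x) dx, evaluated off [-1,1]
    return 0.5 * np.log((1.0 - x) / (-1.0 - x))

def G(x):
    # (1/2) * int_{-1}^{1} (lambda - x)^{-2} d lambda = 1 / (x^2 - 1) for |x| > 1
    return 1.0 / (x * x - 1.0)

def eigenvalue_and_mass(alpha):
    # closed forms from Lemma [l-compute]
    e = np.exp(2.0 / alpha)
    x = (-1.0 - e) / (1.0 - e)
    mass = 4.0 * e / (alpha**2 * (e - 1.0) ** 2)
    return x, mass

for alpha in (0.3, 1.0, -2.0):
    x, m = eigenvalue_and_mass(alpha)
    # both residuals below should be numerically zero
    print(alpha, round(x, 6), round(F(x) + 1.0 / alpha, 12), round(m - 1.0 / (alpha**2 * G(x)), 12))
```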
Iterated Perturbations {#s-startiterate}
======================
This section explains the heart of our construction. After fixing ${{\alpha}}_1$, we set the stage for the successive perturbations by describing how the operator $H{_{ {}_{\scriptstyle \alpha_1}}}$ is perturbed. In particular, the specific choice of the direction of the second perturbation, ${{\varphi}}_2$, will allow us to calculate the total mass of the remaining absolutely continuous part of the spectrum. The difficulties encountered in the example in Section \[example\] are bypassed by applying a unitary transformation which exploits the choice of ${{\varphi}}_2$. After the transformation, computations from Aronszajn–Donoghue theory again resemble those of Lemma \[l-compute\].
Unitary equivalence and the remaining absolutely continuous spectrum {#ss-5.1}
--------------------------------------------------------------------
Recall the rank-one perturbation setup discussed in Section \[s-firstpert\], namely, $$\begin{aligned}
\widetilde{H}{_{ {}_{\scriptstyle \alpha_1}}}=M_t+\alpha_1\langle{\,\cdot\,},\widetilde{{{\varphi}}}_1\rangle_{L^2(\widetilde{\mu}_0)}\widetilde{{{\varphi}}}_1
\,\,\,\text{on}\,\,\,
L^2(\widetilde{\mu}_0)
\,\,\,\text{where}\,\,\,
d\widetilde{\mu}_0(x):=\dfrac{1}{2}~\chi{_{ {}_{\scriptstyle [-1,1]}}}(x)dx
\,\,\,\text{and}\,\,\,
\widetilde{{\varphi}}_1\equiv {\bf 1}.\end{aligned}$$
Fix the realization of ${{\alpha}}_1$ in accordance with the probability measure ${\mathbb{P}}$. Then, Aronszajn–Donoghue theory (Theorem \[t-AD\]) provides us with information about the spectral measure, $\mu{_{ {}_{\scriptstyle \alpha_1}}}$, of the perturbed operator $\widetilde{H}{_{ {}_{\scriptstyle \alpha_1}}}$ from the previous section. Furthermore, we know that the support of the absolutely continuous part of the measure is still equal to $[-1,1]$ due to the Kato–Rosenblum theorem, Theorem \[t-KR\]. The operator $\widetilde{H}{_{ {}_{\scriptstyle \alpha_1}}}$ is represented in the space $L^2(\mu{_{ {}_{\scriptstyle \alpha_1}}})$ as multiplication by the independent variable due to the spectral theorem.
By a slight abuse of notation, for future iterations we will still write $M_t$ for this operator to avoid an infinite sequence of independent variables. In particular, we have the unitary equivalence between operators $$\begin{aligned}
\label{e-SpecRep}
\left(\widetilde{H}{_{ {}_{\scriptstyle \alpha_1}}}\text{ on }L^2(\widetilde{\mu}_0)\right)
\,\,\,\,\sim\,\,\,\,
\left(M_t\text{ on }L^2({\mu}{_{ {}_{\scriptstyle \alpha_1}}})\right).\end{aligned}$$ Let $V{_{ {}_{\scriptstyle \alpha_1}}}: L^2(\widetilde{\mu}_0)\to L^2({\mu}{_{ {}_{\scriptstyle \alpha_1}}})$ denote the corresponding intertwining unitary operator such that $$V{_{ {}_{\scriptstyle \alpha_1}}} \widetilde{H}{_{ {}_{\scriptstyle \alpha_1}}}= M_t V{_{ {}_{\scriptstyle \alpha_1}}}
\qquad\text{and}\qquad
V{_{ {}_{\scriptstyle \alpha_1}}}\widetilde{{\varphi}}_1 \equiv \text{constant}.$$
In order to find the value of this constant, we notice that the operator $V{_{ {}_{\scriptstyle {{\alpha}}_1}}}$ is given explicitly in the Representation Theorem \[t-repthm\] [@LiawTreil1]. Hence, by construction we have that $V{_{ {}_{\scriptstyle {{\alpha}}_1}}}{{\mathbf{1}}}={{\mathbf{1}}}$, where the ${{\mathbf{1}}}$ vectors are understood to be in the appropriate $L^2$ spaces, $L^2(\widetilde{\mu}_0)$ and $L^2(\mu{_{ {}_{\scriptstyle {{\alpha}}_1}}})$ respectively.
Together with the unitary property of $V_{{{\alpha}}_1}$ we see that $$\begin{aligned}
\label{calc}
\|\widetilde{\mu}_0\|=\int_{\RR}d{\widetilde{\mu}_0}(t)
=\|{{\mathbf{1}}}\|{_{ {}_{\scriptstyle L^2(\widetilde{\mu}_0)}}}=\|V{_{ {}_{\scriptstyle {{\alpha}}_1}}}{{\mathbf{1}}}\|{_{ {}_{\scriptstyle L^2(\mu_{{{\alpha}}_1})}}} &=\|{{\mathbf{1}}}\|{_{ {}_{\scriptstyle L^2(\mu_{{{\alpha}}_1})}}}=\int_{\RR}d{\mu_{{{\alpha}}_1}}(t)=\|\mu_{{{\alpha}}_1}\|.\end{aligned}$$
In particular, we have $$\|(\mu_{{{\alpha}}_1}){_{\scriptstyle \text{\rm ac}}}\|
=
\|\widetilde{\mu}_0\| - \mu{_{ {}_{\scriptstyle \alpha_1}}}\{x{_{ {}_{\scriptstyle \alpha_1}}}\}
=\|(\widetilde{\mu}_0){_{\scriptstyle \text{\rm ac}}}\| - \mu{_{ {}_{\scriptstyle \alpha_1}}}\{x{_{ {}_{\scriptstyle \alpha_1}}}\}
=1-\mu{_{ {}_{\scriptstyle \alpha_1}}}\{x{_{ {}_{\scriptstyle \alpha_1}}}\}<1.$$
Researchers experienced in the field may feel that equation \[calc\] is contradictory to results in Clark theory, the basis of unitary rank-one perturbation theory. However, a central theme discovered while producing these results is that, in these aspects, self-adjoint theory and unitary theory are quite different. For instance, attempting to move this calculation into the unitary perturbation case with the Cayley transform involves an adjustment for the perturbation vector which causes some cancellations, see [@LiawTreil2 Lemma 5.1]. Furthermore, the condition in the representation theorem that requires $V{_{ {}_{\scriptstyle {{\alpha}}_1}}}{{\mathbf{1}}}={{\mathbf{1}}}$ is believed to be equivalent to the statement that $\theta(0)=0$, where $\theta$ is the generating characteristic function for the Clark measures. Hence, a contradiction with a result similar to [@CimaMathesonRoss Prop. 9.1.8] is not created.
The direction of the second perturbation vector
-----------------------------------------------
The second rank-one perturbation is now added in the direction of some particular function ${{\varphi}}_2\in{\mathcal{H}}$, to be chosen in accordance with Proposition \[p-f2\]. The task will be to observe properties of $$\label{e-example}
H{_{ {}_{\scriptstyle \alpha_2}}}=M_t+{{\alpha}}_2\langle{\,\cdot\,},{{\varphi}}_2\rangle{_{ {}_{\scriptstyle L^2(\mu{_{ {}_{\scriptstyle \alpha_1}}})}}}{{\varphi}}_2\quad\text{on }L^2(\mu{_{ {}_{\scriptstyle \alpha_1}}}).$$ To see that $H{_{ {}_{\scriptstyle \alpha_2}}}$ is a rank-two perturbation of the original operator, recall that, by \[e-SpecRep\], the operator $M_t$ on $L^2(\mu{_{ {}_{\scriptstyle \alpha_1}}})$ is the spectral representation of the original perturbed operator $H_{\alpha_1}$ on $L^2(\widetilde{\mu}_0)$.
As announced, we study $H{_{ {}_{\scriptstyle \alpha_2}}}$ by moving to an auxiliary space. The following proposition encapsulates the main idea of this work: the choice of ${{\varphi}}_2$ and of the unitary operator $U_1$ which passes the spectral calculations from $L^2(\mu_{{{\alpha}}_1})$ to a particular *auxiliary space* denoted by $L^2(\widetilde{\mu}_{{{\alpha}}_1})$.
\[p-f2\] By a choice of a unitary multiplication operator $U_1$ and a unit vector ${{\varphi}}_2\in L^2({\mu}{_{ {}_{\scriptstyle \alpha_1}}})$, we can arrange for $$\begin{aligned}
\label{e-perp}
{{\varphi}}_2\perp L^2[(\mu{_{ {}_{\scriptstyle \alpha_1}}}){_{\scriptstyle \text{\rm pp}}}],\qquad {{\varphi}}_2\perp {{\mathbf{1}}}\end{aligned}$$ (recall that $V{_{ {}_{\scriptstyle \alpha_1}}}\widetilde{{\varphi}}_1\equiv\,$constant, so that ${{\varphi}}_2\perp V{_{ {}_{\scriptstyle \alpha_1}}}\widetilde{{\varphi}}_1$), and for the following unitary mapping of spaces $$\begin{aligned}
\label{e-U1}
U_1: L^2[(\mu{_{ {}_{\scriptstyle \alpha_1}}}){_{\scriptstyle \text{\rm ac}}}]\to L^2(\widetilde{\mu}{_{ {}_{\scriptstyle \alpha_1}}}),
$$ as well as for $$\begin{aligned}
\label{e-tau1}
d(\widetilde{\mu}_{{{\alpha}}_1}){_{ {}_{\scriptstyle ac}}} (x)= \tau_1\chi{_{ {}_{\scriptstyle [-1,1]}}}(x)dx
\qquad \text{for some constant }\tau_1.\end{aligned}$$
Before we prove this proposition, we explain what the results mean for the successive construction.
- Equation \[e-perp\] ensures that the previously introduced eigenvalues remain unchanged and that the sequence of direction vectors will be orthogonal.
- Equations \[e-U1\] and \[e-tau1\] mean that the problem is once again simplified to one that resembles the setup in Section \[s-firstpert\]. So, by the spectral theorem, the rank-two perturbation $\widetilde{H}{_{ {}_{\scriptstyle \alpha_2}}}$ can be recast as $$\label{e-ranktwo}
\widetilde{H}{_{ {}_{\scriptstyle \alpha_2}}}=M_t+{{\alpha}}_2\langle{\,\cdot\,},\widetilde{{{\varphi}}}_2\rangle{_{ {}_{\scriptstyle L^2(\widetilde{\mu}_{{{\alpha}}_1})}}}\widetilde{{{\varphi}}}_2\quad\text{on }L^2(\widetilde{\mu}_{{{\alpha}}_1}).$$
Indeed, observe how the mass of the absolutely continuous spectrum decreases. For instance, let $(\mu{_{ {}_{\scriptstyle \alpha_2}}}){_{ {}_{\scriptstyle ac}}}$ denote the absolutely continuous part of the spectral measure corresponding to the rank-two perturbed operator $\widetilde{H}{_{ {}_{\scriptstyle \alpha_2}}}$. In light of the above discussion on the properties of $V_{{{\alpha}}_1}$, the mass of this measure can be calculated as $$\begin{aligned}
\|(\mu{_{ {}_{\scriptstyle \alpha_2}}}){_{ {}_{\scriptstyle ac}}}\|=\|(\mu{_{ {}_{\scriptstyle \alpha_1}}}){_{ {}_{\scriptstyle ac}}}\|-\mu{_{ {}_{\scriptstyle \alpha_2}}}\{x{_{ {}_{\scriptstyle \alpha_2}}}\} =1 - \mu{_{ {}_{\scriptstyle \alpha_1}}}\{x{_{ {}_{\scriptstyle \alpha_1}}}\}-\mu{_{ {}_{\scriptstyle \alpha_2}}}\{x{_{ {}_{\scriptstyle \alpha_2}}}\}.\end{aligned}$$ This is the main reason why we make the particular choices for the direction vectors. Thus, the proposition claims that our task becomes much simpler when we pass to an auxiliary measure $\widetilde{\mu}_{{{\alpha}}_1}$. Indeed, the numerical constant $\tau_1$ is related to those in Lemma \[l-compute\] via $\tau_1 = \|(\widetilde{\mu}_{{{\alpha}}_1}){_{ {}_{\scriptstyle ac}}}\|/2$.
Recall that the first perturbation only had the effect of creating an eigenvalue, so we may assume that $$(d\mu_{{{\alpha}}_1}){_{\scriptstyle \text{\rm ac}}}(t)=w_1(t)dt,$$ where $w_1(t)$ is some weight function. As was done in the example in Section \[example\], this weight function can be exactly determined by using Theorem \[t-AD\] and Lemma \[l-SIM\]. We omit this tedious calculation for brevity. Also, $x{_{ {}_{\scriptstyle \alpha_1}}}$ was an eigenvalue outside $[-1,1]$ created by ${{\varphi}}_1$, analyzed in Lemma \[l-compute\]. The corresponding eigenvector is supported at the single point $\left\{x{_{ {}_{\scriptstyle \alpha_1}}}\right\}$.
Define the function $h_2\in L^2[(\mu_{{{\alpha}}_1}){_{\scriptstyle \text{\rm ac}}}]$ in accordance with Lemma \[l-APP\] such that $$\begin{aligned}
h_2(x_{{{\alpha}}_1})=0, \text{ } |h_2(x)|=1 \text{ for } x\in[-1,1], \text{ and } h_2\perp V_{{{\alpha}}_1}\widetilde{{{\varphi}}}_1.\end{aligned}$$ Then define the perturbation vector $$\begin{aligned}
{{\varphi}}_2(t)=\left\{
\begin{array}{ll}\dfrac{h_2(t)}{\sqrt{2w_1(t)}}\quad& \text{for }t\in (-1,1),\\0&\text{else.}\end{array}\right.\end{aligned}$$ This definition ensures that the conditions of equation \[e-perp\] are satisfied, thanks to the function $h_2$. In particular, we have the orthogonal decomposition $$\label{e-pertbreakdown}
H{_{ {}_{\scriptstyle \alpha_2}}}=M_t\oplus[M_t+{{\alpha}}_2\langle{\,\cdot\,},{{\varphi}}_2\rangle_{L^2(\mu{_{ {}_{\scriptstyle \alpha_1}}})}{{\varphi}}_2]\quad\text{on }L^2[(\mu{_{ {}_{\scriptstyle \alpha_1}}}){_{\scriptstyle \text{\rm pp}}}]\oplus L^2[(\mu{_{ {}_{\scriptstyle \alpha_1}}}){_{\scriptstyle \text{\rm ac}}}],$$ so that the eigenvalue $x{_{ {}_{\scriptstyle \alpha_1}}}$ will remain unchanged by the second perturbation. Further examinations can thus be reduced to $M_t+{{\alpha}}_2\langle{\,\cdot\,},{{\varphi}}_2\rangle{{\varphi}}_2$ on $L^2[(\mu{_{ {}_{\scriptstyle \alpha_1}}}){_{\scriptstyle \text{\rm ac}}}]$.
The choice of ${{\varphi}}_2$ has several favorable consequences. The spectral representation of $H{_{ {}_{\scriptstyle \alpha_2}}}$ with respect to the vector ${{\varphi}}_2$ will be used to transform to the appropriate auxiliary space $L^2(\widetilde{\mu}_{{{\alpha}}_1})$ that will guarantee \[e-tau1\]. Let us carry this out.
The unitary operator that realizes this spectral representation is the multiplication operator $U_1: L^2[(\mu{_{ {}_{\scriptstyle \alpha_1}}}){_{\scriptstyle \text{\rm ac}}}]\to L^2(\widetilde{\mu}{_{ {}_{\scriptstyle \alpha_1}}})$ given by $$\begin{aligned}
\label{d-U}
U_1:=M_{\sqrt{w_1(t)/\tau_1}}.\end{aligned}$$ Operator $U_1$ is unitary because if $f\in L^2(\mu{_{ {}_{\scriptstyle \alpha_1}}}){_{\scriptstyle \text{\rm ac}}}$, then $$\begin{aligned}
\|f\|_{L^2[(\mu{_{ {}_{\scriptstyle \alpha_1}}}){_{\scriptstyle \text{\rm ac}}}]}^2&=\int_{-1}^{1}|f(t)|^2(d\mu{_{ {}_{\scriptstyle \alpha_1}}}){_{\scriptstyle \text{\rm ac}}}(t)=\int_{-1}^{1}|f(t)|^2w_1(t)dt \\
&=\int_{-1}^{1}\left|f(t)\sqrt{w_1(t)}\right|^2dt=\bigintss_{-1}^{1}\left|f(t)\sqrt{\dfrac{w_1(t)}{\tau_1}}\right|^2\tau_1 dt =\|U_1 f\|_{L^2(\widetilde{\mu}{_{ {}_{\scriptstyle \alpha_1}}})}^2.\end{aligned}$$
The mass of $x{_{ {}_{\scriptstyle \alpha_1}}}$ was explicitly calculated in Section \[s-firstpert\]. We define $$\begin{aligned}
\tau_1:=||(\mu_{{{\alpha}}_1}){_{\scriptstyle \text{\rm ac}}}||/2=\frac{1}{2}\int_{-1}^{1}(d\mu_{{{\alpha}}_1}){_{\scriptstyle \text{\rm ac}}}(t)=\frac{1}{2}\left(1-\mu_{{{\alpha}}_1}\{x{_{ {}_{\scriptstyle \alpha_1}}}\}\right)
=\frac{1}{2}\left(1-\dfrac{4 e^{2/{{\alpha}}_1}}{{{\alpha}}_1^2(e^{2/{{\alpha}}_1}-1)^2}\right).\end{aligned}$$ And the explicit verification of property $(3)$ then follows: $$\begin{aligned}
||(\widetilde{\mu}_{{{\alpha}}_1}){_{\scriptstyle \text{\rm ac}}}||=||1||_{L^2[(\widetilde{\mu}_{{{\alpha}}_1}){_{\scriptstyle \text{\rm ac}}}]}&=||U_1^{-1}(1)||_{L^2[(\mu_{{{\alpha}}_1}){_{\scriptstyle \text{\rm ac}}}]}\\
&=\int_{-1}^1\left|\sqrt{\dfrac{\tau_1}{w_1(t)}}\right|^2w_1(t)dt=\int_{-1}^1\tau_1dt=2\tau_1,\end{aligned}$$ where we used the fact that $U_1$, and hence $U_1^{-1}$, is unitary for the second equality. Finally, we note that ${{\varphi}}_2$ is a unit vector in $L^2[(\mu_{{{\alpha}}_1}){_{\scriptstyle \text{\rm ac}}}]$: $$\begin{aligned}
||{{\varphi}}_2||_{L^2[(\mu_{{{\alpha}}_1}){_{\scriptstyle \text{\rm ac}}}]}=\bigintsss_{-1}^1\left|\dfrac{h_2(t)}{\sqrt{2w_1(t)}}\right|^2w_1(t)dt=\int_{-1}^1\dfrac{1}{2}dt=1.\end{aligned}$$ The unitary operator $U_1$ then establishes $\widetilde{{{\varphi}}}_2$ as a unit vector in $L^2(\widetilde{\mu}_{{{\alpha}}_1})$, in accordance with equation . It can be seen that with these choices of $U_1$ and ${{\varphi}}_2$, the result follows.
Absolutely Continuous Spectrum under Infinite Iterations {#s-infinite}
========================================================
The iteration strategy is now essentially clear:
1. Begin with the setting as in Section \[s-firstpert\], that is, consider the multiplication by the independent variable on $L^2(\widetilde\mu_0)$ with $d\widetilde{\mu}_0(x):=\dfrac{1}{2}~\chi{_{ {}_{\scriptstyle [-1,1]}}}(x)dx$ and the perturbation direction $\widetilde{{\varphi}}_1\equiv {\bf 1}$ in $L^2(\widetilde\mu_0)$.
2. Take a probability distribution $\Omega$, and use it to fix a realization $(\alpha_1, \alpha_2, \hdots)$ of the random variables.
3. Carry out the first perturbation as in Subsection \[ss-5.1\]. Let $k=2$.
4. Take the unit vector in direction ${{\varphi}}_k\in L^2(\mu_{{{\alpha}}_{k-1}})$ in accordance with Proposition \[p-f2\].
5. By Proposition \[p-f2\], the new perturbation problem $\widetilde H{_{ {}_{\scriptstyle \alpha_k}}}$ on the auxiliary space $L^2(\widetilde{\mu}_{k-1})$ is as follows: the unperturbed operator equals multiplication by the independent variable on the $L^2$ space with measure $d\widetilde{\mu}_{k-1}(x)= \tau_{k-1} \chi{_{ {}_{\scriptstyle [-1,1]}}}(x)dx$, and the perturbation direction is the constant unit vector $\widetilde{{\varphi}}_{k}\in L^2(\widetilde{\mu}_{k-1})$.
6. Apply the spectral theorem to yield the operator $M_t$ on the space $L^2(\mu_{{{\alpha}}_k})$. This drops the “$\widetilde{\,\,\,}\,$" in the notation and replaces $k$ by $k+1$.
7. Repeat steps (4) through (7) with Proposition \[p-f2\] replaced by Corollary \[c-morevectors\] below.
Note that $2\tau_{k-1}$ equals the total absolutely continuous mass remaining after the $(k-1)$-th iteration, see Subsection \[ss-tau\] below. Keeping track of the sequence $\{\tau_k\}$ is the ultimate goal of our endeavors. Also see Section \[ss-diagram\] for an overview of the iterated process.
The $k$-th Perturbation Vector {#ss-fk}
------------------------------
After step (1) the problem is to consider the $k$-th rank-one perturbation. In analogy to equation we now consider $$H{_{ {}_{\scriptstyle \alpha_k}}}=M_t+{{\alpha}}_k\langle{\,\cdot\,},{{\varphi}}_k\rangle{_{ {}_{\scriptstyle L^2(\mu{_{ {}_{\scriptstyle \alpha_{k-1}}}})}}}{{\varphi}}_k\quad\text{on }L^2(\mu{_{ {}_{\scriptstyle \alpha_{k-1}}}}).$$
Let $\{f_1,\hdots,f_{k-1}\}$ denote the vectors in $L^2(\mu{_{ {}_{\scriptstyle \alpha_{k-1}}}})$ that correspond to the directions of previous perturbations, which were chosen after the previous $k-1$ steps. In other words, we let $$\left(f_n\in L^2(\mu{_{ {}_{\scriptstyle \alpha_{n-1}}}})\right)
\quad
\sim\quad
\left({{\varphi}}_n\in L^2(\mu{_{ {}_{\scriptstyle \alpha_{n}}}})\right)
\qquad\text{for}\qquad
n=1,\hdots,k-1,$$ where $\sim$ refers to the unitary equivalence in accordance with appropriate composition (different for each $n$) of unitary transformations. Recall that $M_t$ in $L^2(\widetilde{\mu}_{{{\alpha}}_{k-1}})$ corresponds to the previous rank-$(k-1)$ perturbation in its spectral representation. The following corollary to the proof of Proposition \[p-f2\] shows that the direction of the $k$-th perturbation vector ${{\varphi}}_k$ can be chosen analogously.
\[c-morevectors\] We can choose ${{\varphi}}_k\in L^2({\mu}{_{ {}_{\scriptstyle \alpha_{k-1}}}})$ so that $$\begin{aligned}
\label{e-perp2}
{{\varphi}}_k\perp L^2[(\mu{_{ {}_{\scriptstyle \alpha_n}}}){_{\scriptstyle \text{\rm pp}}}]
\text{ and }{{\varphi}}_k\perp f_n\text{ for all }n=1, \hdots, k-1,\end{aligned}$$ and we can choose a unitary multiplication operator $U_{k-1}: L^2[(\mu{_{ {}_{\scriptstyle \alpha_{k-1}}}}){_{\scriptstyle \text{\rm ac}}}]\to L^2(\widetilde{\mu}{_{ {}_{\scriptstyle \alpha_{k-1}}}})$. The rank-$k$ perturbation of interest then becomes $$\begin{aligned}
\label{e-rankk}
\widetilde{H}{_{ {}_{\scriptstyle \alpha_k}}}=M_t+{{\alpha}}_k\langle{\,\cdot\,},\widetilde{{{\varphi}}}_k\rangle{_{ {}_{\scriptstyle L^2(\widetilde{\mu}_{{{\alpha}}_{k-1}})}}}\widetilde{{{\varphi}}}_k\quad&\text{on }L^2(\widetilde{\mu}_{{{\alpha}}_{k-1}}),\\
d\widetilde{\mu}_{k-1}(x)&\equiv\tau_{k-1}\chi{_{ {}_{\scriptstyle [-1,1]}}}(x)dx.\end{aligned}$$
As in the remark after Proposition \[p-f2\], we note that $\tau_{k-1} = m_{k-1}/2 = \|(\widetilde{\mu}_{{{\alpha}}_{k-1}}){_{ {}_{\scriptstyle ac}}}\|/2.$
We mimic the proof of Proposition \[p-f2\]. Consider $$\begin{aligned}
{{\varphi}}_k(t):= \begin{cases} 0 & \text{on }\RR \setminus [-1,1], \\
\dfrac{h_k(t)}{\sqrt{2w_{k-1}(t)}} \qquad& \text{on }(-1,1), \end{cases}\end{aligned}$$ where $(d\mu_{k-1}){_{\scriptstyle \text{\rm ac}}}(t)=w_{k-1}(t)dt$. The function $h_k(t)$ is such that $|h_k(t)|=1$ on $(-1,1)$, and is chosen by Lemma \[l-APP\] with $$d\eta(t) = \dfrac{d{\mu}{_{ {}_{\scriptstyle \alpha_{k-1}}}}(t)}{\sqrt{2w_{k-1}(t)}}\,,
\quad
f_n=f_n, \text{ for }n=1,\hdots, k-1,
\quad
\text{and}
\quad h=h_k.$$
This implies \[e-perp2\]. Define the multiplication operator $U_{k-1}: L^2[(\mu{_{ {}_{\scriptstyle \alpha_{k-1}}}}){_{\scriptstyle \text{\rm ac}}}]\to L^2(\widetilde{\mu}_{k-1})$ to be $$\begin{aligned}
U_{k-1}:=M_{\sqrt{w_{k-1}(t)}/h_k(t)}.\end{aligned}$$ If we denote $U_{k-1}{{\varphi}}_k$ by $\widetilde{{{\varphi}}}_k$, we then have that $\|\widetilde{{{\varphi}}}_k\|_{L^2(\widetilde{\mu}_{k-1})}=1$. Property \[e-rankk\] is obtained by an application of the spectral theorem, as in the remark after Proposition \[p-f2\].
Remaining Absolutely Continuous Spectrum after $k$ Iterations {#ss-tau}
-------------------------------------------------------------
The desired byproduct of this construction is now achieved. The specific choice of ${{\varphi}}_k$ at each step in Corollary \[c-morevectors\] allows the proof of Lemma \[l-compute\] to be generalized to each iteration because $$\begin{aligned}
G_k(x)=\int_{\RR}\dfrac{d\widetilde{\mu}_k(t)}{(t-x)^2}=\tau_k\int_{-1}^{1}\dfrac{dt}{(t-x)^2}<\infty
\quad\text{for}\quad
x\notin [-1,1].
\end{aligned}$$ This means that Aronszajn–Donoghue theory applies and the essential formulas from Section \[s-startiterate\] can be generalized.
Recall that $d\widetilde{\mu}_0(x)=\tau_0\chi{_{ {}_{\scriptstyle [-1,1]}}}(x)dx$, with $\tau_0=1/2$ as above. By equation \[e-rankk\] we have $$\begin{aligned}
(d\widetilde{\mu}_{{{\alpha}}_k}){_{ {}_{\scriptstyle ac}}}(x)=\tau_k\chi{_{ {}_{\scriptstyle [-1,1]}}}(x)dx.\end{aligned}$$ We determine $\tau_k$ in a similar way as $\tau_1$ in Section \[s-startiterate\]. Specifically, $$\begin{aligned}
\tau_k=\dfrac{\|(\mu_{{{\alpha}}_{k-1}}){_{ {}_{\scriptstyle ac}}}\|}{2}-\dfrac{\widetilde{\mu}_{{{\alpha}}_k}\{x{_{ {}_{\scriptstyle {{\alpha}}_k}}}\}}{2}.\end{aligned}$$
Again, the eigenvalue $x{_{ {}_{\scriptstyle \alpha_k}}}$, created by the perturbation ${{\alpha}}_k$, is unaffected by subsequent perturbations. However, the calculations for $\widetilde{\mu}_{{{\alpha}}_k}\{x{_{ {}_{\scriptstyle {{\alpha}}_k}}}\}$ will involve the constant $\tau_{k-1}$ from the previous step. Hence, the formulas in Section \[s-startiterate\] are recursive and need to be altered slightly: $$\begin{aligned}
\|(\widetilde{\mu}_{{{\alpha}}_k}){_{ {}_{\scriptstyle ac}}}\|=\|(\widetilde{\mu}_{{{\alpha}}_{k-1}}){_{ {}_{\scriptstyle ac}}}\|-\widetilde{\mu}_{{{\alpha}}_k}\{x{_{ {}_{\scriptstyle \alpha_{k}}}}\}
=1-\sum_{n=1}^{k}\widetilde{\mu}_{{{\alpha}}_n}\{x_{{{\alpha}}_n}\}
\end{aligned}$$ and together with item (2) of Lemma \[l-compute\] we have shown:
\[p-recursiveac\] The remaining absolutely continuous spectrum after $k$ iteration steps is $$\|(\widetilde{\mu}_{{{\alpha}}_k}){_{ {}_{\scriptstyle ac}}}\|=1-\sum_{n=1}^{k}\dfrac{e^{1/{{\alpha}}_n\tau_{n-1}}}{{{\alpha}}_n^2\tau_{n-1}(e^{1/{{\alpha}}_n\tau_{n-1}}-1)^2}\,.$$
The recursive process used to determine the terms in the sum is illustrated by: $$\tau_{k-1}\quad\to\quad \widetilde{\mu}_{{{\alpha}}_k}\{x{_{ {}_{\scriptstyle {{\alpha}}_k}}}\}\quad\to\quad \|(\widetilde{\mu}_{{{\alpha}}_k}){_{ {}_{\scriptstyle ac}}}\| = 2 \tau_k.$$ In particular, $\tau_k,$ $k\in {{\mathbb N}}$, depends on the realization of all previously chosen perturbation parameters $\alpha_1, \alpha_2, \hdots, \alpha_{k}$. This makes the expression in Proposition \[p-recursiveac\] too cumbersome to work with in many cases.
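To make the recursion concrete, it can be traced numerically. The following minimal sketch (ours, purely illustrative, written in Python) simply iterates the point-mass formula of Proposition \[p-recursiveac\] starting from $\tau_0=1/2$; all names are arbitrary.

```python
import math

def point_mass(alpha, tau_prev):
    # Summand of Proposition [p-recursiveac], rewritten for numerical convenience as
    # 1 / (alpha^2 * tau * (e^x - 2 + e^{-x})) with x = 1/(|alpha| tau);
    # the expression is even in alpha, so the sign of the coupling is irrelevant.
    x = 1.0 / (abs(alpha) * tau_prev)
    if x > 700.0:
        return 0.0  # e^x would overflow; the point mass is numerically negligible here
    return 1.0 / (alpha ** 2 * tau_prev * (math.exp(x) - 2.0 + math.exp(-x)))

def tau_sequence(alphas, tau0=0.5):
    # tau_{k-1} -> point mass -> tau_k = tau_{k-1} - (point mass)/2
    taus = [tau0]
    for a in alphas:
        taus.append(taus[-1] - point_mass(a, taus[-1]) / 2.0)
    return taus
```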
Rademacher Potential {#ss-rademacher}
--------------------
The concepts developed in the previous two Subsections can be applied with different choices of the perturbation parameters. We wish to determine whether certain iterated operators, or classes of them, localize ($\lim_{k\to\infty}\tau_k=0$) or delocalize ($\lim_{k\to\infty}\tau_k>0$) and what conditions are necessary and/or sufficient for such behavior. It is now clear that much of the previous construction can be easily adapted to various scenarios, so we make a slightly different choice of our starting measure. Let the starting measure be chosen as $d\widetilde{\mu}_0(x)=\frac{1}{2c}\chi{_{ {}_{\scriptstyle [-c,c]}}}(x)dx$. Recall that the choice of a constant function here is possible as long as the beginning weight function is in $L^1{_{\scriptstyle \text{\rm loc}}}(-c,c)$. This is because a unitary operator can then be applied, as described in Section \[s-startiterate\], to begin with a constant weight function. Hence, it is the interval of support that is the natural quantity to generalize.
The simplest scenario is to start with the perturbation parameters given by $\{{{\alpha}}_n\}_{n=1}^k$ chosen with respect to a Rademacher distribution, i.e. ${{\alpha}}_n=\pm c$. These parameters collectively take the place of the potential in the description of Anderson-type Hamiltonians, and in particular the discrete Schrödinger operator, described in Section \[ss-01\]. Consequently, we refer to the choice of ${{\alpha}}_n=\pm c$, $n=1,\dots,k$, as defining a Rademacher potential.
\[t-Rademacher\] The operator constructed in the previous three sections, when the $\{{{\alpha}}_k\}$’s are chosen i.i.d. with respect to the probability measure ${{\mathbb{P}}}=\frac{1}{2}\delta{_{ {}_{\scriptstyle -c}}}+\frac{1}{2}\delta{_{ {}_{\scriptstyle c}}}$ (Rademacher potential), localizes for any fixed disorder $c>0$.
Proposition \[p-recursiveac\] with ${{\alpha}}_n=\pm c$ for all $n$ reads: $$\begin{aligned}
\|(\widetilde{\mu}_{{{\alpha}}_k}){_{ {}_{\scriptstyle ac}}}\|=1-\dfrac{1}{c^2}\sum_{n=1}^{k}\dfrac{ e^{1/c\tau_{n-1}}}{\tau_{n-1}(e^{1/c\tau_{n-1}}-1)^2}.\end{aligned}$$ We are mainly concerned with the exact limiting value of this series. Its convergence is clear: the terms are positive, so the partial sums increase with $k$, and they are bounded above by 1 because the absolutely continuous part of $\widetilde{\mu}_{{{\alpha}}_k}$ cannot become negative.
The summand can be rearranged by expanding the denominator and factoring out a term of $e^{1/c\tau_{n-1}}$ to yield $$\begin{aligned}
\dfrac{ e^{1/c\tau_{n-1}}}{\tau_{n-1}(e^{1/c\tau_{n-1}}-1)^2} = \dfrac{1}{\tau_{n-1}(e^{1/c\tau_{n-1}}-2+e^{-1/c\tau_{n-1}})}.\end{aligned}$$ Hence, convergence of the series necessitates $$\begin{aligned}
\lim_{n\to\infty}[\tau_{n-1}(e^{1/c\tau_{n-1}}-2+e^{-1/c\tau_{n-1}})]=\infty.\end{aligned}$$ In this specific scenario, the operator began with a total mass of 1. This implies $0\leq \tau_{n-1}\leq 1$. Hence, for fixed $c$, we have: $$\begin{aligned}
\lim_{n\to\infty}[\tau_{n-1}(e^{1/c\tau_{n-1}}-2+e^{-1/c\tau_{n-1}})]=\infty&\iff\lim_{n\to\infty}e^{1/c\tau_{n-1}}=\infty \\
&\iff\lim_{n\to\infty}\tau_{n-1}=0.\end{aligned}$$ The first if and only if statement can be verified by noticing that the exponential is “stronger” than the $\tau_{n-1}$ term. Also, $e^{-1/c\tau_{n-1}}$ remains bounded for $0\leq \tau_{n-1}\leq 1$ by $e^{-1/c}$. Therefore, we conclude that if $\tau_{k-1}\not\to 0$ as $k\to\infty$, then the sum does not converge. This is a contradiction, so it must be that $\tau_{k-1}\to 0$ as $k\to\infty$, and the operator localizes.
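As a quick numerical illustration of Theorem \[t-Rademacher\], running the sketch given after Proposition \[p-recursiveac\] with a Rademacher realization (the disorder value below is an arbitrary choice) produces a sequence $\tau_k$ that decreases monotonically, in line with localization:

```python
import random

random.seed(0)
c = 1.0                                          # illustrative disorder strength
alphas = [random.choice((-c, c)) for _ in range(500)]
taus = tau_sequence(alphas)                      # from the sketch above
print(taus[1], taus[10], taus[100], taus[500])   # monotonically decreasing towards 0
```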
Bounds for Non-Orthogonal Perturbations {#s-northo}
=======================================
In the previous sections, the recursive nature of our construction was necessary in order to ensure that each perturbation was orthogonal to previous directions and so that we were able to explicitly carry out successive computations. This guaranteed that the absolutely continuous spectrum was not increased via reabsorption of mass from the point masses outside of the interval $[-1,1]$.
In an effort to escape these restrictions, we now consider the case where a perturbation in a general direction ${{\varphi}}$ is applied to an operator. Only its overlap with the point masses of the unperturbed operator is assumed to be known. The results are contributions to abstract rank-one perturbation theory. From this perspective, the estimates provide fundamental bounds on the effects of a single perturbation. They are also useful for creating examples, as they identify which factors influence the shifting of mass from one type of spectrum to another.
Such perturbations can be interpreted as representative of one that would arise from the constructive iteration scheme after $N$ steps, in which the new direction vector ${{\varphi}}_k$ would then not be assumed orthogonal to the previous perturbation vectors. Here, the acquired bounds are not sufficiently sharp to allow for the choice of the perturbation parameters with respect to probability measures other than Rademacher.
Theorems \[t-worstcase\] and \[t-singular\] show the maximum/minimum amounts of absolutely continuous and pure point spectrum that can be created/destroyed by such a perturbation. The method of proof requires only knowledge of Aronszajn–Donoghue theory (Theorem \[t-AD\]) and the integral transforms therein. In particular, the theorems represent a worst-case scenario in each situation, similar to using Rademacher potentials in Section \[ss-rademacher\]. Surprisingly, sufficient conditions to gain pure point or lose absolutely continuous spectrum are also achieved, see Proposition \[p-worstcase\] and the comments after the theorems.
\[t-worstcase\] Let $\mu\in M_{+}(\RR)$ be such that $$d\mu(x)=f(x)\chi{_{\scriptstyle \text{\rm [-a,a]}}}(x)\text{dx}+d\mu{_{\scriptstyle \text{\rm s}}}(x)
\quad\text{where}\quad \mu{_{\scriptstyle \text{\rm s}}}=\sum_{n=1}^N {{\alpha}}_n \delta_{x_n},$$ as well as $f\in L^2[-a,a]$, $\|\mu\|=1$ and $\sum_{n=1}^N{{\alpha}}_n=c$. Let ${{\varphi}}\in L^2(\mu)$ be a unit vector with $$\sum_{n=1}^N{{\alpha}}_n|{{\varphi}}(x_n)|^2<\varepsilon.$$ Let the spectral measure of the self-adjoint operator $M_t+\lambda\langle{\,\cdot\,},{{\varphi}}\rangle{_{ {}_{\scriptstyle {L^2(\mu)}}}}{{\varphi}}$ on $L^2(\mu),$ with respect to ${{\varphi}}$, be denoted by $\mu_{\lambda}$. Assume that I is a compact interval not including 0. Then for all $\lambda\in$I, there exists real $k>0$ such that the spectral measure $\mu_{\lambda}$ satisfies $$\|(\mu_{\lambda}){_{\scriptstyle \text{\rm ac}}}\|\leq\|\mu{_{\scriptstyle \text{\rm ac}}}\|-k.$$
The theorem states that there is a minimum amount of absolutely continuous spectrum lost after a general perturbation. Note also that $\varepsilon\leq 1$ is required due to the assumption $\|\mu\|=1$. More explicit estimates for $k$, and its dependence on $\varepsilon$ and $\lambda$, are given in Proposition \[p-worstcase\].
In order to simplify notation, all inner products are taken in $L^2(\mu)$ unless otherwise stated. Assume the hypotheses on $f$, $\mu$ and ${{\varphi}}$ above. Decompose both ${{\varphi}}$ and $\mu$ into two parts, one concerning the absolutely continuous spectrum on the interval $[-a,a]$, and the other concerning the $N$ point masses. Hence, we define $$\begin{aligned}
\widetilde{{{\varphi}}}={{\varphi}}\chi{_{\scriptstyle \text{\rm [-a,a]}}}, \ \ {{\varphi}}_p={{\varphi}}-\widetilde{{{\varphi}}}, \ \ d\mu_{ac}(x)=f\chi{_{\scriptstyle \text{\rm [-a,a]}}}(x)dx, \ \text{and} \ \mu_p=\sum_{n=1}^N{{\alpha}}_n \delta_{x_n}.\end{aligned}$$ The rank-one perturbation $\lambda\langle{\,\cdot\,},{{\varphi}}\rangle{{\varphi}}$ can now be broken down in terms of $\widetilde{{{\varphi}}}$ and ${{\varphi}}_p$ so that the interaction between each part of the perturbation and the absolutely continuous spectrum can be estimated. We begin by estimating the norm of the perturbation: $$\begin{aligned}
\notag
\|\lambda\langle{\,\cdot\,},{{\varphi}}\rangle{{\varphi}}\|
&=
|\lambda|
\,
\|\langle{\,\cdot\,},(\widetilde{{{\varphi}}}+{{\varphi}}_p) \rangle(\widetilde{{{\varphi}}}+{{\varphi}}_p)\| \\
\label{e-bound}
&\leq |\lambda|
\,
\left(\|\langle{\,\cdot\,},\widetilde{{{\varphi}}}\rangle\widetilde{{{\varphi}}}\|+\|\langle{\,\cdot\,}, \widetilde{{{\varphi}}}\rangle{{\varphi}}_p\|+\|\langle{\,\cdot\,},{{\varphi}}_p\rangle\widetilde{{{\varphi}}}\|+\|\langle{\,\cdot\,},{{\varphi}}_p\rangle{{\varphi}}_p\|\right)\end{aligned}$$
The four terms in the inequality \[e-bound\] will be discussed and evaluated separately. The first term, $|\lambda|\|\langle{\,\cdot\,},\widetilde{{{\varphi}}}\rangle\widetilde{{{\varphi}}}\|$, involves only $\widetilde{{{\varphi}}}$, so the perturbation by this factor has no relevance to the point masses and is in fact orthogonal to $\mu_p$.
This term recreates a setting from our earlier results, where the perturbation vector was not concerned with the previous point masses due to orthogonality. The perturbation therefore has the effect of creating a single eigenvalue in the new spectral measure, determined solely by $\lambda$. The spectral measure was then a constant on an interval thanks to our use of an auxiliary space and a choice of ${{\varphi}}$, so the mass of this eigenvalue was easy to compute. We have no such luxury here, as the choice of $f$ has only the restriction that $f\in L^2[-a,a]$. Therefore, we solace ourselves with the ability to prove that there is a minimum for the mass of this eigenvalue, when $\lambda$ is chosen from a compact interval not containing 0. Estimates for this value are attained in Proposition \[p-worstcase\] below, with the corresponding loss of sharpness to the global estimate of $k$.
Let $h:\RR\to\RR$ be defined by $h(\lambda)=\mu_{\lambda}\{x_{\lambda}\}$, the mass of the created eigenvalue $x_{\lambda}$ in the spectral representation $\mu_{\lambda}$. The explicit calculation of the location $x_\lambda\in {{\mathbb R}}\backslash [-a,a]$ and the strength $\mu_{\lambda}\{x_{\lambda}\}$ of the new eigenvalue is given by Aronszajn–Donoghue Theory. The process defining the function $h(\lambda)$ is thus given by two integrals: $$\begin{aligned}
\label{e-FG}
\int_{-a}^a\dfrac{f(t)dt}{t-x_\lambda}=-\dfrac{1}{\lambda} \hspace{7mm}\text{and}\hspace{7mm} \mu_{\lambda}\{x_{\lambda}\}=G(x_{\lambda})=\int_{-a}^a\dfrac{f(t)dt}{(t-x_{\lambda})^2}.\end{aligned}$$
Both integrals are finite and yield $C^1$ functions because $f\in L^2[-a,a]$. We conclude that $h(\lambda)$ is itself continuous, as the composition of continuous functions. Note that if $\widetilde{{{\varphi}}}$ were acting on $\mu$, not just $\mu{_{\scriptstyle \text{\rm ac}}}$, then $h(\lambda)$ would not necessarily be continuous, as the point masses interfere with the integral. If $\lambda$ is chosen from a compact interval $I\subset\RR\setminus\{0\}$ then $h(\lambda)$ must achieve a minimum value on $I$. For the remainder of the paper, we use the definition $$\begin{aligned}
d:=\min_{\lambda\in I}h(\lambda).\end{aligned}$$
The second and fourth terms on the right hand side of the inequality \[e-bound\] are not relevant. Indeed, those two factors deal only with changes to the pure point spectrum, as the perturbations are in the “direction” of ${{\varphi}}_p$. Hence, the individual perturbations do not cause any change to the unperturbed absolutely continuous spectrum due to the orthogonality of $\mu_p$ and $\mu{_{\scriptstyle \text{\rm ac}}}$.
The third term on the right hand side of the inequality \[e-bound\], $|\lambda|\|\langle{\,\cdot\,},{{\varphi}}_p\rangle\widetilde{{{\varphi}}}\|\leq|\lambda|\|{{\varphi}}_p\|\|\widetilde{{{\varphi}}}\|$, can be handled with our assumptions and the above calculations. Indeed, ${{\varphi}}_p$ only interacts with $\mu_p$ by definition, so $$\begin{aligned}
\|{{\varphi}}_p\|^2&=\langle{{\varphi}}_p,{{\varphi}}_p\rangle{_{ {}_{\scriptstyle L^2(\mu_p)}}}=
\sum_{n=1}^N{{\alpha}}_n|{{\varphi}}_p(x_n)|^2=\sum_{n=1}^N{{\alpha}}_n |{{\varphi}}(x_n)|^2\leq\varepsilon.\end{aligned}$$ Similar reasoning yields that $\|\widetilde{{{\varphi}}}\|=\sqrt{1-\varepsilon}$. The estimate for this term is then $\|\langle{\,\cdot\,},{{\varphi}}_p\rangle\widetilde{{{\varphi}}}\|\leq \sqrt{\varepsilon}\sqrt{1-\varepsilon}$.
Overall, we observe that the first term is how much the absolutely continuous spectrum is decreased by the creation of the new eigenvalue. The third term is correcting for what happens to the point masses, as there is no guarantee that some of the mass in $\mu_p$ doesn’t reenter the interval $[-a,a]$ due to the effect of ${{\varphi}}$. Moreover, the intertwining operator $V_{\lambda}$ for the spectral theorem is unitary so the essential spectrum remains unchanged under our compact perturbation and there is no total mass lost. We can now conclude $$\begin{aligned}
\|(\mu_{\lambda}){_{\scriptstyle \text{\rm ac}}}\|\leq\|\mu{_{\scriptstyle \text{\rm ac}}}\|-\left[d-|\lambda|\sqrt{\varepsilon}\sqrt{1-\varepsilon}\right].\end{aligned}$$ The theorem follows.
In general, we cannot assume that $\|(\mu_\lambda){_{\scriptstyle \text{\rm ac}}}\|\le\|\mu{_{\scriptstyle \text{\rm ac}}}\|.$ Therefore, it is imperative that $d-|\lambda|\sqrt{\varepsilon}\sqrt{1-\varepsilon}>0$ for the previous result to not be vacuous. Let $|\lambda|{_{\scriptstyle \text{\rm max}}}$ denote the maximum value of $|\lambda|$ on $I$. The desired inequality is achieved when $$\begin{aligned}
d>|\lambda|{_{\scriptstyle \text{\rm max}}}\sqrt{\varepsilon-\varepsilon^2}.\end{aligned}$$ It is noteworthy that $d$ was constructed to depend upon both $\lambda$ and the a.c. spectral mass, which directly relates to $c$ and $\varepsilon$. The value of $d$ can thus be estimated by these constants.
\[p-worstcase\] Let $\lambda$ be chosen from a compact interval $I\subset {{\mathbb R}}$ not including 0. Then $$\begin{aligned}
d\geq\dfrac{1-c}{(a+|\lambda|{_{\scriptstyle \text{\rm max}}}(1-\varepsilon)+1)^2},\end{aligned}$$ where $|\lambda|{_{\scriptstyle \text{\rm max}}} = \max_{\lambda\in I}|\lambda|$, and $d$, $c$ and $\varepsilon$ are as in the proof of Theorem \[t-worstcase\].
Without loss of generality, we can assume that $f$ is positive on the interval $[-a,a]$ and that $\lambda>0$. This means that the eigenvalue created by the $\lambda$ perturbation by $\widetilde{{{\varphi}}}$ will be to the right of the interval, i.e. $x_{\lambda}>a$. Also, take $\lambda=|\lambda|{_{\scriptstyle \text{\rm max}}}$. Recall the formulas in equation \[e-FG\]. In order to minimize $\mu_{\lambda}\{x_{\lambda}\}$, we minimize the kernel of the integral operator $G(x)$. This minimum occurs when $f$ is represented by a delta mass at the endpoint $\{-a\}$ so that the eigenvalue will fall as close to $a$ as possible. This delta mass is of strength $1-c$ by necessity and the integration $G(x_{\lambda})$ can be computed to find $$\begin{aligned}
\mu_{\lambda}\{x_{\lambda}\}\geq\dfrac{1-c}{(x_{\lambda}+1)^2}.\end{aligned}$$ To minimize this lower bound, we must maximize the value of $x_{\lambda}$. The distance of $x_{\lambda}$ from the endpoint $a$ must be less than $$\begin{aligned}
\|\lambda\langle{\,\cdot\,},\widetilde{{{\varphi}}}\rangle\widetilde{{{\varphi}}}\|=\lambda(1-\varepsilon).\end{aligned}$$ This means that $x_{\lambda}\leq a+\lambda(1-\varepsilon)$, and we can conclude that $$\begin{aligned}
\mu_{\lambda}\{x_{\lambda}\}\geq\dfrac{1-c}{(x_{\lambda}+1)^2}\geq\dfrac{1-c}{(a+\lambda(1-\varepsilon)+1)^2},\end{aligned}$$ as desired.
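For a rough sense of when these estimates carry content, the following sketch (with purely illustrative parameter values) compares the lower bound of Proposition \[p-worstcase\] with the non-vacuity condition $d>|\lambda|{_{\scriptstyle \text{\rm max}}}\sqrt{\varepsilon-\varepsilon^2}$ discussed before the proposition:

```python
import math

def d_lower_bound(a, c, eps, lam_max):
    # Lower bound on d from Proposition [p-worstcase]
    return (1.0 - c) / (a + lam_max * (1.0 - eps) + 1.0) ** 2

def bound_is_nonvacuous(a, c, eps, lam_max):
    # Theorem [t-worstcase] has content when d exceeds |lambda|_max * sqrt(eps - eps^2)
    return d_lower_bound(a, c, eps, lam_max) > lam_max * math.sqrt(eps - eps ** 2)

# Illustrative choice: small overlap with the point masses keeps the estimate non-vacuous
print(bound_is_nonvacuous(a=1.0, c=0.1, eps=1e-3, lam_max=0.5))
```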
This approximation of $d$ can be applied to the case where $f$ is a constant, which occurred at each step of the iterative construction.
\[t-rademacherexample\] Let $\mu\in M_{+}(\RR)$ be such that $$d\mu(t)=f(t)\chi{_{\scriptstyle \text{\rm [-a,a]}}}(t)\text{dt}+d\mu{_{\scriptstyle \text{\rm s}}}(t)
\quad\text{where}\quad \mu{_{\scriptstyle \text{\rm s}}}=\sum_{n=1}^N {{\alpha}}_n \delta_{x_n},$$ where we define $f(t)=w_N(t)$ such that $f\in L^1{_{\scriptstyle \text{\rm loc}}}$, $\|\mu\|=1$ and $\sum_{n=1}^N{{\alpha}}_n=c$. Furthermore, let ${{\varphi}}\in L^2(\mu)$ such that ${{\varphi}}|{_{\scriptstyle \text{\rm [-a,a]}}}(t)=1/\sqrt{2w_N(t)}$, $\|{{\varphi}}\|=1$ and $\sum_{n=1}^N{{\alpha}}_n|{{\varphi}}(x_n)|^2<\varepsilon$. Assume that I is a compact interval not including 0. Then for all $\lambda\in$I, we have the following inequality $$\|(\mu_{\lambda}){_{\scriptstyle \text{\rm ac}}}\|\leq\|\mu{_{\scriptstyle \text{\rm ac}}}\|-\left[\dfrac{e^{1/\lambda\tau_N}}{\lambda^2\tau_N(e^{1/\lambda\tau_N}-1)^2}-\dfrac{e^{1/\lambda\tau_N}\sqrt{\varepsilon}}{\lambda\tau_N(e^{1/\lambda\tau_N}-1)^2}\right].$$
See the proof of the previous Theorem. In this case we have the assumption that $$\widetilde{{{\varphi}}}(t)=\dfrac{1}{\sqrt{2w_N(t)}}.$$ The notation $\widetilde{{{\varphi}}}$ should not be confused with the image of ${{\varphi}}$ under a unitary operator as in previous Sections. However, recall that $w_N(t)$ is simply representing a weight function and matches the notation developed in Section \[s-infinite\]. When $\lambda$ is chosen, i.e. when Rademacher potentials are used, it is then possible to explicitly calculate the value of $d$. If a choice of $\lambda$ is not imposed, simply pick $\lambda$ in the formula to be $|\lambda|{_{\scriptstyle \text{\rm max}}}$ to obtain a general bound.
Similarly, we deduce how the singular part is affected by the perturbation at a single step.
\[t-singular\] Let $\mu\in M_{+}(\RR)$ be such that $$d\mu(t)=f(t)\chi{_{\scriptstyle \text{\rm [-a,a]}}}(t)\text{dt}+d\mu{_{\scriptstyle \text{\rm s}}}(t)
\quad\text{where}\quad \mu{_{\scriptstyle \text{\rm s}}}=\sum_{n=1}^N {{\alpha}}_n \delta_{x_n},$$ where $f\in L^2(m)$, $\|\mu\|=1$ and $\sum_{n=1}^N{{\alpha}}_n=c$. Furthermore, let ${{\varphi}}\in L^2(\mu)$, $\|{{\varphi}}\|=1$ and $$\sum_{n=1}^N{{\alpha}}_n|{{\varphi}}(x_n)|^2<\varepsilon.$$ Let the spectral measure of the self-adjoint operator $$M_t+\lambda\langle{\,\cdot\,},{{\varphi}}\rangle{_{ {}_{\scriptstyle {L^2(\mu)}}}}{{\varphi}}\quad\text{on}\quad L^2(\mu),$$ with respect to ${{\varphi}}$, be denoted by $\mu_{\lambda}$. Assume that I is a compact interval not including 0. Then for all $\lambda\in$I, there exists $k\in\RR$ such that the spectral measure $\mu_{\lambda}$ satisfies $$\|(\mu_{\lambda}){_{\scriptstyle \text{\rm s}}}\|\geq\|\mu{_{\scriptstyle \text{\rm s}}}\|+k.$$
We employ a similar strategy to the one used in Theorem \[t-worstcase\]. Namely, decompose ${{\varphi}}$ into $\widetilde{{{\varphi}}}$ and ${{\varphi}}_p$ and estimate the $\lambda$ perturbation: $$\begin{aligned}
\notag
\|\lambda\langle{\,\cdot\,},{{\varphi}}\rangle{{\varphi}}\|
&=
|\lambda|
\,
\|\langle{\,\cdot\,},(\widetilde{{{\varphi}}}+{{\varphi}}_p) \rangle(\widetilde{{{\varphi}}}+{{\varphi}}_p)\| \\
&\leq |\lambda|
\,
\left(\|\langle{\,\cdot\,},\widetilde{{{\varphi}}}\rangle\widetilde{{{\varphi}}}\|+\|\langle{\,\cdot\,}, \widetilde{{{\varphi}}}\rangle{{\varphi}}_p\|+\|\langle{\,\cdot\,},{{\varphi}}_p\rangle\widetilde{{{\varphi}}}\|+\|\langle{\,\cdot\,},{{\varphi}}_p\rangle{{\varphi}}_p\|\right).\end{aligned}$$ We are only concerned with the first, second and fourth terms in the inequality, as they affect ${{\varphi}}_p$. The first term is responsible for creating an eigenvalue of strength at least $d$, as estimated above. The fourth term has no effect, as the essential spectrum of an operator does not change under a rank-one perturbation. This means that the eigenvalues are shifted and masses are redistributed according to this term, but their total mass is the same because this term cannot produce other kinds of spectrum. Estimating the second term is analogous to the mixed term in Theorem \[t-worstcase\] and yields an effect of $|\lambda|\sqrt{\varepsilon}\sqrt{1-\varepsilon}$. Hence, the singular mass increases by a created eigenvalue and is adjusted for possible mass entering the absolutely continuous spectrum by a mixed term. Our conclusion thus follows its absolutely continuous counterpart and we set $k=d-|\lambda|\sqrt{\varepsilon}\sqrt{1-\varepsilon}$ to yield the Theorem.
The same restrictions are relevant to applications of this theorem as to Theorem \[t-worstcase\]. In general, we cannot assume that $\|(\mu_{\lambda})_s\|\geq\|\mu_s\|$. For the result to not be vacuous, we must ensure that $d-|\lambda|\sqrt{\varepsilon}\sqrt{1-\varepsilon}>0$. Hence, it is required that $$\begin{aligned}
d>|\lambda|{_{\scriptstyle \text{\rm max}}}\sqrt{\varepsilon-\varepsilon^2}.\end{aligned}$$ The symmetry of Theorems \[t-worstcase\] and \[t-singular\] adds further validation to the estimates.
Appendix: Choosing Orthogonal Direction Vectors {#App:AppendixA}
===============================================
This elementary proof is included for the convenience of the reader, and is motivated by Theorem 2.10 in [@Folland] and the definition of the Lebesgue integral.
\[l-APP\] Let $S=\{f_n\}_{n=1}^N$ be a finite set of functions orthogonal in a separable Hilbert space $L^2(\eta)$, where $\eta$ is a positive Borel measure supported on $[-1,1]$ without a point mass at $x=1$. Then there exists a measurable function $h(x)$ with $|h(x)|=1$ a.e. with respect to $\eta$, so that the set $S\cup \{h\}$ is orthogonal.
Without loss of generality, we consider the positive parts of each $f_n$, written as $f_n^+(x):=\max\{f_n(x), 0\}$. Let $\{g_m^n\}_{m\in\NN}$ be the sequence of simple functions in standard representation which approximates $f_n^+$ pointwise and uniformly (wherever $f_n^+$ is bounded).\
Let $E_m^n$ denote the partition of $[-1,1)$ on which $g_m^n$ is constant. For $n=1, \dots, N$, take the union of the endpoints of $E_m^n$ and cover $[-1,1)$ by non-overlapping half-open intervals corresponding to this union. Denote the collection of these intervals by $I_m$. Then, for each fixed $m$, $g_m^n$, $n=1, \dots, N$, is constant on each half-open interval $I\subset I_m$.\
For each $I\subset I_m$ define $$h_m|{_{ {}_{\scriptstyle I}}}=
\begin{cases}
0 & \text{on } [-1,1)\setminus I \\
1 & \text{on the right half of } I \\
-1 & \text{on the left half of } I
\end{cases}$$ and $h_m:=\sum_I h_m|_I$. This gives us that $\left<g_m^n,h_m\right>=0$, $\forall m,n$, and $h_m$ converges with respect to $\eta$ to some measurable $h$ with $|h(x)|=1$ on $[-1,1)$.\
All that remains to show is that $\left<f_n,h\right>=0$, $\forall n$. This follows by a simple application of the Dominated Convergence Theorem to the functions $g_m^n(x)$ and $h(x)$: $$\begin{aligned}
\left<f_n,h\right> =\int_{-1}^{1}\lim_{m\to\infty}g_m^n(x)h_m(x)d\eta(x)
=\lim_{m\to\infty}\int_{-1}^{1}g_m^n(x)h_m(x)d\eta(x)
=0 \end{aligned}$$ for all $n$.
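For concreteness, the halving construction can be checked numerically in the simplest situation where $\eta$ is taken to be Lebesgue measure and $g$ is a simple function; the partition and the values of $g$ below are arbitrary:

```python
import numpy as np

edges = np.array([-1.0, -0.3, 0.2, 0.75, 1.0])   # arbitrary partition of [-1, 1)
values = np.array([2.0, -1.5, 0.5, 3.0])         # values of the simple function g

# h is -1 on the left half and +1 on the right half of each partition interval,
# so the Lebesgue integral of g*h over every interval cancels exactly.
integral = 0.0
for a, b, v in zip(edges[:-1], edges[1:], values):
    mid = 0.5 * (a + b)
    integral += v * (-(mid - a) + (b - mid))
print(integral)   # 0.0 up to rounding
```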
[xx]{}
E. Abakumov, A. Poltoratski, *Pseudocontinuation and cyclicity for random power series*, J. Inst. Math. Jussieu **7** (2008), no. 3, 413–424.
E. Abakumov, C. Liaw, A. Poltoratski, [*Cyclicity in rank-one perturbation problems*]{}, J. Lond. Math. Soc. [**88**]{} (2013) no. 2, 523–537.
P.W. Anderson, [*Absence of Diffusion in Certain Random Lattices*]{}, Phys. Rev., [**109**]{} (1958), 1492–1505.
N. Aronszajn, [*On a Problem of Weyl in the Theory of Singular Sturm–Liouville Equations*]{}, Am. J. Math. [**79**]{} (1957), 597–610.
R. Carey, J. Pincus, *Unitary equivalence modulo the trace class for self-adjoint operators*, Amer. J. Math. **98** (1976), no. 2, 481–514.
J.A. Cima, A.L. Matheson, W.T. Ross, [*The Cauchy transform*]{}, Mathematical Surveys and Monographs, vol. 125, American Mathematical Society, Providence, RI, 2006.
H. Cycon, R. Froese, W. Kirsch, B. Simon, *Topics in the Theory of Schrödinger Operators*, Springer-Verlag (1987).
W. Donoghue, [*On the Perturbation of Spectra*]{}, Commun. Pure Appl. Math. [**18**]{} (1965), 559–579.
G.B. Folland, [*Real Analysis; Modern Techniques and Their Applications*]{}, 2nd ed. John Wiley & Sons, Inc., Hoboken, NJ, 1999.
F. Gesztesy, B. Simon, [*Rank One Perturbations at Infinite Coupling*]{}, J. Funct. Anal. [**128**]{} (1995), 245–252.
D. Hundertmark, [*A short introduction to Anderson localization*]{}, Analysis and Stochastics of Growth Processes and Interface Models (2008), 194–218.
V. Jaksic, Y. Last, [*Spectral structure of Anderson type Hamiltonians*]{}, Invent. Math. [**141**]{} (2000), no. 3, 561–577.
$\underline{\hspace{3 cm}}$ , [*Simplicity of singular spectrum in Anderson-type Hamiltonians*]{}, Duke Math. J. [**133**]{} (2006), no. 1, 185–204.
$\underline{\hspace{3 cm}}$, *A new proof of [P]{}oltoratskii’s theorem*, J. Funct. Anal. **215** (2004), no. 1, 103–110.
J. Kahane, *Some random series of functions. Second edition.*, Cambridge Studies in Advanced Mathematics **5** Cambridge University Press, Cambridge (1985) xiv+305 pp.
V. Kapustin, A. Poltoratski, [*Boundary convergence of vector-valued pseudocontinuable functions*]{}, J. Funct. Anal. [**238**]{} (2006) 313–326.
T. Kato, [*Perturbation theory for linear operators*]{}, Classics in Mathematics, Springer–Verlag, Berlin, 1995 (reprint of the 1980 edition).
W. King, R. Kirby, C. Liaw, [*Delocalization for the 3–D discrete random Schrödinger operator at weak disorder*]{}, J. Phys. A: Math. Theor. [**47**]{} (2014) 305202.
C. Liaw, *Approach to the extended states conjecture,* J. Stat. Phys. [**153**]{} (2013) 1022–1038.
$\underline{\hspace{3 cm}}$, *Rank one perturbations and Anderson-type Hamiltonians,* accepted by Banach J. Math. Anal., for preprint see <arXiv:1009.1353v3>.
C. Liaw, S. Treil, [*General Clark model for finite rank perturbations*]{}, Analysis & PDE [**12**]{} (2019), 449–492.
$\underline{\hspace{3 cm}}$, [*Rank-one perturbations and singular integral operators*]{}, J. Funct. Anal., [**257**]{} (2009), no. 6, 1947–1975.
$\underline{\hspace{3 cm}}$, [*Matrix Measures and Finite Rank Perturbations of Self-adjoint Operators*]{}, accepted by J. Spectr. Th., for preprint see <arXiv:1806.08856v2>.
$\underline{\hspace{3 cm}}$, [*Singular integrals, rank-one perturbations and Clark model in general situation*]{}. J. Anal. Math., [**130**]{} (2016), 287–328.
A. Poltoratski, [*Equivalence up to a rank-one perturbation*]{}, Pacific J. Math. [**194**]{} (2000), no. 1, 175–188.
A. Poltoratski, D. Sarason, *Aleksandrov-[C]{}lark measures*, Recent advances in operator-related function theory, Contemp. Math., vol. 393, Amer. Math. Soc., Providence, RI (2006) pp. 1–14.
B. Simon, *[Spectral analysis of rank one perturbations and applications]{}*, Mathematical Quantum Theory I: Field Theory and Many-Body Theory (1994).
$\underline{\hspace{3 cm}}$, *Cyclic vectors in the [A]{}nderson model*, Rev. Math. Phys. **6** (1994), no. 5A, 1183–1185, Special issue dedicated to Elliott H. Lieb.
$\underline{\hspace{3 cm}}$, [*Trace Ideals and Their Applications*]{}, 2nd ed. American Mathematical Society, Providence, RI, 2005.
B. Simon, T. Wolff, [*Singular continuous spectrum under rank-one perturbations and localization for random Hamiltonians*]{}, Comm. Pure Appl. Math. [**39**]{} (1986), 75–90.
*Random Schrödinger operators: Universal Localization, Correlations, and Interactions*, Conference report (for the conference held in April 2009 at the Banff International Research Station).
---
abstract: 'The mass generation in the ($3+1$)–dimensional supersymmetric Nambu–Jona–Lasinio model in a constant magnetic field is studied. It is shown that the external magnetic field catalyzes chiral symmetry breaking.'
author:
- |
I.A.Shovkovy\
[*Bogolyubov Institute for Theoretical Physics*]{}\
[*252143 Kiev, Ukraine*]{}
date: 'January 6, 1996'
title: 'Mass Generation in the Supersymmetric Nambu–Jona–Lasinio Model in an External Magnetic Field[^1]'
---
It was shown in [@1; @2] and later confirmed in [@Ng; @Hong] that a constant magnetic field is a strong catalyst of dynamical chiral symmetry breaking, leading to the generation of a fermion dynamical mass even at the weakest attraction between fermions (the prehistory of the question includes [@pre] among others).
The effect is accounted for by the effective dimensional reduction $D\to D-2$ of the infrared dynamics responsible for the fermion pairing in a magnetic field. This reduction is a reflection of simple physics: the motion of charged particles is partly restricted in the plane perpendicular to the magnetic field. The latter is also related to the fact that the chiral condensate mainly appears due to the lowest Landau level, whose dynamics is ($D-2$)–dimensional.
In this talk I shall briefly present the results for the supersymmetric Nambu–Jona–Lasinio (SNJL) model in a magnetic field.
The motivation for the problem is the following. As was heuristically proved in [@1; @2], the catalysis by an external magnetic field is a rather universal (model–independent) phenomenon. In non–supersymmetric models, chiral symmetry breaking is usually realized if the coupling constant is large enough. As for the influence of the magnetic field, it reduces the critical coupling to zero. On the other hand, there is no spontaneous chiral symmetry breaking in the SNJL model at all [@6].
Below it will be shown that an external magnetic field changes the situation in the SNJL model dramatically: chiral symmetry breaking, in agreement with the universality of the effect [@2], occurs for any value of the coupling constant.
The action of the SNJL model with the $U_L(1)\times U_R(1)$ chiral symmetry in a magnetic field (in notations of Ref.[@13] except the metric $g^{\mu\nu}=\mbox{diag}(1,-1,-1,-1)$) is: $$\begin{aligned}
\Gamma&=&\int d^8z
\left[ \bar{Q}e^{V}Q+\bar{Q}^ce^{-V}Q^c
+ G(\bar{Q}^c\bar{Q})(QQ^c)\right] .
\label{eq33}\end{aligned}$$ Here $d^8z=d^4xd^2\theta d^2\bar{\theta}$, $Q^{\alpha}$ and $Q^c_{\alpha}$ are chiral superfields carrying the color index $\alpha=1, 2, \dots, N_c$, i.e. $Q^{\alpha}$ and $Q^c_{\alpha}$ are assigned to the fundamental and antifundamental representations of the $SU(N_c)$, respectively: $$\begin{aligned}
Q^{\alpha} = \varphi^{\alpha}
+ \sqrt{2}\theta\psi^{\alpha}
+ \theta^2F^{\alpha} , &\quad&
Q^c_{\alpha} = \varphi^c_{\alpha}
+ \sqrt{2}\theta\psi^c_{\alpha}
+ \theta^2F^c_{\alpha}
\label{eq34}\end{aligned}$$ (henceforth I shall omit color indices). The vector superfield $V(x,\theta,\bar{\theta}) =-\theta \sigma^{\mu} \bar{\theta}
A^{ext}_\mu$, with $A^{ext}_\mu = B x^2 \delta_{\mu}^3$, describes an external magnetic field in the $+x_1$ direction.
The action (\[eq33\]) is equivalent to the following one: $$\begin{aligned}
\Gamma_{A} &=& \int d^8z
\left[\bar{Q}e^{V}Q+\bar{Q}^ce^{-V}Q^c
+ \frac{1}{G}\bar{H}H\right]-
\nonumber\\
&&- \int d^6z\left[\frac{1}{G}HS-QQ^cS\right]
- \int d^6\bar{z}\left[\frac{1}{G}\bar{H}\bar{S}
- \bar{Q}\bar{Q}^c\bar{S}\right] .
\label{eq35}\end{aligned}$$ Here $d^6z=d^4xd^2\theta$, $d^6\bar{z}=d^4xd^2\bar{\theta}$, and $H$ and $S$ are two auxiliary chiral fields: $$\begin{aligned}
H=h+\sqrt{2}\theta\chi_h+\theta^2f_h , &\quad&
S=s+\sqrt{2}\theta\chi_s+\theta^2f_s .
\label{eq36}\end{aligned}$$ The Euler–Lagrange equations for these auxiliary fields take the form of constraints: $$\begin{aligned}
H = GQQ^c, &\quad&
S = - \frac{1}{4}\bar{D}^2(\bar{H})=
- \frac{G}{4}\bar{D}^2(\bar{Q}\bar{Q}^c).
\label{eq37}\end{aligned}$$ Here $\bar{D}$ is a SUSY covariant derivative [@13]. The action (\[eq35\]) reproduces Eq.(\[eq33\]) upon application of the constraints (\[eq37\]).
In terms of the component fields, the action (\[eq35\]) is $$\begin{aligned}
\Gamma_{A} &=& \int d^4x \Bigg[
- \varphi^{\dagger}(\partial_{\mu}-ieA^{ext}_{\mu})^2\varphi
- \varphi^{c\dagger}(\partial_{\mu}+ieA^{ext}_{\mu})^2\varphi^c
\nonumber \\
&&+ i\bar{\psi}\bar{\sigma}^{\mu}
(\partial_{\mu}-ieA^{ext}_{\mu})\psi
+ i\bar{\psi}^c\bar{\sigma}^{\mu}
(\partial_{\mu}+ieA^{ext}_{\mu})\psi^c
+ F^{\dagger}F + F^{c\dagger}F^c
\nonumber \\
&&+ \frac{1}{G}\left( -h^{\dagger}\Box h
+ i\bar{\chi}_h\bar{\sigma}^{\mu}\partial_{\mu}\chi_h
+ f^{\dagger}_hf_h \right)
+ \frac{1}{G}\left( \chi_h\chi_s - hf_s- sf_h + h.c.\right)
\nonumber \\
&&- \Big( s\psi\psi^c + (\varphi\psi^c+\varphi^c\psi)\chi_s
- s(\varphi F^c + \varphi^cF) - \varphi\varphi^c f_s
+ h.c.\Big)
\Bigg] .
\label{eq38}\end{aligned}$$ To obtain the effective potential, all the auxiliary scalar fields are treated as constants (independent of $x$) and all the auxiliary fermion fields equal zero. Then, the Euler–Lagrange equations for the fields $F$, $F^c$, $f_h$, $h$ and their conjugates lead to $F^{\dagger}=-s\varphi^c$, $F^{c\dagger}=-s\varphi$, $f^{\dagger}_h=s$, $f^{\dagger}_s=0$, plus h.c. equations. After taking these into account, the action reads $$\begin{aligned}
\Gamma_{A} &=& \int d^4x \Bigg[
- \varphi^{\dagger}\left[(\partial_{\mu}-ieA^{ext}_{\mu})^2
+ \rho^2 \right]\varphi
- \varphi^{c\dagger}\left[(\partial_{\mu}+ieA^{ext}_{\mu})^2
+ \rho^2 \right] \varphi^c
\nonumber \\
&&+ i\bar{\psi}_D\gamma^{\mu}(\partial_{\mu}
-ieA^{ext}_{\mu})\psi_D - \sigma\bar{\psi}_D\psi_D
- \pi\bar{\psi}_Di\gamma^5\psi_D - \frac{\rho^2}{G}
\Bigg] ,
\label{eq39}\end{aligned}$$ where $s=\sigma+i\pi$, $\rho^2=|s|^2=\sigma^2+\pi^2$, and the Dirac fermion field $\psi_D$ is introduced.
In leading order in $1/N_c$, the effective potential $V(\rho)$ can now be derived in the same way as in the ordinary NJL model. The difference is that, besides fermions, the two scalar fields $\varphi^c$ and $\varphi$ give a contribution to $V(\rho)$: $$V(\rho) = \frac{\rho^2}{G}
+ V_{fer}(\rho) + 2 V_{bos}(\rho),
\label{eq40}$$ where $$\begin{aligned}
V_{fer}(\rho) &=& \frac{N_c}{8\pi^2 l^4}
\int\limits^\infty_{1/(l\Lambda)^2}
\frac{ds}{s^2}\exp\left(-s( l\rho)^2\right) \coth{s},
\label{eq41} \\
V_{bos}(\rho) &=& -\frac{N_c}{16\pi^2 l^4}
\int\limits^\infty_{1/(l\Lambda)^2}
\frac{ds}{s^2}\exp\left(-s( l\rho)^2\right)
\frac{1}{\sinh{s}} .
\label{eq42}\end{aligned}$$ This can be rewritten as [@0]: $$\begin{aligned}
V(\rho) &=& \frac{N_c}{8\pi^2 l^4}
\Bigg[ \frac{( l\rho)^2}{g}
+ ( l\rho)^2\left(1-\ln\frac{( l\rho)^2}{2}\right)
+ 4\cdot\int\limits_{( l\rho)^2/2}^{[( l\rho)^2+1]/2}
dx \ln\Gamma(x)\Bigg]+
\nonumber\\
&+& \frac{N_c}{16\pi^2 l^4}
\Bigg[ \ln(\Lambda l)^2-\gamma -\ln(8\pi^2) \Bigg]
+ O\left(\frac{1}{\Lambda}\right),
\label{eq43}\end{aligned}$$ where the dimensionless coupling constant is $g=GN_c/8\pi^2l^2$.
As the magnetic field $B$ goes to zero ($l\to\infty$), one obtains: $$V(\rho) = \frac{\rho^2}{G} .
\label{eq44}$$ This potential is positive–definite, as it has to be in a supersymmetric theory. The only minimum of this potential is at $\rho=0$, which corresponds to the chirally symmetric vacuum [@6].
The presence of a magnetic field changes this situation dramatically: at $B\neq 0$, a non–trivial global minimum, corresponding to chiral symmetry breaking, exists for all $g>0$.
The gap equation $dV/d\rho=0$, following from Eq.(\[eq43\]), is $$\frac{N\rho}{4\pi^2 l^2}
\Bigg[ \frac{1}{g}-\ln\frac{( l\rho)^2}{2}
+ 2\ln\Gamma\left(\frac{( l\rho)^2+1}{2}\right)
- 2\ln\Gamma\left(\frac{( l\rho)^2}{2}\right)
\Bigg] = 0.
\label{eq45}$$ It is easy to check that, at $B\neq 0$, the trivial solution $\rho=0$ corresponds to a maximum of $V$, since $d^2V/d\rho^2|_{\rho\to 0} \to -\infty$. Numerical analysis of equation (\[eq45\]) for $g>0$ and $B\neq 0$ shows that there is a nontrivial solution $\rho=m_{dyn}$ which is the global minimum of the potential. The analytic expression for $m_{dyn}$ can be obtained for small $g$ (when $m_{dyn}l\ll 1$) and for very large $g$ (when $m_{dyn}l\gg 1$). In those two cases, the results are: $$\begin{aligned}
\frac{1}{g}&\simeq& -\ln \frac{\pi(\rho l)^2}{2} ,
\quad g\ll 1; \label{eq46} \\
\frac{1}{g}&\simeq& \frac{1}{2(\rho l)^2} ,
\quad g\gg 1, \label{eq48}\end{aligned}$$ i.e. $$\begin{aligned}
m_{dyn}&\simeq& \sqrt{\frac{2|eB|}{\pi}}
\exp\left[-\frac{4\pi^2}{|eB|N_cG}\right],
\quad g\ll 1; \label{eq47}\\
m_{dyn}&\simeq& \frac{|eB|}{4\pi}\sqrt{GN_c},
\quad g\gg 1. \label{eq49}\end{aligned}$$ At this point, it seems appropriate to note that the infrared dynamics in the SNJL model in a magnetic field is actually equivalent to that in the ordinary NJL model in a magnetic field as soon as the coupling is weak. This follows from direct comparison of the effective potentials and the kinetic terms of the models [@0]. The physical picture underlying this equivalence is also clear. The spectra of charged free fermions and bosons in a magnetic field are essentially different: $$E_n(k_1)=\pm\sqrt{m^2+2|eB|n+k_1^2} ,
\qquad n=0,1,2,\dots , \label{eq52}$$ and $$E_n(k_1)=\pm\sqrt{m^2+|eB|(2n+1)+k_1^2} ,
\qquad n=0,1,2,\dots . \label{eq53}$$ for fermions and bosons, respectively. The crucial difference between them is the existence of the gap $\Delta E = \sqrt{|eB|}$ in the spectrum of massless bosons and the absence of any gap in the spectrum of massless fermions. Thus at weak coupling, the infrared dynamics, responsible for chiral symmetry breaking, is dominated by fermions while bosonic degrees of freedom are irrelevant. So, it is not a surprise that the infrared dynamics in the SNJL and NJL models are equivalent.
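The nontrivial solution of the gap equation (\[eq45\]) is also easy to obtain numerically. The following minimal sketch (assuming standard SciPy routines; $u=(l\rho)^2$ is the dimensionless variable, so $m_{dyn}l=\sqrt{u}$) solves it and compares the result with the asymptotic formulas (\[eq46\]) and (\[eq48\]):

```python
import numpy as np
from scipy.special import gammaln
from scipy.optimize import brentq

def gap(u, g):
    # Bracketed factor of the gap equation in the text, with u = (l*rho)^2
    return 1.0 / g - np.log(u / 2.0) + 2.0 * gammaln((u + 1.0) / 2.0) - 2.0 * gammaln(u / 2.0)

def mdyn_times_l(g):
    # Dimensionless dynamical mass m_dyn * l = sqrt(u), from the nontrivial root
    return np.sqrt(brentq(gap, 1e-15, 1e6, args=(g,)))

for g in (0.05, 0.1, 5.0, 50.0):
    weak = np.sqrt(2.0 / np.pi) * np.exp(-1.0 / (2.0 * g))   # weak-coupling asymptotics
    strong = np.sqrt(g / 2.0)                                 # strong-coupling asymptotics
    print(g, mdyn_times_l(g), weak, strong)
```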
In conclusion, I note that the results obtained here are in agreement with the general conclusion of Refs.[@1; @2], saying that the catalysis of chiral symmetry breaking by a magnetic field is a universal, model independent effect.
Acknowledgments {#acknowledgments .unnumbered}
===============
I would like to thank the organizers of the Seminar for financial support and for the opportunity to give this talk. I thank V. Elias, D.G.C. McKeon, and V.A. Miransky for enjoyable collaboration.
[99]{}
V. Elias, D.G.C. McKeon, V.A. Miransky, and I.A. Shovkovy, [*Phys. Rev.*]{} D[**54,**]{} 7884 (1996).
V.P. Gusynin, V.A. Miransky, and I.A. Shovkovy, [*Phys. Rev. Lett.*]{} [**73,**]{} 3499 (1994); [*Phys. Rev.*]{} D[**52,**]{} 4718 (1995).
V.P. Gusynin, V.A. Miransky, and I.A. Shovkovy, [*Phys. Lett.*]{} B[**349,**]{} 477 (1995); [*Phys. Rev.*]{} D[**52,**]{} 4747 (1995); [*Nucl. Phys.*]{} B[**462,**]{} 249 (1996).
L.N. Leung, Y.J. Ng, and A.W. Ackley, Phys. Rev. D[**54,**]{} 4181 (1996); Chiral Symmetry Breaking in a Uniform External Magnetic Field, [hep-th/9701172]{}.
D.K. Hong, Y. Kim, and S.-J. Sin, Phys. Rev. D[ **54,**]{} 7879 (1996).
S.P. Klevansky and R.H. Lemmer, [*Phys. Rev.*]{} D[**38,**]{} 3559 (1988);\
K.G. Klimenko, [*Teor. Mat. Fiz.*]{} [**89,**]{} 211 (1991);\
H. Suganuma and T. Tatsumi, [*Ann. Phys.*]{} [**208,**]{} 470 (1991);\
S. Schramm, B. Müller, and A.J. Schramm, [*Mod. Phys. Lett.*]{} A[**7,**]{} 973 (1992).
W. Buchmüller and S.T. Love, [*Nucl. Phys.*]{} B[**204,**]{} 213 (1982).
J. Wess and J. Bagger, [*Supersymmetry and Supergravity*]{}, (Princeton, 1992).
[^1]: This talk is based on the work done in collaboration with V.Elias, D.G.C.McKeon, and V.A.Miransky \[1\].
---
abstract: 'Circumstellar discs of Be stars are thought to be formed from material ejected from a fast-spinning central star. This material possesses large amounts of angular momentum and settles in a quasi-Keplerian orbit around the star. This simple description outlines the basic issues that a successful disc theory must address: 1) What is the mechanism responsible for the mass ejection? 2) What is the final configuration of the material? 3) How does the disc grow? With the very high angular resolution that can be achieved with modern interferometers operating in the optical and infrared we can now resolve the photosphere and immediate vicinity of nearby Be stars. Those observations are able to provide very stringent tests for our ideas about the physical processes operating in those objects. This paper discusses the basic hydrodynamics of viscous decretion discs around Be stars. The model predictions are quantitatively compared to observations, demonstrating that the viscous decretion scenario is currently the most viable theory to explain the discs around Be stars.'
---
Introduction
============
Recent years have witnessed important progress in our understanding of the circumstellar discs of Be stars, largely due to interferometric observations capable of angularly resolving those objects at the milliarcsecond (mas) level (see [@ste10 Stee 2010], for a review).
In the late nineties, discs around Be stars were considered to be equatorially enhanced outflowing winds, and several models and mechanisms to drive the outflow were proposed (see [@bjo00 Bjorkman 2000], for a review). The much stronger observational constraints available today allow us to rule out several theoretical scenarios that were proposed in the past (e.g. the wind compressed disc models of [@bjo93 Bjorkman & Cassinelli 1993]). As an example, spectrointerferometry and spectroastrometry have directly probed the disc kinematics ([@mei07a Meilland et al. 2007b], [@stefl10 Štefl et al. 2010], [@oud10 Oudmaijer et al. 2010]), revealing that Be discs rotate very close to Keplerian. As a result, the viscous decretion scenario, proposed originally by [@lee91] and further developed by [@por99], [@oka01], [@bjo05], among others, has emerged as the most viable scenario to explain the observed properties of Be discs.
Discs Diagnostics \[observations\]
==================================
Before reviewing the basic aspects of the theory of circumstellar discs, it is useful to put in perspective what observations tell us about the structure and kinematics of those discs.
The formation loci of different observables
-------------------------------------------
![The formation loci of different observables. The calculations assume a rapidly rotating B1Ve star ($T_{\rm eff}^{\rm pole} = 25\,000\;\rm K$, $\Omega/\Omega_{\rm crit} = 0.92$) surrounded by viscous decretion discs with different sizes. The results correspond to a viewing angle of $30^\circ$. [[*Left: continuum emission*]{}]{}. Plotted is the ratio between the observed flux to the maximum flux, $F_{\rm max}$, as a function of the disc outer radius. $F_{\rm max}$ corresponds to the flux of a model with a disc outer radius of $1000\;\rm R$. [[*Right: polarization and line equivalent width*]{}]{}. Calculations were carried out with the [hdust]{} code ([@car06a Carciofi & Bjorkman 2006]). []{data-label="formation_loci"}](review_carciofi_fig1a.eps "fig:"){width="2.6in"} ![The formation loci of different observables. The calculations assume a rapidly rotating B1Ve star ($T_{\rm eff}^{\rm pole} = 25\,000\;\rm K$, $\Omega/\Omega_{\rm crit} = 0.92$) surrounded by viscous decretion discs with different sizes. The results correspond to a viewing angle of $30^\circ$. [[*Left: continuum emission*]{}]{}. Plotted is the ratio between the observed flux to the maximum flux, $F_{\rm max}$, as a function of the disc outer radius. $F_{\rm max}$ corresponds to the flux of a model with a disc outer radius of $1000\;\rm R$. [[*Right: polarization and line equivalent width*]{}]{}. Calculations were carried out with the [hdust]{} code ([@car06a Carciofi & Bjorkman 2006]). []{data-label="formation_loci"}](review_carciofi_fig1b.eps "fig:"){width="2.6in"}
One important issue to consider when analyzing observations is that different observables probe different regions of the disc; it is therefore useful to be able to make a correspondence between a given observable (say, the continuum flux level at a given wavelength) and the part of the disc whence it comes.
To make such a correspondence, we consider a typical Be star of spectral type B1Ve and calculate the emergent spectrum arising from viscous decretion discs of different outer radii (see Sect. \[viscous\_discs\] for a detailed description of the structure of viscous discs). In Fig. \[formation\_loci\] we present results for continuum emission, continuum polarization and the equivalent width of emission lines.
Let us discuss first the continuum and line emission. The reason the spectra of Be stars show continuum excess or emission lines is that the dense disc material acts as a pseudo-photosphere that is much larger than the stellar photosphere. For the pole-on case (inclination $i=0$), the effective radius of the pseudo-photosphere is the location where the vertical optical depth $\tau_\lambda(R_{\rm eff}) = 1$. For continuum hydrogen absorption, the opacity depends on the wavelength as $\kappa_\lambda \propto \lambda^{2}$ (if one neglects the wavelength dependence of the absorption Gaunt factors) and thus the size of the pseudo-photosphere increases with wavelength (see [@car06a Carciofi & Bjorkman 2006], appendix A, for a derivation of $R_{\rm eff}(\lambda)$). In Fig. \[formation\_loci\], left panel, the intersection of the different lines (each corresponding to a given wavelength) with the horizontal dashed line marks the position in the disc whence about 95% of the continuum excess comes. For instance, from the Figure we see that the $V$ band excess is formed very close to the star, within about $2\;R$, whereas the excess at 1 mm originates from a much larger volume of the disc. As we shall see below, the fact that the continuum excess flux at visible wavelengths forms so close to the star makes such observations indispensable for studying systems with complex temporal evolution, because the $V$ band continuum emission has the fastest response to disc changes as a result of photospheric activity.
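As a toy illustration of this wavelength dependence (not a substitute for the full [hdust]{} calculation), one may parametrize the vertical optical depth as a power law in radius, $\tau_\lambda(r)=\tau_0(\lambda/\lambda_0)^2(r/R)^{-m}$, and solve $\tau_\lambda(R_{\rm eff})=1$; the normalization $\tau_0$, the reference wavelength $\lambda_0$ and the exponent $m$ below are arbitrary illustrative choices:

```python
import numpy as np

def r_eff_over_R(lam_um, tau0=5.0, lam0_um=0.55, m=2.5):
    """Toy pseudo-photosphere radius from tau_lambda(R_eff) = 1, with
    tau_lambda(r) = tau0 * (lam/lam0)^2 * (r/R)^(-m)  (illustrative only)."""
    tau_at_star = tau0 * (lam_um / lam0_um) ** 2
    return np.maximum(tau_at_star, 1.0) ** (1.0 / m)   # never smaller than the star

for lam in (0.55, 2.2, 1000.0):   # V band, K band, 1 mm (wavelengths in microns)
    print(lam, r_eff_over_R(lam))
```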
In the right panel of Fig. \[formation\_loci\] we show how the line emission and continuum polarization grows with radius. For the optically thick H$\alpha$ and Br$\gamma$ lines the disc emission only fills in the photospheric absoprtion profile when the disc size is about $5\;R$. Both lines have pseudo-photospheres that extend up to about $20\;R$. For polarization, the results are at odds with the common belief that polarization is formed close to the star; it is seen that 95% of the maximum polarization is only reached when the disc size is about $10\;R$.
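To see why the pseudo-photosphere grows with wavelength, consider the following order-of-magnitude sketch (the full derivation is given in appendix A of [@car06a Carciofi & Bjorkman 2006]; the scalings assumed here, $\rho_0 \propto \varpi^{-3.5}$ and $H \propto \varpi^{1.5}$, are those of the isothermal viscous disc derived in Sect. \[viscous\_discs\], and the density-squared continuum opacity with $\kappa_\lambda \propto \lambda^2 \rho$ neglects the temperature and Gaunt-factor dependences): $$\tau_\lambda(\varpi) \propto \lambda^2 \int \rho^2\, dz \propto \lambda^2\, \rho_0^2(\varpi)\, H(\varpi) \propto \lambda^2 \varpi^{-5.5} \quad\Longrightarrow\quad \tau_\lambda(R_{\rm eff}) = 1 \;\Rightarrow\; R_{\rm eff} \propto \lambda^{2/5.5} \approx \lambda^{0.36}\,.$$ The emitting region thus grows slowly but steadily with wavelength, in qualitative agreement with the left panel of Fig. \[formation\_loci\]; the exact exponent depends on the adopted density slope and on the neglected dependences.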
Disc Thickness
--------------
In a nice example of the diagnostic potential of interferometric and polarimetric observations combined, [@woo97] and [@qui97] showed that the circumstellar disc of $\zeta$ Tau is geometrically thin (opening angle of $2.5^\circ$). This result was confirmed by [@car09] from a more detailed analysis. Other studies based on the fraction of Be-shell stars vs. Be stars typically find larger values for the opening angle (e.g. $13^\circ$ from [@han96 Hanuschik 1996]) but this discrepancy is accommodated by the fact that the geometrical thickness of Be discs increases with radius (flared disc) and different observables probe different disc regions (Fig. \[formation\_loci\]). In any case, there is no doubt that the Be discs are flat, geometrically thin structures.
Disc density distribution \[density\]
-------------------------------------
Models derived both empirically ([@wat86 e.g., Waters 1986]) and theoretically (e.g., [@oka01 Okazaki 2001] and [@bjo05 Bjorkman & Carciofi 2005]) usually predict (or assume) a power-law fall-off of the disc density ($\rho(r) \propto r^{-n}$). Values for $n$ vary widely in the literature, typically in the range $2<n<4$. One question that arises is whether this quoted range for the density slope is real or not.
Below we see that the viscous decretion disc model, in its simplest form of an isothermal and isolated disc, predicts a slope of $n=3.5$ for the disc density. On one hand, the inclusion of other physical effects can change the value of the above slope. For instance, non-isothermal viscous diffusion results in a much more complex density distribution that cannot be well represented by a simple power-law, and tidal effects by a close binary may make the density slope shallower ([@oka02 Okazaki et al. 2002]). On the other hand, it is also true that many of the quoted values for $n$ in the literature are heavily influenced by the assumptions and methods used in the analysis and should be viewed with caution.
Disc dynamics
-------------
Since the earliest detections of Be stars, it became clear that a rotating disc of circumstellar material was the most natural explanation for the observed double-peaked profile of emission lines, but the problem of how the disc rotates has been an open issue until recently.
The disc rotation law is profoundly linked with the disc formation mechanism; therefore, determining observationally the disc dynamics is of great interest. Typically, one can envisage three limiting cases for the radial dependence of the azimuthal component of the velocity, $v_\phi$, depending on the forces acting on the disc material $$v_{\phi} = \left\{
\begin{array}{ll}
V_{\rm rot}(r/R)^{-1} & \mbox{radiatively driven outflow}\,, \\
V_\star(r/R) & \mbox{magnetically dominated disc} \,, \\
V_{\rm crit}(R/r)^{1/2} & \mbox{disc driven by viscosity}\,.\\
\end{array}
\right.$$ In the first case (radiatively driven outflow) the dominant force on the material is the radially directed radiation pressure that does not exert torques and thus conserves angular momentum ($V_{\rm rot}$ is thus the rotation velocity of the material when it left the stellar surface). The second case corresponds to a rigidly rotating magnetosphere à la $\sigma$ Ori E ([@tow05 Townsend et al. 2005]), in which the plasma is forced to rotate at the same speed as the magnetic field lines ($V_\star$). The last case corresponds to Keplerian orbital rotation, written in terms of the critical velocity, $V_{\rm crit} \equiv \left(GM/R\right)^{1/2}$, which is the Keplerian orbital speed at the stellar surface. This case requires a fine-tuning mechanism such that *the centrifugal force, $v_\phi^2/r$, exactly balances gravity at all radii*. As we shall see below, viscosity does provide the fine-tuning mechanism capable of producing a Keplerian disc.
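As a small illustration of these three limiting cases, the Python sketch below evaluates $v_\phi(r)$ for each regime; the velocity scales $V_{\rm rot}$, $V_\star$ and $V_{\rm crit}$ are illustrative assumed values, not fits to any particular star.

```python
import numpy as np

# Illustrative velocity scales in km/s (assumed values, not fits to any star).
def v_phi(r_over_R, regime, V_rot=350.0, V_star=300.0, V_crit=550.0):
    r = np.asarray(r_over_R, dtype=float)
    if regime == "radiative":   # angular-momentum-conserving outflow: v ~ 1/r
        return V_rot / r
    if regime == "magnetic":    # rigidly rotating magnetosphere: v ~ r
        return V_star * r
    if regime == "viscous":     # Keplerian rotation: v ~ r^(-1/2)
        return V_crit / np.sqrt(r)
    raise ValueError(f"unknown regime: {regime}")

r = np.array([1.0, 2.0, 5.0, 10.0])
for regime in ("radiative", "magnetic", "viscous"):
    print(f"{regime:10s}", np.round(v_phi(r, regime), 1))
```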
[@por03] reviewed the then-existing observational constraints on the disc kinematics and concluded that “all of the kinematic evidence seems to point to a disc velocity field dominated by rotation, with little or no radial flow, at least in the regions where the kinematic signatures of emission and absorption are significant”. Today, spectrointerferometry and spectroastrometry provide clear-cut evidence that, in most systems observed and analysed so far ($\kappa$ CMa being the only possible exception, [@mei07b Meilland et al. 2007b]), the discs rotate in a Keplerian fashion ([@mei07a Meilland et al. 2007a], [@stefl10 Štefl et al. 2010], [@oud10 Oudmaijer et al. 2010]). This is an important result, as it indicates that viscosity is the driving mechanism of the outflow.
Cyclic $V/R$ variations
-----------------------
About 2/3 of the Be stars present the so-called $V/R$ variations, a phenomenon characterised by the quasi-cyclic variation in the ratio between the violet and red emission peaks of the H I emission lines. These variations are generally explained by global oscillations in the circumstellar disc forming a one-armed spiral density pattern that precesses around the star with a period of a few years ([@kat83 Kato 1983], [@oka91 Okazaki 1991], [@oka97 Okazaki 1997]).
Recently, [@car09] provided a quantitative verification of the global disc oscillation theory from a detailed modelling of high-angular resolution [amber]{} data of the Be star $\zeta$ Tau. From a theoretical perspective, the existence of density waves in rotating discs, as suggested by [@kat83], imposes stringent constraints on the rotation velocity, since mode confinement requires that the rotation law be Keplerian to within about 1% and the radial flow be highly subsonic ($\lesssim 0.01\;c_s$, [@oka07 Okazaki 2007]).
Long-term variations
--------------------
One of the most intriguing types of variability observed in the Be stars is the aperiodic transition between a normal B phase (discless phase) and a Be phase whereby the disc is lost and rebuilt on timescales of months to years (e.g., [@cla03 Clark et al. 2003], [@stefl10 Štefl et al. 2010]). The varying amount of circumstellar gas manifests itself as changes in line profiles ([@cla03 Clark et al. 2003]), continuum brightness and colors ([@har83 Harmanec 1983]) and polarization ([@dra10 Draper et al. 2010]). A fine example of the secular process of disc formation and dissipation, and its effects on the continuum brightness and colors, is shown in Fig. \[dewit\]: the outburst phase responsible for the disc build-up lasted about 300 days (phases I and II, during which the star got brighter and redder) and was followed by a quiescent phase of about 500 days (phases III and IV, during which the star slowly went back to its original appearance as the previously built disc dissipated). As discussed below, the timescales involved in the disc formation and dissipation are generally consistent with the timescales of viscous diffusion.
![ Light curve ([*left panel*]{}) and CMD ([*right panel*]{}) of the variation of the star OGLE 005209.92-731820.4 ([@dew06 de Wit et al. 2006], reproduced with permission). []{data-label="dewit"}](review_carciofi_fig2.eps){width="2.0in"}
Short-term variations
---------------------
Small-scale, short-term variations are quite common in Be stars, and possess a complex phenomenology ([@riv07 see Rivinius 2007, for a review]). On one hand, some observed variations (e.g. line profile variations due to non-radial pulsations) are associated with the photosphere proper; others, on the other hand, are thought to originate from the very base of the disc and are, therefore, the manifestation of the physical process(es) that is (are) feeding the disc (e.g. short-term $V/R$ variations of emission lines). To date, $\mu$ Cen remains the only system in which the ejection mechanism has been unambiguously identified (in this case non-radial pulsations, [@riv98 Rivinius et al. 1998]). In the viscous diffusion theory outlined below, it is assumed that matter is ejected by the star and deposited at the inner boundary of the disc with Keplerian or super-Keplerian speeds; clearly, the current status of the theory is still unsatisfactory inasmuch as the fundamental link between the disc and the photosphere proper (i.e., the feeding mechanism) is still unknown.
Studies of short-term variations associated with the disc (e.g., [@riv98 Rivinius et al. 1998]) are generally consistent with the following scenario: mass injection by some mechanism causes a transient asymmetry in the inner disc that manifests itself by, e.g., short-term $V/R$ variations of emission lines, followed by circularization and dissipation on a viscous timescale (see below). Here, again, viscosity seems to be playing a major role in shaping the evolution of the ejecta.
Viscous Decretion Disc Models \[viscous\_discs\]
================================================
The observational properties outlined above give strong hints as to the ingredients a successful theory for the structure of Be discs must have. The theory must explain the Keplerian rotation, slow outflow speeds and small geometrical thickness, and must also account for the timescales of disc build-up and dissipation.
The only theory to date that satisfies those requirements is the viscous decretion disc model ([@lee91 Lee et al. 1991]). This model is essentially the same as that employed for protostellar discs ([@pri81 Pringle 1981]), the primary difference being that Be discs are outflowing, while pre-main-sequence discs are inflowing. In this model, it is supposed that some yet unknown mechanism injects material at the Keplerian orbital speed into the base of the disc. Eddy/turbulent viscosity then transports angular momentum outward from the inner boundary of the disc (note that this requires a continual injection of angular momentum into the base of the disc). If the radial density gradient is steep enough, angular momentum is added to the individual fluid elements and they slowly move outward. To critically test such decretion disc models of Be stars against observations, we must determine the structure of the disc from hydrodynamical considerations.
Viscous discs fed by constant decretion rates
---------------------------------------------
There are many solutions available for the case of a viscous disc fed by a constant decretion rate, all agreeing in their essentials. Below we outline the main steps of one possible derivation ([@bjo97 Bjorkman 1997]) with the goal of discussing the properties of the solution and confronting them with the observations. More detailed analyses as well as different approaches to the problem can be found in [@lee91], [@oka01], [@bjo05], [@oka07], [@jon08] and [@car08].
Hydrostatic Structure
---------------------
Our goal is to write and solve the Navier-Stokes fluid equations in cylindrical coordinates ($\varpi,\phi,z$), which in the steady-state case have the following general form $$\begin{aligned}
\frac{1}{\varpi}\frac{\partial}{\partial\varpi}(\varpi\rho v_\varpi)
+ \frac{1}{\varpi}\frac{\partial}{\partial\phi} (\rho v_\phi)
+ \frac{\partial}{\partial z} (\rho v_z)
&=& 0 \enspace , \label{eq:continuity} \\
v_\varpi \frac{\partial v_\varpi}{\partial \varpi}
+ \frac{v_\phi}{\varpi} \frac{\partial v_\varpi}{\partial \phi}
+ v_z \frac{\partial v_\varpi}{\partial z}
- \frac{v^2_\phi}{\varpi}
&=& - \frac{1}{\rho} \frac{\partial P}{\partial \varpi}
+ f_\varpi \enspace , \label{eq:varpi_momentum}\\
v_\varpi \frac{\partial v_\phi}{\partial \varpi}
+ \frac{v_\phi}{\varpi} \frac{\partial v_\phi}{\partial \phi}
+ v_z \frac{\partial v_\phi}{\partial z}
+ \frac{v_\varpi v_\phi}{\varpi}
&=& - \frac{1}{\rho \varpi} \frac{\partial P}{\partial \phi}
+ f_\phi \enspace , \label{eq:phi_momentum}\\
v_\varpi \frac{\partial v_z}{\partial \varpi}
+ \frac{v_\phi}{\varpi} \frac{\partial v_z}{\partial \phi}
+ v_z \frac{\partial v_z}{\partial z}
&=& - \frac{1}{\rho} \frac{\partial P}{\partial z}
+ f_z \enspace ,\label{eq:z_momentum}\end{aligned}$$ where $\rho$ is the gas mass density, $P$ is the pressure and $f_\varpi$, $f_\phi$ and $f_z$ are the components of the external forces acting on the gas.
Let us initially ignore any viscous effects (inviscid disc) and assume that the only force acting on the gas is gravity. If we further assume circular orbits ($v_\varpi=0$, $v_\phi \neq 0$, and $v_z=0$; an assumption that will be dropped later on when viscosity is included), the only non-trivial fluid equations are the $\varpi$- and $z$-momentum equations (Eqs. \[eq:varpi\_momentum\] and \[eq:z\_momentum\]), which take the form $$\begin{aligned}
\frac{1}{\rho} \frac{\partial P}{\partial \varpi}
&=& \frac{v^2_\phi}{\varpi} + f_\varpi \enspace ,
\label{eq:varpi_hydrostatic} \\
\frac{1}{\rho} \frac{\partial P}{\partial z}
&=& f_z \enspace ,
\label{eq:z_hydrostatic}\end{aligned}$$ where the external force components are given by the gravity of the spherical central star $$\begin{aligned}
f_\varpi &=& -\frac{GM\varpi}{(\varpi^2+z^2)^{3/2}} \enspace , \\
f_z &=& -\frac{GM z}{(\varpi^2+z^2)^{3/2}} \enspace .\end{aligned}$$
To specify the pressure, we introduce the equation of state, $P=c_s^2\rho$, where $c_s=(kT)^{1/2}(\mu m_{\rm H})^{-1/2}$. In this last expression, $k$ is the Boltzmann constant, $\mu$ is the gas molecular weight and $m_{\rm H}$ is the mass of the hydrogen atom.
In the thin-disc limit ($z \ll \varpi$), we obtain $$\begin{aligned}
v_\phi &=& V_{\rm crit} \left({R/\varpi}\right)^{1/2} \enspace , \label{eq:vphi}\\
\frac{\partial \ln (c_s^2\rho)}{\partial z}
&=& -\frac{V^2_{\rm crit}Rz}{c_s^2 \varpi^3}
\enspace . \label{eq:hseq}\end{aligned}$$
The above equations mean that the disc rotates at the Keplerian orbital speed and is hydrostatically supported in the vertical direction. To determine the vertical disc structure we must solve Eq. (\[eq:hseq\]). Assuming an isothermal disc one obtains $$\rho(\varpi,z) = \rho_0(\varpi) \exp\left[ -0.5 (z/H)^2 \right] \, ,$$ where $\rho_0$ is the disc density at the mid-plane ($z = 0$), and the disc scale height is given by $$H(\varpi) = (c_s/v_\phi)\varpi \,.
\label{eq:scaleheight}$$ Since $v_\phi \propto \varpi^{-0.5}$, we obtain the familiar result that for an isothermal disc the scaleheight grows with distance from the star as $H \propto \varpi^{1.5}$. As we shall see below, it is useful to express $\rho(\varpi,z)$ in terms of the disc surface density, $\Sigma$, written as $$\Sigma(\varpi)=\int_{-\infty}^\infty \rho(\varpi,z) \, dz = \sqrt{2\pi}H\rho_0 \enspace .
\label{eq:sigmadef}$$ Thus $$\rho(\varpi,z) = \frac{\Sigma(\varpi)}{\sqrt{2\pi}H(\varpi)} \exp\left[ -0.5 (z/H)^2 \right] \, .
\label{eq:rho}$$
Viscous Outflow
---------------
Clearly, in Eq. (\[eq:rho\]) the disc density scale, $\Sigma(\varpi)$, is completely undetermined, because for an inviscid Keplerian disc we can choose to put an arbitrary amount of material at a given radius. To set the density scale, we must include a mechanism, viscous diffusion, to transport material from the star outwards.
Viscous flows grow on a viscous diffusion timescale $$\tau_{\rm diff} = \varpi^2 / \nu \, ,$$ where $\nu$ is the kinematic viscosity. A problem noted long ago ([@sha73 Shakura & Sunyaev 1973]) is that for molecular viscosity the diffusion timescale is much too long. [@sha73] appealed instead to the so-called eddy (or turbulent) viscosity, which they parameterized as $$\nu = \alpha c_s H \, ,$$ where $0<\alpha<1$. The $\alpha$ parameter describes the ratio of the product of the turbulent eddy size and speed to the product of disc scaleheight and sound speed. In other words, it is assumed that the largest eddies can be at most about the size of the disc scaleheight and that the “turnover” velocity of the eddies cannot be larger than the sound speed (otherwise, the turbulence would be supersonic and the eddies would fragment into a series of shocks). With this value of the viscosity, the viscous diffusion timescale becomes $$\begin{aligned}
\tau_{\rm diff} &=& \frac{V_{\rm crit}}{\alpha c_s^2}\sqrt{\varpi R} \\
&\approx& 20 {\rm yr} \left( \frac{0.01}{\alpha} \right) \sqrt{\frac{\varpi}{R}} \, .\end{aligned}$$ Studies of the formation and dissipation of the discs around Be stars find typical time scales of months to a few years (e.g., [@wis10 Wisniewski et al. 2010]); therefore, an $\alpha$ of the order of 0.1 or larger is required to match the observed timescales (see Sect. \[nonconstant\]).
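A minimal numerical sketch of the diffusion timescale above is given below; the stellar mass, radius and disc sound speed are assumed, illustrative values for an early-B star, so the resulting prefactor agrees with the quoted $\sim$20 yr figure only to within a factor of order unity.

```python
import numpy as np

# Assumed, illustrative early-B star parameters: M = 11 M_sun, R = 5.5 R_sun,
# and an isothermal disc sound speed of 13 km/s.
G, M_sun, R_sun, year = 6.674e-11, 1.989e30, 6.957e8, 3.156e7
M, R, c_s = 11 * M_sun, 5.5 * R_sun, 13e3
V_crit = np.sqrt(G * M / R)

def tau_diff_years(varpi_over_R, alpha):
    """Viscous diffusion timescale V_crit*sqrt(varpi*R)/(alpha*c_s^2), in years."""
    varpi = varpi_over_R * R
    return V_crit * np.sqrt(varpi * R) / (alpha * c_s**2) / year

for alpha in (0.01, 0.1, 1.0):
    print(f"alpha = {alpha:4.2f}: tau(R) = {tau_diff_years(1, alpha):7.1f} yr, "
          f"tau(10 R) = {tau_diff_years(10, alpha):7.1f} yr")
```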
If we add viscosity to the fluid equations, we can still assume that the disc is axisymmetric and that the vertical structure is hydrostatic ($v_z=0$). However, the presence of an outflow implies that $v_\varpi \ne 0$. The $\varpi$- and $z$-momentum equations are the same as before, so $v_\phi$ and $\rho$ are the same as in the pure Keplerian case \[eqs. (\[eq:vphi\]) and (\[eq:rho\])\].
Two fluid equations remain to be solved, the continuity equation, Eq. (\[eq:continuity\]), and the $\phi$-momentum equation, Eq. (\[eq:phi\_momentum\]). The continuity equation, written in terms of the surface density, $$\frac{\partial}{\partial \varpi} (2 \pi \varpi \Sigma v_\varpi) = 0
\enspace
\label{eq:disk_continuity}$$ means that the mass decretion rate, ${\dot M} \equiv 2 \pi \varpi \Sigma v_\varpi$, is a constant (independent of $\varpi$). The viscous outflow speed is given by $$v_\varpi = \frac{\dot M}{2 \pi \varpi \Sigma} \enspace .
\label{eq:radvel}$$
The $\phi$-momentum equation is now more complicated because viscosity exerts a torque, which is described by the viscous shear stress tensor, $\pi_{\varpi\phi}$. Including this shear stress, the $\phi$-momentum equation becomes $$v_\varpi \frac{\partial v_\phi}{\partial \varpi}
+ \frac{v_\varpi v_\phi}{\varpi}
= \frac{1}{\rho \varpi^2}
\frac{\partial}{\partial \varpi} (\varpi^2 \pi_{\varpi\phi}) \enspace
,
\label{eq:navier_stokes_phi_momentum}$$ where $$\pi_{\varpi \phi} = \nu \rho \varpi
\frac{\partial (v_\phi/\varpi)}{\partial \varpi}
= -\frac{3}{2}\alpha c_s^2 \rho
\enspace .
\label{eq:shear_stress}$$ Multiplying Eq. (\[eq:navier\_stokes\_phi\_momentum\]) by $\rho \varpi^2$ and integrating over $\phi$ and $z$, we find $${\dot M} \frac{\partial}{\partial \varpi}(\varpi v_\phi)
= \frac{\partial}{\partial \varpi}(\mathcal{T})
\enspace ,
\label{eq:jdot_gradient}$$ where $$\mathcal{T} = \int_{-\infty}^{\infty} \varpi \pi_{\varpi\phi} 2\pi \varpi dz = -3\pi \alpha c_s^2 \varpi^2\Sigma
\label{eq:torque1}$$ is the viscous torque. Equation (\[eq:jdot\_gradient\]) expresses the fact that the change in the angular momentum flux ($\varpi v_\phi$ being the specific angular momentum) is given by the gradient of the viscous torque. Since the continuity equation implies that $\dot{M}$ is constant, we integrate Eq. (\[eq:jdot\_gradient\]) over $\varpi$ to obtain $$\mathcal{T}(\varpi) = \dot{M} V_{\rm crit} \sqrt{\varpi R} + \mbox{constant} \, .
\label{eq:torque2}$$ Substituting Eq. (\[eq:torque1\]) into Eq. (\[eq:torque2\]) and solving for the surface density we find $$\Sigma(\varpi)=\frac{\dot M}{3 \pi \alpha c_\mathrm{s}^2 }
\left( \frac{G M}{\varpi^{3}} \right)^{1/2}
\left[(R_0/\varpi)^{1/2}-1\right] \enspace .
\label{eq:disk_Sigma}$$ Eq. (\[eq:disk\_Sigma\]) describes the surface density of an unbounded disc (i.e., a disc that is allowed to grow indefinitely). The integration constant $R_0$ is a parameter that depends on the integration constant of Eq. (\[eq:torque2\]) and is related to the physical size of the disc; for time-dependent models, such as those of [@oka07], $R_0$ grows with time and thus $R_0$ is related to the age of the disc.
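For concreteness, the following Python sketch evaluates Eqs. (\[eq:scaleheight\]), (\[eq:rho\]) and (\[eq:disk\_Sigma\]); the stellar parameters, decretion rate, $\alpha$ and $R_0$ are illustrative assumptions only, chosen to give densities broadly in the range discussed below.

```python
import numpy as np

# Illustrative, assumed parameters: 11 M_sun / 5.5 R_sun star, c_s = 13 km/s,
# Mdot = 1e-10 M_sun/yr, alpha = 0.1 and an integration constant R0 = 100 R.
G, M_sun, R_sun, year = 6.674e-11, 1.989e30, 6.957e8, 3.156e7
M, R, c_s = 11 * M_sun, 5.5 * R_sun, 13e3
Mdot, alpha, R0 = 1e-10 * M_sun / year, 0.1, 100 * (5.5 * R_sun)
V_crit = np.sqrt(G * M / R)

def H(varpi):                      # Eq. (scaleheight): H = (c_s / v_phi) * varpi
    v_phi = V_crit * np.sqrt(R / varpi)
    return (c_s / v_phi) * varpi

def Sigma(varpi):                  # Eq. (disk_Sigma): unbounded-disc surface density
    return (Mdot / (3 * np.pi * alpha * c_s**2)
            * np.sqrt(G * M / varpi**3) * (np.sqrt(R0 / varpi) - 1.0))

def rho(varpi, z):                 # Eq. (rho): Gaussian vertical structure
    return (Sigma(varpi) / (np.sqrt(2 * np.pi) * H(varpi))
            * np.exp(-0.5 * (z / H(varpi))**2))

w = np.array([2.0, 5.0, 20.0]) * R
print("midplane rho [g cm^-3]:", rho(w, 0.0) * 1e-3)   # SI kg/m^3 -> cgs g/cm^3
```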
Properties of the Solution
--------------------------
We have now completed the hydrodynamic description of an isothermal, unbounded viscous decretion disc. To fully determine the problem one must specify the decretion rate, $\dot{M}$, the value of $\alpha$, and the disc age (or size). If the disc is sufficiently old, $R_0 \gg R$, Eq. (\[eq:disk\_Sigma\]) reduces to a simple power law in radius, $\Sigma(\varpi)\propto \varpi^{-2}$. From Eqs. (\[eq:rho\]) and (\[eq:scaleheight\]) we obtain that the isothermal disc density profile is quite steep, $\rho \propto \varpi^{-3.5}$.
Another important property of isothermal viscous discs can be readily derived from Eq. (\[eq:scaleheight\]). Since $v_\phi$ is much larger than the sound speed (the former is of the order of several hundreds of km/s whereas the latter is a few tens of km/s), the disc scaleheight is small compared to the stellar radius, i.e., the disc is geometrically thin. Finally, from Eq. (\[eq:radvel\]) we find that, for large discs, the radial velocity is a linear function of the radial distance, $v_\varpi\propto\varpi$.
![ *Top*: Vertically averaged disc temperature. *Bottom*: Local power-law index of the density profile. *Left:* Results for a low-density model ($\rho_0 = 4.2\times 10^{-12} \rm\; g\;cm^{-3}$). *Right:* Results for a high-density model ($\rho_0 = 3.0\times 10^{-11} \rm\; g\;cm^{-3}$). Adapted from [@car08]. []{data-label="fig:temperature"}](review_carciofi_fig3a.eps "fig:"){width="2.6in" height="1.6in"} ![ *Top*: Vertically averaged disc temperature. *Bottom*: Local power-law index of the density profile. *Left:* Results for a low-density model ($\rho_0 = 4.2\times 10^{-12} \rm\; g\;cm^{-3}$). *Right:* Results for a high-density model ($\rho_0 = 3.0\times 10^{-11} \rm\; g\;cm^{-3}$). Adapted from [@car08]. []{data-label="fig:temperature"}](review_carciofi_fig3b.eps "fig:"){width="2.6in" height="1.6in"} ![ *Top*: Vertically averaged disc temperature. *Bottom*: Local power-law index of the density profile. *Left:* Results for a low-density model ($\rho_0 = 4.2\times 10^{-12} \rm\; g\;cm^{-3}$). *Right:* Results for a high-density model ($\rho_0 = 3.0\times 10^{-11} \rm\; g\;cm^{-3}$). Adapted from [@car08]. []{data-label="fig:temperature"}](review_carciofi_fig3c.eps "fig:"){width="2.6in" height="1.6in"} ![ *Top*: Vertically averaged disc temperature. *Bottom*: Local power-law index of the density profile. *Left:* Results for a low-density model ($\rho_0 = 4.2\times 10^{-12} \rm\; g\;cm^{-3}$). *Right:* Results for a high-density model ($\rho_0 = 3.0\times 10^{-11} \rm\; g\;cm^{-3}$). Adapted from [@car08]. []{data-label="fig:temperature"}](review_carciofi_fig3d.eps "fig:"){width="2.6in" height="1.6in"}
Temperature structure
---------------------
The temperature structure of viscous discs was investigated by [@car06a] and [@sig07]. They found that those discs are highly nonisothermal, mainly in their denser inner parts. An example of the temperature structure is shown in Fig. \[fig:temperature\]. The temperature initially drops very quickly close to the stellar photosphere; when the disc becomes vertically optically thin, the temperature rises back to the optically thin radiative equilibrium temperature, which is approximately constant, as in the winds of hot stars. The density is the most important factor that controls the temperature structure: for high-density models the amplitude of the temperature variations is much larger and the nonisothermal region extends much farther out into the disc.
Non-isothermal effects on the disc structure
--------------------------------------------
From Eqs. (\[eq:rho\]) and (\[eq:disk\_Sigma\]) we see that both the viscous diffusion and the vertical hydrostatic solutions depend on the gas temperature; one can expect, therefore, that the complex temperature structure of the disc might affect the disc density structure. [@car08] and [@sig09] calculated the vertical density structure of a disc in self-consistent vertical hydrostatic equilibrium. They found that the temperature decrease causes the disc to collapse, becoming much thinner in the inner regions. This collapse redistributes the disc material toward the equator, increasing the midplane density by a factor of up to 3 relative to an equivalent isothermal model.
[@car08] investigated, in addition, how the temperature affects the viscous diffusion. The combination of the radial temperature structure, disc scaleheight, and viscous transport produces a complex radial dependence for the disc density that departs very much from the simple $n=3.5$ power-law. As shown in Fig. \[fig:temperature\], the equivalent radial density exponent varies from $n=2$ in the inner disc to $n=5$ near the temperature minimum, eventually rising back to the isothermal value $n=3.5$ in the outer disc. We conclude that non-isothermal effects on the viscous diffusion may account for at least part of the large scatter of the index $n$ reported in the literature (Sect. \[density\]).
Two test cases: $\zeta$ Tau and $\chi$ Oph
------------------------------------------
The model described above makes several predictions about the disc structure that are in qualitative agreement with the observations (Sect. \[observations\]): viscous discs are geometrically thin, rotate in a near-Keplerian fashion ($v_\phi \propto \varpi^{-1/2}$ and $v_\varpi \ll v_\phi$) and have a steep density fall-off ($n=3.5$ in the isothermal case). This model and its predictions must now be quantitatively compared to observations.
A successful verification of the viscous decretion disc model has been obtained in the case of the Be star $\zeta$ Tau ([@car09 Carciofi et al. 2009]). This star is particularly suitable for such a study because it had shown little or no secular evolution in the past 18 years or so ([@stefl09 [Š]{}tefl et al. 2009]); therefore, a constant mass decretion rate, as assumed above when deriving the disc structure, is a good approximation for this system. Some of the results obtained for $\zeta$ Tau are shown in Fig. \[fig:zetatau\]. The radial dependence of the disc density, temperature, and opening angle all affect the slope of the visible and IR SED ([@wat86 Waters 1986], [@car06a Carciofi & Bjorkman 2006]), as well as the shape of the intrinsic polarization. Therefore, the fact that the model reproduces the detailed shapes of the SED and spectropolarimetry represents a non-trivial test of the Keplerian decretion disc model.
Using a somewhat different model, [@tyc08] successfully fitted several observations of the Be star $\chi$ Oph, including high angular resolution interferometry. In their modeling the index $n$ of the density power-law is a free parameter. Interestingly, their best fitting model was for a much flatter density law ($n=2.5$). Whether this flatter density profile can be accommodated by including some other physical process in addition to viscosity (e.g., tidal effects by a binary) or is an indication that the viscous decretion disc model is not a good model for this system remains to be verified by further analysis.
![*Left:* Fit to the SED of $\zeta$ Tau. The dark grey lines are the observations and [the black lines the model results]{}. The light grey line corresponds to the unattenuated stellar SED and gives a measure of how the disc affects the emergent flux. Adapted from [@car09]. []{data-label="fig:zetatau"}](review_carciofi_fig4.eps "fig:"){width="2.5in"} ![*Left:* Fit to the SED of $\zeta$ Tau. The dark grey lines are the observations and [the black lines the model results]{}. The light grey line corresponds to the unattenuated stellar SED and gives a measure of how the disc affects the emergent flux. Adapted from [@car09]. []{data-label="fig:zetatau"}](review_carciofi_fig4.eps "fig:"){width="2.5in"}
Viscous Discs Fed by Non-constant Decretion Rates \[nonconstant\]
=================================================================
All the discussion so far has focused on the problem of a disc fed by a constant decretion rate that grows to a given size in a viscous timescale. Clearly, those models can only be applied to objects, such as $\zeta$ Tau, that went through a sufficiently long and stable decretion phase. A different approach is needed if one wants to investigate dynamically active systems such as the one shown in Fig. \[dewit\].
[@oka07], [@hau10] and [@jon08] described the solution of the viscous diffusion problem for systems with non-constant mass decretion rates. Fig. \[28cma\] shows a series of models of the lightcurve of the Be star 28 CMa (Carciofi et al., in prep.), which use the [singlebe]{} code by Atsuo Okazaki to solve the 1D viscous diffusion problem and the [hdust]{} code to calculate the emergent spectrum ([@hau10 Haubois et al. 2010]). The simulation begins after a few years of disc evolution to account for the previous disc build-up. At $t=2003.6$ mass decretion is turned off and the system evolves passively until $t=2008.8$, when the recent outburst started ([@stefl10 Štefl et al. 2010]). The model with $\alpha=0.9$ is the one that best reproduces the lightcurve at all phases. This is, to our knowledge, the first time the viscosity parameter has been determined for a Be star disc.
![The visual light curve of 28 CMa fitted with a dynamical viscous decretion disc model with different values of the viscosity parameter, $\alpha$ ([@stefl10 Štefl et al. 2010]) []{data-label="28cma"}](review_carciofi_fig5.eps){width="5.2in"}
Conclusions
===========
This paper discussed the basic hydrodynamics that determine the structure of viscous decretion disc models. Those are, to date, the most satisfactory models for Be discs because they can account quantitatively for most of their observational properties, namely the fact that they are geometrically thin, have Keplerian rotation and steep radial density profiles. It has been recently shown that the viscous decretion disc model can also explain the temporal evolution of dynamically active systems such as 28 CMa. Studies of such systems allow for the determination of the viscosity parameter $\alpha$ and the mass decretion rate, $\dot{M}$, two quantities that are very difficult to determine otherwise.
Bjorkman, J. E., & Cassinelli, J. P. 1993, *ApJ*, 409, 429
Bjorkman, J. E. 1997, Circumstellar Disks, in Stellar Atmospheres: Theory and Observations, ed. J. P. de Greve, R. Blomme, & H. Hensberge, (New York: Springer)
Bjorkman, J. E. 2000, IAU Colloq. 175: The Be Phenomenon in Early-Type Stars, 214, 435
Bjorkman, J. E., & Carciofi, A. C. 2005, in ASP Conf. Ser. 337, The Nature and Evolution of Disks Around Hot Stars, ed. R. Ignace & K. G. Gayley (San Francisco: ASP), 75
Carciofi, A. C., & Bjorkman, J. E. 2006, *ApJ*, 639, 1081
Carciofi, A. C., & Bjorkman, J. E. 2008, *ApJ*, 684, 1374
Carciofi, A. C., Okazaki, A. T., Le Bouquin, J.-B., [Š]{}tefl, S., Rivinius, T., Baade, D., Bjorkman, J. E., & Hummel, C. A. 2009, *A&A*, 504, 915
Clark, J. S., Tarasov, A. E., & Panko, E. A. 2003, *A&A*, 403, 239
Draper, Z. H. et al. 2010, these proceedings
Jones, C. E., Sigut, T. A. A., & Porter, J. M. 2008, *MNRAS*, 386, 1922
Hanuschik, R. W. 1996, *A&A*, 308, 170
Harmanec, P. 1983, Hvar Observatory Bulletin, 7, 55
Haubois, X., Carciofi, A. C., & Okazaki, A. T. 2010, these proceedings
Kato, S. 1983, *PASJ*, 35, 249
Lee, U., Saio, H., Osaki, Y. 1991, *MNRAS*, 250, 432
Meilland, A., et al. 2007, *A&A*, 464, 59
Meilland, A., et al. 2007, *A&A*, 464, 73
Okazaki, A. T. 1991, *PASJ*, 43, 75
Okazaki, A. T. 1997, *A&A*, 318, 548
Okazaki, A. T. 2001, *PASJ*, 53, 119
Okazaki, A. T., Bate, M. R., Ogilvie, G. I., & Pringle, J. E. 2002, *MNRAS*, 337, 967
Okazaki, A. T. 2007, Active OB-Stars: Laboratories for Stellar and Circumstellar Physics, 361, 230
Oudmaijer, R. et al. 2010, these proceedings
Porter, J. M. 1999, *A&A*, 348, 512
Porter, J. M., & Rivinius, T. 2003, *PASP*, 115, 1153
Pringle, J. E. 1981, *ARAA*, 19, 137
Quirrenbach, A., et al. 1997, *ApJ*, 479, 477
Rivinius, T., Baade, D., Stefl, S., Stahl, O., Wolf, B., & Kaufer, A. 1998, *A&A*, 333, 125
Rivinius, T. 2007, Active OB-Stars: Laboratories for Stellar and Circumstellar Physics, 361, 219
Shakura N. I., Sunyaev R. A., 1973, *A&A*, 24, 337
Stee, Ph. 2010, these proceedings
Sigut, T. A. A., & Jones, C. E. 2007, *ApJ*, 668, 481
Sigut, T. A. A., McGill, M. A., & Jones, C. E. 2009, *ApJ*, 699, 1973
Štefl, S., et al. 2009, *A&A*, 504, 929
Štefl, S., et al. 2010, these proceedings
Townsend, R. H. D., Owocki, S. P., & Groote, D. 2005, *ApJ*, 630, L81
Tycner, C., Jones, C. E., Sigut, T. A. A., Schmitt, H. R., Benson, J. A., Hutter, D. J., & Zavala, R. T. 2008, *ApJ*, 689, 461
Waters, L. B. F. M. 1986, *A&A*, 162, 121
Wisniewski, J. P., Draper, Z. H., Bjorkman, K. S., Meade, M. R., Bjorkman, J. E., & Kowalski, A. F. 2010, *ApJ*, 709, 1306
de Wit, W. J., Lamers, H. J. G. L. M., Marquette, J. B., & Beaulieu, J. P. 2006, *A&A*, 456, 1027
Wood, K., Bjorkman, K. S., & Bjorkman, J. E. 1997, *ApJ*, 477, 926
---
abstract: 'Bundle adjustment is an important global optimization step in many structure from motion pipelines. Performance is dependent on the speed of the linear solver used to compute steps towards the optimum. For large problems, the current state of the art scales superlinearly with the number of cameras in the problem. We investigate the conditioning of global bundle adjustment problems as the number of images increases in different regimes and fundamental consequences in terms of superlinear scaling of the current state of the art methods. We present an unsmoothed aggregation multigrid preconditioner that accurately represents the global modes that underlie poor scaling of existing methods and demonstrate solves of up to 13 times faster than the state of the art on large, challenging problem sets.'
author:
- Tristan Konolige
- Jed Brown
bibliography:
- 'refs.bib'
title: Multigrid for Bundle Adjustment
---
Introduction
============
Bundle adjustment is a nonlinear optimization step often used in structure from motion (SfM) and SLAM applications to remove noise from observations. Because such noise can have long range effects, it is necessary that this optimization step be global. For large SfM problems, say reconstructing a whole city, the problem size can become very large. As the problem size grows, existing techniques start to fail. We introduce a new method that scales better than existing preconditioners on large problem sizes.
There are a variety of techniques to solve the bundle adjustment problem, but the most commonly used one is Levenberg-Marquardt [@agarwal2010bundle; @kushal2012visibility; @triggs1999bundle]. Levenberg-Marquardt is an iterative nonlinear least-squares optimizer that solves a series of linear systems to determine the steps it takes towards the optimum. Performance of Levenberg-Marquardt depends heavily on the performance of the linear system solver. The linear system has a special structure created by the interaction of cameras and points in the scene. The majority of entries in this system are zero, so sparse matrices are used. Using the Schur complement, the linear system can be turned into a much smaller reduced system [@triggs1999bundle]. A common choice for solving the reduced system is either Cholesky for small systems, or iterative solvers for large systems [@ceres-solver]. The family of iterative solvers used, Krylov methods, has performance inversely related to the condition number of the linear system [@toselli2005domain]. This leads many to couple the Krylov method with a preconditioner: a linear operator that, when applied to the linear system, reduces the condition number. There is a broad range of linear systems in the literature and hence a large number of preconditioners. Preconditioners trade off between robustness and performance; preconditioners tailored to a specific problem are usually the fastest, but require an expert to create and tune.
Preconditioners already used in the literature are point block Jacobi [@agarwal2010bundle], successive over relaxation [@agarwal2010bundle], visibility-based block Jacobi [@kushal2012visibility], and visibility-based tridiagonal [@kushal2012visibility]. Of these, the visibility based preconditioners are the fastest on large problems. We propose a new multigrid preconditioner that outperforms point block Jacobi and visibility based preconditioners on large, difficult problems. Multigrid methods are linear preconditioners that exploit multilevel structure to scale linearly with problem size. Multigrid methods originated from the need to solve large systems of partial differential equations, and there has been some success applying multigrid to non-PDE areas like graph Laplacians [@lamg]. Our multigrid method exploits the geometric structure present in bundle adjustment to create a fast preconditioner.
Background
==========
Bundle Adjustment
-----------------
Bundle adjustment is a nonlinear optimization problem over a vector of camera and point parameters $x$ with the goal of reducing noise from inaccurate triangulations in SfM. We use a nonlinear least-squares formulation where we minimize the sum of squared reprojection errors [@triggs1999bundle], $f_i$, over all camera-point observations, $$x^* = \operatorname*{arg\,min}_{x} \sum_{i\in\text{observations}} \left\lVert f_i(x) \right\rVert^2.$$ Here $\sum || f_i ||^2$ is the *objective function*. A usual choice of solver for the nonlinear least-squares problem is the Levenberg-Marquardt algorithm [@levenberg1944method]. This is a quasi-Newton method that repeatedly solves $$\left(J_x^T J_x + D\right) \delta_i = - J_x^T f(x),\quad J_x = \frac{\partial f}{\partial x},$$ where $D$ is a diagonal damping matrix, to compute steps $\delta_i$ towards a minimum. Levenberg-Marquardt can be considered as a combination of Gauss-Newton and gradient descent.
Solving the linear system, $J^T J + D$, is the slowest part of bundle adjustment. Splitting $x$ into $[x_c x_p]^T$, where $x_c$ are the camera parameters and $x_p$ are the point parameters, yields a block system $$\begin{aligned}
F &= J_{x_c},\quad E = J_{x_p},\quad J_x = \begin{bmatrix} F & E \end{bmatrix},\\
J_x^TJ_x+D &= \begin{bmatrix}
A = F^T F + D_{x_c} & F^T E \\
E^T F & C = E^T E + D_{x_p} \\
\end{bmatrix}.\end{aligned}$$ $C$ is a block diagonal matrix with blocks of size $3 \times 3$ corresponding to point parameters. $A$ is a block diagonal matrix with blocks of size $9 \times 9$ corresponding to camera parameters. $F^TE$ is a block matrix with blocks of size $9 \times 3$ corresponding to the interaction between cameras and points. A usual trick is to use the Schur complement to eliminate the point parameter block $C$: $$S = A - F^TE C^{-1} E^T F.$$ $S$ is a block matrix with blocks of size $9 \times 9$. $C$ is chosen over $A$ because the number of points is often orders of magnitude larger than the number of cameras. Thus, applying the Schur complement greatly reduces the size of the linear system being solved.
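The following Python sketch shows how the reduced system can be formed explicitly from the sparse blocks (function and variable names are hypothetical; production solvers such as Ceres exploit the block and visibility structure more aggressively than this):

```python
import numpy as np
import scipy.sparse as sp

def schur_complement(A, W, C, block=3):
    """Explicitly form S = A - W C^{-1} W^T for the reduced camera system.

    A: 9c x 9c camera block, W = F^T E: 9c x 3p coupling block,
    C: 3p x 3p block-diagonal point block with 3x3 blocks (all scipy sparse)."""
    Cd = C.tocsc()
    inv_blocks = [np.linalg.inv(Cd[i:i + block, i:i + block].toarray())
                  for i in range(0, C.shape[0], block)]
    Cinv = sp.block_diag(inv_blocks, format="csr")
    return (A - W @ Cinv @ W.T).tocsr()
```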
The Schur complement system has structure determined by the covisibility of cameras: if $c_i$ and $c_j$ both observe the same point, then the block $S_{ij}$ is nonzero. In almost all scenarios, cameras do not have points in common with every other camera, so $S$ is sparse. $S$ tends to be easier for the linear solver when all cameras view the same object, for example, tourist photos of the Eiffel Tower. Cameras are close together and a single camera out of place has little effect on the other cameras (because of the high amount of overlap between views). On the other hand, linear solvers are slower when cameras view a large area, like in *street view*, where images are taken from a car as it drives around a city. In this situation, adjusting a single camera’s position has an effect on all cameras near it, causing long dependency chains between cameras. Long dependency chains cause issues for iterative linear solvers as information can only be propagated one step in the chain per iteration of the solver.
This linear system is normally not solved to a tight tolerance. Usually, a fairly inexact solve of the linear problem can still lead to good convergence in the nonlinear problem [@agarwal2010bundle]. As the nonlinear problem gets closer to a minimum, the accuracy of the linear solve should increase. The method for controlling the linear solve accuracy is called a *forcing sequence*. Ceres Solver [@ceres-solver], the current state of the art nonlinear least-squares solver, uses a criterion proposed by Nash and Sofer [@nash1990assessing] to determine when to stop solving the linear problem: $$\begin{aligned}
Q_i = \frac{1}{2} x_i^T A x_i - x_i^T b, \\
\text{stop if } i\, \frac{Q_i - Q_{i-1}}{Q_i} \leq \tau,\end{aligned}$$ where $i$ is the current conjugate gradient iteration number, $x_i$ is the $i$-th iterate, and $\tau$ is the tolerance from the forcing sequence. It is important to note that the occurrence of the iteration number in the criterion means that more powerful preconditioners end up solving the linear problem to a tighter tolerance.
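A minimal sketch of preconditioned conjugate gradients with this stopping rule is given below; it illustrates the criterion rather than Ceres Solver's implementation, and recomputes $Q_i$ with an extra matrix-vector product, whereas in practice $Q_i$ would be updated incrementally.

```python
import numpy as np

def pcg_q_stop(A, b, M_inv, tau, max_iter=500):
    """PCG that stops when i*(Q_i - Q_{i-1})/Q_i <= tau,
    with Q_i = 0.5*x_i^T A x_i - x_i^T b (M_inv applies the preconditioner)."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    Q_prev = 0.0
    for i in range(1, max_iter + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        Q = 0.5 * x @ (A @ x) - x @ b      # quadratic model value at x_i
        if i > 1 and Q != 0.0 and i * (Q - Q_prev) / Q <= tau:
            break
        Q_prev = Q
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, i
```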
When using a simple projective model, the bundle adjustment problem is ill-conditioned. Causes for ill-conditioning include difference in scale between parameters and a highly nonlinear distortion. Improving the conditioning of the problem is possible, for example, in [@qu2018efficient], Qu adaptively reweights the residual functions and uses a local parameterization of the camera pose to improve conditioning. Changes like this are orthogonal to improving linear solver performance, so we use a simple projective model for this paper.
Existing Solvers
----------------
There are a variety of ways to solve $S$. Konolige uses a sparse direct Cholesky solver [@konolige2010sparse]. Sparse direct solvers are often a good choice for small problems because of their small constant factors. For large problems with 2D/planar connectivity, sparse direct methods require $O(n^{1.5})$ time and $O(n\log n)$ space when small vertex separators exist (a set of vertices whose removal splits the graph in half) [@lipton1979generalized]. In street view problems, camera view overlap in street intersections creates large vertex separators, making sparse direct solvers a poor choice for large problems. To improve scaling, Agarwal et al. propose using conjugate gradients with Jacobi preconditioning [@agarwal2010bundle]. Kushal and Agarwal later extend this work with block-Jacobi and block-tridiagonal preconditioners formed using the *visibility*, or number of observed points in common between cameras [@kushal2012visibility]. Jian et al. propose using a preconditioner based on a subgraph of the unreduced problem similar to a low-stretch spanning tree [@jian2012generalized].
Algebraic Multigrid
-------------------
Algebraic Multigrid (AMG) is a technique for constructing scalable preconditioners for symmetric positive definite matrices. AMG constructs a series of increasingly smaller matrices $\{A_0: \mathbb{R}^{n_0\times n_0}, A_1: \mathbb{R}^{n_1\times n_1}, ...\}$ that are approximations to the original matrix $A_0$. $A_0$ is solved by repeatedly solving the coarse levels, $A_{l+1}$, and using their solutions to correct the solution on the fine level, $A_l$. The restriction ($R_l:\mathbb{R}^{n_{l+1}\times n_l}$) and prolongation ($P_l:\mathbb{R}^{n_{l}\times n_{l+1}}$) matrices map from $A_{l}$ to $A_{l+1}$ and back, respectively. The coarse solve accurately corrects error in long-range interactions on the fine level. This coarse level correction is paired with a *smoother* that provides local correction. The combination of coarse grid correction and fine grid smoothing, when applied to the entire *hierarchy* of levels ($\{A_0, A_1, ...\}$), creates a preconditioner that bounds the iteration count of the iterative solver independently of problem size. Algorithm \[alg:vcycle\] shows a full multigrid preconditioner application (one <span style="font-variant:small-caps;">mgcycle</span> is one preconditioner application).
<span style="font-variant:small-caps;">mgcycle</span>($l$, $x$, $b$): if $l$ is the coarsest level, $x \gets$ direct solve of $A_{l} x = b$; otherwise: $x \gets$ smooth($x$, $b$) (pre-smooth); $r \gets b - A_l x$ (compute residual); $r_c \gets R_l r$ (restrict); $x_c \gets 0$; $x_c \gets$ <span style="font-variant:small-caps;">mgcycle</span>($l+1$, $x_c$, $r_c$) (coarse-grid solve); $x \gets x + P_l x_c$ (prolongate and correct); $x \gets$ smooth($x$, $b$) (post-smooth).
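A compact Python rendering of the cycle is shown below; the `levels` data structure and the `smooth` callable are hypothetical names used only for this sketch.

```python
import numpy as np

def mg_cycle(levels, l, x, b):
    """One multigrid V-cycle, mirroring the algorithm above.

    levels[l] is a dict with the level matrix 'A', restriction 'R',
    prolongation 'P' and a 'smooth(A, x, b)' callable (hypothetical structure)."""
    A = levels[l]["A"]
    if l == len(levels) - 1:                      # coarsest level: direct solve
        return np.linalg.solve(A.toarray() if hasattr(A, "toarray") else A, b)
    x = levels[l]["smooth"](A, x, b)              # pre-smooth
    r = b - A @ x                                 # residual
    rc = levels[l]["R"] @ r                       # restrict
    xc = mg_cycle(levels, l + 1, np.zeros_like(rc), rc)  # coarse-grid correction
    x = x + levels[l]["P"] @ xc                   # prolongate and correct
    x = levels[l]["smooth"](A, x, b)              # post-smooth
    return x
```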
Multigrid performance depends on the choice of smoother and method of constructing the coarse grid. Usual choices of smoother are point block Jacobi, point block Gauss-Seidel, and Chebyshev. Typically one or two iterations of the smoother are applied for pre- and post-smoothing. Smoothers must reduce local error and be stable on long range error. Aggregation based methods construct $R$ and $P$ by partitioning the degrees of freedom of the fine level into non-overlapping *aggregates* [@vanek1996algebraic]. Each aggregate corresponds to a single degree of freedom on the coarse level. $R$ computes each aggregate’s coarse level dof as a weighted average of all the aggregate’s dofs on the fine level. $P$ applies the same process in reverse, so $P = R^T$. Given a level $A_{l}$, the coarse level matrix is constructed as $A_{l+1}=R_l A_l P_l$. Choosing aggregates is problem dependent, and is an important contribution of our paper.
The Algorithm
=============
Nullspace {#sec:nullspace}
---------
Fast convergence of multigrid requires satisfaction of the *strong approximation property*, $$\min_u||e-Pu||^2_A \leq \frac{\omega}{||A||} \langle Ae,Ae \rangle,$$ for some fine grid error $e$ and constant $\omega$ determining convergence rate [@tamstorf2015smoothed; @rugestuben; @maclachlan2014theorectical]. To satisfy this condition, $e$s for which $||Ae||$ is small (*near-nullspace* vectors) must be accurately captured by $P$. For any bundle adjustment formulation with monocular cameras and no fixed camera locations, $J^TJ$ has a nullspace, $N$, with dimension 7 corresponding to the free modes of the nonlinear problem [@triggs1999bundle]. These are 3 rotational modes, 3 translational modes, and 1 scaling mode. When the damping matrix $D$ is small, this nullspace becomes near-nullspace vectors, $K$, of $J^TJ+D$. For the Schur complement system, the near-nullspace of $S$ is $K_{x_c}$. We augment $K$ with 9 columns that are constant on each of the 9 camera parameters.
Aggregation
-----------
(Greedy aggregation, sketched: for each unaggregated camera $i$, consider its neighbours in decreasing strength of connection; if the strongest available neighbour $j$ is unaggregated, form a new aggregate with $i$ and $j$; otherwise, if $j$’s aggregate $k$ is below the maximum size, add $i$ to aggregate $k$.)
The multigrid aggregation algorithm determines both how quickly the linear system solver converges and the time it takes to apply the preconditioner. Choosing aggregates that are too large results in a cheap cycle that converges slowly. On the other hand, if aggregates are too small, the solver will converge quickly but each iteration will be computationally slow. The aggregation routine needs to strike the right balance between too large and too small aggregates.
Typical aggregation routines for multigrid form fixed-diameter aggregates by clustering together a given “root” node with all its neighbors. This technique works well on PDE problems where the connectivity is predictable and each degree of freedom is connected to a limited number of other degrees of freedom. Bundle adjustment does not necessarily have these characteristics. Street view-like problems might have low degree for road sections that do not overlap, but when roads intersect, some dofs can be connected to many others. Choosing one of these well-connected dofs as the root of an aggregate results in overly aggressive coarsening.
Aggregation routines for non-mesh problems exist, for example, for graph Laplacians [@lamg; @ligmg]. These routines have to contend with dofs that are connected to a majority of other dofs; something we do not expect to see in street view-like datasets. Instead, we use a greedy algorithm that attempts to form aggregates by aggregating unaggregated vertices with their “closest” connected neighbor and constrains the maximum size of aggregates to prevent too aggressive coarsening.
Closeness of dofs is determined by the *strength of connection* matrix. Almost all multigrid aggregation algorithms use this matrix as an input to determine which vertices should be aggregated together. The strength of connection matrix can be constructed using only matrix entries (for example, the affinity [@lamg] and algebraic distance [@brandt2015algebraic] metrics) or can use other, geometric information. For bundle adjustment, this other information can be camera and point positions or the visibility information between them. The strength of connection metric we choose to use is the visibility metric used by Kushal and Agarwal in [@kushal2012visibility]. We tried other metrics, like including the percentage of image overlap between two cameras, but the visibility metric remained superior. The visibility strength of connection matrix, $G$, is defined as: $$\begin{aligned}
G_{i,j} &= \begin{cases}
0 & i = j, \\
\frac{v_i^T v_j}{||v_i|| ||v_j||} & \text{otherwise},
\end{cases} \\
(v_k)_l &= \begin{cases}
1 & \text{ camera } k \text{ sees point } l, \\
0 & \text{ otherwise}.
\end{cases}\end{aligned}$$
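A sketch of how $G$ can be assembled from a sparse camera-by-point visibility matrix follows (SciPy-based, with hypothetical names; our actual implementation is in Julia):

```python
import numpy as np
import scipy.sparse as sp

def visibility_strength(V):
    """Visibility strength-of-connection matrix G from a sparse 0/1
    camera-by-point visibility matrix V (V[k, l] = 1 if camera k sees point l)."""
    V = V.astype(float).tocsr()
    norms = np.sqrt(np.asarray(V.sum(axis=1)).ravel())  # ||v_k|| for 0/1 entries
    VVt = (V @ V.T).tocoo()                             # entries are v_i^T v_j
    data = VVt.data / (norms[VVt.row] * norms[VVt.col])
    G = sp.coo_matrix((data, (VVt.row, VVt.col)), shape=VVt.shape).tocsr()
    G.setdiag(0.0)                                      # G_ii = 0 by definition
    G.eliminate_zeros()
    return G
```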
Most aggregation routines for PDEs enforce some kind of diameter constraint on aggregates. We find that for our problems this is not necessary. However, we force aggregates to contain no more than 20 dofs, to ensure our aggregates do not become very large. In practice, we see that aggregate sizes are usually in the range of 3 to 8, with the mean aggregate size usually just a little more than 3.
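The greedy aggregation described above can be sketched as follows (a simplified illustration, not our exact implementation; `G` is the strength-of-connection matrix from the previous sketch):

```python
import numpy as np

def greedy_aggregate(G, max_size=20):
    """Greedy aggregation: pair each unaggregated vertex with its most strongly
    connected neighbour, capping aggregates at max_size vertices."""
    n = G.shape[0]
    agg = -np.ones(n, dtype=int)        # aggregate id per vertex, -1 = unaggregated
    sizes = []
    G = G.tocsr()
    for i in range(n):
        if agg[i] != -1:
            continue
        row = G.getrow(i)
        order = np.argsort(-row.data)   # neighbours by decreasing strength
        placed = False
        for j in row.indices[order]:
            if agg[j] == -1:            # form a new aggregate with i and j
                agg[i] = agg[j] = len(sizes)
                sizes.append(2)
                placed = True
                break
            if sizes[agg[j]] < max_size:  # join j's aggregate if it has room
                agg[i] = agg[j]
                sizes[agg[j]] += 1
                placed = True
                break
        if not placed:                  # isolated vertex: singleton aggregate
            agg[i] = len(sizes)
            sizes.append(1)
    return agg
```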
Prolongation
------------
We use a standard multigrid prolongation construction technique [@adams2002evaluation; @tamstorf2015smoothed]. For each aggregate, the nullspace is restricted to the aggregate, a QR decomposition is applied, and $\mathcal{Q}$ becomes a block of $P$ while $\mathcal{R}$ becomes a block of the coarse nullspace: $$\begin{aligned}
\mathcal{Q}_{\text{agg}}\mathcal{R}_{\text{agg}} &= K_{\text{agg}} \text{ forall agg} \in \text{ aggregates}, \\
P &= \Pi\begin{bmatrix}
\mathcal{Q}_1 & & \\
& \ddots & \\
& & \mathcal{Q}_n
\end{bmatrix}, \\
K_{\text{coarse}} &= \Pi\begin{bmatrix}
\mathcal{R}_1 \\
\vdots \\
\mathcal{R}_n
\end{bmatrix}.\end{aligned}$$ Here $\Pi$ is a permutation matrix from contiguous aggregates to the original ordering. Using the QR decomposition frees us from having to compute the local nullspace and represent it on the coarse level (this would require computing the center of mass of each aggregate). Our near-nullspace has dimension 16 (7 from $J^TJ$’s nullspace, 9 from per-dof constant vectors), so each of our coarse level matrices has $16 \times 16$ blocks.
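A sketch of the QR-based construction of the tentative prolongation is given below; it assumes every aggregate contains at least two cameras so that the local near-nullspace block is tall enough for a thin QR (singleton aggregates would need special handling), and the names are hypothetical.

```python
import numpy as np
import scipy.sparse as sp

def build_prolongation(agg, K, bs=9):
    """Tentative (unsmoothed) prolongation from aggregates and near-nullspace K.

    agg[c] is the aggregate id of camera c; K has shape (9*ncams, 16). Each
    aggregate contributes a Q block to P and an R block to the coarse nullspace."""
    ncams, nns = K.shape[0] // bs, K.shape[1]
    nagg = agg.max() + 1
    rows, cols, vals = [], [], []
    K_coarse = np.zeros((nagg * nns, nns))
    for a in range(nagg):
        cams = np.where(agg == a)[0]
        idx = np.concatenate([np.arange(c * bs, (c + 1) * bs) for c in cams])
        Q, R = np.linalg.qr(K[idx, :])            # thin QR of the local near-nullspace
        K_coarse[a * nns:(a + 1) * nns, :] = R
        for li, ri in enumerate(idx):
            for lj in range(nns):
                rows.append(ri); cols.append(a * nns + lj); vals.append(Q[li, lj])
    P = sp.coo_matrix((vals, (rows, cols)), shape=(ncams * bs, nagg * nns)).tocsr()
    return P, K_coarse
```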
Smoother
--------
We use a Chebyshev smoother [@adams2003parallel] with a point block Jacobi matrix. We find the Chebyshev smoother to be more effective than block-Jacobi and Gauss-Seidel smoothing. The Chebyshev smoother does come with a disadvantage: it requires an estimate of the largest eigenvalue, $\lambda_{\text{max}}$, of $D^{-1}A$. Like Tamstorf et al., we find that applying generalized Lanczos on $Ax=\lambda D x$ is the most effective way to find the largest eigenvalue [@tamstorf2015smoothed]. This eigenvalue estimate is expensive, so we limit it to 5 applications of the operator; even so, the superior performance of the Chebyshev smoother outweighs its increased setup cost. We use $1.1\lambda_{\text{max}}$ for the high end of the Chebyshev bound and $0.3\lambda_{\text{max}}$ for the low end. We also tried using the Gershgorin estimate of the largest eigenvalue, but it proved to be very inaccurate (off by multiple orders of magnitude). We apply two iterations of pre-smoothing and two of post-smoothing.
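For reference, a sketch of a standard Chebyshev smoother recurrence with these bounds is shown below (a textbook formulation, not our implementation; `Dinv` is assumed to apply the inverse of the point block Jacobi matrix):

```python
import numpy as np

def chebyshev_smooth(A, Dinv, x, b, lam_max, iters=2, upper=1.1, lower=0.3):
    """Chebyshev smoothing targeting the interval [lower*lam_max, upper*lam_max]
    of the spectrum of D^{-1} A."""
    lmin, lmax = lower * lam_max, upper * lam_max
    theta = 0.5 * (lmax + lmin)
    delta = 0.5 * (lmax - lmin)
    sigma = theta / delta
    rho_old = 1.0 / sigma
    r = Dinv(b - A @ x)
    d = r / theta
    for _ in range(iters):
        x = x + d
        r = r - Dinv(A @ d)
        rho_new = 1.0 / (2.0 * sigma - rho_old)
        d = rho_new * rho_old * d + (2.0 * rho_new / delta) * r
        rho_old = rho_new
    return x
```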
To Smooth or Not To Smooth
--------------------------
Aggregation-based multigrid uses prolongation smoothing in order to improve convergence [@vanek1996algebraic]. Smoothing the prolongation operator is sufficient to satisfy the strong approximation property and achieve constant iteration count regardless of problem size. Usually, smoothing the prolongation operator improves convergence rate at the cost of increased complexity of the coarse grids. In PDEs and other problems with very regular connectivity, this trade-off is worthwhile. However, in other problems, like irregular graph Laplacians, irregular problem structure causes massive fill-in: coarse grids become dense [@lamg]. The street view bundle adjustment problems we are working with appear to be similar in structure to PDE based problems: the number of nonzeros per row is bounded and the diameter of the problem is relatively large. However, when we apply prolongation smoothing to our multigrid preconditioner, we see large fill-in in the coarse grids, similar to what happens in irregular graph Laplacians. Although the nonzero structure of street view bundle adjustment appears similar to PDEs, it still has places where dofs are coupled with many other dofs, and it is in these places that large fill-in occurs. These places could be landmarks that are visible from far away or intersections where there is a large amount of camera overlap. The large fill-in on the coarse grid makes the setup phase too expensive to justify the improved performance in the solve phase. Choosing not to smooth the prolongation operator means our preconditioner does not scale linearly, but it does scale better than any of the current state of the art preconditioners.
Implicit Operator
-----------------
On many bundle adjustment problems, it is often faster to apply the Schur complement in an implicit manner, rather than constructing $S$ explicitly [@agarwal2010bundle]. That is, we can compute $Sx$ for a vector $x$ as $Ax - F^T(E(C^{-1}(E^T(Fx))))$. As conjugate gradients requires only matrix-vector products, we can use the implicit product with it for improved performance. An issue arises when we use a preconditioner with CG: the preconditioner often needs the explicit representation of $S$. For block-Jacobi preconditioning, Agarwal et al. [@agarwal2010bundle] construct only the relevant blocks of $S$. The same technique is used by Kushal and Agarwal in their visibility-based preconditioner [@kushal2012visibility]. For algebraic multigrid, the explicit matrix representation is needed to form the Galerkin projection $P^T S P$. We could use the implicit representation with the Galerkin projection, $P^T (A (P x)) - P^T (F^T (E (C^{-1}(E^T(F(Px))))))$, but then we are paying the cost of the full implicit matrix at each level in our hierarchy. We instead compute the cost of using the implicit vs. explicit product on each level of our hierarchy and choose the cheapest one. In our tests we create both the implicit and explicit representations for each level, but only use the most efficient one (computing the number of nonzeros in a sparse matrix product is difficult without forming the product itself). It would be possible to create only the needed representation on each level, but we have not explored the costs. This may actually have a large performance impact as the Galerkin projection requires an expensive matrix triple product (currently the most expensive part of setup).
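The implicit product is easily expressed as a matrix-free operator, sketched here with SciPy (names are hypothetical; `Cinv` is the block-diagonal inverse of the point block). Such an operator can be passed directly to a Krylov solver, e.g. `scipy.sparse.linalg.cg`.

```python
import scipy.sparse.linalg as spla

def implicit_schur(A, F, E, Cinv):
    """Matrix-free Schur complement S = A - F^T E C^{-1} E^T F as a LinearOperator.

    A:    9c x 9c sparse camera block,
    F, E: observation Jacobian blocks (residuals x 9c and residuals x 3p),
    Cinv: 3p x 3p sparse block-diagonal inverse of the point block."""
    def matvec(x):
        t = F @ x          # camera update projected into residual space
        t = E.T @ t        # gathered onto the points
        t = Cinv @ t       # back-substitution through the point block
        t = E @ t
        return A @ x - F.T @ t
    n = A.shape[0]
    return spla.LinearOperator((n, n), matvec=matvec, dtype=A.dtype)
```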
Generating Synthetic Datasets
=============================
To compare our preconditioner to existing ones, we need datasets to test against. Only a couple of real-world datasets are publicly available, namely the Bundle Adjustment in the Large datasets[^1] [@agarwal2010bundle] and the 1DSFM datasets[^2] [@wilson2014robust]. The largest of these datasets contains 15 thousand cameras. As we are interested in evaluating scaling of our algorithms, we require much larger datasets. Also, most of these datasets are “community photo” style, i.e., there are many pictures of the same object. Furthermore, all of these datasets contain too many outliers: long-range effects are not exposed to the linear solver. We would like datasets with more varied camera counts and visibility structure similar to what we would expect from street view, so we generate a series of synthetic datasets with these properties.
We generate a ground truth (zero error) bundle adjustment dataset by taking an existing 3D model of a city and drawing potential camera paths through it. We generate random camera positions on these paths, then generate random points on the geometry and test visibility from every camera to every point. We can control the number of cameras and the number of points to generate datasets of different sizes. By choosing different 3D models or different camera paths, we can change the visibility graph between cameras and points. For our datasets, we use a 3D model of Zwolle in the Netherlands[^3]. Figure \[fig:synthetic\] shows the 3D model and one of the more complicated datasets we generated.
These datasets do not contain any error, so we add noise to each. The straightforward approach of adding Gaussian noise directly to the camera and point parameters results in a synthetic problem that is easier to solve than the real-world problems as it contains no non-local effects. Instead, we add long-range drift to the problem: as cameras and points get farther from the origin we perturb their location more and more in a consistent direction. Adding a little noise to the camera rotational parameters also helps as rotational error is highly nonlinear. We are careful to not add too much noise or too many incorrect correspondences as this leads to problems with many outliers (see section \[sec:robust\] for a solution).
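A sketch of the drift perturbation is given below; the drift magnitude and rotational noise level are illustrative parameters, not the values used to generate our datasets.

```python
import numpy as np

def add_drift(positions, drift_per_unit=1e-3, sigma_rot=1e-3, rng=None):
    """Perturb ground-truth positions with long-range drift: the displacement
    grows with distance from the origin and points in one fixed direction,
    mimicking accumulated reconstruction error (parameters are illustrative)."""
    rng = np.random.default_rng() if rng is None else rng
    direction = rng.normal(size=3)
    direction /= np.linalg.norm(direction)
    dist = np.linalg.norm(positions, axis=1, keepdims=True)
    drifted = positions + drift_per_unit * dist * direction
    rot_noise = rng.normal(scale=sigma_rot, size=(positions.shape[0], 3))
    return drifted, rot_noise   # rot_noise is added to the camera rotation parameters
```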
Results
=======
We tested our multigrid preconditioner against point block Jacobi and visibility-based block Jacobi preconditioners on a number of synthetic problems (we found the visibility-based tridiagonal preconditioner to perform similarly to visibility-based block Jacobi, so we omit it). Our test machine is an Intel Core i5-3570K running at 3.40GHz with 16GB of dual-channel 1600MHz DDR3 memory. For large problems, we use NERSC’s Cori—an Intel Xeon E5-2698 v3 2.3 GHz Haswell processor with 128 GB DDR4 2133 MHz memory. We use Ceres Solver [@ceres-solver] to perform our nonlinear optimization as well as for the conjugate gradient linear solver. Ceres Solver also provides the point block Jacobi and visibility based preconditioners. We terminate the nonlinear optimization at 100 iterations or if any of Ceres Solver’s default termination criteria are hit. Our initial trust region radius is 1e4. We use a constant forcing sequence with tolerance $\tau$. Results are post processed to ensure that all nonlinear solves for a given problem end at the same objective function value. For some preconditioners (like our multigrid), this significantly impacted the total number of nonlinear iterations taken (see section \[sec:solver-accuracy\]).
Our preconditioner is written in Julia [@julia] and uses SuiteSparse [@suitesparse] for its Galerkin products. We have not spent much time optimizing our preconditioner. We do not cache the sparse matrix structure between nonlinear iterations and reallocate almost all matrix products. Furthermore, the Julia code allocates more and is slower than it could be if written in C or C++. Jacobian matrices are copied between Julia and Ceres Solver, leading to a larger memory overhead. We do not use a sparse matrix format that exploits the block structure of our matrices or use matrix-multiples that exploit this structure. All of this is to say that our method could be optimized further for potentially greater speedup.
Still, our multigrid preconditioner performs better than point block Jacobi and visibility-based preconditioners on most large problems. Figure \[fig:reltime\] shows the relative solve time of other preconditioners vs. our multigrid preconditioner for a variety of synthetic problems. Our preconditioner is up to 13 times faster than point block Jacobi, and 18 times faster than visibility-based preconditioners. The median speedup is 1.7 times over point block Jacobi, and 2.8 times over visibility-based preconditioners. This includes cases where problems are large but not difficult, a situation where our preconditioner performs poorly. On smaller problems (with fewer than 1000 cameras), our preconditioner is significantly slower than direct methods.
On problems where the geometry is simpler, point block Jacobi normally outperforms visibility based preconditioners and our preconditioner. This is because the linear problems are relatively easier to solve and the visibility based methods cannot recoup their expensive setup cost. On more complicated problems (when the camera path crosses itself), the difficulty of the solve makes the high setup cost of the visibility based methods worthwhile. We find that these more complicated problems are also where our multigrid preconditioner has a larger speedup over the other methods. We believe that this is because the multigrid preconditioner does a good job of capturing long range effects in the problem.
Solver Accuracy {#sec:solver-accuracy}
---------------
Our multigrid preconditioner is a more accurate preconditioner than point block Jacobi and visibility based methods. In general, preconditioners like point block Jacobi converge fast in the residual norm, but converge slower in the error norm. Multigrid tends to converge similarly in the error norm and the residual norm. This behavior is reflected in the nonlinear convergence when using our preconditioner vs point block Jacobi. Each nonlinear iteration with multigrid reduces the objective function by a larger value than point block Jacobi, indicating that the multigrid solution was more accurate. See figure \[fig:cost-iteration\] for plots of this behavior on some of our datasets. Also interesting to note in this figure is the slope of convergence. In almost all the plots, the solvers first converge quickly then hit a point where they start converging more slowly. Our multigrid preconditioner also follows this characteristic, but converges more steeply in the first phase and continues converging quickly for longer. We believe this is because our preconditioner more accurately captures long range effects. For nonlinear optimization problems where a high degree of accuracy is required, this behavior makes our multigrid preconditioner even more performant than existing preconditioners.
For solves where $\tau$ is smaller (0.01), our preconditioner performs better than point block Jacobi. When $\tau$ is larger, our preconditioner is generally slower than point block Jacobi because its setup cost is not amortized. In general, our preconditioner is a good choice when tight (small) solve tolerances are used or when the linear problems are hard to solve.
Scaling
-------
For larger problem sizes, the algorithmic complexity of different solution techniques begins to dominate over constant factors. It is well known that solving a second-order elliptic system (such as elasticity) on a $\sqrt n\times \sqrt n$ grid using conjugate gradients with point block Jacobi preconditioning requires $O(n^{1/2})$ iterations, for a total cost of $O(n^{1.5})$ [@toselli2005domain]. We expect the global coupling and scaling of bundle adjustment to behave similarly, in terms of the diameter of the visibility graph, which has a 2D grid structure for street view data in cities. If the structure is not two dimensional, say for a long country road, then we would expect the bound to be $O(n^2)$. We expect that visibility-based methods also scale as $O(n^{1.5})$, but with different constant factors, as they cannot handle long-range effects. Multigrid can be bounded by $O(n)$, but this requires certain conditions on the prolongation operator that we do not satisfy (specifically, not smoothing the prolongation operator means that we do not satisfy the strong approximation property). Empirically, we find that our multigrid technique does not scale linearly with problem size, but it still scales better than the other preconditioners.
To empirically verify the scaling of visibility-based methods and our multigrid method, we construct a series of city block-like problems with an increasing number of blocks. Increasing the number of city blocks, instead of adding more cameras to the same structure, means that the test problems have increasing diameter. We add noise that looks like a sine wave to the problem to induce long-range errors. Figure \[fig:scaling-test\] shows the results of this experiment. Surprisingly, point block Jacobi scales as $O(n^2)$, which indicates that bundle adjustment is more similar to a shell problem than an elasticity problem.
Parallelism
-----------
Our solver is currently single threaded. Most of the time spent in the solver is in linear algebra, so using a parallel linear algebra framework would be an easy path to parallelism. The only part of our multigrid solver that is not linear algebra is aggregation. There are parallel aggregation techniques, but they require more work than simply swapping out a library.
Robust Error Metrics {#sec:robust}
--------------------
Often, there are outlying points in bundle adjustment problems. These are the product of incorrect correspondences, points that are too close to accurately track, or points with very poor initialization. In any case, outlying points make up a disproportionate amount of the objective function (due to the quadratic scaling of the reprojection error). Levenberg-Marquardt attempts to minimize the error, and the quickest way to do so is to fix each outlier in turn. This effectively masks the presence of long-range error, leading to good linear solver performance but poor nonlinear convergence. The usual solution is to use a robust loss function. Robust loss functions are quadratic around the origin, but become linear the farther they get from the origin. The robust loss function we use is Huber loss, $$\text{loss}(x) = \begin{cases}
x & x \leq 1, \\
2\sqrt{x}-1 & x > 1,
\end{cases}$$ where $x$ is the squared L2 norm of the residuals.
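For reference, a direct transcription of this loss (the argument is the squared residual norm, matching the piecewise form above):

```python
import numpy as np

def huber_loss(x):
    """Huber loss as written above; x is the squared L2 norm of the residuals."""
    x = np.asarray(x, dtype=float)
    return np.where(x <= 1.0, x, 2.0 * np.sqrt(x) - 1.0)
```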
Point block Jacobi is a local preconditioner: it is effective at resolving noise in a small neighborhood. Without a robust loss function, point block Jacobi is quick because it “fixes” outliers in a couple of iterations. A robust loss function exposes long-range noise, making point block Jacobi slow. However, multigrid is more effective at addressing long-range error, so it is a comparatively faster preconditioner when used with a robust loss function.
Conclusion & Future Work
========================
We present a multigrid preconditioner for conjugate gradients that performs better than existing preconditioners and solvers on bundle adjustment problems with long-range effects or problems requiring a tight solve tolerance. In tests on a set of large synthetic problems, our preconditioner is up to 13 times faster than the next best preconditioner. Our preconditioner is tailored for a specific kind of bundle adjustment problem: a 9-parameter camera model with reprojection error. Generalizing this preconditioner to different kinds of camera models would require computing a new analytical nullspace. For most models, this should just involve finding the instantaneous derivatives of the 7 free modes (3x translation, 3x rotation, 1x scaling). It would also be possible to use an eigensolver to find the near-nullspace, at the cost of increased setup time.
In future work we would like to find a way to automatically switch between the point block Jacobi and multigrid preconditioners depending on the difficulty of the linear problem. We would also like to improve our preconditioner so that it scales linearly with the problem size, either by using some sort of filtered smoothing or by using other multigrid techniques that compensate for the lack of prolongation smoothing.
Acknowledgments {#acknowledgments .unnumbered}
===============
This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research under Award Number DE-SC0016140. This research used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231.
[^1]: <http://grail.cs.washington.edu/projects/bal>
[^2]: <http://www.cs.cornell.edu/projects/1dsfm>
[^3]: <https://3d.bk.tudelft.nl/opendata/3dfier/>
|
---
abstract: |
A Bethe-Salpeter treatment of Cooper pairs (CPs) based on an ideal Fermi gas (IFG) “sea” yields the familiar negative-energy, two-particle bound-state if two-hole CPs are ignored, but is meaningless otherwise as it gives purely-imaginary energies. However, when based on the BCS ground state, legitimate two-particle “moving” CPs emerge but as positive-energy, finite-lifetime resonances for nonzero center-of-mass momentum, with a *linear* dispersion leading term. Bose-Einstein condensation of such pairs may thus occur in exactly two dimensions as it cannot with quadratic dispersion.
**PACS** 05.30.Fk; 05.30.Jp; 71.10.-w; 74.20.Fg
author:
- 'V.C. Aguilera-Navarro$^{a}$, M. Fortes$^{b}$, and M. de Llano$^{c}$'
title: '**Generalized Cooper Pairing and Bose-Einstein Condensation**'
---
Shortly after the publication of the BCS theory [@BCS] of superconductivity, charged Cooper pairs [@Coo] (CPs) were observed in magnetic flux quantization experiments with 3D conventional [@classical][@classical2] and, much later, with quasi-2D cuprate [@cuprates] superconductors, suggesting CPs as an indispensable ingredient. Although BCS theory admits the presence of Cooper “correlations,” several boson-fermion (BF) models [@BF4]-[@BF10] with real, bosonic CPs have been introduced after the pioneering work of Refs. [@BF]-[@BF2]. However, with one exception [@BF7a]-[@CMT02], all such models neglect the effect of two-hole (2h) CPs treated on an equal footing with two-particle (2p) CPs—as Green’s functions [@FW] can naturally guarantee.
The BCS condensate consists of equal numbers of 2p and 2h Cooper correlations; this is evident from the perfect symmetry about $\mu$, the electron chemical potential, of the well-known Bogoliubov [@Bog] $v^{2}(\epsilon)$ and $u^{2}(\epsilon)$ coefficients \[see just below (\[mCP\]) later on\], where $\epsilon$ is the electron energy. Some motivation for this Letter comes from the unique but unexplained role played by *hole* charge carriers in the normal state of superconductors in general [@Hirsch], as well as from the ability of the “complete (in that both 2h- and 2p-CPs are allowed in varying proportions) BF model” of Refs. [@BF7a]-[@CMT02] to “unify” both BCS and Bose-Einstein condensation (BEC) theories as special cases. Substantially higher $T_{c}$’s than BCS theory are then predicted without abandoning electron-phonon dynamics. Compelling evidence for a significant presence of this dynamics in high-$T_{c}$ cuprate superconductors from angle-resolved photoemission spectroscopy data has recently been reported [@Shen].
In this Letter the Bethe-Salpeter (BS) many-body equation (in the ladder approximation) treating both 2p and 2h pairs on an equal footing is used to show that, while the ordinary CP problem \[based on an ideal Fermi gas (IFG) ground state (the usual “Fermi sea”)\] does *not* possess stable energy solutions: i) CPs based not on the IFG-sea but on the BCS ground state survive as *positive* energy resonances; ii) their dispersion relation in leading order in the total (or center-of-mass) momentum (CMM) $\hbar\mathbf{K}\equiv\hbar(\mathbf{k}_{1}+\mathbf{k}_{2})$ is *linear* rather than the quadratic $\hbar^{2}K^{2}/4m$ of a composite boson (e.g., a deuteron) of mass $2m$ moving not in the Fermi sea but in vacuum; and iii) this latter “moving CP” solution, though often confused with it, is physically *distinct* from another more common solution sometimes called the Anderson-Bogoliubov-Higgs (ABH) [@ABH], ([@BTS] p. 44), [@Higgs]-[@Traven2] collective excitation. The ABH mode is also linear in leading order and goes over into the ordinary IFG sound mode in the zero-coupling limit. A new feature emerging from our present 2D results, compared with a prior 3D study outlined in Ref. [@Honolulu], is the imaginary energy term leading to finite-lifetime CPs. We focus here on 2D because of its interest [@Varma][@Sachdev] for quasi-2D cuprate superconductors. In general, our results will be crucial for Bose-Einstein condensation (BEC) scenarios employing BF models of superconductivity, not only *in exactly 2D* as with the Berezinskii-Kosterlitz-Thouless [@BKT][@KT] transition, but also down to ($1+\epsilon$)D, which characterizes the quasi-1D organo-metallic (Bechgaard salt) superconductors [@organometallics]-[@jerome2]. Striking experimental confirmation of how superconductivity is “extinguished” as the dimensionality $d$ is diminished towards unity has been reported by Tinkham and co-workers [@Tinkham01][@Tinkham00]. They measured resistance vs. temperature curves in superconducting nanowires consisting of carbon nanotubes sputtered with amorphous $Mo_{79}Ge_{21}$ and of widths from 22 to 10 nm, showing how $T_{c}$ vanishes for the thinnest widths. Our results also apply, albeit with a different interaction, to neutral-atom superfluidity as in liquid $^{3}$He [@He3] as well as to ultracold trapped alkali Fermi gases such as $^{6}$Li [@Li6] and $^{40}$K [@Holland], since pairing is believed to occur there also.
For bosons with excitation energy $\varepsilon_{K}=C_{s}K^{s}+o(K^{s})$ (for small CMM $K$) BEC occurs in a box of length $L$ if and only if $d>s$, since $T_{c}\equiv0$ for all $d\leq s$. The commonest example is $s=2$ as in the textbook case of ordinary bosons with $\varepsilon_{K}=\hbar^{2}K^{2}/2m$ exactly, giving the familiar result that BEC is not allowed for $d\leq2$. The general result for any $s$ is seen as follows. The total boson number is $$N=N_{0}(T)+\sum_{\mathbf{K\neq0}}[\exp\beta(\varepsilon_{\mathbf{K}}-\mu_{B})-1]^{-1}$$ with $\beta\equiv 1/k_{B}T$. Since $N_{0}(T_{c})\simeq0$ while the boson chemical potential $\mu_{B}$ also vanishes at $T=T_{c}$, in the thermodynamic limit the boson number density becomes $$N/L^{d}\simeq A_{d}\int_{0^{+}}^{\infty}\mathrm{d}K\,K^{d-1}[\exp\beta_{c}(C_{s}K^{s}+\cdots)-1]^{-1}$$ where $A_{d}$ is a finite coefficient. Thus $$N/L^{d}\simeq A_{d}(k_{B}T_{c}/C_{s})\int_{0^{+}}^{K_{\max}}\mathrm{d}K\,K^{d-s-1}+\int_{K_{\max}}^{\infty}\cdots,$$ where $K_{\max}$ is small and can be picked arbitrarily so long as the integral $\int_{K_{\max}}^{\infty}\cdots$ is finite, as is $N/L^{d}$. However, if $d=s$ the first integral gives $\ln K\mid_{0}^{K_{\max}}=-\infty$; and if $d<s$ it gives $\frac{K^{d-s}}{d-s}\mid_{0}^{K_{\max}}=-\infty$. Hence, $T_{c}$ must vanish if and only if $d\leq s$, but is otherwise finite. This conclusion hinges *only* on the leading term of the boson dispersion relation $\varepsilon_{K}$. The case $s=1$ emerges in the CP problem to be discussed now.
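A small numerical illustration of this convergence criterion (an added sketch, not part of the original argument; $\beta_{c}C_{s}$ is set to unity since only the small-$K$ behaviour matters):

```python
import numpy as np
from scipy.integrate import quad

# Small-K behaviour of the Bose integral  int dK K^(d-1) / (exp(K^s) - 1)
# in d = 2 dimensions, for linear (s = 1) and quadratic (s = 2) dispersion.
def integrand(K, d, s):
    return K**(d - 1) / np.expm1(K**s)

d, K_max = 2, 1.0
for s in (1, 2):
    for cutoff in (1e-2, 1e-4, 1e-6):
        val, _ = quad(integrand, cutoff, K_max, args=(d, s))
        print(f"s={s}, lower cutoff={cutoff:.0e}: integral = {val:.4f}")
# s = 1: the values approach a finite limit  -> T_c > 0 is possible in 2D.
# s = 2: the values grow like ln(1/cutoff)   -> the integral diverges, so T_c = 0.
```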
In dealing with the many-electron system we assume a BCS-like electron-phonon model $s$-wave inter-electron interaction, whose double Fourier transform $\nu(|\mathbf{k}_{1}-\mathbf{k}_{1}^{\prime}|)$ is just $$\nu(k_{1},k_{1}^{\prime})=-(k_{F}/k_{1}^{\prime})V \label{int}$$ if $k_{F}-k_{D}<k_{1}<k_{F}+k_{D}$, and $=0$ otherwise. Here $V>0$, $\hbar k_{F}\equiv mv_{F}$ the Fermi momentum, $m$ the effective electron mass, $v_{F}$ the Fermi velocity, and $k_{D}\equiv\omega_{D}/v_{F}$ with $\omega_{D}$ the Debye frequency. The usual condition $\hbar\omega_{D}\ll E_{F}$ then implies that $k_{D}/k_{F}\equiv\hbar\omega_{D}/2E_{F}\ll1$.
The BS wavefunction equation [@Honolulu] in the ladder approximation with both particles and holes for the original IFG-based CP problem using (\[int\]) leads to an equation for the wavefunction $\psi_{\mathbf{k}}$ in momentum space for CPs with *zero* CMM $\mathbf{K}\equiv \mathbf{k}_{1}+\mathbf{k}_{2}=0$ that is $$(2\xi_{k}-\mathcal{E}_{0})\psi_{\mathbf{k}}=V\sum_{\mathbf{k}^{\prime}}{}^{\prime}\psi_{\mathbf{k}^{\prime}}-V\sum_{\mathbf{k}^{\prime}}{}^{\prime\prime}\psi_{\mathbf{k}^{\prime}}.\label{CP}$$ Here $\xi_{k}\equiv\hbar^{2}k^{2}/2m-E_{F}$, $\mathcal{E}_{0}$ is the eigenvalue energy and $\mathbf{k}\equiv\tfrac12(\mathbf{k}_{1}-\mathbf{k}_{2})$ is the relative wavevector of a pair. The single prime over the first (2p-CP) summation term denotes the restriction $0<\xi_{k^{\prime}}<\hbar\omega_{D}$ while the double prime in the last (2h-CP) term means $-\hbar\omega_{D}<\xi_{k^{\prime}}<0$. Without this latter term we have Cooper’s Schrödinger-like equation [@Coo] for 2p-CPs whose implicit solution is clearly $\psi_{\mathbf{k}}=(2\xi_{k}-\mathcal{E}_{0})^{-1}V\sum_{\mathbf{k}^{\prime}}{}^{\prime}\psi_{\mathbf{k}^{\prime}}$. Since the summation term is constant, performing that summation on both sides allows canceling the $\psi_{\mathbf{k}}$-dependent terms, leaving the eigenvalue equation $\sum_{\mathbf{k}}{}^{\prime}(2\xi_{k}-\mathcal{E}_{0})^{-1}=1/V$ with the familiar solution $\mathcal{E}_{0}=-2\hbar\omega_{D}/(e^{2/\lambda}-1)$ (exact in 2D, and to a very good approximation otherwise if $\hbar\omega_{D}\ll E_{F}$) where $\lambda\equiv VN(E_{F})$ with $N(E_{F})$ the electronic density of states (DOS) for one spin. This corresponds to a negative-energy, stationary-state bound pair. For $K\geqslant0$ the CP eigenvalue equation becomes $$\sum_{\mathbf{k}}{}^{\prime}(2\xi_{k}+\hbar^{2}K^{2}/2m-\mathcal{E}_{K})^{-1}=1/V.\label{CPKeqn}$$ Note that a CP state of energy $\mathcal{E}_{K}$ is characterized only by a definite $\mathbf{K}$ but *not* definite $\mathbf{k}$, in contrast to a “BCS pair” defined [@BCS] with fixed $\mathbf{K}$ and $\mathbf{k}$ (or equivalently definite $\mathbf{k}_{1}$ and $\mathbf{k}_{2}$). Without the first summation term in (\[CP\]) the same result in $\mathcal{E}_{0}$ for 2p-CPs follows for 2h-CPs (apart from a sign change). However, using similar techniques to solve the *complete* equation (\[CP\])—which *cannot* be derived from an ordinary (non-BS) Schrödinger-like equation in spite of its simple appearance—gives the purely-imaginary $\mathcal{E}_{0}=\pm i2\hbar\omega_{D}/\sqrt{e^{2/\lambda}-1}$, thus implying an obvious instability. This was reported in Refs. [@BTS] p. 44 and [@AGD], who did not stress the pure 2p and 2h cases just discussed. Clearly then, the original CP picture is *meaningless* if particle- and hole-pairs are treated on an equal footing as consistency demands. This is perhaps the prime motivation for seeking a new unperturbed Hamiltonian about which to, e.g., do perturbation theory.
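For completeness, the 2D result quoted above can be checked directly (a short added step, using only the constant 2D DOS quoted later and writing $\mathcal{E}_{0}=-|\mathcal{E}_{0}|$):
$$\frac{1}{V}=\sum_{\mathbf{k}}{}^{\prime}\frac{1}{2\xi_{k}+|\mathcal{E}_{0}|}\;\to\;N(E_{F})\int_{0}^{\hbar\omega_{D}}\frac{d\xi}{2\xi+|\mathcal{E}_{0}|}=\frac{N(E_{F})}{2}\ln\frac{2\hbar\omega_{D}+|\mathcal{E}_{0}|}{|\mathcal{E}_{0}|},$$
so that $e^{2/\lambda}=(2\hbar\omega_{D}+|\mathcal{E}_{0}|)/|\mathcal{E}_{0}|$ and hence $\mathcal{E}_{0}=-2\hbar\omega_{D}/(e^{2/\lambda}-1)$, as stated.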
A BS treatment not about the IFG sea but about the BCS ground state *vindicates the CP concept*. This substitution might seem an artificial mathematical construct but its experimental support lies precisely in Refs. [@classical]-[@cuprates] and its physical justification lies in recovering two expected results: the ABH sound mode as well as finite-lifetime effects in CPs. In either 3D [@Honolulu] or 2D the BS equation yields two *distinct* solutions: the usual ABH sound solution and a highly nontrivial “moving CP” solution. The BS formalism gives rise to a set of three coupled equations, one for each (2p, 2h and ph) channel wavefunction for any spin-independent interaction such as (\[int\]). However, the ph channel decouples, leaving only two coupled wavefunction equations for the ABH solution. The equations involved are too lengthy, and will be derived in detail elsewhere. The *ABH collective excitation mode* energy $\mathcal{E}_{K}$ is found to be determined by an equation that for $\mathbf{K}=0$ gives $\mathcal{E}_{0}=0$ (Ref. [@BTS] p. 39) and reduces to $\int_{0}^{\hbar\omega_{D}}d\xi/\sqrt{\xi^{2}+\Delta^{2}}=1/\lambda$, the familiar BCS $T=0$ gap equation for interaction (\[int\]), whose solution is $\Delta=\hbar\omega_{D}/\sinh(1/\lambda)$. Taylor-expanding $\mathcal{E}_{K}$ about $K=0$ and small $\lambda$ gives
$$\mathcal{E}_{K}\simeq\frac{\hbar v_{F}}{\sqrt{2}}K+O(K^{2}).\tag{4}$$
Note that the leading term is just the ordinary sound mode in an IFG, whose sound speed is $c=v_{F}/\sqrt{d}$ in $d$ dimensions; this also follows trivially from the zero-temperature IFG pressure $P=n^{2}[d(E/N)/dn]=2nE_{F}/(d+2)$ on applying the familiar thermodynamic relation $dP/dn=mc^{2}$. Here $E/N=dE_{F}/(d+2)$ is the IFG ground-state energy per fermion while $n\equiv N/L^{d}=k_{F}^{d}/d2^{d-2}\pi^{d/2}\Gamma(d/2)$ is the fermion-number density.
The second solution in the BCS-ground-state-based BS treatment is the *moving CP* solution for the pair energy $\mathcal{E}_{K}$ which in 2D is contained in the equation
$$\begin{gathered}
\frac{1}{2\pi}\lambda\hbar v_{F}\int_{k_{F}-k_{D}}^{k_{F}+k_{D}}dk\int_{0}^{2\pi}d\varphi\, u_{\mathbf{K}/2+\mathbf{k}}v_{\mathbf{K}/2-\mathbf{k}}\times\nonumber\\
\times\{u_{\mathbf{K}/2-\mathbf{k}}v_{\mathbf{K}/2+\mathbf{k}}-u_{\mathbf{K}/2+\mathbf{k}}v_{\mathbf{K}/2-\mathbf{k}}\}\times\nonumber\\
\times\frac{E_{\mathbf{K}/2+\mathbf{k}}+E_{\mathbf{K}/2-\mathbf{k}}}{-\mathcal{E}_{K}^{2}+(E_{\mathbf{K}/2+\mathbf{k}}+E_{\mathbf{K}/2-\mathbf{k}})^{2}}=1, \label{mCP}\end{gathered}$$
where $\varphi$ is the angle between $\mathbf{K}$ and $\mathbf{k}$; $\lambda\equiv VN(E_{F})$ as before with $N(E_{F})\equiv m/2\pi\hbar^{2}$ the constant 2D DOS and $V$ the interaction strength defined in (\[int\]); $E_{\mathbf{k}}\equiv\sqrt{\xi_{k}^{2}+\Delta^{2}}$ with $\Delta$ the fermionic gap; while $u_{k}^{2}\equiv\tfrac12(1+\xi_{k}/E_{\mathbf{k}})$ and $v_{k}^{2}\equiv1-u_{k}^{2}$ are the Bogoliubov functions [@Bog]. In addition to the pp and hh wavefunctions (depicted graphically in Ref. [@Honolulu] Fig. 2), diagrams associated with the ph channel give zero contribution at $T=0$. A third equation for the ph wavefunction describes the ph bound state but turns out to depend only on the pp and hh wavefunctions. Taylor-expanding $\mathcal{E}_{K}$ in powers of $K$ around $K=0$, and introducing a possible damping factor by adding an imaginary term $-i\Gamma_{K}$ in the denominator, yields to order $K^{2}$ for small $\lambda$ $$\begin{aligned}
\pm\mathcal{E}_{K} & \simeq2\Delta+\frac{\lambda}{2\pi}\hbar v_{F}K+\frac{1}{9}\frac{\hbar v_{F}}{k_{D}}e^{1/\lambda}K^{2}\nonumber\\
& \quad-i\left[\frac{\lambda}{\pi}\hbar v_{F}K+\frac{1}{12}\frac{\hbar v_{F}}{k_{D}}e^{1/\lambda}K^{2}\right]+O(K^{3}) \label{linquadmCP}\end{aligned}$$ where the upper and lower signs refer to 2p- and 2h-CPs, respectively. A linear dispersion in leading order again appears, but now associated with the bosonic moving CP. The *positive*-energy 2p-CP resonance has a lifetime $\tau_{K}\equiv\hbar/2\Gamma_{K}=\hbar/2\left[(\lambda/\pi)\hbar v_{F}K+(\hbar v_{F}/12k_{D})e^{1/\lambda}K^{2}\right]$ diverging only at $K=0$, and falling to zero as $K$ increases. Thus, “faster” moving CPs are shorter-lived and eventually break up, while “non-moving” ones are stationary states. The linear term $(\lambda/2\pi)\hbar v_{F}K$ contrasts sharply with the *coupling-independent* leading term in $\mathcal{E}_{K}=\mathcal{E}_{0}-(2/\pi)\hbar v_{F}K+O(K^{2})$ (or $1/2$ in 3D [@Schrieffer] instead of $2/\pi$) that follows from the *original* CP problem (\[CPKeqn\]) neglecting holes—for either interaction (\[int\]) [@PhysC98] *or* an attractive delta inter-fermion potential [@PRB2000][@PhysicaC] (imagined regularized [@GT] to have a single bound state whose binding energy serves as the coupling parameter). In the latter simple example, moreover, it is manifestly clear in 2D [@PRB2000] that the quadratic $\hbar^{2}K^{2}/4m$ stands alone as the leading term for any coupling only when $E_{F}\equiv\tfrac12 mv_{F}^{2}$ is *strictly* zero, i.e., in the absence of the Fermi sea. Fig. 1 graphs the exact moving CP (mCP) energy extracted from (\[mCP\]), along with its leading linear-dispersion term and this plus the next (quadratic) term from (\[linquadmCP\]). The interaction parameter values used in (\[int\]) were $\hbar\omega_{D}/E_{F}=0.05$ (a typical value for cuprates) and the two values $\lambda=\tfrac14$ and $\tfrac12$, giving $\mathcal{E}_{0}/E_{F}\equiv2\Delta/E_{F}=2\hbar\omega_{D}/[E_{F}\sinh(1/\lambda)]\simeq0.004$ and $0.028$, respectively (marked as dots in the figure). Remarkably enough, the linear approximation (thin short-dashed lines in the figure) is better over a wider range of $K/k_{F}$ values for weaker coupling, in spite of a larger and larger partial contribution from the quadratic term in (\[linquadmCP\]); this peculiarity also emerged from the ordinary CP treatment of Ref. [@PhysC98] and might suggest that the expansion in powers of $K$ is an asymptotic series that should be truncated after the linear term. For reference we also plot the linear term $\hbar v_{F}K/\sqrt{2}$ of the sound solution (4).
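As a quick numerical cross-check of these two quoted values (an added sketch, not part of the original Letter):

```python
import math

# E_0/E_F = 2*Delta/E_F = 2*(hbar*omega_D/E_F) / sinh(1/lambda)
ratio_wD = 0.05          # hbar*omega_D / E_F, as used for Fig. 1
for lam in (0.25, 0.5):  # dimensionless coupling lambda = V*N(E_F)
    print(f"lambda = {lam}: 2*Delta/E_F = {2.0 * ratio_wD / math.sinh(1.0 / lam):.4f}")
# Output: 0.0037 (~0.004) and 0.0276 (~0.028), in agreement with the text.
```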
We cannot presently address such matters as the nature of the normal state, the pseudogaps observed in underdoped cuprates, etc., but efforts in these directions are in progress.
\[Figure 1 (fig1.eps): exact moving-CP energy extracted from (\[mCP\]), together with its leading linear term and the linear-plus-quadratic approximation from (\[linquadmCP\]), and the sound line $\hbar v_{F}K/\sqrt{2}$ of (4).\]
Like Cooper’s [@Coo] \[see Eq. (\[CPKeqn\])\], our BS CPs are characterized by a definite $\mathbf{K}$ but *not* also by a definite $\mathbf{k}$, unlike the pairs discussed by BCS [@BCS]. Hence, the objection that CPs are not bosons does not apply, since it rests on the fact that BCS pairs with definite $\mathbf{K}$ and $\mathbf{k}$ (or equivalently definite $\mathbf{k}_{1}$ and $\mathbf{k}_{2}$) have creation/annihilation operators that do *not* obey Bose commutation relations \[Ref. [@BCS], Eqs. (2.11) to (2.13)\]. In fact, either (\[CPKeqn\]) or (\[mCP\]) shows that a given “ordinary” or BS CP state labeled by either $\mathbf{K}$ or $\mathcal{E}_{K}$ can accommodate (in the thermodynamic limit) indefinitely many possible BCS pairs with different $\mathbf{k}$’s. This implies BE statistics for either ordinary or BS CPs, as each energy state has no occupation limit.
To conclude, hole pairs treated on a par with electron pairs play a vital role in determining the precise nature of CPs even at zero temperature, but only when based not on the usual ideal-Fermi-gas (IFG) “sea” but on the BCS ground state. Treating them with the Bethe-Salpeter equation gives purely-imaginary-energy CPs when based on the IFG, and positive-energy resonant-state CPs with a finite lifetime for nonzero CMM when based on the BCS ground state—instead of the more familiar negative-energy stationary states of the original IFG-based CP problem that neglects holes, as sketched just below (\[CP\]). The BS “moving-CP” dispersion relation is gapped by twice the BCS energy gap, followed by a *linear* leading term in the CMM expansion about $K=0$. This linearity is distinct from the better-known one associated with the sound or ABH collective excitation mode, whose energy vanishes at $K=0$. Thus, boson-fermion models assuming this CP linearity for the boson component, instead of the quadratic $\hbar^{2}K^{2}/4m$, can give BEC for all $d>1$, including exactly 2D, and thus in principle apply not only to quasi-2D cuprate but also to quasi-1D organo-metallic superconductors.
Partial support is acknowledged from grant IN106401, PAPIIT (Mexico). MdeLl thanks P.W. Anderson, M. Casas, J.R. Clem, D.M. Eagles, S. Fujita, and K. Levin for comments, and with MF is grateful to M. Grether, O. Rojo, M.A. Solís, V.V. Tolmachev, A.A. Valladares and H. Vucetich for discussions. Both VCAN and MdeLl thank CNPq (Brazil) and CONACyT (Mexico) for bilateral support.
[9]{} J. Bardeen, L.N. Cooper and J.R. Schrieffer, Phys. Rev. **108**, 1175 (1957).
L.N. Cooper, Phys. Rev. **104**, 1189 (1956).
B.S. Deaver, Jr. and W.M. Fairbank, Phys. Rev. Lett. **7**, 43 (1961).
R. Doll and M. Näbauer, Phys. Rev. Lett. **7**, 51 (1961).
C.E. Gough, M.S Colclough, E.M. Forgan, R.G. Jordan, M. Keene, C.M. Muirhead, I.M. Rae, N. Thomas, J.S. Abell, and S. Sutton, Nature **326**, 855 (1987).
J. Ranninger, R. Micnas, and S. Robaszkiewicz, Ann. Phys. (Paris) **13**, 455 (1988).
R. Friedberg and T.D. Lee, Phys. Rev. B **40**, 6745 (1989).
V.B. Geshkenbein, L.B. Ioffe, and A.I. Larkin, Phys. Rev. B **55**, 3173 (1997).
V.V. Tolmachev, Phys. Lett. A **266**, 400 (2000).
M. de Llano and V.V. Tolmachev, Physica A **317**, 546 (2003).
J. Batle, M. Casas, M. Fortes, M. de Llano, M.A. Solís, and V.V. Tolmachev, Cond. Matter Theories **18**, (in press) (2003) *BCS and BEC finally unified: A brief review.* cond-mat/0211456.
A. Lanzara *et al.*, Nature **412**, 510 (2001).
Y. Domanski and J. Ranninger, Phys. Rev. B **63**, 134505 (2001).
M. Casas, N.J. Davidson, M. de Llano, T.A. Mamedov, A. Puente, R.M. Quick, A. Rigo, and M.A. Solís, Physica A **295**, 146 (2001).
M. Casas, M. de Llano, A. Puente, A. Rigo, and M.A. Solís, Sol. State Comm. **123**, 101 (2002).
M.R. Schafroth, Phys. Rev. **96**, 1442 (1954).
M.R. Schafroth, S.T. Butler, and J.M. Blatt, Helv. Phys. Acta **30**, 93 (1957).
M.R. Schafroth, Sol. State Phys. **10**, 293 (1960).
J.M. Blatt, *Theory of Superconductivity* (Academic, New York, 1964).
A.L. Fetter and J.D. Walecka, *Quantum Theory of Many-Particle Systems* (McGraw-Hill, New York, 1971) see esp. pp. 70-72, 131-139, and refs. therein.
N.N. Bogoliubov, N. Cim. **7**, 794 (1958).
J. Hirsch, Physica C **341-348**, 213 (2000); see also www.iitap.iastate.edu/htcu/forum.html\#Q3.
P.W. Anderson, Phys. Rev. **112**, 1900 (1958).
N.N. Bogoliubov, V.V. Tolmachev, and D.V. Shirkov, *A New Method in the Theory of Superconductivity* (Consultants Bureau, NY, 1959).
P.W. Higgs, Phys. Lett. **12**, 132 (1964).
L. Belkhir, M. Randeria, Phys. Rev. B **49**, 6829 (1994).
S.V. Traven, Phys. Rev. Lett. **73**, 3451 (1994).
S.V. Traven, Phys. Rev. B **51**, 3242 (1995).
M. Fortes, M.A. Solís, M. de Llano, and V.V. Tolmachev, Physica C **364-365**, 95 (2001).
V.L. Berezinskii, Sov. Phys. JETP **34**, 610 (1972).
J.M. Kosterlitz and D.J. Thouless, J. Phys. **C6**, 1181 (1973).
D. Jérome, Science **252**, 1509 (1991).
J.M. Williams, A.J. Schultz, U. Geiser, K.D. Carlson, A.M. Kini, H.H. Wang, W.K. Kwok, M.H. Whangbo, and J.E. Schirber, Science **252**, 1501 (1991).
H. Hori, Int. J. Mod. Phys. B **8**, 1 (1994).
C.N. Lau, N. Markovic, M. Bockrath, A. Bezryadin, and M. Tinkham, Phys. Rev. Lett. **87**, 217003 (2001).
A. Bezryadin, C.N. Lau, and M. Tinkham, Nature **404**, 971 (2000).
D. Vollhardt and P. Wölfle, *The Superfluid Phases of Helium 3* (Taylor & Francis, London, 1990).
K.M. O’Hara, S.L. Hemmer, M.E. Gehm, S.R. Granade, and J.E. Thomas, Science **298**, 2179 (2002).
M. Holland, B. DeMarco, and D.S. Jin, Phys. Rev. A **61**, 053610 (2000), and refs. therein.
A.A. Abrikosov, L.P. Gorkov, and I.E. Dzyaloshinskii, *Methods of Quantum Field Theory in Statistical Physics* (Dover, NY, 1975) § 33.
J.R. Schrieffer, *Theory of Superconductivity* (Benjamin, NY, 1964) p. 33.
M. Casas, S. Fujita, M. de Llano, A. Puente, A. Rigo, M.A. Solís, Physica C **295**, 93 (1998).
S.K. Adhikari, M. Casas, A. Puente, A. Rigo, M. Fortes, M.A. Solís, M. de Llano, A.A. Valladares & O. Rojo, Phys. Rev. B **62**, 8671 (2000).
S.K. Adhikari, M. Casas, A. Puente, A. Rigo, M. Fortes, M. de Llano, M.A. Solís, A. A. Valladares & O. Rojo, Physica C **351**, 341 (2001).
P. Gosdzinsky and R. Tarrach, Am. J. Phys. **59**, 70 (1991).
|
---
abstract: |
We consider maximizing a monotone submodular function under a cardinality constraint or a knapsack constraint in the streaming setting. In particular, the elements arrive sequentially and at any point of time, the algorithm has access to only a small fraction of the data stored in primary memory. We propose the following streaming algorithms taking $O(\varepsilon^{-1})$ passes:
1. a $(1-e^{-1}-\varepsilon)$-approximation algorithm for the cardinality-constrained problem
2. a $(0.5-\varepsilon)$-approximation algorithm for the knapsack-constrained problem.
Both of our algorithms run in $O^\ast(n)$ time, using $O^\ast(K)$ space, where $n$ is the size of the ground set and $K$ is the size of the knapsack. Here the term $O^\ast$ hides a polynomial of $\log K$ and $\varepsilon^{-1}$. Our streaming algorithms can also be used as fast approximation algorithms. In particular, for the cardinality-constrained problem, our algorithm takes $O(n\varepsilon^{-1} \log (\varepsilon^{-1}\log K) )$ time, improving on the algorithm of Badanidiyuru and Vondrák that takes $O(n \varepsilon^{-1} \log (\varepsilon^{-1} K) )$ time.
author:
- |
Chien-Chung Huang\
CNRS, École Normale Supérieure\
`villars@gmail.com`
- |
Naonori Kakimura[^1]\
Keio University\
`kakimura@math.keio.ac.jp`
title: |
Multi-Pass Streaming Algorithms for\
Monotone Submodular Function Maximization
---
Introduction
============
A set function $f:2^E \rightarrow \mathbb{R}_+$ on a ground set $E$ is *submodular* if it satisfies the *diminishing marginal return property*, i.e., for any subsets $S \subseteq T \subsetneq E$ and $e\in E \setminus T$, $$f(S \cup {\{e\}}) - f(S)\geq f(T \cup {\{e\}}) - f(T).$$ A function is *monotone* if $f(S)\leq f(T)$ for any $S\subseteq T$. Submodular functions play a fundamental role in combinatorial optimization, as they capture rank functions of matroids, edge cuts of graphs, and set coverage, just to name a few examples. Besides their theoretical interests, submodular functions have attracted much attention from the machine learning community because they can model various practical problems such as online advertising [@Alon:2012em; @Kempe:2003iu; @Soma:2014tp], sensor location [@Krause:2008vo], text summarization [@Lin:2010wpa; @Lin:2011wt], and maximum entropy sampling [@Lee:2006cm].
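As a concrete toy illustration of these definitions (not taken from the paper), a coverage function is monotone and submodular:

```python
# Each ground-set element covers a set of targets; f(S) counts covered targets.
def make_coverage_function(covers):
    def f(S):
        covered = set()
        for e in S:
            covered |= covers[e]
        return len(covered)
    return f

f = make_coverage_function({"a": {1, 2}, "b": {2, 3}, "c": {4}})
# Diminishing marginal returns: adding "a" helps less once "b" is present.
assert f(["a"]) - f([]) >= f(["a", "b"]) - f(["b"])
```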
Many of the aforementioned applications can be formulated as the maximization of a monotone submodular function under a knapsack constraint. In this problem, we are given a monotone submodular function $f:2^E \to {\mathbb{R}}_+$, a size function $c:E \rightarrow {\mathbb{N}}$, and an integer $K \in {\mathbb{N}}$, where ${\mathbb{N}}$ denotes the set of positive integers. The problem is defined as $$\begin{aligned}
\text{maximize\ \ }f(S) \quad \text{subject to \ } c(S)\leq K, \quad S\subseteq E,
\label{eq:problem}\end{aligned}$$ where we denote $c(S)=\sum_{e\in S}c(e)$ for a subset $S \subseteq E$. Throughout this paper, we assume that every item $e \in E$ satisfies $c(e) \leq K$ as otherwise we can simply discard it. Note that, when $c(e)=1$ for every item $e \in E$, the constraint coincides with a cardinality constraint: $$\begin{aligned}
\text{maximize\ \ }f(S) \quad \text{subject to \ } |S|\leq K, \quad S\subseteq E.
\label{eq:problem_size}\end{aligned}$$
The problem of maximizing a monotone submodular function under a knapsack or a cardinality constraint is classical and well-studied [@FNS_cardinality; @Wolsey:1982]. The problem is known to be NP-hard but can be approximated within the factor of (close to) $1-e^{-1}$; see e.g., [@Badanidiyuru:2013jc; @DBLP:journals/siamcomp/ChekuriVZ14; @FisherNemhauserWolsey; @Kulik:2013ix; @Sviridenko:2004hq; @yoshida_2016]. Notice that for both problems, it is standard to assume that a function oracle is given and the complexity of the algorithms is measured based on the number of oracle calls.
In this work, we study the two problems with a focus on designing *space and time efficient* approximation algorithms. In particular, we assume the *streaming* setting: each item in the ground set $E$ arrives sequentially, and we can keep only a small number of the items in memory at any point. This setting renders most of the techniques in the literature ineffective, as they typically require random access to the data.
#### Our contribution
Our contributions are summarized as follows.
\[thm:main\_cardinality\] Let $n = |E|$. We design streaming $(1-e^{-1}-\varepsilon)$-approximation algorithms for the cardinality-constrained problem (\[eq:problem\_size\]) requiring either
1. $O\left(K\right)$ space, $O(\varepsilon^{-1} \log (\varepsilon^{-1} \log K))$ passes, and $O\left( n\varepsilon^{-1} \log(\varepsilon^{-1} \log K) \right)$ running time, or
2. $O\left( K \varepsilon^{-1} \log K\right)$ space, $O(\varepsilon^{-1})$ passes, and $O\left(n\varepsilon^{-1}\log K+n\varepsilon^{-2}\right)$ running time.
\[thm:main\_knapsack\] Let $n = |E|$. We design streaming $(0.5-\varepsilon)$-approximation algorithms for the knapsack-constrained problem (\[eq:problem\]) requiring $O\left(K\varepsilon^{-7}\log^2 K\right)$ space, $O(\varepsilon^{-1})$ passes, and $O\left(n\varepsilon^{-8}\log^2 K\right)$ running time.
To put our results in a better context, we list related work in Tables \[tab:Summary\] and \[tab:Summary2\]. For the cardinality-constrained problem, our first algorithm achieves the same ratio $1-e^{-1}-\varepsilon$ as Badanidiyuru and Vondrák [@Badanidiyuru:2013jc], using the same space, while strictly improving on the running time and the number of passes. The second algorithm further improves the number of passes to $O(\varepsilon^{-1})$, which is independent of $K$ and $n$, but slightly loses out in the running time and the space requirement. For the knapsack-constrained problem, our algorithm gives the best ratio so far using only small space (though at the cost of using more passes than [@APPROX17; @Yu:2016]). In the non-streaming setting, Sviridenko [@Sviridenko:2004hq] gave a $(1-e^{-1})$-approximation algorithm, which takes $O(Kn^4)$ time. Very recently, Ene and Nguyễn [@Ene2017] gave a $(1-e^{-1}-\varepsilon)$-approximation algorithm, which takes $O((1/\varepsilon)^{O(1/\varepsilon^{4})} \log n)$ time.[^2]
#### Our Technique {#our-technique .unnumbered}
We first give an algorithm for the cardinality-constrained problem (\[eq:problem\_size\]); this algorithm is later used as a subroutine for the knapsack-constrained problem (\[eq:problem\]). Its basic idea is similar to those in [@Badanidiyuru:2013jc; @mcGregor2017]: in each pass, a certain threshold is set; items whose marginal value exceeds the threshold are added into the collection; others are just ignored. In [@Badanidiyuru:2013jc; @mcGregor2017], the threshold is decreased in a conservative way (by the factor of $1-\varepsilon$) in each pass. In contrast, we adjust the threshold [*dynamically*]{}, based on the $f$-value of the current collection. We show that, after $O(\varepsilon^{-1})$ passes, we reach a $(1-e^{-1}-\varepsilon)$-approximation. To set the threshold, we need a prior estimate of the optimal value, which we show can be found by a pre-processing step requiring either $O(K\varepsilon^{-1} \log K)$ space and a single pass, or $O(K)$ space and $O(\varepsilon^{-1} \log (\varepsilon^{-1} \log K))$ passes. The implementation and analysis of the algorithm are very simple. See Section \[sec:size\] for the details.
For the knapsack-constrained problem , let us first point out the challenges in the streaming setting. The techniques achieving the best ratios in the literature are in [@Ene2017; @Sviridenko:2004hq]. In [@Sviridenko:2004hq], *partial enumeration* and *density greedy* are used. In the former, small sets (each of size at most 3) of items are guessed and for each guess, density greedy adds items based on the decreasing order of marginal ratio (i.e., the marginal value divided by the item size). To implement density greedy in the streaming setting, large number of passes would be required. In [@Ene2017], partial enumeration is replaced by a more sophisticated multi-stage guessing strategies (where fractional items are added based on the technique of multilinear extension) and a “lazy” version of density greedy is used so as to keep down the time complexity. This version of density greedy nonetheless requires a priority queue to store the density of all items, thus requiring large space.
We present algorithms, in increasing order of sophistication, in Sections \[sec:simple\] to \[sec:0.5\], that give $0.39-\varepsilon$, $0.46-\varepsilon$, and $0.5-\varepsilon$ approximations, respectively. The first, simpler algorithms are useful for illustrating the main ideas and are also used as subroutines for the later, more involved algorithms. The first algorithm adapts the algorithm for the cardinality-constrained case. We show that it still performs well if all items in the optimal solution (henceforth denoted by ${\textup{\rm OPT}}$) are small in size. Therefore, by ignoring the largest optimal item $o_1$, we can obtain a $(0.39-\varepsilon)$-approximate solution (see Section \[sec:simple\]). The difficulty arises when $c(o_1)$ is large and the function value $f(o_1)$ is too large to be ignored. To take care of such a large-size item, we first aim at finding a good item $e$ whose size approximates that of $o_1$, using a single pass [@APPROX17]. This item $e$ satisfies the following properties: (1) $f(e)$ is large, (2) the marginal value of ${\textup{\rm OPT}}- o_1$ with respect to $e$ is large. Then, after having this item $e$, we apply the algorithm to pack items in ${\textup{\rm OPT}}- o_1$. Since the largest item size in ${\textup{\rm OPT}}-o_1$ is smaller, the performance of the algorithm is better than when it is applied to the original instance. The same argument can be applied for ${\textup{\rm OPT}}- o_1 - o_2$, where $o_2$ is the second largest item. These solutions, together with $e$, yield a $(0.46-\varepsilon)$-approximation (see Section \[sec:046\] for the details). The above strategy would give a $(0.5-\varepsilon)$-approximation if $f(o_1)$ is large enough. When $f(o_1)$ is small, we need to generalize the above ideas further. In Section \[sec:0.5\], we propose a two-phase algorithm. In Phase 1, an initial *good set* $Y \subseteq E$ is chosen (instead of a single good item); in Phase 2, we pack items in some subset ${\textup{\rm OPT}}'\subseteq {\textup{\rm OPT}}$ using the remaining space. Ideally, the good set $Y$ should satisfy the following properties: (1) $f(Y)$ is large, (2) the marginal value of ${\textup{\rm OPT}}'$ with respect to $Y$ is large, and (3) the remaining space, $K-c(Y)$, is sufficiently large to pack items in ${\textup{\rm OPT}}'$. To find such a set $Y$, we design two strategies, depending on the sizes $c(o_1)$ and $c(o_2)$ of the two largest items in ${\textup{\rm OPT}}$.
The first case is when $c(o_1)+c(o_2)$ is large. As mentioned above, we may assume that $f(o_1)$ is small. In a similar way, we can show that $f(o_2)$ is small. Then there exists a “dense” set of small items in ${\textup{\rm OPT}}$, i.e., $\frac{ f({\rm OPT} \setminus \{o_1, o_2\})}{ c({\rm OPT} \setminus \{o_1, o_2\})}$ is large. The good set $Y$ can thus be taken to be a set of small items approximating $f({\rm OPT} \setminus \{o_1, o_2\})$ while still leaving enough space for Phase 2. The other case is when $c(o_1)+c(o_2)$ is small. In this case, we apply a modified version of the algorithm to obtain a good set $Y$. The modification allows us to lower-bound the marginal value of ${\textup{\rm OPT}}'$ with respect to $Y$. Furthermore, we can show that $Y$ is already a $(0.5-\varepsilon)$-approximation when $c(Y)$ is large. Thus we may assume that $c(Y)$ is small, implying that we still have enough space to pack items from ${\textup{\rm OPT}}'$ in Phase 2.
#### Related Work {#related-work .unnumbered}
Maximizing a monotone submodular function subject to various constraints is a subject that has been extensively studied in the literature. We do not attempt to give a complete survey here and just highlight the most relevant results. Besides a knapsack constraint or a cardinality constraint mentioned above, the problem has also been studied under (multiple) matroid constraint(s), a $p$-system constraint, and multiple knapsack constraints. See [@Calinescu:2011ju; @ChanSODA2017; @Chan2017; @DBLP:journals/siamcomp/ChekuriVZ14; @Filmus:2014; @Kulik:2013ix; @Lee:2010] and the references therein. In the streaming setting, researchers have considered the same problem with a matroid constraint [@DBLP:journals/mp/ChakrabartiK15] and a knapsack constraint [@APPROX17; @Yu:2016], and the problem without monotonicity [@DBLP:conf/icalp/ChekuriGQ15; @mirzasoleiman18streaming]. For the special case of a set-covering function with a cardinality constraint, McGregor and Vu [@mcGregor2017] give a $(1-e^{-1}-\varepsilon)$-approximation algorithm in the streaming setting. They use a sampling technique to estimate the value of $f({\textup{\rm OPT}})$ and then collect items based on thresholds using $O(\varepsilon^{-1})$ passes. Bateni *et al.* [@Bateni:2017] independently proposed a streaming algorithm with a sketching technique for the same problem.
#### Notation {#notation .unnumbered}
For a subset $S \subseteq E$ and an element $e \in E$, we use the shorthand $S+e$ and $S-e$ to stand for $S \cup \{e\}$ and $S \setminus \{e \}$, respectively. For a function $f:2^E \to {\mathbb{R}}$, we also use the shorthand $f(e)$ to stand for $f(\{e\})$. The *marginal return* of adding $e \in E$ with respect to $S \subseteq E$ is defined as $f(e \mid S) = f(S+e) - f(S)$.
Cardinality Constraint {#sec:size}
======================
Simple Algorithm with Approximated Optimal Value
------------------------------------------------
In this section, we introduce a procedure (see Algorithm \[alg:simple\_size\]). This procedure can be used to give a $(1-e^{-1} - \varepsilon)$-approximation with the cardinality constraint; moreover, it will be adapted for the knapsack-constrained problem in Section \[sec:simple\].
The input consists of
1. An instance $\mathcal{I}= (f, K, E)$ of the problem (\[eq:problem\_size\]).
2. Approximate values $v$ and $W$ of $f({\textup{\rm OPT}})$ and $|{\textup{\rm OPT}}|$, respectively, where ${\textup{\rm OPT}}$ is an optimal solution of ${\mathcal{I}}$. Specifically, we suppose $v \leq f({\textup{\rm OPT}})$ and $W \geq |{\textup{\rm OPT}}|$.
The output is a set $S$ that satisfies $f(S)\geq \beta v$ for some constant $\beta$ that will be determined later. If, in addition, $f({\textup{\rm OPT}})\leq (1+\varepsilon)v$, then the output turns out to be a $(\beta-\varepsilon)$-approximation. We will describe how to find such a $v$ satisfying $v\leq f({\textup{\rm OPT}})\leq (1+\varepsilon)v$ in the next subsection.
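For concreteness, the following is a minimal Python sketch of the thresholding procedure described above (not the paper’s exact pseudocode); it assumes value-oracle access to $f$ through a function `f` on lists, a re-iterable `stream()` over $E$, and estimates $v \leq f({\textup{\rm OPT}})$ and $W \geq |{\textup{\rm OPT}}|$:

```python
import math

def threshold_passes(stream, f, K, v, W, eps):
    S = []                                   # current solution, |S| <= K
    for _ in range(math.ceil(1 / eps) + 1):  # each outer iteration is one pass
        S0_value = f(S)
        alpha = ((1 - eps) * v - S0_value) / W   # dynamic threshold for this pass
        for e in stream():
            if len(S) >= K:
                break
            if f(S + [e]) - f(S) >= alpha:       # marginal-value test
                S.append(e)
        if len(S) >= K:
            break
    return S
```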
The following observations hold for the algorithm.
\[lem:fact\_size\] During the execution of the algorithm, in each round (Lines 3–8), the following hold:
1. The current set $S \subseteq E$ always satisfies $f(T'\mid S_0) \geq \alpha |T'|$, where $T'=S\setminus S_0$.
2. If an item $e \in E$ fails the condition $f(e\mid S_e) < \alpha$ at Line 6, where $S_e$ is the set just before $e$ arrives, then the final set $S$ in the round satisfies $f(e\mid S) < \alpha$.
\(1) Every item $e\in T'$ satisfies $f(e\mid S_e)\geq \alpha$, where $S_e$ is the set just before $e$ arrives. Hence $f(T'\mid S_0) = \sum_{e\in T'}f(e\mid S_e)\geq \alpha |T'|$. (2) follows from submodularity.
Moreover, we can bound $f(S)$ from below using the size of $S$.
\[lem:c\_size\] At the end of each round (Lines 3–8), we have $$f(S)\geq \left(1-e^{-\frac{|S|}{W}}-2\varepsilon\right)v.$$
We prove the statement by induction on the number of rounds. Let $S$ be the set at the end of some round, and let $S_0$ and $T$ be the two corresponding sets in that round; thus $S = S_0 \cup T$. By the induction hypothesis, we have $$f(S_0)\geq \left(1-e^{-\frac{|S_0|}{W}}-2\varepsilon\right)v.$$ Note that $S_0=\emptyset$ in the first round, which also satisfies the above inequality.
Due to Lemma \[lem:fact\_size\](1), it holds that $f(S) = f(S_0) + f(T\mid S_0) \geq f(S_0) + \alpha |T|$, where $\alpha= \frac{(1-\varepsilon)v - f(S_0)}{W}$. Hence it holds that $$\begin{aligned}
f(S) & \geq f(S_0)\left(1 - \frac{|T|}{W}\right) + (1-\varepsilon) \frac{|T|}{W}v \\
& \geq \left(1 - e^{-\frac{|S_0|}{W}}- 2 \varepsilon \right)\left(1 - \frac{|T|}{W}\right)v+ \frac{|T|}{W}v - \frac{|T|}{W}\varepsilon v\\
& = \left(1 - \left(1 - \frac{|T|}{W}\right) e^{-\frac{|S_0|}{W}}\right)v - \left( 2 - \frac{|T|}{W} \right) \varepsilon v\\
& \geq \left(1 - \left(1 - \frac{|T|}{W}\right) e^{-\frac{|S_0|}{W}}\right)v - 2 \varepsilon v,\end{aligned}$$ where the second inequality uses the induction hypothesis. Since $\left(1 - \frac{|T|}{W}\right)\leq e^{-\frac{|T|}{W}}$, we have $$f(S) \geq \left(1 - e^{-\frac{|S_0|+|T|}{W}}-2\varepsilon\right)v = \left(1 - e^{-\frac{|S|}{W}}-2\varepsilon\right)v,$$ which proves the lemma.
The next lemma says that the function value increases by at least $\varepsilon f({\textup{\rm OPT}})$ in each round. This implies that the algorithm terminates in $O(\varepsilon^{-1})$ rounds.
\[lem:round\_size\] Suppose that we run the algorithm with $v \leq f({\textup{\rm OPT}})$ and $W \geq |{\textup{\rm OPT}}|$. At the end of each round, if the final set $S=S_0\cup T$ at Line 7 satisfies $|S| <K$, then $f(S) - f(S_0) \geq \varepsilon f({\textup{\rm OPT}})$.
Suppose that the final set $S_0\cup T$ satisfies $|S_0\cup T|<K$. This means that, in the last round, each item $e$ in ${\textup{\rm OPT}}\setminus (S_0\cup T)$ is discarded because the marginal return is not large, which implies that $f(e\mid S) < \alpha$ by Lemma \[lem:fact\_size\](2). As $|{\textup{\rm OPT}}\setminus S| \leq W$ and $\alpha = \frac{(1-\varepsilon)v - f(S_0)}{W}$, we have from submodularity that $$f({\textup{\rm OPT}})\leq f(S)+ \sum_{e\in {\textup{\rm OPT}}\setminus S}f(e\mid S) \leq f(S)+ \alpha W\leq f(S)+ (1-\varepsilon)v -f(S_0).$$ Since $v\leq f({\textup{\rm OPT}})$, this proves the lemma.
From Lemmas \[lem:c\_size\] and \[lem:round\_size\], we have the following.
\[thm:ratio\_size\] Let $\mathcal{I} = (f, {K}, E)$ be an instance of the cardinality-constrained problem (\[eq:problem\_size\]). Suppose that $v \leq f({\textup{\rm OPT}}) \leq (1+\varepsilon) v$. Then the algorithm can compute a $(1- e^{-1}- O(\varepsilon))$-approximate solution in $O(\varepsilon^{-1})$ passes and $O(K)$ space. The total running time is $O(\varepsilon^{-1} n)$.
While $|S|<{K}$, the $f$-value is increased by at least $\varepsilon f({\textup{\rm OPT}})$ in each round by Lemma \[lem:round\_size\]. Hence, after $p$ rounds, the current set $S$ satisfies that $f(S)\geq p \varepsilon f({\textup{\rm OPT}})$. Since $f(S)\leq f({\textup{\rm OPT}})$, the number of rounds is at most $\varepsilon^{-1}+1$. As each round takes $O(n)$ time, the total running time is $O(\varepsilon^{-1} n)$. Since we only store a set $S$, the space required is clearly $O(K)$.
The algorithm terminates when $|S|={K}$. From Lemma \[lem:c\_size\] and the fact that $f({\textup{\rm OPT}}) \leq (1+\varepsilon) v$, we have $$f(S)\geq \left(1-e^{-1}-2\varepsilon\right)v \geq \left(1-e^{-1}-O(\varepsilon)\right)f({\textup{\rm OPT}}).$$
Algorithm with guessing the optimal value {#sec:size_guessv}
-----------------------------------------
We first note that $m \leq f({\textup{\rm OPT}}) \leq mK$, where $m = \max_{e \in E}f(e)$. Hence, if we prepare $\mathcal{V}=\{(1+\varepsilon)^im\mid (1+\varepsilon)^i\leq K, i=0,1,\dots \}$, then we can guess $v$ such that $v\leq f({\textup{\rm OPT}})\leq (1+\varepsilon)v$. As the size of $\mathcal{V}$ is $O(\varepsilon^{-1}\log K)$, if we run the algorithm for each element in $\mathcal{V}$, we need $O(K \varepsilon^{-1}\log K)$ space and $O(\varepsilon^{-1})$ passes in the streaming setting. This, however, will take $O(n \varepsilon^{-2}\log K)$ running time. We remark that, using a $(0.5-\varepsilon)$-approximate solution $X$ obtained by a single-pass streaming algorithm [@Badanidiyuru:2013jc], we can guess $v$ from the range between $f(X)$ and $(2+\varepsilon) f(X)$, which leads to $O(K\varepsilon^{-1}\log K)$ space and $O(n\varepsilon^{-1}\log K+n\varepsilon^{-2})$ time, taking $O(\varepsilon^{-1})$ passes. This proves the second part of Theorem \[thm:main\_cardinality\].
Below we explain how to reduce the running time to $O(\varepsilon^{-1}n\log (\varepsilon^{-1}\log K))$ by the binary search.
We can find a $(1- e^{-1}-\varepsilon)$-approximate solution in $O(\varepsilon^{-1}\log (\varepsilon^{-1} \log K))$ passes and $O(K)$ space, running in $O(n \varepsilon^{-1}\log (\varepsilon^{-1}\log K))$ time.
We here describe an algorithm using with slight modification. Let $p$ be the minimum integer that satisfies $(1+\varepsilon)^p \geq K$. It follows that $p = O(\varepsilon^{-1}\log K)$.
We set $s_0 = 1$ and $t_0 = p$. Suppose that $m (1+\varepsilon)^{s_i} \leq f({\textup{\rm OPT}}) \leq m (1+\varepsilon)^{t_i}$ for some $i\geq 0$. Set $u = \lfloor(s_i+t_i)/2\rfloor$, and take the middle value $v' = m (1+\varepsilon)^u$. Perform the algorithm, but stop the repetition after $\varepsilon^{-1}+1$ rounds.
Suppose that the output $S$ is of size $K$. Then, if $v'\geq f({\textup{\rm OPT}})$, we have $f(S)\geq (1- e^{-1}- O(\varepsilon)) v' \geq (1- e^{-1}- O(\varepsilon))f({\textup{\rm OPT}})$ by Lemma \[lem:c\_size\]. Hence we may assume that $v'\leq f({\textup{\rm OPT}}) \leq m (1+\varepsilon)^{t_i}$. So we set $s_{i+1}= u$ and $t_{i+1}=t_{i}$.
Suppose that the output $S$ is of size $<K$. It follows from Lemma \[lem:round\_size\] that, if $f({\textup{\rm OPT}})\geq v'$, it holds that $f(S) > p \varepsilon f({\textup{\rm OPT}})$ after $p$ rounds. Hence, after $\varepsilon^{-1}+1$ rounds, we have $f(S)>f({\textup{\rm OPT}})$, a contradiction. Thus we are sure that $f({\textup{\rm OPT}}) < v'$. So we see that $m (1+\varepsilon)^{s_i}\leq f({\textup{\rm OPT}}) \leq v'$, and we set $s_{i+1}= s_i$ and $t_{i+1}=u$.
We repeat the above binary search until the interval has length 1. As $t_0/s_0 = p$, the number of iterations is $O(\log p)=O\left(\log \left(\varepsilon^{-1} \log K\right) \right)$. Since each iteration takes $O(\varepsilon^{-1})$ passes, the algorithm takes $O(\varepsilon^{-1} \log (\varepsilon^{-1} \log K))$ passes in total. The running time is $O(n \varepsilon^{-1} \log (\varepsilon^{-1} \log K))$. Notice that there is no need to store the solutions obtained in each iteration; rather, just the function values and the corresponding indices $u_i$ are enough to find the best solution. Therefore, $O\left(K+\log\left(\varepsilon^{-1} \log K\right)\right)=O(K)$ space suffices. The algorithm description is given in Algorithm \[alg:simple2\_size\].
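A sketch of the binary search just described (reusing the hypothetical `threshold_passes` routine above and taking $W=K\geq|{\textup{\rm OPT}}|$; for simplicity it stores the best full-size set found rather than only function values and indices):

```python
import math

def guess_by_binary_search(stream, f, K, m, eps):
    p = math.ceil(math.log(K) / math.log(1 + eps))  # smallest p with (1+eps)^p >= K
    s, t = 0, p
    best = []
    while t - s > 1:
        u = (s + t) // 2
        S = threshold_passes(stream, f, K, m * (1 + eps) ** u, K, eps)
        if len(S) == K:        # guess not too large: keep S, search the upper half
            if f(S) > f(best):
                best = S
            s = u
        else:                  # f(OPT) < guess: search the lower half
            t = u
    return best
```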
Simple Algorithm for the Knapsack-Constrained Problem {#sec:simple}
=====================================================
In the rest of the paper, let ${\mathcal{I}}= (f, c, {K}, E)$ be an input instance of the problem (\[eq:problem\]). Let ${\textup{\rm OPT}}= \{o_1, \dots , o_\ell\}$ denote an optimal solution with $c(o_1) \geq c(o_2) \geq \cdots \geq c(o_\ell)$. We denote $c_i=c(o_i)/{K}$ for $i=1,2,\dots, \ell$.
Similarly to Section \[sec:size\], we suppose that we know in advance the approximate value $v$ of $f({\textup{\rm OPT}})$, i.e., $v\leq f({\textup{\rm OPT}})\leq (1+\varepsilon)v$. The value $v$ can be found with a single-pass streaming algorithm with constant ratio [@Yu:2016] in $O(n\varepsilon^{-1}\log K)$ time and $O(K \varepsilon^{-1}\log K)$ space. Specifically, letting $X$ be the output of a single-pass $\alpha$-approximation algorithm, we know that the optimal value is between $f(X)$ and $f(X)/\alpha$. We can guess $v$ by a geometric series $\{(1+\varepsilon)^i\mid i\in \mathbb{Z}\}$ in this range, and then the number of guesses is $O(\varepsilon^{-1})$. Thus, if we design an algorithm running in $O(T_1)$ time and $O(T_2)$ space provided the approximate value $v$, then the total running time is $O(n\varepsilon^{-1}\log K + \varepsilon^{-1}T_1)$ and the space required is $O(\max\{\varepsilon^{-1}\log K, \varepsilon^{-1} T_2\})$.
Simple Algorithm
----------------
We first claim that the algorithm in Section \[sec:size\] can be adapted for the knapsack-constrained problem as below (Algorithm \[alg:simple\]). At Line 6, we pick an item when the marginal return per unit weight exceeds the threshold $\alpha$. We stop the repetition when $f(S) - f(S_0) < \varepsilon v$. Clearly, the algorithm terminates.
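A compact sketch of this adaptation is given below. It is our own reconstruction rather than the paper's Algorithm \[alg:simple\]; the threshold $\alpha=\frac{(1-\varepsilon)v - f(S_0)}{W}$ is taken from the proof of Lemma \[lem:round\_knapsack\], and `oracle` is an assumed value-oracle for $f$.

```python
def simple_knapsack(oracle, stream, cost, K, v, W, eps):
    """Multi-pass thresholding for the knapsack constraint (a sketch).

    oracle(S) returns f(S) for a list of items; stream can be iterated once
    per pass; cost(e) is the item size; v is a guess of f(OPT); W >= c(OPT).
    """
    S, size = [], 0.0
    while True:
        S0_value = oracle(S)
        alpha = ((1 - eps) * v - S0_value) / W      # threshold per unit size
        for e in stream:                             # one pass over the stream
            gain = oracle(S + [e]) - oracle(S)
            if size + cost(e) <= K and gain >= alpha * cost(e):
                S.append(e)
                size += cost(e)
        if oracle(S) - S0_value < eps * v:           # round gained too little: stop
            return S
```

Each iteration of the outer loop corresponds to one pass, i.e., one round of Lines 3–8.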
In a similar way to Lemmas \[lem:fact\_size\] and \[lem:c\_size\], we have the following observations. We omit the proof.
\[lem:fact\_knapsack\] During the execution of , in each round (Lines 3–8) the following hold:
1. The current set $S \subseteq E$ always satisfies $f(T'\mid S_0) \geq \alpha c(T')$, where $T'=S\setminus S_0$.
2. If an item $e \in E$ fails the condition $f(e\mid S_e) < \alpha c(e)$ at Line 6, where $S_e$ is the set just before $e$ arrives, then the final set $S$ in the round satisfies $f(e\mid S) < \alpha c(e)$.
3. At the end of each round, we have $$f(S)\geq \left(1-e^{-\frac{c(S)}{W}}-2\varepsilon\right)v.$$
Furthermore, similarly to the proof of Lemma \[lem:round\_size\], we see that the output has total size more than $K-c(o_1)$.
\[lem:round\_knapsack\] Suppose that we run with $v \leq f({\textup{\rm OPT}})$ and $W \geq c({\textup{\rm OPT}})$. At the end of the algorithm, it holds that $c(S)>K-c(o_1)$.
Suppose to the contrary that $c(S)\leq K-c(o_1)$ at the end. Then, in the last round, each item $e$ in ${\textup{\rm OPT}}\setminus S$ still fits but is discarded because its marginal return per unit weight is too small, which implies that $f(e\mid S) < \alpha c(e)$ by Lemma \[lem:fact\_knapsack\](2). As $c({\textup{\rm OPT}}\setminus S) \leq W$ and $\alpha = \frac{(1-\varepsilon)v - f(S_0)}{W}$, where $S_0$ is the initial set in the last round, we have $$f({\textup{\rm OPT}})\leq f(S)+ \sum_{e\in {\textup{\rm OPT}}\setminus S}f(e\mid S) \leq f(S)+ \alpha W \leq f(S)+ (1-\varepsilon)v -f(S_0).$$ Since $v\leq f({\textup{\rm OPT}})$, we obtain $f(S) - f(S_0) \geq \varepsilon v$, which contradicts the stopping condition of the last round and thus proves the lemma.
Thus, we obtain the following approximation ratio, depending on the size of the largest item.
\[lem:simpleratio\] Let $\mathcal{I}=(f, c, {K}, E)$ be an instance of the problem . Suppose that $v\leq f({\textup{\rm OPT}})\leq O(1)v$ and $W \geq c({\textup{\rm OPT}})$. The algorithm can find in $O(\varepsilon^{-1})$ passes and $O(K)$ space a set $S$ such that ${K}-c(o_1) < c(S)\leq {K}$ and $$\label{eq:simpleratio_knapsack}
f(S)\geq \left(1-e^{-\frac{K-c(o_1)}{W}}-O(\varepsilon)\right)v.
$$ The total running time is $O(\varepsilon^{-1} n)$.
Let $S$ be the final set of . By Lemma \[lem:round\_knapsack\], the final set $S$ satisfies $c(S) > {K}-c(o_1)$. Hence the bound follows from Lemma \[lem:fact\_knapsack\](3). The number of passes is $O(\varepsilon^{-1})$, as each round except the last increases the $f$-value by at least $\varepsilon v$ and $f({\textup{\rm OPT}})\leq O(1)v$. Hence the running time is $O(\varepsilon^{-1} n)$, and the space required is clearly $O(K)$.
Lemma \[lem:simpleratio\] gives us a good ratio when $c(o_1)$ is small (see Corollary \[cor:c1\_lb\] in Section \[sec:0.5overview\]). However, the ratio worsens when $c(o_1)$ becomes larger. In the next subsection, we show that can be used to obtain a $(0.39-\varepsilon)$-approximation by ignoring large-size items.
$0.39$-Approximation: Ignoring Large Items {#sec:039}
------------------------------------------
Let us remark that would work for finding a set $S$ that approximates *any subset* $X$. More precisely, given an instance ${\mathcal{I}}=(f, c, {K}, E)$ of the problem , consider finding a feasible set to ${\mathcal{I}}$ that approximates
> ($\ast$) a subset $X\subseteq E$ such that $v \leq f(X)\leq O(1)v$ and $W \geq c(X)$.
This means that $v$ and $W$ are approximate values of $f(X)$ and $c(X)$, respectively. Let $X=\{x_1, \dots, x_\ell\}$ with $c(x_1)\geq \dots \geq c(x_\ell)$. Note that $X$ is not necessarily feasible to ${\mathcal{I}}$, i.e., $c(X)$ (and thus $W$) may be larger than $K$, but we assume that $c(x_i)\leq K$ for any $i=1,\dots, \ell$. Then can find an approximation of $X$.
\[cor:simpleratio\] Suppose that we are given an instance ${\mathcal{I}}= (f,c, K, E)$ for the problem and $v, W$ satisfying the above condition for some subset $X\subseteq E$. Then can find a set $S$ in $O(\varepsilon^{-1})$ passes and $O(K)$ space such that $K - c(x_1) < c(S)\leq K$ and $$f(S)\geq \left(1-e^{-\frac{c(S)}{W}} - O(\varepsilon)\right)v \geq \left(1-e^{-\frac{K - c(x_1)}{W}} - O(\varepsilon)\right)v.$$ The total running time is $O(\varepsilon^{-1} n)$.
In particular, Corollary \[cor:simpleratio\] can be applied to approximate ${\textup{\rm OPT}}-o_1$, with estimates of $c(o_1)$ and $f(o_1)$.
\[cor:IgnoreLarge\] Suppose that we are given an instance ${\mathcal{I}}= (f,c, K, E)$ for the problem such that $v\leq f({\textup{\rm OPT}})\leq O(1)v$ and $W \geq c({\textup{\rm OPT}})$. We further suppose that we are given ${\underline{c}}_1$ with ${\underline{c}}_1K\leq c(o_1)\leq (1+\varepsilon){\underline{c}}_1K$ and $\tau$ with $f(o_1)\leq \tau v$. Then we can find a set $S$ in $O(\varepsilon^{-1})$ passes and $O(K)$ space such that $K - c(o_2) < c(S)\leq K$ and $$f(S)\geq (1-\tau)\left(1-e^{-\frac{K - c(o_2)}{W-{\underline{c}}_1}} - O(\varepsilon)\right)v.$$ In particular, when $W=K$, we have $$\label{eq:IgnoreLarge}
f(S)\geq (1 - \tau)\left(1-e^{-1} - O(\varepsilon)\right)v.$$
We may assume that $\tau\leq 0.5$, as otherwise by taking a singleton $e$ with maximum return $f(e)$, we have $f(e)\geq \tau v$, implying that $S=\{e\}$ satisfies the inequality since $\tau\geq 0.5$. Moreover, it holds that $c({\textup{\rm OPT}}-o_1)\leq W - \underline{c}_1 K$ and $f({\textup{\rm OPT}}- o_1) \geq f({\textup{\rm OPT}})-f(o_1)\geq (1-\tau)v$, and thus $f({\textup{\rm OPT}}- o_1)\leq f({\textup{\rm OPT}})\leq (1+\varepsilon)v \leq O(1)\cdot (1-\tau)v$ since $\tau\leq 0.5$. Using this fact, we perform to approximate ${\textup{\rm OPT}}-o_1$. Since the largest size in ${\textup{\rm OPT}}-o_1$ is $c(o_2)$, by Corollary \[cor:simpleratio\], we can find a set $S$ such that $K - c(o_2) < c(S)\leq K$ and $$f(S)\geq (1 - \tau)\left(1-e^{-\frac{K - c(o_2)}{W-{\underline{c}}_1K}} - O(\varepsilon)\right)v.$$ Thus the first part of the corollary holds.
When $W=K$, the above bound is equal to $$\label{eq:039proof}
f(S)\geq (1 - \tau)\left(1-e^{-\frac{1 - c_2}{1-{\underline{c}}_1}} - O(\varepsilon)\right)v.$$ We note that $$\frac{1-c_2}{1-{\underline{c}}_1} \geq 1-\varepsilon.$$ Indeed, the inequality clearly holds when $c_2\leq {\underline{c}}_1$. Consider the case when $c_2\geq {\underline{c}}_1$. Then, since $c_2\leq 1-{\underline{c}}_1$, we see that ${\underline{c}}_1\leq 0.5$. Hence, since $c_2\leq c_1\leq (1+\varepsilon){\underline{c}}_1$, we obtain $$\frac{1-c_2}{1-{\underline{c}}_1} \geq 1 - \varepsilon \frac{{\underline{c}}_1}{1-{\underline{c}}_1}\geq 1 - \varepsilon,$$ where the last inequality holds since ${\underline{c}}_1\leq 0.5$. Thus we have from .
The above corollary, together with Lemma \[lem:simpleratio\], delivers a $(0.39-\varepsilon)$-approximation.
Suppose that we are given an instance ${\mathcal{I}}= (f,c, K, E)$ for the problem with $v\leq f({\textup{\rm OPT}})\leq (1+\varepsilon)v$. Then we can find a $(0.39-O(\varepsilon))$-approximate solution in $O(\varepsilon^{-1})$ passes and $O(\varepsilon^{-1}K)$ space. The total running time is $O(\varepsilon^{-2} n)$.
First suppose that $c(o_1)\leq 0.505K$. Then Lemma \[lem:simpleratio\] with $W=K$ implies that we can find a set $S_1$ such that $$f(S_1)\geq \left(1-e^{-\frac{{K}-c(o_1)}{K}}-O(\varepsilon)\right)v\geq \left(1-e^{-(1-0.505)}-O(\varepsilon)\right)v\geq (0.39-O(\varepsilon))v.$$ Thus we may suppose that $c(o_1) > 0.505K$. We guess ${\underline{c}}_1$ with ${\underline{c}}_1K\leq c(o_1)\leq (1+\varepsilon){\underline{c}}_1K$ by a geometric series of the interval $[0.505, 1.0]$, i.e., we find ${\underline{c}}_1$ such that $0.505\leq {\underline{c}}_1\leq c(o_1)/K \leq (1+\varepsilon){\underline{c}}_1\leq 1$ using $O(\varepsilon^{-1})$ space. We may also suppose that $f(o_1) < 0.39v$, as otherwise we can just take a singleton with maximum return from $E$. By Corollary \[cor:IgnoreLarge\] with $W=K$ and $\tau =0.39$, we can find a set $S_2$ such that $$f(S_2)\geq 0.61 \left(1-e^{-\frac{1 - c_2}{1-{\underline{c}}_1}} - O(\varepsilon)\right)v.$$ Since $c_2\leq 1-{\underline{c}}_1\leq 0.495$, we have $$\frac{1 - c_2}{1-{\underline{c}}_1} \geq \frac{1 - 0.495}{1-0.505}\geq 1.02.$$ Therefore, it holds that $$f(S_2)\geq 0.61 \left(1-e^{-1.02} - O(\varepsilon)\right)v \geq (0.39-O(\varepsilon))v.$$ This completes the proof.
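The two cases in this proof can be checked numerically (a small sketch; the $O(\varepsilon)$ terms are dropped):

```python
import math

# Numeric check of the two cases above, with the O(eps) terms dropped:
# both bounds stay at roughly 0.39.
small_c1 = 1 - math.exp(-(1 - 0.505))          # case c(o_1) <= 0.505 K
large_c1 = 0.61 * (1 - math.exp(-1.02))        # case c(o_1) >  0.505 K
print(round(small_c1, 3), round(large_c1, 3))  # 0.39 and 0.39
```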
$0.46$-Approximation Algorithm {#sec:046}
==============================
In this section, we present a $(0.46-\varepsilon)$-approximation algorithm for the knapsack-constrained problem. In our algorithm, we assume that we know in advance approximations of $c_1$ and $c_2$. That is, we are given $\underline{c}_i, \overline{c}_i$ such that $\underline{c}_i\leq c_i\leq \overline{c}_i$ and $\overline{c}_i\leq (1+\varepsilon)\underline{c}_i$ for $i\in\{1,2\}$. Define $E_i = \{ e\in E\mid c(e)\in[\underline{c}_i K, \overline{c}_i K]\}$ for $i\in\{1,2\}$. We call items in $E_1$ [*large items*]{}, and items in $E\setminus (E_1\cup E_2)$ [*small*]{}. Notice that we often distinguish the cases $c_1 \leq 0.5$ and $c_1 \geq 0.5$. In the former case, we assume that $\overline{c}_1 \leq 0.5$, while in the latter, $\underline{c}_1 \geq 0.5$.
We first show that we may assume that $c_1+c_2\leq 1-\varepsilon$. This means that we may assume that $\overline{c}_1+\overline{c}_2\leq 1$. See Appendix for the proof.
\[lem:Assumption1\] Suppose that we are given $v$ such that $v\leq f({\textup{\rm OPT}}) \leq (1+\varepsilon) v$. If $c_1+c_2 \geq 1- \varepsilon$, we can find a $(0.5-O(\varepsilon))$-approximate solution in $O(\varepsilon^{-1}{K})$ space using $O(\varepsilon^{-1})$ passes. The total running time is $O(n \varepsilon^{-1})$.
The main idea of our algorithm is to choose an item $e\in E_1$ such that both $f({\textup{\rm OPT}}- o_1\mid e)$ and $f({\textup{\rm OPT}}- o_1 - o_2\mid e)$ are large. After having this item $e$, we define $g(\cdot)=f(\cdot\mid e)$, and consider the problem: $$\text{maximize\ \ }g(S) \quad \text{subject to \ } c(S)\leq K-c(e),\quad S\subseteq E.\label{eq:p1}$$ We then try to find feasible sets to that approximate ${\textup{\rm OPT}}- o_1$ and ${\textup{\rm OPT}}-o_1 - o_2$. These solutions, together with the item $e$, will give us well-approximated solutions for the original instance. More precisely, we have the following observation.
\[obs:1\] Let $e\in E$ be an item. Define $g(\cdot)=f(\cdot \mid e)$. If $g({\textup{\rm OPT}}- o_1)\geq p_1v$ and $S_1$ is a feasible set to the problem such that $g(S_1)\geq \kappa_1 p_1v$, then it holds that $c(\{e\}\cup S_1)\leq K$ and $$f(\{e\}\cup S_1)\geq f(e) + \kappa_1 p_1 v.$$ Similarly, if $g({\textup{\rm OPT}}- o_1-o_2)\geq p_2v$ and $S_2$ is a feasible set to the problem such that $g(S_2)\geq \kappa_2 p_2v$, then it holds that $c(\{e\}\cup S_2)\leq K$ and $$f(\{e\}\cup S_2)\geq f(e) + \kappa_2 p_2 v.$$
To make the RHSs in Observation \[obs:1\] large, we aim to find an item $e$ from $E_1$ such that $f(e)\approx f(o_1)$ and $p_1, p_2$ are large simultaneously. We propose two algorithms for finding such $e$ in Section \[eq:046gooditem\]. We then apply to approximate ${\textup{\rm OPT}}-o_1$ and ${\textup{\rm OPT}}-o_1-o_2$ for , respectively. Since the largest item sizes in ${\textup{\rm OPT}}-o_1$ and ${\textup{\rm OPT}}-o_1-o_2$ are smaller, the performances $\kappa_1$ and $\kappa_2$ of are better than just applying to the original instance. Therefore, the total approximation ratio becomes at least $0.46$. The following subsections give the details.
Finding a Good Item {#eq:046gooditem}
-------------------
One important observation is the following, which is useful for the analysis when $c_1\leq 0.5$.
\[lem:good\_e\_1\] Let $e_0\in E$. Suppose that $f({\textup{\rm OPT}})\geq v$. If $f(e_0+o_1)<\beta v$, then we have $$f({\textup{\rm OPT}}-o_1\mid e_0)\geq (1-\beta) v.$$ Moreover, if $f(e_0+o_2)<\beta v$ in addition, then we obtain $$f({\textup{\rm OPT}}-o_1-o_2\mid e_0)\geq (1- 2\beta) v + f(e_0).$$
By assumption, it holds that $\beta v > f(e_0+o_1) = f(e_0) + f(o_1\mid e_0)$, implying $$f({\textup{\rm OPT}}- o_1\mid e_0) \geq f({\textup{\rm OPT}}\mid e_0) - f(o_1\mid e_0) \geq (f({\textup{\rm OPT}})-f(e_0)) - (\beta v - f(e_0)) \geq (1-\beta) v.$$ Moreover, if $f(e_0+o_2)<\beta v$ in addition, then we have $\beta v > f(e_0+o_2) = f(e_0) + f(o_2\mid e_0)$, implying $$f({\textup{\rm OPT}}- o_1 - o_2 \mid e_0) \geq f({\textup{\rm OPT}}- o_1\mid e_0) - f(o_2 \mid e_0) \geq (1-\beta) v- (\beta v - f(e_0)).$$ Thus the statement holds.
When $c_1 \leq \overline{c}_1 \leq 0.5$, for any item $e_0\in E_1$, we see that $e_0+o_1$ is a feasible set. Hence, by checking in a single pass whether $f(e_0+e')\geq \beta v$ for some $e'\in E$ with $c(e_0)+c(e')\leq K$, either we obtain a feasible set $e_0+e'$ such that $f(e_0+e')\geq \beta v$, or we can bound $f({\textup{\rm OPT}}- o_1 \mid e_0)$ and $f({\textup{\rm OPT}}- o_1 - o_2 \mid e_0)$ from below by the above lemma.
Another way to lower-bound $p_1$ and $p_2$ in Observation \[obs:1\] is to use the algorithm in [@APPROX17]. It is difficult to correctly identify $o_1$ among the items in $E_1$, but we can nonetheless find a reasonable approximation of it with a single pass [@APPROX17]. For the sake of convenience, we define a procedure . This procedure takes an estimate $v$ of $f({\textup{\rm OPT}})$ along with estimates of the size of $o_1$ and of its $f$-value. It then returns a constant number of items of similar size, one of which, together with ${\textup{\rm OPT}}- o_1$, guarantees $(2/3-O(\varepsilon))v$. More precisely, we have the following proposition.
\[thm:APPROX\] Let $X\subseteq E$ such that $f(X)\geq v$. Furthermore, assume that there exists $x_1\in X$ such that $ \underline{c} {K}\leq c(x_1) \leq \overline{c}{K}$ and $\tau v/(1+\varepsilon)\leq f(x_1) \leq \tau v$. Then , a single-pass streaming algorithm using $O( 1)$ space, returns a set $Y$ of $O(1)$ items such that some item $e^\ast$ in $Y$ satisfies $$f(X - x_1 + e^\ast) \geq \Gamma (f(x_1))v - O(\varepsilon)v,$$ where $$\Gamma (t)=
\begin{cases}
\frac{2}{3} & \mbox{if \quad $t\geq 0.5$}\\
\frac{5}{6}-\frac{t}{3} & \mbox{if \quad $0.5 \geq t\geq 0.4$}\\
\frac{9}{10}-\frac{t}{2} & \mbox{if \quad $0.4\geq t \geq 0$}.
\end{cases}$$ Moreover, for any item $e\in Y$, we have $\tau v/(1+\varepsilon)\leq f(e) \leq \tau v$ and ${\underline{c}}{K}\leq c(e) \leq {\overline{c}}{K}$.
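The guarantee $\Gamma$ above is a piecewise-linear function; a direct transcription in Python (interpreting its argument as the item value normalized by $v$, which is how it is used below):

```python
def Gamma(t):
    """Piecewise-linear guarantee Gamma(t) from the proposition above."""
    if t >= 0.5:
        return 2.0 / 3.0
    if t >= 0.4:
        return 5.0 / 6.0 - t / 3.0
    return 9.0 / 10.0 - t / 2.0
```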
Using the procedure , we can find a good item $e$.
\[lem:good\_e\_2\] Let $Y:=$, where $f({\textup{\rm OPT}})\geq v$ and $\tau v/(1+\varepsilon)\leq f(o_1) \leq \tau v$. Then there exists $e\in Y$ such that $\tau v/(1+\varepsilon)\leq f(e) \leq \tau v$ and $$f({\textup{\rm OPT}}- o_1 \mid e) \geq ( \Gamma (\tau) - \tau )v - O(\varepsilon) v.$$ Moreover, if $f(e+o_2)<\beta v$ in addition, then $$f({\textup{\rm OPT}}-o_1-o_2\mid e)\geq (\Gamma (\tau) - \beta ) v - O(\varepsilon) v.$$
It follows from Theorem \[thm:APPROX\] that some $e\in Y$ satisfies that $f({\textup{\rm OPT}}- o_1 + e) \geq \Gamma (f(o_1))v - O(\varepsilon)v \geq \Gamma (\tau)v - O(\varepsilon)v$ and $f(e)\leq \tau v$, and hence $$f({\textup{\rm OPT}}- o_1 \mid e) = f({\textup{\rm OPT}}- o_1+e)-f(e)\geq ( \Gamma (\tau) - \tau )v - O(\varepsilon) v.$$ Moreover, if $f(e+o_2)<\beta v$ in addition, then we have $$\beta v > f(e+o_2) = f(e) + f(o_2\mid e) \geq \frac{\tau }{1+\varepsilon} v + f(o_2\mid e)\geq \tau v + f(o_2\mid e) - O(\varepsilon) v,$$ implying $$f({\textup{\rm OPT}}- o_1 - o_2 \mid e) \geq f({\textup{\rm OPT}}- o_1\mid e) - f(o_2 \mid e) \geq ( \Gamma (\tau) - \tau )v - (\beta - \tau ) v - O(\varepsilon) v.
$$ Thus the statement holds.
Algorithm: Taking a Good Large Item First {#eq:046alg}
-----------------------------------------
Suppose that we have $e\in E_1$ such that $f({\textup{\rm OPT}}- o_1 \mid e)\geq p_1 v$ and $f({\textup{\rm OPT}}- o_1 - o_2 \mid e)\geq p_2 v$, knowing that such $e$ can be found by Lemma \[lem:good\_e\_1\] or \[lem:good\_e\_2\]. More precisely, when $c_1\geq 0.5$, we first find a set $T$ by , where $\tau v/(1+\varepsilon) \leq f(o_1)\leq \tau v$; when $c_1\leq 0.5$, set $T=\{e\}$ for arbitrary $e\in E_1$. Then $|T|=O(1)$ and some $e\in T$ satisfies $f({\textup{\rm OPT}}- o_1 \mid e)\geq p_1 v$ and $f({\textup{\rm OPT}}- o_1 - o_2 \mid e)\geq p_2 v$, where $p_1$ and $p_2$ are determined by Lemma \[lem:good\_e\_1\] or \[lem:good\_e\_2\].
Then, for each item $e\in T$, consider the problem , and let ${\mathcal{I}}'$ be the corresponding instance. We apply to the instance ${\mathcal{I}}'$ approximating ${\textup{\rm OPT}}-o_1$ and ${\textup{\rm OPT}}-o_1-o_2$, respectively. Here we set $v_\ell=p_\ell v$ ($\ell=1,2$), $W_1 = W-{\underline{c}}_1 K$, and $W_2 = W-{\underline{c}}_1 K -{\underline{c}}_2 K$. It follows that $c({\textup{\rm OPT}}- o_1)\leq W_1$ and $c({\textup{\rm OPT}}- o_1 - o_2)\leq W_2$. Define $S^e_\ell =e +$ for $\ell=1,2$. Also define $S^e_0=e+e^\ast$, where $e^\ast = \arg\max_{e'\in E: c(e')\leq {K}-c(e)} f(e+e')$. Moreover, for $\ell = 0,1,2$, define $\tilde{S}_\ell$ to be the set that achieves $\max\{f(S^e_\ell)\mid e\in T\}$.
The algorithm, called , can be summarized as in Algorithm \[alg:LargeFirst\]. We can perform Lines 3–8 in parallel using the same $O(\varepsilon^{-1})$ passes. Since $|T|=O(1)$, it takes $O(K)$ space.
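The overall structure can be sketched as follows. This is a rough Python reconstruction, not the paper's pseudocode; `simple_knapsack` refers to the routine sketched earlier, and all other names, signatures, and the exact bookkeeping are our own assumptions.

```python
def large_first(oracle, stream, cost, K, T, p1, p2, v, W1, W2, eps, simple_knapsack):
    """For each candidate large item e in T, approximate OPT - o1 and
    OPT - o1 - o2 with respect to g = f(. | e) on the residual capacity,
    and keep the best of the resulting candidate sets (a sketch)."""
    best = []
    for e in T:
        g = lambda S, e=e: oracle([e] + list(S)) - oracle([e])
        residual = K - cost(e)
        # S_0^e: e together with the best single item that still fits
        fits = [x for x in stream if cost(x) <= residual]
        cands = [[e] + ([max(fits, key=lambda x: oracle([e, x]))] if fits else [])]
        # S_1^e and S_2^e: approximate OPT - o1 and OPT - o1 - o2
        for p_l, W_l in ((p1, W1), (p2, W2)):
            S = simple_knapsack(g, stream, cost, residual, p_l * v, W_l, eps)
            cands.append([e] + list(S))
        best = max(cands + [best], key=oracle)
    return best
```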
The following bounds follow from Corollary \[cor:simpleratio\] and Observation \[obs:1\].
\[lem:LargeFirst\] Suppose that $v\leq f({\textup{\rm OPT}})\leq (1+\varepsilon)v$ and $c({\textup{\rm OPT}})\leq W$. We further suppose that ${\underline{c}}_\ell\leq c_\ell\leq {\overline{c}}_\ell \leq (1+\varepsilon){\underline{c}}_\ell$ for $\ell=1,2$ and $\frac{\tau}{1+\varepsilon} v\leq f(o_1)\leq \tau v$. Let $e\in E_1$ be an item such that $f({\textup{\rm OPT}}- o_1 \mid e)\geq p_1 v$ and $f({\textup{\rm OPT}}- o_1 - o_2 \mid e)\geq p_2 v$. Then, if $c_1+c_2\leq {1-\varepsilon / \delta}$ for some constant $\delta$, it holds that $$\begin{aligned}
f(\tilde{S}_1) & \geq \left(\tau + p_1 \left( 1 - e^{-\frac{K-{\overline{c}}_1 K-c_2K}{W-{\underline{c}}_1K}}\right)-O(\varepsilon)\right)v,\label{eq:LargeFirst1}\\
f(\tilde{S}_2) & \geq \left(\tau + p_2 \left( 1 - e^{-\frac{K-{\overline{c}}_1 K-c_3K}{W-{\underline{c}}_1 K-{\underline{c}}_2 K}}\right)-O(\varepsilon)\right)v.\label{eq:LargeFirst2}\end{aligned}$$ In particular, if $W=K$ and $c_1+c_2\leq {1-\varepsilon / \delta}$ for some constant $\delta$, it holds that $$\begin{aligned}
f(\tilde{S}_1) & \geq \left(\tau + p_1 \left( 1 - e^{-({1-\delta}) \mu}\right)-O(\varepsilon)\right)v,\label{eq:LargeFirst3}\\
f(\tilde{S}_2) & \geq \left(\tau + p_2 \left( 1 - e^{-\left(\frac{{1-\delta}}{\mu}-1\right)}\right)-O(\varepsilon)\right)v,\label{eq:LargeFirst4},\end{aligned}$$ where $\mu=\frac{1-{\overline{c}}_1-{\overline{c}}_2}{1-{\overline{c}}_1}$.
We note that $c({\textup{\rm OPT}}-o_1)\leq W_1 = W - {\underline{c}}_1 K$, and items in ${\textup{\rm OPT}}- o_1$ are of size at most $c(o_2)$. By Corollary \[cor:simpleratio\], can find a set $S$ such that $$g(S)\geq p_1 \left(1 - e^{-\frac{K-{\overline{c}}_1 K-c_2K}{W-{\underline{c}}_1K}} -O(\varepsilon)\right) v$$ as the capacity $K-c(e)\geq K - {\overline{c}}_1 K$. Therefore, since $f(e)\geq (\tau - O(\varepsilon))v$, the inequality follows from Observation \[obs:1\]. The inequality holds in a similar way, noting that $c({\textup{\rm OPT}}-o_1-o_2)\leq W - {\underline{c}}_1 K- {\underline{c}}_2 K$, and items in ${\textup{\rm OPT}}- o_1 - o_2$ are of size at most $c(o_3)$.
Suppose that $W=K$. Then the above inequalities and can be transformed to $$\begin{aligned}
f(\tilde{S}_1) & \geq \left(\tau + p_1 \left( 1 - e^{-\frac{1-{\overline{c}}_1-c_2}{1-{\underline{c}}_1}}\right)-O(\varepsilon)\right)v, \label{eq:046proof1} \\
f(\tilde{S}_2) & \geq \left(\tau + p_2 \left( 1 - e^{-\frac{1-{\overline{c}}_1 -c_3}{1-{\underline{c}}_1 -{\underline{c}}_2}}\right)-O(\varepsilon)\right)v. \label{eq:046proof2}\end{aligned}$$ Since ${\overline{c}}_\ell\leq (1+\varepsilon){\underline{c}}_\ell$ for $\ell =1,2$, we have $$\begin{aligned}
\lambda_1 &:= \frac{1-{\overline{c}}_1}{1-{\underline{c}}_1} \geq 1 - \varepsilon \frac{{\underline{c}}_1}{1-{\underline{c}}_1} \geq {1-\delta}+ \varepsilon, \mbox{\quad and}\\
\lambda_2 &:= \frac{1-{\overline{c}}_1 -{\overline{c}}_2}{1-{\underline{c}}_1 -{\underline{c}}_2} \geq 1 - \varepsilon \frac{{\underline{c}}_1+{\underline{c}}_2}{1-{\underline{c}}_1 -{\underline{c}}_2} \geq {1-\delta}+ \varepsilon,
\end{aligned}$$ where the second inequality in each line follows because ${\underline{c}}_1\leq {\underline{c}}_1+{\underline{c}}_2\leq {1-\varepsilon / \delta}$. Using $\lambda_1$, the exponent in is equal to $$\frac{1-{\overline{c}}_1-c_2}{1-{\underline{c}}_1} = \lambda_1 \frac{1-{\overline{c}}_1-c_2}{1-{\overline{c}}_1} \geq \left({1-\delta}\right)\mu.$$ Thus holds. Moreover, since $c_3 \leq 1-{\underline{c}}_1 -{\underline{c}}_2$, using $\lambda_2$, the exponent in is equal to $$\frac{1-{\overline{c}}_1 -c_3}{1-{\underline{c}}_1 -{\underline{c}}_2} \geq \frac{1-{\overline{c}}_1}{1-{\underline{c}}_1 -{\underline{c}}_2} - 1 = \lambda_2 \frac{1-{\overline{c}}_1}{1-{\overline{c}}_1 -{\overline{c}}_2} - 1 \geq \left({1-\delta}\right) \frac{1-{\overline{c}}_1}{1-{\overline{c}}_1 -{\overline{c}}_2} - 1.$$ Thus holds.
Analysis: $0.46$-Approximation {#eq:046anal}
------------------------------
We next analyze the approximation ratio of the algorithm. We consider the two cases $c_1\leq 0.5$ and $c_1\geq 0.5$ separately; we will show that , together with , admits a $(0.46-\varepsilon)$-approximation when $c_1\leq 0.5$ and a $(0.49-\varepsilon)$-approximation when $c_1\geq 0.5$, respectively.
\[lem:046small\] Suppose that $c_1\leq 0.5$ and $c_1+c_2\leq {1-\varepsilon / \delta}$, where $\delta = 0.01$. We further suppose that ${\underline{c}}_\ell\leq c_\ell\leq {\overline{c}}_\ell\leq (1+\varepsilon) {\underline{c}}_\ell$ for $\ell=1,2$, ${\overline{c}}_1\leq 0.5$, and $v\leq f({\textup{\rm OPT}})\leq (1+\varepsilon)v$. Then Algorithm , together with , can find a $(0.46-O(\varepsilon))$-approximate solution in $O(\varepsilon^{-1})$ passes and $O(\varepsilon^{-1}K)$ space. The total running time is $O(\varepsilon^{-2}n)$.
First suppose that $f(o_1)\leq 0.272v$. Then, by Corollary \[cor:IgnoreLarge\], we can find a set $S$ such that $$f(S) \geq 0.728 \left(1-e^{-1} - O(\varepsilon)\right)v\geq (0.46-O(\varepsilon))v.$$ Thus we may suppose that $f(o_1) \geq 0.272v$. We may also suppose that $f(o_1) \leq 0.46v$, as otherwise taking a singleton with maximum return from $E_1$ gives a $0.46$-approximation. We guess ${\underline{\tau}}$ and ${\overline{\tau}}$ with $0.272v\leq {\underline{\tau}}v\leq f(o_1) \leq {\overline{\tau}}v\leq 0.46 v$ and ${\overline{\tau}}\leq (1+\varepsilon){\underline{\tau}}$ from the interval $[0.272, 0.46]$ by a geometric series using $O(\varepsilon^{-1})$ space.
By Lemmas \[lem:good\_e\_1\] and \[lem:LargeFirst\], the output of is lower-bounded by the RHSs of the following three inequalities: $$\begin{aligned}
f(\tilde{S}_0) &\geq \beta v, \notag \\
f(\tilde{S}_1) &\geq \left({\overline{\tau}}+ (1-\beta) \left( 1 - e^{-\left({1-\delta}\right)\mu}\right)-O(\varepsilon)\right)v,\label{eq:Small3}\\
f(\tilde{S}_2) &\geq \left({\overline{\tau}}+ (1-2\beta+{\overline{\tau}}) \left( 1 - e^{-\left(\frac{{1-\delta}}{\mu}-1\right)}\right)-O(\varepsilon)\right)v, \label{eq:Small4}\end{aligned}$$ where $\mu=\frac{1-{\overline{c}}_1-{\overline{c}}_2}{1-{\overline{c}}_1}$. We may assume that $\beta < 0.46$. If $\mu \geq 0.5$, then implies that $$f(\tilde{S}_1) \geq \left( 0.272 + (1- 0.46) \left( 1 - e^{- \frac{{1-\delta}}{2}}\right)-O(\varepsilon)\right)v \geq (0.46-O(\varepsilon))v,$$ when $\delta = 0.01$. On the other hand, if $\mu \leq 0.5$, then implies that $$f(\tilde{S}_2) \geq \left(0.272 + (1-2\cdot 0.46 + 0.272) \left( 1 - e^{-\left(\frac{{1-\delta}}{0.5}-1\right)}\right)-O(\varepsilon)\right)v \geq (0.46-O(\varepsilon))v,$$ when $\delta = 0.01$. Thus the statement holds.
Similarly, we have the following guarantee when $c_1\geq 0.5$.
\[lem:046large\] Suppose that $c_1\geq 0.5$ and $c_1+c_2\leq {1-\varepsilon / \delta}$, where $\delta = 0.01$. We further suppose that ${\underline{c}}_\ell\leq c_\ell\leq {\overline{c}}_\ell\leq (1+\varepsilon) {\underline{c}}_\ell$ for $\ell=1,2$, ${\underline{c}}_1\geq 0.5$, and $v\leq f({\textup{\rm OPT}})\leq (1+\varepsilon)v$. Then Algorithm , together with , can find a $(0.49-O(\varepsilon))$-approximate solution in $O(\varepsilon^{-1})$ passes and $O(\varepsilon^{-1}K)$ space. The total running time is $O(\varepsilon^{-2}n)$.
First suppose that $f(o_1) \leq 0.224 v$. Then, by Corollary \[cor:IgnoreLarge\], we can find a set $S$ such that $$f(S)\geq 0.776 \left(1-e^{-1} - O(\varepsilon)\right)v \geq (0.49-O(\varepsilon))v.$$ Thus we may suppose that $f(o_1) \geq 0.224v$. We may also suppose that $f(o_1) \leq 0.49v$, as otherwise taking a singleton with maximum return from $E_1$ gives a $0.49$-approximation. We guess ${\underline{\tau}}$ and ${\overline{\tau}}$ with $0.224v\leq {\underline{\tau}}v\leq f(o_1) \leq {\overline{\tau}}v\leq 0.49v$ and ${\overline{\tau}}\leq (1+\varepsilon){\underline{\tau}}$ from the interval $[0.224, 0.49]$ by a geometric series using $O(\varepsilon^{-1})$ space.
Let $\mu = \frac{1-{\overline{c}}_1 -{\overline{c}}_2}{1-{\overline{c}}_1}$. By Corollary \[cor:IgnoreLarge\], we can find a set $\tilde{S}$ such that $$\label{eq:Large1}
f(\tilde{S})\geq (1 - {\overline{\tau}}) \left(1-e^{-\frac{1 - {\overline{c}}_2}{1-{\underline{c}}_1}} - O(\varepsilon)\right)v\geq (1 - {\overline{\tau}}) \left(1-e^{- ({1-\delta}) \mu-1} - O(\varepsilon)\right)v.$$ Here we note that $$\frac{1 - {\overline{c}}_2}{1-{\underline{c}}_1} = \frac{1 - {\overline{c}}_1}{1-{\underline{c}}_1} \left(\frac{1 - {\overline{c}}_1 - {\overline{c}}_2}{1-{\overline{c}}_1} \right)+\frac{{\overline{c}}_1}{1-{\underline{c}}_1}\geq \left({1-\delta}\right)\mu + 1,$$ since $\frac{{\overline{c}}_1}{1-{\underline{c}}_1}\geq 1$ when ${\overline{c}}_1\geq {\underline{c}}_1\geq 0.5$. Moreover, there exists $e'\in T$ such that $f({\textup{\rm OPT}}-o_1\mid e')$ and $f({\textup{\rm OPT}}-o_1-o_2\mid e')$ are bounded as in Lemma [lem:good\_e\_2]{}. By Lemma \[lem:LargeFirst\], the output of is lower-bounded by the RHSs of the following three inequalities: $$\begin{aligned}
f(\tilde{S}_0)&\geq \beta v,\nonumber\\
f(\tilde{S}_1)&\geq \left({\overline{\tau}}+ (\Gamma({\overline{\tau}})-{\overline{\tau}}) \left( 1 - e^{-\left({1-\delta}\right)\mu}\right)-O(\varepsilon)\right)v,\label{eq:Large2}\\
f(\tilde{S}_2)&\geq \left({\overline{\tau}}+ (\Gamma({\overline{\tau}})-\beta) \left( 1 - e^{-\left(\frac{{1-\delta}}{\mu}-1\right)}\right)-O(\varepsilon)\right)v.\label{eq:Large3}\end{aligned}$$ We may assume that $\beta < 0.49$. The above inequalities – imply that one of $\tilde{S}$, $\tilde{S}_\ell$ ($\ell = 0,1,2$) admits a $(0.49-O(\varepsilon))$-approximation.
More specifically, we can obtain the ratio as follows. First suppose that $\mu\geq 0.505$. Then, if ${\overline{\tau}}\leq 0.3562$, then implies that $$f(\tilde{S}) \geq (1 - 0.3562) \left(1-e^{- \left( \left({1-\delta}\right)0.505+1\right)} - O(\varepsilon)\right)v\geq (0.50-O(\varepsilon))v.$$ If $0.4\geq \tau \geq 0.3562$, then implies that $$f(\tilde{S}_1)\geq \left(0.3562 + \left(\frac{9}{10}-\frac{3\cdot 0.3562}{2}\right) \left( 1 - e^{-\left({1-\delta}\right)0.505}\right)-O(\varepsilon)\right)v\geq (0.50-O(\varepsilon))v.$$ If $\tau \geq 0.4$, then implies that $$f(\tilde{S}_1)\geq \left(0.4 + \left(\frac{5}{6}-\frac{4\cdot 0.4}{3}\right) \left( 1 - e^{-\left({1-\delta}\right)0.505}\right)-O(\varepsilon)\right)v\geq (0.51-O(\varepsilon))v.$$ Thus we obtain a $(0.5-O(\varepsilon))$-approximation when $\mu\geq 0.505$ using and .
Next suppose that $\mu <0.505$. First consider the case when $\tau \leq 0.22$. Then, since $\mu \geq 0$, it follows from that $$f(\tilde{S}) \geq (1 - 0.22) \left(1-e^{- 1} - O(\varepsilon)\right)v\geq (0.49-O(\varepsilon))v.$$ Next assume that $\tau\geq 0.22$ and $\mu \geq 3.5\left(\tau -0.22\right)$. Since $\mu < 0.505$, we have $\tau \leq 0.365$. Hence implies that $$f(\tilde{S}) \geq (1 - \tau) \left(1-e^{- \left(\left({1-\delta}\right)3.5\left(\tau -0.22\right)+1\right)} - O(\varepsilon)\right)v\geq (0.50-O(\varepsilon))v.$$ Otherwise, that is, if $\mu < 3.5\left(\tau -0.22\right)$, then implies that $$\begin{aligned}
f(\tilde{S}_2) &\geq \left(\frac{2}{7}\mu+0.22 + \left(\frac{5}{6}-\frac{1}{3}\left(\frac{2}{7}\mu+0.22\right) - 0.49\right) \left( 1 - e^{-\left(\frac{{1-\delta}}{\mu}-1\right)}\right)-O(\varepsilon)\right)v\\
& \geq (0.49-O(\varepsilon))v,\end{aligned}$$ when $\mu<0.505$. Thus the statement holds.
We remark that the proof of Lemma \[lem:046small\] works when $K\geq W$, and that of Lemma \[lem:046large\] works when $K\geq W$ and $c(o_1)\geq {\underline{c}}_1 K\geq 0.5 W$.
In summary, we have the following theorem.
\[thm:046\] Suppose that we are given an instance ${\mathcal{I}}= (f,c, K, E)$ for the problem . Then we can find a $(0.46-\varepsilon)$-approximate solution in $O(\varepsilon^{-1})$ passes and $O(K\varepsilon^{-4}\log K)$ space. The total running time is $O(n \varepsilon^{-5}\log K)$.
As mentioned in the beginning of Section \[sec:simple\], if we design an algorithm running in $O(T_1)$ time and $O(T_2)$ space provided the approximate value $v$, then the total running time is $O(n\varepsilon^{-1}\log K + \varepsilon^{-1}T_1)$ and the space required is $O(\max\{\varepsilon^{-1}\log K, \varepsilon^{-1} T_2\})$. Thus suppose that we are given $v$ such that $v\leq f({\textup{\rm OPT}})\leq (1+\varepsilon)v$.
By Lemma \[lem:Assumption1\], we may assume that $c_1+c_2\leq {1-\varepsilon / \delta}$ where $\delta = 0.01$. We may assume that $c_1\geq 0.383$, as otherwise Lemma \[lem:simpleratio\] implies that yields a $(0.46-\varepsilon)$-approximation. For $i\in\{1,2\}$, we guess $\underline{c}_i, \overline{c}_i$ such that $\underline{c}_i\leq c_i\leq \overline{c}_i$ and $\overline{c}_i\leq (1+\varepsilon)\underline{c}_i$ by a geometric series. This takes $O(\varepsilon^{-2}\log K)$ space, since the range of $c_1$ is $[0.383, 0.46]$ and that of $c_2$ is $[1/K, 1]$.
When $c_1\leq 0.5$, it follows from Lemma \[lem:046small\] that we can find a $(0.46-\varepsilon)$-approximate solution in $O(\varepsilon^{-1})$ passes and $O(\varepsilon^{-1}K)$ space. When $c_1\geq 0.5$, it follows from Lemma \[lem:046large\] that we can find a $(0.49-\varepsilon)$-approximate solution in $O(\varepsilon^{-1})$ passes and $O(\varepsilon^{-1}K)$ space. Hence, for each $\underline{c}_i, \overline{c}_i$, it takes $O(\varepsilon^{-1})$ passes and $O(\varepsilon^{-1}K)$ space, running in $O(n\varepsilon^{-2})$ time in total. Thus, for a fixed $v$, the space required is $O(K\varepsilon^{-3}\log K)$, and the running time is $O(n\varepsilon^{-4}\log K)$. Therefore, the algorithm in total uses $O(\varepsilon^{-1})$ passes and $O(K \varepsilon^{-4}\log K)$ space, running in $O(n\varepsilon^{-5}\log K)$ time. Thus the statement holds.
Improved $0.5$-Approximation Algorithm {#sec:0.5}
======================================
In this section, we further improve the approximation ratio to $0.5$. Recall that we are given $v$ with $v\leq f({\textup{\rm OPT}})\leq (1+\varepsilon)v$, which is obtained by guessing using $O(\varepsilon^{-1})$ space.
Overview {#sec:0.5overview}
--------
We first remark that the algorithms developed so far already give a $(0.5-\varepsilon)$-approximation in some special cases. In fact, Lemma \[lem:simpleratio\] and Corollary \[cor:IgnoreLarge\] lead to a $(0.5-\varepsilon)$-approximation when $c(o_1)\leq 0.3{K}$ or $f(o_1)\leq 0.15 v$.
\[cor:c1\_lb\] If $c(o_1)\leq 0.3{K}$ or $f(o_1)\leq 0.15 v$, then we can find a set $S$ in $O(\varepsilon^{-1})$ passes and $O(K)$ space such that $c(S)\leq K$ and $f(S)\geq (0.5-O(\varepsilon))v$.
First suppose that $c(o_1)\leq 0.3{K}$. By Lemma \[lem:simpleratio\], the output $S$ of satisfies that $$f(S)\geq \left(1-e^{-\frac{K-c(o_1)}{K}}-O(\varepsilon)\right)v \geq \left(1-e^{-0.7}-O(\varepsilon)\right)v\geq {(0.5-O(\varepsilon))}v.$$
Next suppose that $f(o_1)\leq 0.15 v$. We see that $f({\textup{\rm OPT}}-o_1)\geq f({\textup{\rm OPT}})-f(o_1)\geq 0.85 v$. By Corollary \[cor:IgnoreLarge\], we can find a set $S$ such that $K - c(o_2) < c(S)\leq K$ and $$f(S)\geq 0.85 \left(1-e^{-1} - O(\varepsilon)\right)v\geq (0.5-O(\varepsilon))v.$$
Moreover, the following corollary asserts that we may suppose that $f(o_1)$ and $f(o_2)$ are small.
\[cor:boundf1\] In the following cases, , together with , can find a $(0.5-\varepsilon)$-approximate solution in $O(\varepsilon^{-1})$ passes and $O(K\varepsilon^{-4}\log K)$ space:
1. when $c_1\geq 0.5$ and $f(o_1)\geq 0.362v$.
2. when $c_1\leq 0.5$ and $f(o_1)\geq 0.307v$.
3. when $f(o_2)\geq 0.307v$.
\(1) Suppose that $c_1\geq 0.5$ and $f(o_1)\geq 0.362v$. We may also suppose that $f(o_1) < 0.5v$, as otherwise we can just take a singleton with maximum return from $E$. We guess ${\underline{\tau}}$ and ${\overline{\tau}}$ such that $0.362v\leq {\underline{\tau}}v\leq f(o_1)\leq {\overline{\tau}}v\leq 0.5 v$ and ${\overline{\tau}}\leq (1+\varepsilon){\underline{\tau}}$ from the interval $[0.362, 0.5]$ by a geometric series using $O(\varepsilon^{-1})$ space. Consider applying for each ${\overline{\tau}}$. By Lemmas \[lem:good\_e\_2\] and \[lem:LargeFirst\], the output of is lower-bounded by the RHSs of the inequalities and , where we may assume that $\beta <0.5$.
First suppose that $\mu=\frac{1-{\overline{c}}_1-{\overline{c}}_2}{1-{\overline{c}}_1}\geq 0.495$. Then implies that, if ${\overline{\tau}}\geq 0.4$, we obtain $$f(\tilde{S}_1) \geq \left(0.4 + \left(\frac{5}{6}-\frac{4}{3}\cdot 0.4\right) \left( 1 - e^{-({1-\delta}) 0.495}\right)-O(\varepsilon)\right)v\geq (0.51-O(\varepsilon))v,$$ and if ${\overline{\tau}}< 0.4$, then $$f(\tilde{S}_1) \geq \left(0.362 + \left(\frac{9}{10}-\frac{3}{2}\cdot 0.362\right) \left( 1 - e^{-({1-\delta}) 0.495}\right)-O(\varepsilon)\right)v\geq (0.50-O(\varepsilon))v.$$ Otherwise, suppose that $\mu < 0.495$. Then implies that, if ${\overline{\tau}}\geq 0.4$, we have $$f(\tilde{S}_2) \geq \left(0.4 + \left(\frac{5}{6}-\frac{1}{3}\cdot 0.4 - 0.5\right) \left( 1 - e^{-\left(\frac{{1-\delta}}{0.495}-1\right)}\right)-O(\varepsilon)\right)v \geq (0.52-O(\varepsilon))v.$$ and if ${\overline{\tau}}< 0.4$, then $$f(\tilde{S}_2) \geq \left(0.362 + \left(\frac{9}{10}-\frac{1}{2}\cdot 0.362 - 0.5\right) \left( 1 - e^{-\left(\frac{{1-\delta}}{0.495}-1\right)}\right)-O(\varepsilon)\right)v \geq (0.50-O(\varepsilon))v.$$ Thus the statement holds.
\(2) The argument is similar to (1). Suppose that $c_1\leq 0.5$ and $f(o_1)\geq 0.307v$. We guess ${\underline{\tau}}$ and ${\overline{\tau}}$ such that $0.307v\leq {\underline{\tau}}v\leq f(o_1)\leq {\overline{\tau}}v\leq 0.50 v$ and ${\overline{\tau}}\leq (1+\varepsilon){\underline{\tau}}$ from the interval $[0.307, 0.5]$ by a geometric series using $O(\varepsilon^{-1})$ space. Consider applying for each ${\overline{\tau}}$. By Lemmas \[lem:good\_e\_1\] and \[lem:LargeFirst\], the output of is lower-bounded by the RHSs of and , where we may assume that $\beta <0.5$.
First suppose that $\mu=\frac{1-{\overline{c}}_1-{\overline{c}}_2}{1-{\overline{c}}_1}\geq 0.495$. Then implies that $$f(\tilde{S}_1) \geq \left(0.307 + (1-0.5) \left( 1 - e^{-\left({1-\delta}\right)0.495}\right)-O(\varepsilon)\right)v\geq (0.50-O(\varepsilon))v.$$ Otherwise, if $\mu \leq 0.495$, then implies that $$f(\tilde{S}_2) \geq \left(0.307 + (1-2\cdot 0.5 + 0.307) \left( 1 - e^{-\left(\frac{{1-\delta}}{0.495}-1\right)}\right)-O(\varepsilon)\right)v \geq (0.50-O(\varepsilon))v.$$ Thus the statement holds.
\(3) This case can be shown by applying to $E_2$. More precisely, we replace $c_1$ with $c_2$ in with $\tau v \geq f(o_2)\geq \tau v/(1+\varepsilon)$. We also set $W_1=W-{\underline{c}}_2K$ instead of $W-{\underline{c}}_1K$. Then, since $({\overline{c}}_1 + {\overline{c}}_2)K\leq K$, we can use the same analysis as in the proof of Lemma \[lem:046small\]; the output of is lower-bounded by the RHSs of the following three inequalities: $$\begin{aligned}
f(\tilde{S}_0) &\geq \beta v, \\
f(\tilde{S}_1) &\geq \left({\overline{\tau}}+ (1-\beta) \left( 1 - e^{-\left({1-\delta}\right)\mu'}\right)-O(\varepsilon)\right)v,\\
f(\tilde{S}_2) &\geq \left({\overline{\tau}}+ (1-2\beta+{\overline{\tau}}) \left( 1 - e^{-\left(\frac{{1-\delta}}{\mu'}-1\right)}\right)-O(\varepsilon)\right)v,\end{aligned}$$ where $\mu'=\frac{1-{\overline{c}}_1-{\overline{c}}_2}{1-{\overline{c}}_2}$. We may assume that $\beta < 0.5$. Since the lower bounds are the same as and in the proof (2), the statement holds.
Recall that, in Section \[sec:046\], we found an item $e$ such that Observation \[obs:1\] can be applied, that is, $f({\textup{\rm OPT}}-o_1\mid e)$ and $f({\textup{\rm OPT}}-o_1-o_2\mid e)$ are large. In this section, we aim to find a good set $Y\subseteq E$ such that $f({\textup{\rm OPT}}'\mid Y)$ is large for some ${\textup{\rm OPT}}'\subseteq {\textup{\rm OPT}}$, using $O(\varepsilon^{-1})$ passes, while guaranteeing that the remaining space $K- c(Y)$ is sufficiently large. We then solve the problem of maximizing the function $f(\cdot\mid Y)$ to approximate ${\textup{\rm OPT}}'$ with algorithms in previous sections. Specifically, we devise two strategies depending on the size of $c_1+c_2$ (see Sections \[sec:c1c2Large\] and \[sec:c1c2Small\] for more specific values of $c_1$ and $c_2$).
#### First Strategy: Packing small items first {#first-strategy-packing-small-items-first .unnumbered}
First consider the case when $c_1+c_2$ is large. Recall that $f(o_1)$ and $f(o_2)$ are supposed to be small by Corollary \[cor:boundf1\]. Hence, there is a “dense” set ${\textup{\rm OPT}}-o_1-o_2$ of small items, i.e., $\frac{ f({\rm OPT} \setminus \{o_1, o_2\})}{ c({\rm OPT} \setminus \{o_1, o_2\})}$ is large. Therefore, we consider collecting such small items. However, if we apply to the original instance to approximate ${\textup{\rm OPT}}-o_1-o_2$, then we can only find a set whose function value is at most $f({\textup{\rm OPT}}-o_1-o_2)$.
The main idea of this case is to stop collecting small items early. That is, we introduce $$\begin{aligned}
\label{prob:c1c2Large_1st}
\text{maximize\ \ }f(S) \quad \text{subject to \ } c(S)\leq K_1, \quad S\subseteq E,
\end{aligned}$$ where $K_1\leq K - c(o_1)$, and apply to this instance to approximate ${\textup{\rm OPT}}-o_1-o_2$. Let $Y$ be the output. The key observation is that, in Phase 2, since we still have space to take $o_1$, we may assume that $f({\textup{\rm OPT}}-o_1\mid Y)\geq 0.5v$ in a way similar to Lemma \[lem:good\_e\_1\]. Given such a set $Y$, define $g(\cdot )=f(\cdot \mid Y)$ and the problem: $$\begin{aligned}
\label{prob:c1c2Large_2nd}
\text{maximize\ \ }g(S) \quad &\text{subject to \ } c(S)\leq K-c(Y), \quad S\subseteq E.\end{aligned}$$ We apply approximation algorithms in Sections \[sec:simple\]–\[sec:046\] to approximate ${\textup{\rm OPT}}-o_1$, using the fact that $g({\textup{\rm OPT}}- o_1) \geq 0.5v$ and $c({\textup{\rm OPT}}- o_1) \leq (1-{\underline{c}}_1)K$. Let $\tilde{S}$ be the output of this phase. Then $Y\cup \tilde{S}$ is a feasible set to the original instance, and it holds that $f(Y\cup \tilde{S}) = f(Y) + g(\tilde{S})$.
We remark that the lower bound for $f(Y)$ depends on the size $c(Y)$ by Corollary \[cor:simpleratio\], and that for $g(\tilde{S})$ depends on the knapsack capacity $K-c(Y)$. Hence the lower bound for $f(Y\cup \tilde{S})$ can be represented as a function with respect to $c(Y)$. By balancing the two lower bounds with suitable $K_1$, we can obtain a $(0.5-O(\varepsilon))$-approximation. See Sections \[sec:Large\_c1c2Large\] and \[sec:Small\_c1c2Large\] for more details.
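A schematic of this two-phase strategy follows; it is our own wrapper with assumed helper interfaces (`phase1` standing for the simple multi-pass routine, `phase2` for any of the algorithms from the previous sections), not the paper's pseudocode.

```python
def pack_small_items_first(oracle, stream, cost, K, K1, v, eps, phase1, phase2):
    """Two-phase strategy sketched above (a sketch under assumed interfaces).

    Phase 1 collects a dense set Y of small items within a reduced budget
    K1 <= K - c(o_1); Phase 2 then maximizes the marginal function
    g = f(. | Y) within the leftover capacity.
    """
    # Phase 1: small items, stopped early at the reduced budget K1
    Y = phase1(oracle, stream, cost, K1, v, eps)
    used = sum(cost(x) for x in Y)
    # Phase 2: fill the remaining space with respect to g(.) = f(. | Y)
    g = lambda S: oracle(list(Y) + list(S)) - oracle(list(Y))
    S = phase2(g, stream, cost, K - used, eps)
    return list(Y) + list(S)
```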
#### Second Strategy: Packing small items later {#second-strategy-packing-small-items-later .unnumbered}
Suppose that $c_1+c_2$ is small. Then $c({\rm OPT} \setminus \{o_1, o_2\})$ is large and we do not have the dense set of small items as before. For this case, we introduce a modified version of for the original problem to find a good set $Y$. The difference is that, in each round, we check whether any item in $E$, by itself, is enough to give us a solution with $0.5v$. Such a modification would allow us to lower bound $f({\textup{\rm OPT}}'|Y)$ for some ${\textup{\rm OPT}}' \subseteq {\textup{\rm OPT}}$ for Phase 2. We may assume that $c(Y)<0.7K$, as otherwise we are done by Lemma \[lem:fact\_knapsack\], which means that we still have enough space to pack other items. That is, define $g(\cdot )=f(\cdot \mid Y)$ and the problem: $$\begin{aligned}
\label{prob:c1c2Small_2nd}
\text{maximize\ \ }g(S) \quad &\text{subject to \ } c(S)\leq K-c(Y), \quad S\subseteq E.\end{aligned}$$ Let ${\textup{\rm OPT}}'=\{e\in {\textup{\rm OPT}}\mid c(e)\leq K-c(Y)\}$. We aim to find a feasible set to this problem that approximates ${\textup{\rm OPT}}'$ in Phase 2. Thanks to the modification of , we can assume that $g({\textup{\rm OPT}}')$ is large. However, an extra difficulty arises when $c({\textup{\rm OPT}}')$ exceeds $K-c(Y)$: in that case we cannot directly apply the algorithms developed in the previous sections. For this, we need to combine and to obtain better ratios; the results are summarized below.
\[lem:largeW\] Suppose that we are given an instance ${\mathcal{I}}' = (f,c, K', E)$ for the problem . Let $X$ be a subset such that $c(e)\leq K'$ for any $e\in X$ and $c(X)\leq W' =\eta K'$, where $\eta > 1$. We further suppose that $v'\leq f(X)\leq O(1)v'$. Then we can find a set $S$ in $O(n\varepsilon^{-4}\log K')$ time and $O(K'\varepsilon^{-3}\log K')$ space, using $O(\varepsilon^{-1})$ passes, such that the following hold:
1. If $\eta\in [1, 1.4]$, then $f(S)\geq (0.315-O(\varepsilon)) v'$.
2. If $\eta\in [1.4, 1.5]$, then $f(S)\geq (0.283-O(\varepsilon)) v'$.
3. If $\eta\in [1.5, 2]$, then $f(S)\geq (0.218-O(\varepsilon)) v'$.
4. If $\eta\in [2, 2.5]$, then $f(S)\geq (0.178-O(\varepsilon)) v'$.
The proof will be given in Section \[sec:ProofLargeW\].
Using Lemma \[lem:largeW\] with case analysis, we can find a feasible set to that approximates ${\textup{\rm OPT}}'$. This solution, together with $Y$, gives a $(0.5-O(\varepsilon))$-approximate solution.
Packing Small Items First {#sec:c1c2Large}
-------------------------
### When $c_1\geq 0.5$ {#sec:Large_c1c2Large}
In this section, we assume that $c_1\geq {\underline{c}}_1 \geq 0.5$. Since the range of $c_1$ is $[0.5, 1]$, we can guess ${\underline{c}}_1$ and ${\overline{c}}_1$ using $O(\varepsilon^{-1})$ space. We also guess ${\underline{c}}_2$ and ${\overline{c}}_2$ using $O(\varepsilon^{-1}\log K)$ space.
Recall that in the proof of Lemma \[lem:046large\], we have shown that we obtain a $(0.5-\varepsilon)$-approximation when $\mu = \frac{1-{\overline{c}}_1 -{\overline{c}}_2}{1-{\overline{c}}_1}\geq 0.505$. Therefore, in this section, we assume that $\mu < 0.505$, i.e., $$\label{eq:c1large_assumption}
1- {\overline{c}}_1\geq \frac{200}{101}(1-{\overline{c}}_1 -{\overline{c}}_2) \geq 1.98 (1-{\overline{c}}_1 -{\overline{c}}_2).
$$ This implies that ${\overline{c}}_1+{\overline{c}}_2\geq 0.747$.
Then, if holds, we can find a $(0.5-\varepsilon)$-approximate solution in $O(\varepsilon^{-1})$ passes and $O(K\varepsilon^{-5}\log^2 K)$ space. The total running time is $O(n \varepsilon^{-6}\log^2 K)$.
The rest of this subsection is devoted to the proof of the above lemma. It suffices to design an $O(\varepsilon^{-1})$-pass algorithm provided the approximated value $v$ and ${\overline{c}}_i$, ${\underline{c}}_i$ ($i=1,2$) such that ${\overline{c}}_i\leq (1+\varepsilon){\underline{c}}_i$, running in $O(K\varepsilon^{-2}\log K)$ space and $O(n\varepsilon^{-3}\log K)$ time. We may also assume that $c_1+c_2\leq {1-\varepsilon / \delta}$ where $\delta = 0.01$.
#### Finding a good set $Y$. {#finding-a-good-set-y. .unnumbered}
By Corollary \[cor:boundf1\], we may assume that $f({\textup{\rm OPT}}-o_1-o_2)$ is relatively large. More specifically, $f({\textup{\rm OPT}}-o_1-o_2)\geq f({\textup{\rm OPT}}) - f(o_1) - f(o_2) \geq 0.33 v$. On the other hand, implies that ${\overline{c}}_1+{\overline{c}}_2\geq 0.747$, which means that $c({\textup{\rm OPT}}-o_1-o_2)$ is small. We consider collecting such a “dense” set of small items by introducing $$\begin{aligned}
\label{prob:c1c2Large_c1large_1st}
\text{maximize\ \ }f(S) \quad \text{subject to \ } c(S)\leq 1.98 {\underline{{c_{\rm s}}}}K, \quad S\subseteq E,
\end{aligned}$$ where we define ${\underline{{c_{\rm s}}}}=1 - {\overline{c}}_1 -{\overline{c}}_2$. We apply to to find a set $Y$ that approximates ${\textup{\rm OPT}}- o_1 -o_2$. By , we still have space to take $o_1$ after taking $Y$. We denote ${\overline{{c_{\rm s}}}}=1 - {\underline{c}}_1 -{\underline{c}}_2$.
\[lem:SmallFirst\_c1large\] We can find a subset $Y$ in $O(\varepsilon^{-1})$ passes and $O(K)$ space such that $$\begin{aligned}
f(Y) & \geq 0.33 \left(1-e^{-\frac{c(Y)}{{\overline{{c_{\rm s}}}}K}}\right)v-O(\varepsilon)v, \\
1.98{\underline{{c_{\rm s}}}}K \geq c(Y) & \geq \left(0.98{\overline{{c_{\rm s}}}}- 1.98 \varepsilon ({\underline{c}}_1+{\underline{c}}_2) \right)K.\end{aligned}$$ Moreover, if $f(Y+o_1)< 0.5 v$, then $f({\textup{\rm OPT}}-o_1\mid Y)\geq 0.5 v$.
The first inequality follows from Corollary \[cor:simpleratio\] applied to approximate ${\textup{\rm OPT}}-o_1-o_2$ for the instance , noting that $f({\textup{\rm OPT}}- o_1 - o_2)\geq 0.33 v$. Since items in ${\textup{\rm OPT}}- o_1 - o_2$ are of size at most $c(o_3)$, it is obvious from Lemma \[lem:round\_knapsack\] that $(1.98{\underline{{c_{\rm s}}}}-c_3) K \leq c(Y) \leq 1.98 {\underline{{c_{\rm s}}}}K$. Since ${\underline{{c_{\rm s}}}}\geq {\overline{{c_{\rm s}}}}- \varepsilon({\underline{c}}_1+{\underline{c}}_2)$ and $c_3\leq {\overline{{c_{\rm s}}}}$, the lower bound is further bounded as $$(1.98{\underline{{c_{\rm s}}}}-c_3) K \geq 1.98({\overline{{c_{\rm s}}}}- \varepsilon({\underline{c}}_1+{\underline{c}}_2))K - {\overline{{c_{\rm s}}}}K = \left(0.98{\overline{{c_{\rm s}}}}- 1.98 \varepsilon ({\underline{c}}_1+{\underline{c}}_2) \right)K.$$ Finally, if $f(Y+o_1)< 0.5 v$, then we have $$f({\textup{\rm OPT}}-o_1\mid Y)\geq f({\textup{\rm OPT}}\mid Y) - f(o_1\mid Y) \geq (f({\textup{\rm OPT}})-f(Y)) - (0.5 v - f(Y)) \geq 0.5 v.$$
#### Packing the remaining space. {#packing-the-remaining-space. .unnumbered}
Define $g(\cdot )=f(\cdot \mid Y)$. Consider the problem , and let ${\mathcal{I}}'$ be the corresponding instance. We shall find a feasible set to approximate ${\textup{\rm OPT}}- o_1$. By Lemma \[lem:SmallFirst\_c1large\], we may assume that $g({\textup{\rm OPT}}- o_1)\geq v'=v/2$, as otherwise we can find an item $e$ such that $c(Y+e)\leq K$ and $f(Y+e)\geq 0.5 v$ using a single pass. Let $W' = (1-{\underline{c}}_1)K$ and $K'=K-c(Y)$. Then $c({\textup{\rm OPT}}-o_1)\leq W'$ holds.
The algorithm can find a set $\tilde{S}$ such that $$g(\tilde{S}) \geq \frac{1}{2}\left(1-e^{-\frac{1 - y - c_2}{1-{\underline{c}}_1}} - O(\varepsilon)\right)v,$$ where $y = c(Y)/K$.
Moreover, noting that $c(Y)/K\leq 1.98 {\underline{{c_{\rm s}}}}\leq (1-{\overline{c}}_1)\leq 0.5 \leq {\underline{c}}_1$ since ${\underline{c}}_1\geq 0.5$ and , we have $W'\leq K'$. Hence we can apply a $(0.46-\varepsilon)$-approximation algorithm in Lemmas \[lem:046small\] and \[lem:046large\] with $g({\textup{\rm OPT}}- o_1) \geq v'= v/2$ and $c({\textup{\rm OPT}}- o_1) \leq W'$. That is, we can find a set $\tilde{S}'$ such that $$g(\tilde{S}')\geq \frac{1}{2}(0.46-O(\varepsilon)) v=(0.23 - O(\varepsilon))v.$$ Then $Y\cup \tilde{S}$ and $Y\cup \tilde{S}'$ are both feasible sets to the original instance. By Lemma \[lem:SmallFirst\_c1large\], we have $$\begin{aligned}
f(Y\cup \tilde{S})& =f(Y)+g(\tilde{S}) \geq 0.33 \left(1-e^{-\frac{y}{{\overline{{c_{\rm s}}}}}}\right)v + \frac{1}{2}\left(1-e^{-\frac{1 - y - c_2}{1-{\underline{c}}_1}}\right)v - O(\varepsilon)v, \label{eq:c1large_c1c2large_1}\\
f(Y\cup \tilde{S}')& =f(Y)+g(\tilde{S}') \geq 0.33 \left(1-e^{-\frac{y}{{\overline{{c_{\rm s}}}}}}\right)v + 0.23 v- O(\varepsilon)v.\label{eq:c1large_c1c2large_2}\end{aligned}$$ Since each bound is a concave function with respect to $y$, the worst case is achieved when $y=0.98{\overline{{c_{\rm s}}}}- 1.98 \varepsilon ({\underline{c}}_1+{\underline{c}}_2)$ or $1.98{\underline{{c_{\rm s}}}}$.
Suppose that $y= 0.98{\overline{{c_{\rm s}}}}- 1.98 \varepsilon ({\underline{c}}_1+{\underline{c}}_2)$. Then it holds that $$\frac{y}{{\overline{{c_{\rm s}}}}} = 0.98 - 1.98 \varepsilon\frac{{\underline{c}}_1+{\underline{c}}_2}{1-{\underline{c}}_1-{\underline{c}}_2}\geq 0.98 -1.98\delta,$$ assuming that ${\underline{c}}_1+{\underline{c}}_2\leq {1-\varepsilon / \delta}$. Moreover, since $y \leq {\overline{{c_{\rm s}}}}= 1 - {\underline{c}}_1 -{\underline{c}}_2$, $$\frac{1 - y - c_2}{1-{\underline{c}}_1} \geq \frac{1 - (1 - {\underline{c}}_1 -{\underline{c}}_2) -c_2}{1-{\underline{c}}_1}\geq \frac{{\underline{c}}_1}{1-{\underline{c}}_1} -\varepsilon\frac{{\underline{c}}_2}{1-{\underline{c}}_1}\geq 1 - \varepsilon,$$ where the last inequality follows since ${\underline{c}}_1\geq 0.5$ and ${\underline{c}}_2\leq 1-{\underline{c}}_1$. Hence, by , we obtain $$f(Y\cup \tilde{S})\geq 0.33\left(1-e^{-\left(0.98-1.98\delta\right)}\right) + \frac{1}{2}\left(1-e^{-1}\right)- O(\varepsilon)v\geq (0.51- O(\varepsilon)) v$$ when $\delta = 0.01$.
Suppose that $y= 1.98 {\underline{{c_{\rm s}}}}$. Then we have $$\frac{y}{{\overline{{c_{\rm s}}}}} = 1.98 \frac{{\underline{{c_{\rm s}}}}}{{\overline{{c_{\rm s}}}}}\geq 1.98 \left({1-\delta}\right).$$ Hence implies that $$f(Y \cup \tilde{S}')\geq 0.33\left(1-e^{-1.98 \left({1-\delta}\right)}\right)v + 0.23 v- O(\varepsilon)v \geq (0.51- O(\varepsilon)) v$$ when $\delta = 0.01$.
Therefore, it follows that the maximum of $f(Y\cup \tilde{S})$ and $f(Y\cup \tilde{S}')$ is at least $(0.51-O(\varepsilon))v$ for any $c(Y)$. Thus we can find a $(0.5-O(\varepsilon))$-approximate solution assuming .
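Both endpoint evaluations can be verified numerically (a small sketch with $\delta=0.01$ and the $O(\varepsilon)$ terms dropped):

```python
import math

# Endpoint check for the balancing argument above (delta = 0.01, O(eps)
# terms dropped): both values exceed 0.51.
delta = 0.01
at_lower_y = 0.33 * (1 - math.exp(-(0.98 - 1.98 * delta))) + 0.5 * (1 - math.exp(-1))
at_upper_y = 0.33 * (1 - math.exp(-1.98 * (1 - delta))) + 0.23
print(round(at_lower_y, 3), round(at_upper_y, 3))   # 0.52 and 0.514
```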
In the above, we apply the algorithms in Section \[sec:046\] to ${\mathcal{I}}'$ to approximate ${\textup{\rm OPT}}- o_1$. To do so, we need to have approximated sizes of $c(o_2)$ and $c(o_3)$, which are the sizes of the two largest items in ${\textup{\rm OPT}}- o_1$. Since ${\overline{c}}_2, {\underline{c}}_2$ are given in the beginning, it suffices to guess approximated values ${\overline{c}}_3$ and ${\underline{c}}_3$ of $c(o_3)$ using $O(\varepsilon^{-1}\log K)$ space. Therefore, the space required is $O(K\varepsilon^{-2}\log K)$ and the running time is $O(n\varepsilon^{-3}\log K)$.
In summary, when $c_1\geq 0.5$, we have the following, combining the above discussion with Lemmas \[lem:Assumption1\] and \[lem:046large\].
\[thm:05large\] For any instance ${\mathcal{I}}= (f,c, K, E)$ for the problem , if $c_1 \geq 0.5$, then we can find a $(0.5-\varepsilon)$-approximate solution in $O(\varepsilon^{-1})$ passes and $O(K\varepsilon^{-5}\log^2K)$ space. The total running time is $O(n \varepsilon^{-6}\log^2 K)$.
### When $c_1\leq 0.5$ {#sec:Small_c1c2Large}
In this section, we assume that $c_1\leq 0.5$. Note that we may assume that $c_1\geq 0.3$ by Corollary \[cor:c1\_lb\]. Furthermore, we suppose that $$\label{eq:c1small_assumption}
2.4 (1-{\overline{c}}_1 -{\overline{c}}_2) \leq 1- {\overline{c}}_1.
$$ (Section \[sec:c1c2Small\] handles the case when this inequality does not hold.) This implies that ${\overline{c}}_1+{\overline{c}}_2\geq 14/19 \geq 0.735$, where the minimum is when ${\overline{c}}_1={\overline{c}}_2$. Thus ${\overline{c}}_1\geq 7/19 \geq 0.36$. The argument is similar to the previous subsection. That is, we first try to find a dense set of small items, and then apply algorithms in Sections \[sec:simple\]–\[sec:046\].
\[lem:Small\_c1c2Large\] Suppose that $0.3 \leq c_1\leq 0.5$. Then, if holds, we can find a $(0.5-\varepsilon)$-approximate solution in $O(\varepsilon^{-1})$ passes and $O(K\varepsilon^{-6}\log K)$ space. The total running time is $O(n \varepsilon^{-7}\log K)$.
The rest of this subsection is devoted to the proof of the above lemma. Since the range of $c_1$ is $[0.3, 0.5]$, we can guess ${\underline{c}}_1$ and ${\overline{c}}_1$, where ${\underline{c}}_1\geq 0.3$ and ${\overline{c}}_1\leq 0.5$, using $O(\varepsilon^{-1})$ space. We also guess ${\underline{c}}_2$ and ${\overline{c}}_2$ using $O(\varepsilon^{-1})$ space, since the range of $c_2$ is $[0.235, 0.5]$ by . Recall that they satisfy ${\underline{c}}_i\leq c_i\leq {\overline{c}}_i\leq (1+\varepsilon){\underline{c}}_i$ for $i=1,2$. Therefore, it suffices to design an $O(\varepsilon^{-1})$-pass algorithm provided the approximated value $v$ and ${\overline{c}}_i$, ${\underline{c}}_i$ ($i=1,2$) such that ${\overline{c}}_i\leq (1+\varepsilon){\underline{c}}_i$, running in $O(K\varepsilon^{-3}\log K)$ space and $O(n\varepsilon^{-4}\log K)$ time. We may also assume that $c_1+c_2\leq {1-\varepsilon / \delta}$ where $\delta = 0.01$.
#### Finding a good set $Y$. {#finding-a-good-set-y.-1 .unnumbered}
By Corollary \[cor:boundf1\], we may assume that $f({\textup{\rm OPT}}-o_1-o_2)$ is relatively large, while $c({\textup{\rm OPT}}-o_1-o_2)$ is small. More specifically, $f({\textup{\rm OPT}}-o_1-o_2)\geq f({\textup{\rm OPT}}) - f(o_1) - f(o_2) \geq 0.386 v$, but $c({\textup{\rm OPT}}-o_1-o_2)\leq 5/19 K\leq 0.265 K$. We consider collecting such a “dense” set of small items by introducing $$\begin{aligned}
\label{prob:c1c2Large_c1small_1st}
\text{maximize\ \ }f(S) \quad \text{subject to \ } c(S)\leq 2.4{\underline{{c_{\rm s}}}}K, \quad S\subseteq E,\end{aligned}$$ where we recall ${\underline{{c_{\rm s}}}}= 1 - {\overline{c}}_1 -{\overline{c}}_2$. By , we still have space to take $o_1$ after applying to . We denote ${\overline{{c_{\rm s}}}}=1 - {\underline{c}}_1 -{\underline{c}}_2$.
Similarly to Lemma \[lem:SmallFirst\_c1large\], we have the following lemma.
\[lem:SmallFirst\_c1small\] We can find a subset $Y$ in $O(\varepsilon^{-1}n)$ time and $O(K)$ space such that $$\begin{aligned}
f(Y) & \geq 0.386 \left(1-e^{-\frac{c(Y)}{{\overline{{c_{\rm s}}}}K}}\right)v -O(\varepsilon)v,\\
2.4{\underline{{c_{\rm s}}}}K & \geq c(Y)\geq (2.4{\underline{{c_{\rm s}}}}- c_3)K.
\end{aligned}$$ Moreover, if $f(Y+o_1)< 0.5 v$, then $f({\textup{\rm OPT}}-o_1\mid Y)\geq 0.5 v$.
#### Packing the remaining space. {#packing-the-remaining-space.-1 .unnumbered}
Let $Y$ be a set found by Lemma \[lem:SmallFirst\_c1small\]. Define $g(\cdot )=f(\cdot \mid Y)$. Consider the problem . By Lemma \[lem:SmallFirst\_c1small\], we may assume that $g({\textup{\rm OPT}}- o_1)\geq v/2$ by checking whether adding an item $e$ to $Y$ gives us a $0.5$-approximation using a single pass. We set $W'=(1-{\underline{c}}_1)K\geq c({\textup{\rm OPT}}- o_1)$ and $K'=K-c(Y)$. There are two cases depending on the sizes of $W'$ and $K'$. Note that $K'\geq W'$ if and only if $y \leq c_1$, where we denote $y = c(Y)/K$.
#### (a) $y \leq c_1$. {#a-y-leq-c_1. .unnumbered}
In this case, $K'\geq W'$ holds. Hence we can apply our algorithm in Section \[sec:046\] with $g({\textup{\rm OPT}}- o_1) \geq v'= 0.5v$ and $c({\textup{\rm OPT}}- o_1) \leq W'$. Our algorithm in fact admits a $(0.49-\varepsilon)$-approximation by Lemma \[lem:046large\] since the largest item size in ${\textup{\rm OPT}}- o_1$ is $c_2K$ and, by , $$c_2K \geq (1-\varepsilon) {\overline{c}}_2K\geq \frac{1.4}{2.4}(1-{\overline{c}}_1)K - {\overline{c}}_2 \varepsilon K \geq 0.5 W',$$ when $\varepsilon$ is small, e.g., $\varepsilon < 1/12$. Let $S$ be the obtained set, that is, it satisfies $c(S)\leq K - c(Y)$ and $g(S)\geq (0.49-O(\varepsilon))v'$. Then $Y\cup S$ is a feasible set to the original instance.
By Lemma \[lem:SmallFirst\_c1small\], the set $Y\cup S$ satisfies $$\label{eq:ratio_i}
f(Y\cup S) = f(Y)+g(S)\geq 0.386 \left(1-e^{-\frac{y}{{\overline{{c_{\rm s}}}}}}\right)v + 0.5\cdot 0.49v - O(\varepsilon)v.$$ Since $y\geq 2.4{\underline{{c_{\rm s}}}}- c_3\geq 2.4{\underline{{c_{\rm s}}}}- {\overline{{c_{\rm s}}}}$ by Lemma \[lem:SmallFirst\_c1small\], the exponent in is $$\frac{y}{{\overline{{c_{\rm s}}}}} \geq 2.4\frac{{\underline{{c_{\rm s}}}}}{{\overline{{c_{\rm s}}}}} - 1\geq 2.4({1-\delta})-1\geq 1.4 - 2.4\delta,$$ when ${\underline{c}}_1 +{\underline{c}}_2\leq {1-\varepsilon / \delta}$. Hence the RHS of is lower-bounded by $$0.386 \left(1-e^{-1.4 + 2.4\delta}\right)v + 0.5\cdot 0.49v - O(\varepsilon)v \geq (0.53-O(\varepsilon))v.$$
To apply the algorithms in Section \[sec:046\] to approximate ${\textup{\rm OPT}}- o_1$, we need to have approximated sizes of $c(o_2)$ and $c(o_3)$. Since we need to guess ${\overline{c}}_3, {\underline{c}}_3$ using $O(\varepsilon^{-1}\log K)$ additional space, the space required is $O(K\varepsilon^{-2}\log K)$ and the running time is $O(n\varepsilon^{-3}\log K)$.
#### (b) $y > c_1$. {#b-y-c_1. .unnumbered}
In this case, $K' < W'$ holds. We consider the problem to approximate ${\textup{\rm OPT}}-o_1-o_2$.
Suppose that $\tau v' \geq g(o_2)\geq \tau v'/(1+\varepsilon)$. Since $g({\textup{\rm OPT}}- o_1)\geq v'$, it holds that $g({\textup{\rm OPT}}- o_1 -o_2)\geq g({\textup{\rm OPT}}- o_1) - g(o_2)\geq (1-\tau)v'-\varepsilon v'$. Since $v'=v/2$, it follows from Corollary \[cor:simpleratio\] that we can find a set $\tilde{S}_1$ such that $c(\tilde{S}_1)\leq K - c(Y)$ and $$\label{eq:c1small_ii_1}
g(\tilde{S}_1) \geq \frac{1}{2}(1 - \tau)\left(1-e^{-\frac{1 - y - c_3}{{\overline{{c_{\rm s}}}}}}\right)v - O(\varepsilon)v.$$ Moreover, if we take a singleton $e$ with maximum return $g(e)$ such that $c(e)\leq K - c(Y)$, then letting $\tilde{S}_2=\{e\}$, we have $c(\tilde{S}_2)\leq K - c(Y)$ and $$\label{eq:c1small_ii_2}
g(\tilde{S}_2) \geq g(o_2)\geq \frac{1}{2}\tau v - O(\varepsilon)v.$$ Note that $c({\textup{\rm OPT}}-o_1-o_2)=(1-{\underline{c}}_1-{\underline{c}}_2)K\leq 5/19 K$ and $K' = K-c(Y)\geq {\overline{c}}_1 K \geq 7/19 K$. Hence Lemmas \[lem:046small\] and \[lem:046large\] are applicable to approximate ${\textup{\rm OPT}}-o_1-o_2$, and we can find a set $\tilde{S}_3$ such that $c(\tilde{S}_3)\leq K - c(Y)$ and $$\label{eq:c1small_ii_3}
g(\tilde{S}_3) \geq \frac{1}{2}(1 - \tau) 0.46 v - O(\varepsilon)v.$$ Then the lower bound of the best solution is $$\max\{f(Y\cup \tilde{S}_\ell)\mid \ell=1,2,3\}\geq
0.386 \left(1-e^{-\frac{y}{{\overline{{c_{\rm s}}}}}}\right)v + \max\left\{ g(\tilde{S}_\ell) \mid \ell =1,2,3\right\} - O(\varepsilon)v.$$ Since every bound is a concave function with respect to $y$, the worst case is achieved when $y=c_1$ or $2.4{\underline{{c_{\rm s}}}}$. Recall that ${\overline{c}}_1 \geq 7/19$ and ${\overline{c}}_1 +{\overline{c}}_2\geq 14/19$.
Suppose that $y=c_1$. If $\tau \geq 0.42$, then implies that $$f(Y\cup \tilde{S}_2) \geq 0.386 \left(1-e^{-\frac{c_1}{{\overline{{c_{\rm s}}}}}}\right)v+\frac{1}{2}0.42 v - O(\varepsilon)v\geq (0.50-O(\varepsilon)) v,$$ since $$\frac{c_1}{{\overline{{c_{\rm s}}}}} \geq \frac{7/19 - \varepsilon}{5/19 + \varepsilon}\geq 1.4 - O(\varepsilon).$$ If $\tau \leq 0.42$, then implies $$\label{eq:c1small_ii_4}
f(Y\cup \tilde{S}_1) \geq 0.386 \left(1-e^{-\frac{c_1}{{\overline{{c_{\rm s}}}}}}\right)v+ \frac{1}{2}(1 - 0.42)\left(1-e^{-\frac{1 - {\overline{c}}_1 - {\overline{c}}_3}{{\overline{{c_{\rm s}}}}}} - O(\varepsilon)\right)v.
$$ Since ${\overline{c}}_3\leq {\overline{{c_{\rm s}}}}\leq 5/19+\varepsilon$, we have $$\frac{c_1}{{\overline{{c_{\rm s}}}}} \geq \frac{19}{5}{\overline{c}}_1 - O(\varepsilon), \text{\ and\ }
\frac{1 - {\overline{c}}_1 - {\overline{c}}_3}{{\overline{{c_{\rm s}}}}} \geq \frac{1 - {\overline{c}}_1}{{\overline{{c_{\rm s}}}}} - 1\geq \frac{19}{5}\left(1- {\overline{c}}_1\right)-1-O(\varepsilon).$$ Hence implies that $$f(Y\cup \tilde{S}_1) \geq 0.386 \left(1-e^{-\frac{19}{5}{\overline{c}}_1}\right)v+ 0.29\left(1-e^{-\frac{19}{5}\left(1- {\overline{c}}_1\right)+1}\right)v - O(\varepsilon)v \geq (0.50-O(\varepsilon)) v$$ as $0.5 \geq {\overline{c}}_1\geq 7/19$.
Suppose that $y=2.4{\underline{{c_{\rm s}}}}$. Then we have $\frac{y}{{\overline{{c_{\rm s}}}}} \geq 2.4(1-\delta)$, since ${\underline{c}}_1 + {\underline{c}}_2 \leq {1-\varepsilon / \delta}$. If $\tau \geq 0.314$, then implies that $$f(Y\cup \tilde{S}_2) \geq 0.386 \left(1-e^{-2.4\left({1-\delta}\right)}\right)v+\frac{1}{2}\cdot 0.314\, v -O(\varepsilon) v\geq (0.50-O(\varepsilon)) v.$$ If $\tau \leq 0.314$, then implies that $$f(Y\cup \tilde{S}_3) \geq 0.386 \left(1-e^{-2.4\left({1-\delta}\right)}\right)v+ \frac{1}{2}(1-0.314)\cdot 0.46\, v -O(\varepsilon) v \geq (0.50-O(\varepsilon)) v.$$
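The constants here are fairly tight, so we record a small numerical check (illustrative only, taking $\delta,\varepsilon\to 0$) covering the four branches above:

```python
from math import exp

base14 = 0.386 * (1 - exp(-1.4))   # exponential term at the endpoint y = c_1
base24 = 0.386 * (1 - exp(-2.4))   # exponential term at the endpoint y = 2.4 * c_s
print(round(base14 + 0.5 * 0.42, 4))                 # y = c_1,   tau >= 0.42:  0.5008
print(round(base24 + 0.5 * 0.314, 4))                # y = 2.4c_s, tau >= 0.314: 0.508
print(round(base24 + 0.5 * (1 - 0.314) * 0.46, 4))   # y = 2.4c_s, tau <= 0.314: 0.5088
g = lambda c: 0.386 * (1 - exp(-3.8 * c)) + 0.29 * (1 - exp(-3.8 * (1 - c) + 1))
print(round(min(g(7 / 19), g(0.5)), 4))              # y = c_1,   tau <= 0.42:  0.5004
# every branch is at least 0.50
```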
Therefore, it holds that $$\max\{f(Y\cup \tilde{S}_\ell)\mid \ell=1,2,3\}\geq (0.50-O(\varepsilon)) v.$$ Thus we can find a $(0.5-O(\varepsilon))$-approximate solution. Note that we apply the algorithms in Sections \[sec:046\] to approximate ${\textup{\rm OPT}}- o_1 - o_2$ in the above, and hence we need to estimate approximations of $c(o_3)$ and $c(o_4)$, which are the two largest items in ${\textup{\rm OPT}}- o_1 - o_2$. This requires $O(\varepsilon^{-2}\log K)$ space in a similar way to the proof of Theorem \[thm:046\]. Therefore, the space required is $O(K\varepsilon^{-3}\log K)$ and the running time is $O(n\varepsilon^{-4}\log K)$.
Packing Small Items Later {#sec:c1c2Small}
-------------------------
In this section, we consider the remaining case. By Corollary \[cor:c1\_lb\] and Theorem \[thm:05large\], it suffices to consider the case when $0.3\leq c_1\leq 0.5$. Moreover, we assume that $2.4(1-{\overline{c}}_1 -{\overline{c}}_2) > 1- {\overline{c}}_1$, as otherwise Lemma \[lem:Small\_c1c2Large\] implies a $(0.5-\varepsilon)$-approximation. That is, ${\overline{c}}_2< \frac{1.4}{2.4}(1-{\overline{c}}_1)$. Hence it suffices to consider when $c_2\leq 7/19\leq 0.37$.
\[lem:LastCase\] Suppose that $0.3\leq c_1\leq 0.5$. Then, if does not hold, we can find a $(0.5-\varepsilon)$-approximate solution in $O(\varepsilon^{-1})$ passes and $O(K \varepsilon^{-7}\log^2 K)$ space. The total running time is $O(n \varepsilon^{-8}\log^2 K)$.
We first show that we may assume that ${\overline{c}}_2$ is bounded from below.
\[cor:Small\_bound\_c2\] Suppose that $0.3\leq c_1\leq 0.5$. If $\frac{1-{\overline{c}}_2}{1-{\overline{c}}_1}\geq 1.3$, then we can find a set $S$ such that $f(S)\geq (0.5-O(\varepsilon))v$ in $O(K)$ space and $O(\varepsilon^{-1})$ passes.
By Corollary \[cor:boundf1\], we may suppose that $f(o_1)<0.307v$. If $\frac{1-{\overline{c}}_2}{1-{\overline{c}}_1}\geq 1.3$, then it holds that $$\frac{1 - {\overline{c}}_2}{1-{\underline{c}}_1}\geq \frac{1 - {\overline{c}}_1}{1-{\underline{c}}_1} \frac{1-{\overline{c}}_2}{1-{\overline{c}}_1} \geq 1.3({1-\delta}).$$ Hence Corollary \[cor:simpleratio\] with $\tau = 0.307$ implies that we can find a set $S$ such that $$f(S) \geq (1 - 0.307)\left(1-e^{-1.3({1-\delta})} - O(\varepsilon)\right)v\geq (0.5-O(\varepsilon))v.$$
Since the range of $c_1$ is $[0.3, 0.5]$, we can guess ${\underline{c}}_1, {\overline{c}}_1$ with ${\overline{c}}_1\leq (1+\varepsilon){\underline{c}}_1$ using $O(\varepsilon^{-1})$ space. Moreover, the above corollary implies that we may assume that ${\overline{c}}_2\geq 1 - 1.3 (1-{\overline{c}}_1)\geq 0.09$ as ${\overline{c}}_1 \geq 0.3$. Hence the range of $c_2$ is $[0.09, 0.5]$, which implies that we can guess ${\underline{c}}_2, {\overline{c}}_2$ with ${\overline{c}}_2\leq (1+\varepsilon){\underline{c}}_2$ using $O(\varepsilon^{-1})$ space. We also guess ${\underline{c}}_3$ and ${\overline{c}}_3$ using $O(\varepsilon^{-1}\log K)$ space.
To prove Lemma \[lem:LastCase\], we will show that, given such ${\overline{c}}_i, {\underline{c}}_i$ ($i=1,2,3$) and $v$, there is an algorithm using $O(K\varepsilon^{-3}\log K)$ space and $O(n\varepsilon^{-4}\log K)$ time.
#### Finding a good set $Y$. {#finding-a-good-set-y.-2 .unnumbered}
The first phase, called (see Algorithm \[alg:phaseone\]), is roughly similar to . As before, we assume $v \leq f({\textup{\rm OPT}})\leq (1+\varepsilon) v$ and $ c({\textup{\rm OPT}}) \leq {K}$ (notice that we here set $W = {K}$). The difference is that, in each round, we check whether any item in $E$, by itself, is enough to give us a solution of value $0.5v$ (Lines 4–5). We terminate the repetition when $c(S)> (1- {\overline{c}}_1) K$. As will be explained (see Lemma \[lem:boundF\]), we can lower-bound $f({\textup{\rm OPT}}- Z\mid Y)$ for some subset $Z\subseteq {\textup{\rm OPT}}$, because $c_2$ is small.
It is clear that Lemma \[lem:fact\_knapsack\](1)(2) still hold in . Moreover, terminates in $O(\varepsilon^{-1}n)$ time.
In the following discussion, let $Y$ be the final output set of , $Y'$ the set at the beginning of the last round, and $T'$ the set of elements added in the last round, i.e., $Y = Y' \cup T'$. We now give two different bounds on $f(Y)$. The proof is identical to Lemmas \[lem:c\_size\] and \[lem:fact\_knapsack\](3), where the first one is a stronger bound obtained in the proof of Lemma \[lem:c\_size\].
\[lem:d\_size\]
1. $f(Y) \geq \left(1 - \left(1-\frac{c(T')}{{K}} \right) e^{-\frac{c(Y')}{{K}}} - O(\varepsilon)\right)v.$
2. $f(Y) \geq \left(1- e^{-\frac{c(Y)}{{K}}} - O(\varepsilon)\right)v.$
To avoid triviality, we assume that $f(Y)< 0.5 v$. Then we may assume that $c(Y) \leq 0.7 {K}$, as otherwise, Lemma \[lem:d\_size\](2) immediately implies that $f(Y) \geq (0.5-O(\varepsilon))v$ (cf. Corollary \[cor:c1\_lb\]).
Suppose that $f(Y) < 0.5v$. Then for any $j$, we have $f(o_j \mid Y) \leq \left(e^{-\frac{c(Y')}{{K}}} -0.5\right)v$. \[lem:boundingMarginalo1o2\]
By submodularity, $f(o_j \mid Y) \leq f(o_j \mid Y')$. As $c(Y')<(1-{\overline{c}}_1)K$ and $f(Y) < 0.5v$, in the last round, Lines 4–5 imply that every item $e$, including $o_j$, has $ f(e \mid Y') \leq 0.5v - f(Y') \leq \left(e^{-\frac{c(Y')}{{K}}} -0.5\right)v$, where the last inequality follows by Lemma \[lem:d\_size\](2).
\[lem:boundF\] If $f(Y)< 0.5v$ and $c_2\leq 0.37$, then it satisfies the following.
Case 1:
: If $(1-{\overline{c}}_2)K\geq c(Y)\geq (1-{\overline{c}}_1)K$ then $f({\textup{\rm OPT}}- o_1 \mid Y) \geq 0.693v - f(Y)$.
Case 2:
: If $c(Y)\geq (1-{\overline{c}}_2)K$ then $f({\textup{\rm OPT}}- o_1 -o_2 \mid Y) \geq 0.54v - f(Y)$.
Case 3:
: If $c(Y)\geq (1-{\overline{c}}_3)K$ then $f({\textup{\rm OPT}}- o_1 -o_2 -o_3 \mid Y) \geq 0.567v - f(Y)$.
**Case 1:** follows immediately, as $f(o_1)\leq 0.307 v$ by Corollary \[cor:boundf1\] (2).
**Case 2:** Since $\overline{c}_2 \leq 0.37$, in this case, we can assume that $0.63{K}\leq c(Y)$.
If $c(T')\geq 0.315 {K}$, then $f(Y) \geq (0.5- O(\varepsilon))v$. \[clm:tprimenottoobig1\]
We write $c(Y)/{K}=a$ and $c(T')/{K}= b$. Then Lemma \[lem:d\_size\](1) implies that
$$f(Y) \geq (1- (1-b) e^{-(a-b)} - O(\varepsilon))v.$$
We lower-bound the function $h(a,b) = 1- (1-b) e^{b-a}$ as follows. As $\frac{\partial h }{\partial a }, \frac{\partial h }{\partial b } \geq 0$, we plug in the lower bound of $a$ and $b$ into $h$. By assumption, $b \geq 0.315$; $a= c(Y)/{K}\geq 0.63$. Then
$$h(a,b) \geq 1 - 0.685 e^{ -0.315}\geq 0.50.$$ The proof follows.
By Claim \[clm:tprimenottoobig1\], we may assume that $c(T')< 0.315K$. This implies that $c(Y')\geq c(Y)-c(T') > 0.315K$. Hence, by Lemma \[lem:boundingMarginalo1o2\], it holds that $f(o_1 \mid Y)$, $f(o_2 \mid Y) < \left(e^{-0.315}-0.5\right)v\leq 0.2297 v$. Therefore, $f({\textup{\rm OPT} - o_1 -o_2}\mid Y) \geq 0.54v - f(Y)$ holds as $f({\textup{\rm OPT} - o_1 -o_2}\mid Y) \geq f({\textup{\rm OPT}}\mid Y) - f(o_1\mid Y) - f(o_2\mid Y)$ and $f({\textup{\rm OPT}}\mid Y) \geq v - f(Y)$.

**Case 3:** We can prove it in a similar way to Case 2. Since $\overline{c}_3 \leq 1/3$, in this case, we can assume that $2/3{K}\leq c(Y) \leq 0.7{K}$.
If $c(T')\geq 0.22 {K}$, then $f(Y) \geq (0.5- O(\varepsilon))v$. \[clm:tprimenottoobig2\]
We write $c(Y)/{K}=a$ and $c(T')/{K}= b$. Then Lemma \[lem:d\_size\](1) implies that
$$f(Y) \geq (1- (1-b) e^{-(a-b)} - O(\varepsilon))v.$$
In a similar way to Claim \[clm:tprimenottoobig1\], we lower-bound the function $h(a,b) = 1- (1-b) e^{b-a}$ by setting $b = 0.22$ and $a= c(Y)/{K}= 2/3$. Then
$$h(a,b) \geq 1 - 0.78 e^{ -(2/3-0.22)}\geq 0.50.$$ Thus the proof follows.
By Claim \[clm:tprimenottoobig2\], we see that $c(T') < 0.22K$. This implies that $c(Y')\geq c(Y)-c(T')\geq (2/3-0.22)K > 0.44K$. Hence, by Lemma \[lem:boundingMarginalo1o2\], it holds that $f(o_j \mid Y) < 0.144 v$ for $j=1,2,3$. Therefore, $f({\textup{\rm OPT}}- o_1-o_2-o_3\mid Y) \geq 0.567v - f(Y)$ holds from submodularity and the fact that $f({\textup{\rm OPT}}\mid Y) \geq v - f(Y)$.
#### Packing the remaining space. {#packing-the-remaining-space.-2 .unnumbered}
Let $Y$ be a set found by . After taking $Y$, we consider the problem to fill in the remaining space. We approximate ${\textup{\rm OPT}}-o_1$, ${\textup{\rm OPT}}-o_1-o_2$, and ${\textup{\rm OPT}}-o_1-o_2-o_3$, respectively, depending on the size $c(Y)$ of $Y$. Recall that $c(Y)<0.7K$.
#### Case 1: $(1 - \overline{c}_2){K}\geq c(Y) \geq (1-\overline{c}_1){K}$. {#case-1-1---overlinec_2kgeq-cy-geq-1-overlinec_1k. .unnumbered}
By Lemma \[lem:boundF\], it holds that $$\label{eq:LastCase_1_bound}
f({\textup{\rm OPT}}-o_1\mid Y)\geq 0.693 v-f(Y).$$ Let $v' = 0.693 v-f(Y)$. Define $g(\cdot) = f(\cdot \mid Y)$. Consider the problem to approximate ${\textup{\rm OPT}}-o_1$. We set $W' = (1 - {\underline{c}}_1)K$ and $K' = K-c(Y)$.
If we can find a set $\tilde{S}$ such that $c(\tilde{S})\leq K - c(Y)$ and $g(\tilde{S})\geq \kappa v'$, then $Y\cup \tilde{S}$ is a feasible set to the original instance, and it holds by Lemma \[lem:d\_size\] and that $$\label{eq:LastCase_1}
f(Y\cup \tilde{S})\geq \left(1-e^{-y}\right)v + \kappa \left(0.693 - \left(1-e^{-y}\right) \right)v-O(\varepsilon)v,$$ where $y=c(Y)/K$.
We shall use Lemma \[lem:largeW\] to find such a set $\tilde{S}$. Since $0.3\leq {\underline{c}}_1$ and $y\leq 0.7$, the ratio $\eta$ of $W'$ and $K'$ is $$\eta = \frac{W'}{K'}= \frac{1-{\underline{c}}_1}{1 - y}\leq\frac{0.7}{0.3} \leq 2.5.$$
#### (i) $\eta\in [2, 2.5]$. {#i-etain-2-2.5. .unnumbered}
In this case, we see that $\eta \geq 2$ if and only if $$y\geq \frac{1+{\underline{c}}_1}{2} \geq 0.65,$$ since ${\underline{c}}_1\geq 0.3$. It follows from Lemma \[lem:largeW\] (d) that we can find a set $\tilde{S}$ such that $c(\tilde{S})\leq K - c(Y)$ and $g(\tilde{S})\geq 0.178 v'$. Hence, since $y\geq 0.65$, implies that $$f(Y\cup \tilde{S})\geq \left(1-e^{-y}\right)v + 0.178\cdot \left(0.693v -\left(1-e^{-y}\right)v\right)-O(\varepsilon)v\geq (0.51-O(\varepsilon))v.$$
#### (ii) $\eta\in [1.5, 2]$. {#ii-etain-1.5-2. .unnumbered}
We see that $\eta \geq 1.5$ if and only if $$y\geq \frac{0.5+{\underline{c}}_1}{1.5} = \frac{1+2{\underline{c}}_1}{3}.$$ Also, since $c(Y)\geq (1- {\overline{c}}_1)K$, we have $$y\geq \max\left\{\frac{1+2{\underline{c}}_1}{3}, 1- {\overline{c}}_1\right\}\geq 0.6 - O(\varepsilon),$$ where the lower bound is achieved when both the terms are equal. It follows from Lemma \[lem:largeW\] (c) that we can find a set $\tilde{S}$ such that $c(\tilde{S})\leq K - c(Y)$ and $g(\tilde{S})\geq 0.218 v'$. Hence, by , we obtain $$f(Y\cup \tilde{S})\geq \left(1-e^{-y}\right)v + 0.218\cdot \left(0.693v -\left(1-e^{-y}\right)v\right)-O(\varepsilon)v\geq (0.50-O(\varepsilon))v,$$ as $y\geq 0.6 - O(\varepsilon)$.
#### (iii) $\eta\in [1.4, 1.5]$. {#iii-etain-1.4-1.5. .unnumbered}
It means that $$y\geq \frac{0.4+{\underline{c}}_1}{1.4} = \frac{2+5{\underline{c}}_1}{7}.$$ Also, since $c(Y)\geq (1- {\overline{c}}_1)K$, we have $$y\geq \max\left\{\frac{2+5{\underline{c}}_1}{7}, 1- {\overline{c}}_1\right\}\geq \frac{7}{12} - O(\varepsilon).$$ It follows from Lemma \[lem:largeW\] (b) that we can find a set $\tilde{S}$ such that $c(\tilde{S})\leq K - c(Y)$ and $g(\tilde{S})\geq 0.283 v'$. Hence, by , we obtain $$f(Y\cup \tilde{S})\geq \left(1-e^{-y}\right)v + 0.283\cdot \left(0.693v -\left(1-e^{-y}\right)v\right)-O(\varepsilon)v\geq (0.51-O(\varepsilon))v,$$ as $y\geq 7/12 - O(\varepsilon)$.
#### (iv) $\eta\in [1, 1.4]$. {#iv-etain-1-1.4. .unnumbered}
It follows from Lemma \[lem:largeW\] (a) that we can find a set $\tilde{S}$ such that $c(\tilde{S})\leq K - c(Y)$ and $g(\tilde{S})\geq 0.315 v'$. Hence, by , we obtain $$f(Y\cup \tilde{S})\geq \left(1-e^{-y}\right)v + 0.315\cdot \left(0.693v -\left(1-e^{-y}\right)v\right)-O(\varepsilon)v.$$ This is at least $(0.5-O(\varepsilon))v$ if $y\geq 0.53$. Thus we may suppose that $c(Y)< 0.53K$. Since $c(Y)\geq (1-{\overline{c}}_1)K$, we see ${\overline{c}}_1\geq 1-0.53 = 0.47$. Moreover, since $2.4(1-{\overline{c}}_1-{\overline{c}}_2) > (1-{\overline{c}}_1)$, we have ${\overline{c}}_2\leq 0.31$. Hence we have that $$\frac{1 - c_2}{1-{\underline{c}}_1}\geq \frac{1 - {\overline{c}}_1}{1-{\underline{c}}_1} \frac{1-{\overline{c}}_2}{1-{\overline{c}}_1}\geq ({1-\delta})\frac{0.69}{0.5}\geq 1.38({1-\delta}),$$ as ${\overline{c}}_1 \leq {1-\varepsilon / \delta}$. Therefore, by Corollary \[cor:Small\_bound\_c2\], we can find an $(0.5-O(\varepsilon))$-approximation.
#### (v) $\eta\in [0, 1]$. {#v-etain-0-1. .unnumbered}
It follows from Lemmas \[lem:046small\] and \[lem:046large\] that we can find a set $\tilde{S}$ such that $c(\tilde{S})\leq K - c(Y)$ and $g(\tilde{S})\geq 0.46 v'$. By , we have $$f(Y\cup \tilde{S})\geq \left(1-e^{-y}\right)v + 0.46\cdot \left(0.693v -\left(1-e^{-y}\right)v\right) -O(\varepsilon)v\geq (0.53-O(\varepsilon))v,$$ since $y\geq 0.5$.
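Before moving on, we record a compact numerical check of sub-cases (i)–(v) above (illustrative only, with $\varepsilon\to 0$). The bound $\left(1-e^{-y}\right)+\kappa\left(0.693-\left(1-e^{-y}\right)\right)$ is increasing in $y$, so it suffices to evaluate it at the smallest admissible $y$ of each sub-case:

```python
from math import exp

# (smallest admissible y, guaranteed ratio kappa) for sub-cases (i)-(v)
cases = [(0.65, 0.178), (0.60, 0.218), (7 / 12, 0.283), (0.53, 0.315), (0.50, 0.46)]
for y, kappa in cases:
    b = (1 - exp(-y)) + kappa * (0.693 - (1 - exp(-y)))
    print(round(b, 4))   # 0.5163, 0.5039, 0.513, 0.5001, 0.5313 -- each at least 0.50
```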
Therefore, in each case, the algorithm in Lemma \[lem:largeW\] yields a $(0.5-O(\varepsilon))$-approximation. The space required is $O(K\varepsilon^{-3}\log K)$ and the running time is $O(n\varepsilon^{-4}\log K)$. Thus Lemma \[lem:LastCase\] holds for Case 1.
#### Case 2: $c(Y) > (1-\overline{c}_2){K}$. {#case-2-cy-1-overlinec_2k. .unnumbered}
We may suppose that $c(Y)\leq 0.7K$. Since $c(Y)\geq (1-\overline{c}_2){K}$, we have $0.3\leq {\overline{c}}_2$. Also $c(Y)\geq (1-\overline{c}_2){K}\geq 0.63K$ holds since ${\overline{c}}_2\leq 0.37$.
Define $g(\cdot) = f(\cdot \mid Y)$, and consider the problem to approximate ${\textup{\rm OPT}}-o_1-o_2$. By Lemma \[lem:boundF\], it holds that $$g({\textup{\rm OPT}}-o_1-o_2)\geq 0.54 v-f(Y).$$ Let $v' = 0.54 v-f(Y)$. In a way similar to Case 1, if we can find a set $\tilde{S}$ such that $c(\tilde{S})\leq K -c(Y)$ and $g(\tilde{S})\geq \kappa v'$, then $Y\cup \tilde{S}$ is a feasible set to the original instance, and it holds by Lemma \[lem:d\_size\] that $$\label{eq:LastCase_2}
f(Y\cup \tilde{S})\geq \left(1-e^{-y}\right)v + \kappa \left(0.54 - \left(1-e^{-y}\right) \right)v-O(\varepsilon)v.$$
We denote $W' = ( 1- {\underline{c}}_1 - {\underline{c}}_2)K$, $K' = K-c(Y)$, and $y = c(Y)/K$. Since $y\leq 0.7$ and ${\underline{c}}_1+{\underline{c}}_2\geq (1-\varepsilon)({\overline{c}}_1+{\overline{c}}_2)\geq 0.6(1-\varepsilon)$, it holds that $$\eta = \frac{W'}{K'}\leq \frac{1-{\underline{c}}_1-{\underline{c}}_2}{1-y} \leq \frac{4}{3}+2\varepsilon \leq 1.5,
$$ where the last inequality follows because we may suppose that $\varepsilon \leq 1/12$.
#### (i) $\eta > 1$. {#i-eta-1. .unnumbered}
In this case, it holds that $y\geq {\underline{c}}_1+{\underline{c}}_2$. Since $y\geq 1 - {\overline{c}}_2$, we have $$y\geq \max\{{\underline{c}}_1+{\underline{c}}_2, 1-\overline{c}_2\}\geq \frac{2}{3}-O(\varepsilon).$$ By Lemma \[lem:largeW\], we can find a set $\tilde{S}$ such that $c(\tilde{S})\leq K - c(Y)$ and $g(\tilde{S})\geq 0.315 v'$. Hence, by , we obtain $$f(Y\cup \tilde{S})\geq \left(1-e^{-y}\right)v + 0.315\cdot\left(0.54 v-\left(1-e^{-y}\right)v\right) -O(\varepsilon)v \geq (0.50-O(\varepsilon))v$$ when $y\geq 2/3 - O(\varepsilon)$.
#### (ii) $\eta \leq 1$. {#ii-eta-leq-1. .unnumbered}
It follows from Lemmas \[lem:046small\] and \[lem:046large\] that we can find a set $\tilde{S}$ such that $c(\tilde{S})\leq K - c(Y)$ and $g(\tilde{S})\geq 0.46 v'$. By , we have $$f(Y\cup \tilde{S})\geq \left(1-e^{-y}\right)v + 0.46\cdot\left(0.54 v-\left(1-e^{-y}\right)v\right) -O(\varepsilon)v\geq (0.50-O(\varepsilon))v,$$ since $y\geq 0.63$.
Therefore, in each case, the algorithm in Lemma \[lem:largeW\] yields a $(0.5-O(\varepsilon))$-approximation. The space required is $O(K\varepsilon^{-3}\log K)$ and the running time is $O(n\varepsilon^{-4}\log K)$. Thus Lemma \[lem:LastCase\] holds for Case 2.
#### Case 3: $c(Y) > (1-\overline{c}_3){K}$. {#case-3-cy-1-overlinec_3k. .unnumbered}
In this case, we may assume that ${\overline{c}}_3\geq 0.3$ since $c(Y)\leq 0.7K$, and hence ${\overline{c}}_1+{\overline{c}}_2+{\overline{c}}_3\geq 0.9$.
Define $g(\cdot) = f(\cdot \mid Y)$, and consider the problem to approximate ${\textup{\rm OPT}}-o_1-o_2-o_3$. By Lemma \[lem:boundF\], it holds that $$g({\textup{\rm OPT}}-o_1-o_2-o_3)\geq 0.567 v-f(Y).$$ Let $v' = 0.567 v-f(Y)$. We set $W' = (1-{\underline{c}}_1-{\underline{c}}_2-{\underline{c}}_3)K$ and $K' = K-c(Y)$. Then, since ${\underline{c}}_1+{\underline{c}}_2+{\underline{c}}_3\geq 0.9(1-\varepsilon)$, we have $W'\leq (0.1 + 0.9\varepsilon)K$. In addition, since $c(Y)\leq 0.7K$, we see $K'\geq 0.3K$.
Since $W'\leq K'$, the algorithm in Section \[sec:046\] is applicable, and we can find a set $\tilde{S}$ such that $c(\tilde{S})\leq K - c(Y)$ and $g(\tilde{S})\geq 0.46 v'$. Since $y=c(Y)/K\geq 2/3$, we obtain by Lemma \[lem:boundF\] $$f(Y\cup \tilde{S})\geq \left(1-e^{-y}\right) v + 0.46\cdot \left(0.567 v-\left(1-e^{-y}\right)v \right) -O(\varepsilon)v\geq (0.52-O(\varepsilon))v.$$
Therefore, since the algorithm in Section \[sec:046\] runs in $O(K\varepsilon^{-3}\log K)$ space and $O(n\varepsilon^{-4}\log K)$ time, provided the approximated optimal value, Lemma \[lem:LastCase\] holds for Case 3.
Proof of Lemma \[lem:largeW\] {#sec:ProofLargeW}
-----------------------------
In this subsection, we prove Lemma \[lem:largeW\]. Recall that $W' = \eta K'$ for some $\eta > 1$ and that $c(e)\leq K'$ for any $e\in X$. Note that would work even if $\eta \geq 1$, and, by Corollary \[cor:simpleratio\], can find a set $S$ such that $$\label{eq:ProofLargeW}
f(S)\geq \left(1-e^{-\frac{1-c_1}{\eta}}-O(\varepsilon)\right)v.$$ Moreover, when $\eta \leq 1$, we can obtain a $(0.46-O(\varepsilon))$-approximate solution by in Section \[sec:046\]. This algorithm runs in $O(K'\varepsilon^{-3}\log K')$ space and $O(n\varepsilon^{-4}\log K')$ time using $O(\varepsilon^{-1})$ passes, provided the approximated optimal value $v$.
#### (a) $\eta\in [1, 1.4]$. {#a-etain-1-1.4. .unnumbered}
If there exists an item $e$ such that $f(e)\geq 0.315 v$, then taking a singleton with maximum return admits a $0.315$-approximation. Thus we may assume that $f(e)\leq 0.315 v$ for any item $e\in E$. If $c_1\leq \eta -1$, then the set $S$ in satisfies that $$f(S)\geq \left(1-e^{-\frac{1-c_1}{\eta}}-O(\varepsilon)\right)v\geq \left(1-e^{-\frac{3}{7}}-O(\varepsilon)\right)v\geq (0.348-O(\varepsilon)) v.$$ Otherwise, we consider approximating ${\textup{\rm OPT}}-o_1$. Since $c({\textup{\rm OPT}}- o_1)\leq K' - (\eta -1)K'\leq \eta K' = W'$, we can use a $(0.46-O(\varepsilon))$-approximation algorithm in Section \[sec:046\]. Since $f({\textup{\rm OPT}}- o_1)\geq v - f(o_1)\geq 0.685 v$, we can find a set $S$ such that $$f(S)\geq 0.685 (0.46-O(\varepsilon))v \geq (0.315-O(\varepsilon))v.$$ Thus the statement holds.
#### (b) $\eta\in [1.4, 1.5]$. {#b-etain-1.4-1.5. .unnumbered}
The proof is similar to (a). We may assume that $f(e)\leq 0.283 v$ for any item $e\in E$. If $c_1\leq \eta -1$, then the set $S$ in satisfies that $$f(S)\geq \left(1-e^{-\frac{1-c_1}{\eta}}-O(\varepsilon)\right)v\geq \left(1-e^{-\frac{1}{3}}-O(\varepsilon)\right)v\geq (0.283-O(\varepsilon)) v.$$ Otherwise, apply a $(0.46-O(\varepsilon))$-approximation algorithm to approximate ${\textup{\rm OPT}}- o_1$. Since $f({\textup{\rm OPT}}- o_1)\geq v - f(o_1)\geq 0.72 v$, the ratio of the output $S$ is $$f(S)\geq 0.72 (0.46-O(\varepsilon))v \geq (0.331-O(\varepsilon))v.$$ Thus the statement holds.
#### (c) $\eta\in [1.5, 2]$. {#c-etain-1.5-2. .unnumbered}
We will use the above argument in (a) and (b) recursively. We may assume that $f(e)< 0.22 v$ for any $e\in E$. If $c_1 < 0.5$, then the set $S$ in satisfies that $$f(S)\geq \left(1-e^{-\frac{1-c_1}{\eta}}-O(\varepsilon)\right)v\geq \left(1-e^{-\frac{0.5}{2}}-O(\varepsilon)\right)v\geq (0.22-O(\varepsilon))v.$$ So consider the case when $c_1\geq 0.5$. Consider approximating ${\textup{\rm OPT}}-o_1$. Since $c({\textup{\rm OPT}}- o_1)\leq 2K'- 0.5K' \leq 1.5 K'$ and $f({\textup{\rm OPT}}- o_1)\geq 0.78 v$, the algorithm in (b) can find a set $S$ such that $$f(S)\geq 0.78 (0.28 - O(\varepsilon)) v \geq (0.218 - O(\varepsilon))v.$$ Thus the statement holds.
#### (d) $\eta\in [2, 2.5]$. {#d-etain-2-2.5. .unnumbered}
We may assume that $f(e)< 0.18 v$ for any item $e\in E$. If $c_1 < 0.5$, then the set $S$ in satisfies that $$f(S)\geq \left(1-e^{-\frac{1-c_1}{\eta}}-O(\varepsilon)\right)v\geq \left(1-e^{-\frac{0.5}{2.5}}-O(\varepsilon)\right)v\geq (0.18-O(\varepsilon))v.$$ So consider the case when $c_1\geq 0.5$. Consider approximating ${\textup{\rm OPT}}-o_1$. Since $c({\textup{\rm OPT}}- o_1)\leq 2.5K'-0.5K' \leq 2.0 K'$ and $f({\textup{\rm OPT}}- o_1)\geq 0.82 v$, the algorithm in (c) can find a set $S$ such that $$f(S)\geq 0.82\cdot (0.218 - O(\varepsilon)) v \geq (0.178 - O(\varepsilon))v.$$ Thus the statement holds.
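The thresholds used in cases (a)–(d) can also be verified numerically. The following lines (illustrative only) print, for each case, the value of $1-e^{-(1-c_1)/\eta}$ at the stated threshold for $c_1$ and the value obtained by recursing on ${\textup{\rm OPT}}-o_1$; for (b) we use $1-0.283=0.717$ for the value bound on ${\textup{\rm OPT}}-o_1$:

```python
from math import exp

checks = {   # (ratio bound at the threshold c_1, bound obtained by recursing on OPT - o_1)
    "(a)": (1 - exp(-3 / 7),      0.685 * 0.46),
    "(b)": (1 - exp(-1 / 3),      0.717 * 0.46),
    "(c)": (1 - exp(-0.5 / 2),    0.78 * 0.283),
    "(d)": (1 - exp(-0.5 / 2.5),  0.82 * 0.218),
}
for case, (r1, r2) in checks.items():
    print(case, round(r1, 4), round(r2, 4))
# (a) 0.3486 0.3151, (b) 0.2835 0.3298, (c) 0.2212 0.2207, (d) 0.1813 0.1788
```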
[22]{}
N. Alon, I. Gamzu, and M. Tennenholtz. Optimizing budget allocation among channels and influencers. In [*Proceedings of the 21st International Conference on World Wide Web (WWW)*]{}, pages 381–388, 2012.
A. Badanidiyuru, B. Mirzasoleiman, A. Karbasi, and A. Krause. Streaming submodular maximization: massive data summarization on the fly. In [*Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD)*]{}, pages 671–680, 2014.
A. Badanidiyuru and J. Vondr[á]{}k. Fast algorithms for maximizing submodular functions. In [*Proceedings of the 25th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA)*]{}, pages 1497–1514, 2013.
M. Bateni, H. Esfandiari, and V. Mirrokni. Almost optimal streaming algorithms for coverage problems. In [*Proceedings of the 29th ACM Symposium on Parallelism in Algorithms and Architectures*]{}, SPAA ’17, pages 13–23, New York, NY, USA, 2017. ACM.
G. Calinescu, C. Chekuri, M. P[á]{}l, and J. Vondr[á]{}k. Maximizing a monotone submodular function subject to a matroid constraint. , 40(6):1740–1766, 2011.
A. Chakrabarti and S. Kale. Submodular maximization meets streaming: matchings, matroids, and more. , 154(1-2):225–247, 2015.
T.-H. H. Chan, Z. Huang, S. H.-C. Jiang, N. Kang, and Z. G. Tang. Online submodular maximization with free disposal: Randomization beats for partition matroids online. In [*Proceedings of the 28th Annual [ACM-SIAM]{} Symposium on Discrete Algorithms (SODA)*]{}, pages 1204–1223, 2017.
T.-H. H. Chan, S. H.-C. Jiang, Z. G. Tang, and X. Wu. Online submodular maximization problem with vector packing constraint. In [*Annual European Symposium on Algorithms (ESA)*]{}, pages 24:1–24:14, 2017.
C. Chekuri, S. Gupta, and K. Quanrud. Streaming algorithms for submodular function maximization. In [*Proceedings of the 42nd International Colloquium on Automata, Languages, and Programming (ICALP)*]{}, volume 9134, pages 318–330, 2015.
C. Chekuri, J. Vondr[á]{}k, and R. Zenklusen. Submodular function maximization via the multilinear relaxation and contention resolution schemes. , 43(6):1831–1879, 2014.
A. Ene and H. L. Nguyễn. A nearly-linear time algorithm for submodular maximization with a knapsack constraint. arXiv, https://arxiv.org/abs/1709.09767, 2017.
Y. Filmus and J. Ward. A tight combinatorial algorithm for submodular maximization subject to a matroid constraint. , 43(2):514–542, 2014.
M. L. Fisher, G. L. Nemhauser, and L. A. Wolsey. An analysis of approximations for maximizing submodular set functions i. , pages 265–294, 1978.
M. L. Fisher, G. L. Nemhauser, and L. A. Wolsey. An analysis of approximations for maximizing submodular set functions ii. , 8:73–87, 1978.
C.-C. Huang, N. Kakimura, and Y. Yoshida. Streaming algorithms for maximizing monotone submodular functions under a knapsack constraint. In [*The 20th International Workshop on Approximation Algorithms for Combinatorial Optimization Problems(APPROX2017)*]{}, 2017.
D. Kempe, J. Kleinberg, and [É]{}. Tardos. Maximizing the spread of influence through a social network. In [*Proceedings of the 9th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD)*]{}, pages 137–146, 2003.
A. Krause, A. P. Singh, and C. Guestrin. Near-optimal sensor placements in gaussian processes: Theory, efficient algorithms and empirical studies. , 9:235–284, 2008.
A. Kulik, H. Shachnai, and T. Tamir. Maximizing submodular set functions subject to multiple linear constraints. In [*Proceedings of the 20th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA)*]{}, pages 545–554, 2013.
J. Lee. , volume 3 of [*Encyclopedia of Environmetrics*]{}, pages 1229–1234. John Wiley [&]{} Sons, Ltd., 2006.
J. Lee, M. Sviridenko, and J. Vondr[á]{}k. Submodular maximization over multiple matroids via generalized exchange properties. , 35(4):795–806, 2010.
H. Lin and J. Bilmes. Multi-document summarization via budgeted maximization of submodular functions. In [*Proceedings of the 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT)*]{}, pages 912–920, 2010.
H. Lin and J. Bilmes. A class of submodular functions for document summarization. In [*Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL-HLT)*]{}, pages 510–520, 2011.
A. McGregor and H. T. Vu. Better streaming algorithms for the maximum coverage problem. In [*International Conference on Database Theory (ICDT)*]{}, 2017.
B. Mirzasoleiman, A. Badanidiyuru, A. Karbasi, J. Vondrák, and A. Krause. Lazier than lazy greedy. In [*Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence*]{}, AAAI’15, pages 1812–1818. AAAI Press, 2015.
B. Mirzasoleiman, S. Jegelka, and A. Krause. Streaming non-monotone submodular maximization: Personalized video summarization on the fly. In [*Proc. Conference on Artificial Intelligence (AAAI)*]{}, February 2018.
T. Soma, N. Kakimura, K. Inaba, and K. Kawarabayashi. Optimal budget allocation: Theoretical guarantee and efficient algorithm. In [*Proceedings of the 31st International Conference on Machine Learning (ICML)*]{}, pages 351–359, 2014.
M. Sviridenko. A note on maximizing a submodular set function subject to a knapsack constraint. , 32(1):41–43, 2004.
L. Wolsey. Maximising real-valued submodular functions: primal and dual heuristics for location problems. , 1982.
Y. Yoshida. Maximizing a monotone submodular function with a bounded curvature under a knapsack constraint. https://arxiv.org/abs/1607.04527, 2016.
Q. Yu, E. L. Xu, and S. Cui. Streaming algorithms for news and scientific literature recommendation: Submodular maximization with a $d$-knapsack constraint. , 2016.
Proof of Lemma \[lem:Assumption1\]
==================================
We discuss how to obtain a $(0.5 - O(\varepsilon))$-approximation when $c_1 + c_2$ is almost 1.
\[clm:appendix1\] Suppose that $f(o_1+o_2)\geq v'$. We can find a set $S$ using two passes and $O(\varepsilon^{-1}K)$ space such that $|S|=2$ and $$f(S)\geq \left(\frac{2}{3}-\varepsilon\right)v'.$$
We begin by reviewing the algorithm[^3] in [@APPROX17].
Let $E_{\rm R} \subseteq E$ be a subset of the ground set (and we call $E_{\rm R}$ *red items*). Let $X\subseteq E$ such that $v \leq f(X) \leq (1+ \varepsilon)v$. Assume that there exists $x\in X\cap E_{\rm R}$ such that $\underline{\tau} v \leq f(x) \leq \overline{\tau}v$. Then we can find a set $Y \subseteq E_{\rm R}$ of red items, in one pass and $O(n)$ time, with $|Y| = O(\log_{1+\varepsilon} \frac{\overline{\tau}}{ \underline{\tau}})$ such that some item $e^*$ in $Y$ satisfies $f(X - x + e^*) \geq (2/3- O(\varepsilon)) v$. \[thm:redItems\]
For each $t=1,2,\dots, K/2$, define $E_t= \{e\in E \mid t \leq c(e) \leq K-t\}$ as the red items. The critical observation is that $o_1 \in E_t$ for every $t\leq c(o_2)$; in particular, $o_1 \in E_{c(o_2)}$.
The above observation suggests the following implementation. In the first pass, for each set $E_t$, apply Theorem \[thm:redItems\] to collect a set $X_t \subseteq E_t$ (apparently we can set $\overline{\tau}= 2/3$ and $\underline{\tau}=1/3$). Since $|X_t|=O(\log_{1+\varepsilon}2)=O(\varepsilon^{-1})$, it takes $O(\varepsilon^{-1} K)$ space and $O(n)$ time in total. Then it follows from Theorem \[thm:redItems\] that, for each $t$ with $t\leq c(o_2)$, there exists $e^\ast\in X_t$ such that $f(o_2+e^\ast)\geq (2/3- O(\varepsilon)) v'$ and $c(e^\ast)\leq K - c(o_2)$. In the second pass, for each item $e$ in $E$, check whether there exists $e'$ in $X_{c(e)}$ such that $c(e+e')\leq K$ and $f(e + e') \geq (2/3-O(\varepsilon))v'$. It follows that there exists at least one pair of $e$ and $e'$ satisfying the condition. The second pass also takes $O(\varepsilon^{-1} K)$ space as we keep $X_t$’s. Since $|X_t|=O(\varepsilon^{-1})$, the second phase takes $O(\varepsilon^{-1}n)$ time.
Suppose that $v\leq f({\textup{\rm OPT}}) \leq (1+\varepsilon) v$. If $f(o_1+o_2)\geq 0.75v$, then we are done using Claim \[clm:appendix1\]. So assume otherwise, meaning that $f({\textup{\rm OPT} - o_1 -o_2}) \geq 0.25v$. Notice that we can also assume that $f({\textup{\rm OPT}}- o_1)\geq 0.5v$. Now consider two possibilities.
If $c_1 \geq 1- \sqrt{\varepsilon}$, then we can find a set $S$ in $O(\varepsilon^{-1})$ passes and $O(K)$ space such that $c(S)\leq K$ and $f(S)\geq (0.5-O(\varepsilon))v$.
Since $c_1 \geq 1- \sqrt{\varepsilon}$, we have $c({\textup{\rm OPT}}- o_1) \leq \sqrt{\varepsilon} {K}$. Consider the problem to approximate ${\textup{\rm OPT}}- o_1$. Then the largest item in ${\textup{\rm OPT}}- o_1$ is $c(o_2)$ which is at most $\sqrt{\varepsilon}K$. By Corollary \[cor:simpleratio\], can obtain a set $S$ satisfying that $$f(S)\geq 0.5 \left(1-e^{-\frac{1 - \sqrt{\varepsilon} }{\sqrt{\varepsilon}}} - O(\varepsilon)\right)v\geq (0.5 - O(\varepsilon))v,$$ where the last inequality follows because $e^{-\frac{1 - \sqrt{\varepsilon} }{\sqrt{\varepsilon}}} \leq \varepsilon$ when $\varepsilon \leq 1$.
If $c_1 < 1- \sqrt{\varepsilon}$, then we can find a set $S$ in $O(\varepsilon^{-1})$ passes and $O(K)$ space such that $c(S)\leq K$ and $f(S)\geq (0.5-O(\varepsilon))v$.
Consider the problem: $$\text{maximize\ \ }f(S) \quad \text{subject to \ } c(S)\leq \sqrt{\varepsilon} K,\quad S\subseteq E,$$ to approximate ${\textup{\rm OPT}}- o_1 - o_2$. Let ${\mathcal{I}}'$ be the corresponding instance. Since $f({\textup{\rm OPT}}- o_1 - o_2)\geq 0.25 v$ and $c({\textup{\rm OPT}}- o_1 - o_2)\leq \varepsilon K$, Corollary \[cor:simpleratio\] implies that can obtain a set $Y$ satisfying that $$f(Y)\geq 0.25 \left(1-e^{-\frac{\sqrt{\varepsilon} - \varepsilon}{\varepsilon}} - O(\varepsilon)\right)v\geq (0.25 - O(\varepsilon))v,$$ since the largest item in ${\textup{\rm OPT}}- o_1 -o_2$ has size at most $\varepsilon K$. After taking the set $Y$, we still have space for packing either $o_1$ or $o_2$, since $c(Y)\leq \sqrt{\varepsilon} K <K-c(o_1)$.
Define $g(\cdot) := f(\cdot \mid Y)$. If some element $e$ satisfies $c(Y) + c(e) \leq {K}$ and $f(Y+e) \geq 0.5v$, then we are done. Thus we may assume that no such element exists, implying that $f(Y+o_\ell)< 0.5v$ for $\ell =1,2$. Hence it holds that $$g({\textup{\rm OPT}}-o_1)\geq g({\textup{\rm OPT}})-g(o_1)\geq \left( f({\textup{\rm OPT}})-f(Y) \right) - \left(f(Y+o_1)-f(Y)\right)\geq 0.5 v.$$ This implies that $$g({\textup{\rm OPT} - o_1 -o_2}) \geq g({\textup{\rm OPT}}- o_1) - g(o_2) \geq 0.5 v - (f(Y+o_2)-f(Y)) \geq f(Y)\geq (0.25 - O(\varepsilon))v.
$$
Consider the problem: $$\text{maximize\ \ }g(S) \quad \text{subject to \ } c(S)\leq K - c(Y),\quad S\subseteq E,$$ to approximate ${\textup{\rm OPT}}- o_1 - o_2$. Denote by ${\mathcal{I}}''$ the corresponding instance. Since $K-c(Y)\geq (1-\sqrt{\varepsilon})K$ and $g({\textup{\rm OPT} - o_1 -o_2})\geq (0.25 - O(\varepsilon))v$, Corollary \[cor:simpleratio\] implies that can obtain a set $S$ satisfying that $$g(S)\geq (0.25- O(\varepsilon)) \left(1-e^{-\frac{1-\sqrt{\varepsilon} - \varepsilon}{\varepsilon}} - O(\varepsilon)\right)v\geq
(0.25 - O(\varepsilon))v.$$ Therefore, $Y\cup S$ satisfies that $c(Y\cup S)\leq K$ and $$f(Y\cup S) = f(Y) + g(S)\geq (0.5 - O(\varepsilon))v.$$
For a given $v$, the above can be done in $O(\varepsilon^{-1}{K})$ space using $O(\varepsilon^{-1})$ passes. The total running time is $O(n \varepsilon^{-1})$. This completes the proof of Lemma \[lem:Assumption1\].
[^1]: Supported by JST ERATO Grant Number JPMJER1201, Japan, and by JSPS KAKENHI Grant Number JP17K00028.
[^2]: In [@Badanidiyuru:2013jc], a $(1-e^{-1}-\varepsilon)$-approximation algorithm of running time $O(n^2 (\varepsilon^{-1} \log \frac{n}{\varepsilon})^{\varepsilon^{-8}})$ was claimed. However, this algorithm seems to require some assumption on the curvature of the submodular function. See [@Ene2017; @yoshida_2016] for details on this issue.
[^3]: This theorem is essentially a rephrasing of Theorem \[thm:APPROX\].
|
---
abstract: 'We examine Bourbaki’s function, an easily-constructed continuous but nowhere-differentiable function, and explore properties including functional identities, the antiderivative, and the box and Hausdorff dimensions of the graph.'
address: 'Division of Mathematics and Computer Science, University of Maine at Farmington, Farmington, ME 04938'
author:
- James McCollum
date: September 2010
title: 'Properties of Bourbaki’s Function'
---
Introduction
============
While Bernard Bolzano [@Jarnik81] introduced one of the earliest examples of a continuous, nowhere-differentiable function, his example is only one of countless similar functions, many of which are defined in less complex ways. One such function is found in Nicolas Bourbaki’s *Elements of Mathematics—Functions of a Real Variable* [@Bourbaki04]: This function, which we call Bourbaki’s Function, is defined by a few inductive rules, and its simple self-similar structure allows for abundant and relatively easy analysis.
Okamoto [@Okamoto05] defines Bourbaki’s Function $f_i$ for any iteration $i\geq0$ over $[0,1]$ as follows: $f_0(x)=x$ for all $x \in [0,1]$, every $f_i$ is continuous on $[0,1]$, every $f_i$ is affine in each subinterval $[k/3^i,(k+1)/3^i]$ where $k\in\{0,1,2,\ldots,3^i-1\}$, and
$$\begin{aligned}
&f_{i+1}\left(\frac{k}{3^i}\right)=f_i\left(\frac{k}{3^i}\right),\\
&f_{i+1}\left(\frac{3k+1}{3^{i+1}}\right)=f_i\left(\frac{k}{3^i}\right)+\frac{2}{3}\left[f_i\left(\frac{k+1}{3^i}\right)-f_i\left(\frac{k}{3^i}\right)\right],\\
&f_{i+1}\left(\frac{3k+2}{3^{i+1}}\right)=f_i\left(\frac{k}{3^i}\right)+\frac{1}{3}\left[f_i\left(\frac{k+1}{3^i}\right)-f_i\left(\frac{k}{3^i}\right)\right],\\
&f_{i+1}\left(\frac{k+1}{3^i}\right)=f_i\left(\frac{k+1}{3^i}\right).\end{aligned}$$
Figs. 1 and 2 illustrate the construction of the graph of $f$. Okamoto [@Okamoto05] has shown, using the above equations, that the function $$f(x)=\displaystyle\lim_{i\to\infty}f_i(x)$$ is continuous and nowhere differentiable. We can observe from these equations that $f_i(x)=f(x)$ for any $x\in[0,1]$ that can be expressed as some integral multiple of $1/3^i$. The values between each of these inputs, however, may change with new iterations.
![Graphs of $f_0$ and $f_1$. Note the steps taken in constructing $f_1$ from $f_0$.](BourbakiIterations0and1){width=".75\textwidth"}
![Graph of $f_2$ and an approximate graph of $f$. Again, note the steps taken in constructing $f_2$ from $f_1$.](BourbakiIteration2andApproximation){width=".75\textwidth"}
Formulas (1)–(4) can evaluate $f(x)$ easily for integral multiples of $1/3^i$: Consider the ternary expansion $x=0.x_1x_2\ldots x_i$, where $x_1,x_2,\ldots,x_i \in \{0,1,2\}$. (If $x=1$, we say equivalently that $x=0.222\ldots$) The value of $i$ coincides with the first iteration $i$ for which $f_i(x)=f(x)$. We can rewrite the ternary expansion as $$x=\sum_{j=1}^i\frac{x_j}{3^j}.$$ Since the sum always will have a denominator of $1/3^i$, we can evaluate $f(x)$ using an $f_i$ that covers intervals of length at least $1/3^i$—which, by definition, is $f_h$ for $h\geq i$. To evaluate $f(x)$, we start with $k=0$ and $i=1$, and we apply formula (1), (2), or (3) depending on the value of each $x_j$: If $x_j=0$, we use (1); if $x_j=1$, we use (2); and if $x_j=2$, we use (3). In every case, we must keep track of the values $k/3^i$, $(3k+1)/3^{i+1}$, $(3k+2)/3^{i+1}$, $(3k+1)/3^{i}$, and $(3k+2)/3^{i}$.
To evaluate $f(x)$ for other $x$ values—when $x$ is an irrational number or any rational number in $[0,1]$ whose denominator is not a power of three—we must consider a non-terminating ternary expansion: $$x=\sum_{j=1}^\infty\frac{x_j}{3^j}.$$ In other words, to evaluate $f(x)$ for such an $x$, we would have to apply formulas (1), (2), and (3) indefinitely. While we certainly could obtain a fair approximation with enough iterations, keeping track of certain values—in particular, $f_i((k+1)/3^i)-f_i(k/3^i)$—would grow more difficult with each iteration.
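To make this bookkeeping concrete, the following Python sketch (ours, purely illustrative) reads off the ternary digits of $x$ and updates the values of the current iterate at the endpoints of the subinterval containing $x$, exactly as prescribed by formulas (1)–(4). For $x=k/3^i$ it returns $f(x)$ exactly after $i$ digits; for other $x$ the two endpoint values squeeze down to $f(x)$:

```python
from fractions import Fraction

def bourbaki(x, depth=48):
    """Track the endpoint values p, q of the subinterval containing x through
    `depth` levels of the construction (formulas (1)-(4))."""
    x = Fraction(x)
    p, q = Fraction(0), Fraction(1)
    for _ in range(depth):
        if x == 0:
            return p                       # left endpoint reached: value is exact
        x *= 3
        digit, x = int(x), x - int(x)      # next ternary digit of x
        mid1 = p + Fraction(2, 3) * (q - p)   # value at the 1/3 point of the interval
        mid2 = p + Fraction(1, 3) * (q - p)   # value at the 2/3 point of the interval
        if digit == 0:
            q = mid1
        elif digit == 1:
            p, q = mid1, mid2
        else:
            p, q = mid2, q
    return p                               # converges to f(x) as depth grows

print(bourbaki(Fraction(1, 3)), bourbaki(Fraction(2, 3)), float(bourbaki(Fraction(1, 2))))
# 2/3 1/3 0.5   (the last value agrees with Theorem 1 below, which gives f(1/2) = 1/2)
```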
In this paper, we will use the self-similarity of $f$ to find “shortcuts” to evaluating $f(x)$ for such values of $x$, and we will examine how this self-similarity concerns the fractal nature of the graph of $f$, the antiderivative $F$ of $f$, and the properties of the graph of $F$. More specifically, in Section 2, we will prove that the graph of $f$ possesses rotational symmetry about the point $(1/2,1/2)$—in other words, $$f(1-x)=1-f(x) \textrm{\ for all\ }x\in[0,1].$$ In the same section, we will use the function’s self-similar properties to infer three basic identities that evaluate $f(x/3^i)$, $f([2-x]/3^i)$, and $f([2+x]/3^i)$ in terms of $f(x)$. In Section 3, we will use these general identities to evaluate $f(x)$ for specific sets of numbers that have some form other than $x=k/3^i$. In Section 4, we will use the rotational symmetry of the graph of $f$ to prove that $$\int_x^{1-x}f(t)\,dt=1/2-x.$$ In the same section, we will derive three other identities for the area under the graph of $f$, and we will use these identities iteratively to construct a graph of $F$. In Section 5, we will do for $F$ what we did for $f$ in Section 3. Finally, in Section 6, we will show that the graph of $f$ has box-counting and Hausdorff dimensions equal to $\log_3 5$, and we will show that the arc length of the graph of $F$ is bounded below by $\displaystyle\frac{\sqrt{5}}{2}$ and above by $\displaystyle\frac{3}{2}$.
Functional Identities
=====================
We observe from the graphs that each $f_i$ possesses rotational symmetry about its center, which implies a useful identity for $f$:
For all $x \in [0,1]$, $f(1-x)=1-f(x)$.
By definition, $f_0(x)=x$ for all $x \in [0,1]$, so clearly this is true for $f_0$. To prove this for $f$ itself, however, we will take advantage of the function’s inductive nature and consider only the points where $x$ can be expressed as a rational number in $[0,1]$ whose denominator takes the form $3^i$.
We consider $i=1$ for a base case. We must show, then, that $f_1(1-x)=1-f_1(x)$ for $x \in \{0,1/3,2/3,1\}$. Evaluating the function at these points, we get
$$\begin{aligned}
f_1(1-0)&=f_1(1)=1=1-0=1-f_1(0)\\
f_1\left(1-\tfrac{1}{3}\right)&=f_1\left(\tfrac{2}{3}\right)=\tfrac{1}{3}=1-\tfrac{2}{3}=1-f_1\left(\tfrac{1}{3}\right)\\
f_1\left(1-\tfrac{2}{3}\right)&=f_1\left(\tfrac{1}{3}\right)=\tfrac{2}{3}=1-\tfrac{1}{3}=1-f_1\left(\tfrac{2}{3}\right)\\
f_1(1-1)&=f_1(0)=0=1-1=1-f_1(1)\end{aligned}$$
So for $i=1$, $f_i(1-x)=1-f_i(x)$ for all $x \in \{0,1/3^i,2/3^i,\ldots,1\}$.
From here we can make our inductive hypothesis: For some $j\geq1$, $f_j(1-x)=1-f_j(x)$, where $x \in \{0,1/3^j,2/3^j,\ldots,1\}$. Now, we can deduce from (1) and (4) that if $f_j(1-x)=1-f_j(x)$ and $f_{j+1}(x)=f_j(x)$ for $x=k/3^j$, then $f_{j+1}(1-x)=1-f_{j+1}(x)$, since both $x,(1-x)\in\{0,1/3^j,2/3^j,\ldots,1\}$. To apply (2) and (3), we first let $k'=3^j-1-k$. This means that $k' \in \{0,1,\ldots,3^j-1\}$ and that $1-k/3^j=(k'+1)/3^j$. Then
$$\begin{aligned}
f_{j+1}\left(1-\frac{3k+1}{3^{j+1}}\right) &=f_{j+1}\left(\frac{3^{j+1}-3k-1}{3^{j+1}}\right)\\
&=f_{j+1}\left(\frac{3(3^j-1-k)+2}{3^{j+1}}\right)\\
&=f_{j+1}\left(\frac{3k'+2}{3^{j+1}}\right)\\
&=f_j\left(\frac{k'}{3^j}\right)+\frac{1}{3}\left[f_j\left(\frac{k'+1}{3^j}\right)-f_j\left(\frac{k'}{3^j}\right)\right].\end{aligned}$$
Using our inductive hypothesis, we make appropriate substitutions for these iteration-$j$ functions to get
$$\begin{aligned}
f_{j+1}\left(1-\frac{3k+1}{3^{j+1}}\right)&=1-f_j\left(\frac{k+1}{3^j}\right)+\frac{1}{3}\left[\left(1-f_j\left(\frac{k}{3^j}\right)\right)\right.\\
&\qquad\left.-\left(1-f_j\left(\frac{k+1}{3^j}\right)\right)\right]\\
&=1-f_j\left(\frac{k+1}{3^j}\right)+\frac{1}{3}f_j\left(\frac{k+1}{3^j}\right)-\frac{1}{3}f_j\left(\frac{k}{3^j}\right)\\
&=1-\frac{1}{3}f_j\left(\frac{k}{3^j}\right)-\frac{2}{3}f_j\left(\frac{k+1}{3^j}\right)\\
&=1-\left(f_j\left(\frac{k}{3^j}\right)+\frac{2}{3}\left[f_j\left(\frac{k+1}{3^j}\right)-f_j\left(\frac{k}{3^j}\right)\right]\right)\\
&=1-f_{j+1}\left(\frac{3k+1}{3^{j+1}}\right).\end{aligned}$$
Working the same substitutions through for $f_{j+1}(1-(3k+2)/3^{j+1})$ will give us $1-f_{j+1}((3k+2)/3^{j+1})$.
Therefore, by induction, $f_i(1-x)=1-f_i(x)$ for all $i\geq1$ and all $x\in\{0,1/3^i,2/3^i,\ldots,1\}$. As $i$ approaches infinity, the interval $1/3^i$ between each $x\in\{0,1/3^i,2/3^i,\ldots,1\}$ approaches zero, and since this set is dense in $[0,1]$, the limit $f(x)$ satisfies $f(1-x)=1-f(x)$ for all $x \in [0,1]$.
This identity proves helpful in evaluating $f(x)$ for values of $x$ whose ternary expansions do not terminate—used in conjunction with the three propositions of this section, this identity makes it possible to evaluate $f(x)$ for values of the form $1/(3^i+1)$, for instance. We also can use it to show that $f(1/2)=1/2$:
$$\begin{aligned}
f\left(\tfrac{1}{2}\right)&=1-f\left(\tfrac{1}{2}\right)\\
2f\left(\tfrac{1}{2}\right)&=1\\
f\left(\tfrac{1}{2}\right)&=\tfrac{1}{2}\end{aligned}$$
Katsuura [@Katsuura91] defines the contraction mappings $w_n: X \to X$, where $n \in \{1,2,3\}$ and $X=[0,1] \times [0,1]$, as follows: For all $(x,y)\in X$,
$$\begin{aligned}
w_1(x,y)&=\left(\frac{x}{3},\frac{2y}{3}\right)\\
w_2(x,y)&=\left(\frac{2-x}{3},\frac{1+y}{3}\right)\\
w_3(x,y)&=\left(\frac{2+x}{3},\frac{1+2y}{3}\right)\end{aligned}$$
Separately applying mappings (5), (6), and (7) to the line $y=x$ (the graph of $f_0$) produces the graph of $f_1$; applying the same mappings to the graph of $f_1$ gives the graph of $f_2$; and so on. More generally, if $\Gamma_i$ is the graph of $f_i$, then $$\Gamma_{i+1}=w_1(\Gamma_i)\cup w_2(\Gamma_i)\cup w_3(\Gamma_i).$$ Since $f=\displaystyle\lim_{i\to\infty}f_i$, we can say that $\Gamma=\displaystyle\lim_{i\to\infty}\Gamma_{i+1}=\displaystyle\lim_{i\to\infty}w_1(\Gamma_i)\cup w_2(\Gamma_i)\cup w_3(\Gamma_i)$. So $\Gamma=w_1(\Gamma)\cup w_2(\Gamma)\cup w_3(\Gamma)$ is the unique invariant set for the iterated function system (IFS) given by $w_1$, $w_2$, and $w_3$ (see [@Katsuura91]). Since $w_1(\Gamma_i)=\Gamma_{i+1}$ on $[0,1/3]$, $w_2(\Gamma_i)=\Gamma_{i+1}$ on $[1/3,2/3]$, and $w_3(\Gamma_i)=\Gamma_{i+1}$ on $[2/3,1]$, we are able to prove three more identities:
For all $x \in [0,1]$ and $i\geq 0$, $\displaystyle f\left(\left(\frac{1}{3}\right)^ix\right)=\left(\frac{2}{3}\right)^if(x)$.
Let $x\in[0,1]$. Then $(x, f(x))\in\Gamma$. If $i=0$, our result is obvious. If $i>0$, then $$\underbrace{w_1\circ w_1\circ \dots \circ w_1}_{i-1}\circ w_1(x,f(x))=\left(\left(\frac{1}{3}\right)^ix,\left(\frac{2}{3}\right)^if(x)\right)$$ from the definition of $w_1$. And since $w_1^i(\Gamma)\subseteq\Gamma$, where $w_1^i(\Gamma)$ denotes $i$ applications of $w_1$ on $\Gamma$, we have $\displaystyle f\left(\left(\frac{1}{3}\right)^ix\right)=\left(\frac{2}{3}\right)^if(x)$.
For all $x \in [0,1]$ and $i>0$, $$f\left(\frac{2-x}{3^i}\right)=\frac{2^{i-1}}{3^i}[1+f(x)].$$
Let $x\in[0,1]$. Then $(x, f(x))\in\Gamma$. If $i>0$, then
$$\begin{aligned}
\underbrace{w_1\circ w_1\circ \dots \circ w_1}_{i-1}\circ\, w_2(x,y)&=\left(\frac{1}{3^{i-1}}\cdot\frac{2-x}{3},\left(\frac{2}{3}\right)^{i-1}\cdot\frac{1+y}{3}\right)\\
&=\left(\frac{2-x}{3^i},\frac{2^{i-1}}{3^i}\left[1+y\right]\right)\end{aligned}$$
from the definition of $w_1$. And since $w_1(\Gamma)\subseteq\Gamma$ and $w_2^i(\Gamma)\subseteq\Gamma$, we have $\displaystyle f\left(\frac{2-x}{3^i}\right)=\frac{2^{i-1}}{3^i}[1+f(x)]$.
For all $x \in [0,1]$ and $i>0$, $$f\left(\frac{2+x}{3^i}\right)=\left(\frac{2}{3}\right)^if(x)+\frac{2^{i-1}}{3^i}.$$
This can be proven in the same manner as Proposition 2 if we replace the $w_2$ with $w_3$.
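As an aside, the mappings $w_1$, $w_2$, $w_3$ also give a direct way to generate the polygonal graphs $\Gamma_i$ numerically. The short sketch below (ours, purely illustrative) applies one IFS step to a vertex list; note that $w_2$ reverses the orientation of the graph, so its image is flipped before concatenation:

```python
import numpy as np

def ifs_step(pts):
    """Apply w1, w2, w3 to the vertices of Gamma_i and return those of Gamma_{i+1}."""
    x, y = pts[:, 0], pts[:, 1]
    g1 = np.column_stack((x / 3, 2 * y / 3))              # w1: left third
    g2 = np.column_stack(((2 - x) / 3, (1 + y) / 3))      # w2: middle third (order reversed)
    g3 = np.column_stack(((2 + x) / 3, (1 + 2 * y) / 3))  # w3: right third
    return np.vstack((g1, g2[::-1], g3))

pts = np.array([[0.0, 0.0], [1.0, 1.0]])                  # graph of f_0
for _ in range(6):                                         # graph of f_6
    pts = ifs_step(pts)
i = np.argmin(np.abs(pts[:, 0] - 1 / 3))
print(len(pts), pts[i])    # 1458 vertices; the vertex at x = 1/3 has y = 2/3, as expected
```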
Function Values
===============
As $f$ has no explicit formula, we must take advantage of its self-similar structure to evaluate $f(x)$ for nearly all values of $x$ (see Fig. 3). The four identities we have proven already will help.
![As $f$ has no explicit formula, the self-similarity of its graph is key in determining its values at different points.](BourbakiValues){width="100.00000%"}
For all $j>i>0$,
1. $\displaystyle f\left(\frac{1}{3^i+1}\right)=\frac{2^i}{3^i+2^i}$,\
2. $\displaystyle f\left(\frac{1}{3^i-1}\right)=\frac{2^i}{3^i+2^{i-1}}$,\
3. $\displaystyle f\left(\frac{2}{3^i+1}\right)=\frac{2^{i-1}}{3^i-2^{i-1}}$,\
4. $\displaystyle f\left(\frac{2}{3^i-1}\right)=\frac{2^{i-1}}{3^i-2^i}$,\
5. $\displaystyle f\left(\frac{1}{3^j+3^i}\right)=\left(\frac{2}{3}\right)^i\left(\frac{2^{j-i}}{3^{j-i}+2^{j-i}}\right)$, and\
6. $\displaystyle f\left(\frac{1}{3^j-3^i}\right)=\left(\frac{2}{3}\right)^i\left(\frac{2^{j-i}}{3^{j-i}+2^{j-i-1}}\right)$.
Let $j>i>0$.\
(i) Clearly $$1-\frac{1}{3^i+1}=\frac{3^i}{3^i+1}$$ and using Theorem 1 and Proposition 1, it follows that
$$\begin{aligned}
f\left(\frac{1}{3^i+1}\right)&=f\left(\frac{1}{3^i}\cdot\frac{3^i}{3^i+1}\right)\\
&=f\left(\frac{1}{3^i}\left[1-\frac{1}{3^i+1}\right]\right)\\
&=\left(\frac{2}{3}\right)^if\left(1-\frac{1}{3^i+1}\right)\\
&=\left(\frac{2}{3}\right)^i\left[1-f\left(\frac{1}{3^i+1}\right)\right]\\
&=\left(\frac{2}{3}\right)^i-\left(\frac{2}{3}\right)^if\left(\frac{1}{3^i+1}\right).\end{aligned}$$
So we have $$\left[1+\left(\frac{2}{3}\right)^i\right]f\left(\frac{1}{3^i+1}\right)=\left(\frac{2}{3}\right)^i$$ and thus, $$f\left(\frac{1}{3^i+1}\right)=\frac{2^i}{3^i+2^i},\textrm{\ for\ }i>0.\\$$
\(ii) Likewise, we know that for $i>0$, $$\frac{1}{3^i-1}=1-\frac{3^i-2}{3^i-1}$$ and $$\frac{2-(3^i-2)/(3^i-1)}{3^i}=\frac{1}{3^i-1}.$$ The next steps are almost identical to those from the previous proof, so we will omit them. Making appropriate substitutions, applying the function to both sides, and using Theorem 1 and Proposition 2 gives us $$f\left(\frac{1}{3^i-1}\right)=\frac{2^i}{3^i+2^{i-1}},\textrm{\ for\ }i>0.\\$$
\(iii) For $i>0$, we have
$$\frac{2-(2/(3^i+1))}{3^i}=\frac{2}{3^i+1}.$$
Applying the function to both sides and using Proposition 2 gives us $$f\left(\frac{2}{3^i+1}\right)=\frac{2^{i-1}}{3^i-2^{i-1}},\textrm{\ for\ }i>0.\\$$
\(iv) Similarly, $$\frac{2+(2/(3^i-1))}{3^i}=\frac{2}{3^i-1},$$ and by applying $f$ to both sides and using Proposition 3, we get $$f\left(\frac{2}{3^i-1}\right)=\frac{2^{i-1}}{3^i-2^i},\textrm{\ for\ }i>0.\\$$
\(v) Now, for $j>i>0$, we know that $$\frac{1}{3^j+3^i}=\left(\frac{1}{3^i}\right)\left(\frac{1}{3^{j-i}+1}\right)$$ and since $j>i>0$ implies that $j-i>0$, we can apply Proposition 1 and the identity $f(1/(3^i+1))=2^i/(3^i+2^i)$ (which we proved in part (i) of this theorem) to obtain
$$\begin{aligned}
f\left(\frac{1}{3^j+3^i}\right)&=f\left(\left(\frac{1}{3^i}\right)\left(\frac{1}{3^{j-i}+1}\right)\right)\\
&=\left(\frac{2}{3}\right)^if\left(\frac{1}{3^{j-i}+1}\right)\\
&=\left(\frac{2}{3}\right)^i\left(\frac{2^{j-i}}{3^{j-i}+2^{j-i}}\right),\textrm{\ for\ }j>i>0.\end{aligned}$$
\(vi) Given $j>i>0$, we also know that $$\frac{1}{3^j-3^i}=\left(\frac{1}{3^i}\right)\left(\frac{1}{3^{j-i}-1}\right)$$ and by applying Proposition 1 and the identity $f(1/(3^i-1))=2^i/(3^i+2^{i-1})$, which we proved in part (ii) of this theorem, we get
$$\begin{aligned}
f\left(\frac{1}{3^j-3^i}\right)&=f\left(\left(\frac{1}{3^i}\right)\left(\frac{1}{3^{j-i}-1}\right)\right)\\
&=\left(\frac{2}{3}\right)^if\left(\frac{1}{3^{j-i}-1}\right)\\
&=\left(\frac{2}{3}\right)^i\left(\frac{2^{j-i}}{3^{j-i}+2^{j-i-1}}\right),\textrm{\ for\ }j>i>0.\end{aligned}$$
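These closed forms can be checked numerically. The following self-contained snippet (ours, illustrative, using the same digit-tracking idea as the sketch in Section 1) confirms the cases $i=1$ of (i) and $i=2$ of (ii) and (iii):

```python
from math import isclose

def f(x, depth=60):
    p, q = 0.0, 1.0                        # values at the endpoints of the current subinterval
    for _ in range(depth):
        x *= 3
        d, x = int(x), x - int(x)          # next ternary digit of x
        a, b = p + 2 * (q - p) / 3, p + (q - p) / 3
        p, q = (p, a) if d == 0 else ((a, b) if d == 1 else (b, q))
    return p

# (i) i=1: f(1/4) = 2/5;  (ii) i=2: f(1/8) = 4/11;  (iii) i=2: f(2/10) = 2/7
print(isclose(f(1/4), 2/5, abs_tol=1e-9),
      isclose(f(1/8), 4/11, abs_tol=1e-9),
      isclose(f(1/5), 2/7, abs_tol=1e-9))   # True True True
```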
Integral Identities
===================
In this section, we will study the antiderivative $\displaystyle F(x)=\int_0^xf(t)\,dt$. Our goal is to find inductive formulas describing $F(x)$. To do this, we first need to prove a result for $F$ analogous to Theorem 1 for $f$. Then we will use this result, in conjunction with Propositions 1–3, to prove analogous identities for the integral.
Since the integral $\displaystyle\int_0^xf(t)\,dt$ measures the area under a self-similar curve, we might expect it to exhibit a degree of self-similarity itself. It turns out that this is the case: we can derive four identities for the integral from our identities for $f$—one of which corresponds to Theorem 1, and three which correspond to Propositions 1–3 and serve as iterative formulas for $F$.
For all $x\in[0,1]$, $\displaystyle\int_x^{1-x}f(t)\,dt=1/2-x$.
By Theorem 1, we know that for any $t\in[0,1]$, $f(1-t)=1-f(t)$. So clearly
$$\begin{aligned}
\int_a^bf(1-t)\,dt&=\int_a^b\left[1-f(t)\right]dt\\
&=[b-a]-\int_a^bf(t)\,dt\end{aligned}$$
for all $a,b\in[0,1]$. So if we let $a=x$ and $b=1-x$, where $x\in[0,1]$, we get
$$\begin{aligned}
\int_x^{1-x}f(1-t)\,dt&=[(1-x)-x]-\int_x^{1-x}f(t)\,dt\\
&=[1-2x]-\int_x^{1-x}f(t)\,dt.\end{aligned}$$
By $u$-substitution on $\int_x^{1-x}f(1-t)\,dt$, we have
$$\begin{aligned}
\int_x^{1-x}f(1-t)\,dt&=-\int_{1-(x)}^{1-(1-x)}f(t)\,dt\\
&=-\int_{1-x}^{x}f(t)\,dt\\
&=\int_x^{1-x}f(t)\,dt.\end{aligned}$$
Then by substitution, $$\int_x^{1-x}\!\!\!\!\!\!f(t)\,dt=[1-2x]-\int_x^{1-x}\!\!\!\!\!\!f(t)\,dt.$$ So $$2\int_x^{1-x}\!\!\!\!\!\!f(t)\,dt=1-2x$$ and thus, $$\int_x^{1-x}\!\!\!\!\!\!f(t)\,dt=\frac{1}{2}-x, \textrm{\ for all\ }x\in[0,1].\qedhere$$
This theorem is illustrated in Fig. 4. One notable result immediately follows:
The area under the graph of Bourbaki’s function is $$A=\int_0^1f(t)\,dt=\frac{1}{2}.$$
![The symmetry of the curve generated by $f$ applies to the area under it, as well; over any region $[x,1-x]$ for $x\in[0,1]$, the area under the curve is equal to the area under the line $y=1/2$.](BourbakiTheorem3){width=".5\textwidth"}
For all $x\in[0,1]$ and $i\geq0$, $\displaystyle\int_0^{x/3^i}\!\!\!\!\!\!f(t)\,dt=\left(\frac{2}{9}\right)^i\int_0^xf(t)\,dt$.
By Proposition 1, for all $t,x\in[0,1]$ and $i\geq0$,
$$\begin{aligned}
\int_0^xf\left(\frac{t}{3^i}\right)dt&=\int_0^x\left(\frac{2}{3}\right)^if(t)\,dt\\
&=\left(\frac{2}{3}\right)^i\int_0^xf(t)\,dt.\end{aligned}$$
Now
$$\begin{aligned}
\int_0^xf\left(\frac{t}{3^i}\right)dt&=3^i\int_0^{x/3^i}f(t)\,dt\\
&=\left(\frac{2}{3}\right)^i\int_0^xf(t)\,dt\end{aligned}$$
and thus, $$\int_0^{x/3^i}\!\!\!\!\!\!f(t)\,dt=\left(\frac{2}{9}\right)^i\int_0^xf(t)\,dt.\qedhere$$
This result is illustrated in Fig. 5.
![Proposition 4 for $i=1$. The area under the graph over $[0,1]$ and the area over $[0,1/3]$ are in the proportion 1:2/9. This is exactly the proportion of the area in the two boxes pictured.](BourbakiProposition4){width="100.00000%"}
For all $x\in[0,1]$ and $i>0$, $$\int_{(2-x)/3^i}^{2/3^i}\!\!\!\!\!\!\!\!\!\!\!\!f(t)\,dt=\frac{2^{i-1}}{9^i}\left[x+\int_0^xf(t)\,dt\right].$$
We know by Proposition 2 that for all $t\in[0,1]$ and $i>0$, $f([2-t]/3^i)=(2^{i-1}/3^i)[1+f(t)]$. So clearly
$$\begin{aligned}
\int_0^xf\left(\frac{2-t}{3^i}\right)dt&=\int_0^x\frac{2^{i-1}}{3^i}\left[1+f(t)\right]dt\\
&=\frac{2^{i-1}}{3^i}\int_0^x\left[1+f(t)\right]dt\\
&=\frac{2^{i-1}}{3^i}\left[x+\int_0^xf(t)\,dt\right]\end{aligned}$$
for all $x\in[0,1]$. By $u$-substitution on $\int_0^xf([2-t]/3^i)\,dt$, we have
$$\begin{aligned}
\int_0^xf\left(\frac{2-t}{3^i}\right)dt&=-3^i\int_{2/3^i}^{(2-x)/3^i}f(t)\,dt\\
&=3^i\int_{(2-x)/3^i}^{2/3^i}f(t)\,dt.\end{aligned}$$
So $$3^i\int_{(2-x)/3^i}^{2/3^i}\!\!\!\!\!\!\!\!\!\!\!\!f(t)\,dt=\frac{2^{i-1}}{3^i}\left[x+\int_0^xf(t)\,dt\right]$$ and thus, $$\int_{(2-x)/3^i}^{2/3^i}\!\!\!\!\!\!\!\!\!\!\!\!f(t)\,dt=\frac{2^{i-1}}{9^i}\left[x+\int_0^xf(t)\,dt\right], \textrm{\ for all\ }x\in[0,1] \textrm{\ and\ }i\geq0.\qedhere$$
For all $x\in[0,1]$ and $i>0$, $$\int_{(2+x)/3^i}^{2/3^i}\!\!\!\!\!\!\!\!\!\!\!\!f(t)\,dt=\left(\frac{2^{i-1}}{9^i}\right)x+\left(\frac{2}{9}\right)^i\int_0^xf(t)\,dt.$$
This can be proven in the same manner as Proposition 5 if we apply Proposition 3 instead of Proposition 2.
Using Propositions 4–6, we can construct a simple inductive formula for the antiderivative of $f$.
The antiderivative $F$ of $f$ can be expressed as $F(x)=\displaystyle\lim_{i\to\infty}F_i(x)$, where $F_i$ is defined at any iteration $i\geq0$ as follows: $F_0(x)=x/2$ for all $x\in[0,1]$, every $F_i$ is continuous on $[0,1]$, every $F_i$ is affine on each subinterval $[k/3^i,(k+1)/3^i]$ where $k\in\{0,1,2,\dots,3^i-1\}$, and
$$\begin{aligned}
F_{i+1}\left(\frac{x}{3}\right)&=\frac{2}{9}\,F_i(x),\\
F_{i+1}\left(\frac{1+x}{3}\right)&=\frac{1}{9}\left[1+2x-F_i(x)\right],\\
F_{i+1}\left(\frac{2+x}{3}\right)&=\frac{1}{9}\left(\frac{5}{2}+x\right)+\frac{2}{9}\,F_i(x),\end{aligned}$$
for all $x\in[0,1]$.
Given the domain of $f$, we will let the antiderivative $F(x)=\displaystyle\int_0^xf(t)\,dt$. Using this notation, Proposition 4 can be expressed as $\displaystyle F\left(\frac{x}{3^i}\right)=\left(\frac{2}{9}\right)^iF(x)$. We also can rewrite Propositions 5 and 6 accordingly, but we must adjust them so that their integrals have a lower bound of 0. We can do this easily for Proposition 5 using a few substitutions:
$$\begin{aligned}
F\left(\frac{1+x}{3^i}\right)&=\int_0^{1/3^i}f(t)\,dt+\int_{1/3^i}^{(1+x)/3^i}f(t)\,dt\\
&=\int_0^{1/3^i}f(t)\,dt+\int_{1/3^i}^{(2-[1-x])/3^i}f(t)\,dt\\
&=\int_0^{1/3^i}f(t)\,dt+\int_{1/3^i}^{2/3^i}f(t)\,dt-\int_{(2-[1-x])/3^i}^{2/3^i}f(t)\,dt\\
&=\left(\frac{2}{9}\right)^i\left(\frac{1}{2}\right)+\frac{2^{i-1}}{9^i}\left(\frac{3}{2}\right)-\frac{2^{i-1}}{9^i}\left[(1-x)+F(1-x)\right]\\
&=\frac{2^{i-1}}{9^i}\left[1+\frac{3}{2}-(1-x)-\left(\frac{1}{2}-x+F(x)\right)\right]\\
&=\frac{2^{i-1}}{9^i}\left[1+2x-F(x)\right],\end{aligned}$$
where the penultimate step uses Theorem 3 in the form $F(1-x)=F(x)+\tfrac{1}{2}-x$.
Working out Proposition 6 is even simpler:
$$\begin{aligned}
F\left(\frac{2+x}{3^i}\right)&=\int_0^{(2+x)/3^i}f(t)\,dt\\
&=\int_0^{1/3^i}f(t)\,dt+\int_{1/3^i}^{2/3^i}f(t)\,dt+\int_{2/3^i}^{(2+x)/3^i}f(t)\,dt\\
&=\left(\frac{2}{9}\right)^i\left(\frac{1}{2}\right)+\frac{2^{i-1}}{9^i}\left(\frac{3}{2}\right)+\frac{2^{i-1}}{9^i}x+\left(\frac{2}{9}\right)^iF(x)\\
&=\frac{2^{i-1}}{9^i}\left(\frac{5}{2}+x\right)+\left(\frac{2}{9}\right)^iF(x).\end{aligned}$$
For $i=1$, the expressions in Propositions 4–6 describe the area under the graph of $f$ over each third of $[0,1]$ in terms of the area over $[0,1]$. For $i=2$, Propositions 4–6 can be applied over one another to describe the area under the graph over each third of each third of $[0,1]$ in terms of the areas for $i=1$, and so on. Using our three rewritten propositions, we can approximate the graph of $F$ with continuous, affine iterations. We start with the area over $[0,1]$: We know that $F(0)=0$, and from the corollary to Theorem 3, we have $F(1)=1/2$, so our first iteration must be the graph of $F_0(x)=x/2$. By applying Propositions 4–6 from here, we can evaluate $F_i(k/3^i)$ for any $k\in\{0,1,2,\ldots,3^i-1\}$. Finally, because the set of all $k/3^i$ is dense in $[0,1]$ as $i$ goes to infinity, we have $\displaystyle F(x)=\lim_{i\to\infty}F_i(x)$ for all $x\in[0,1]$.
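To illustrate the construction, the following Python sketch (ours, based on the rewritten Propositions 4–6 with $i=1$) refines the grid values of $F_i$ a few times and recovers the exact values $F(1)=1/2$, $F(1/3)=1/9$, and $F(2/3)=5/18$:

```python
from fractions import Fraction

def refine(vals):
    """Given F_i on the grid k/3**i (list of Fractions), return F_{i+1} on the
    grid k/3**(i+1), using the three identities with i = 1."""
    n = len(vals) - 1                                   # n = 3**i
    out = []
    for j in range(3 * n + 1):
        if j <= n:                                      # first third
            out.append(Fraction(2, 9) * vals[j])
        elif j <= 2 * n:                                # middle third
            x = Fraction(j - n, n)
            out.append(Fraction(1, 9) * (1 + 2 * x - vals[j - n]))
        else:                                           # last third
            x = Fraction(j - 2 * n, n)
            out.append(Fraction(1, 9) * (Fraction(5, 2) + x) + Fraction(2, 9) * vals[j - 2 * n])
    return out

grid = [Fraction(0), Fraction(1, 2)]                    # F_0(x) = x/2 on {0, 1}
for _ in range(4):
    grid = refine(grid)
n = len(grid) - 1                                       # n = 81
print(grid[n], grid[n // 3], grid[2 * n // 3])          # 1/2 1/9 5/18
```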
Using this theorem, we can obtain a decent approximation of the graph of $F$ (see Fig. 6).
![$F(x)$ corresponds to the area under the graph of $f$ from 0 to $x$, or $\int_0^xf(t)\,dt$.](BourbakiArea){width="100.00000%"}
We can see from the graph that $F$ appears nondecreasing everywhere on $[0,1]$. In fact, this is the case, since $f(x)\geq0$ for all $x\in[0,1]$. We also can see that the graph looks perfectly smooth, but it also appears to shift between upwards and downwards concavity everywhere. The Fundamental Theorem of Calculus explains both of these observations. Okamoto [@Okamoto05] has proven that $f$ is continuous and well-defined everywhere on $[0,1]$, and according to the Fundamental Theorem of Calculus, that means that its antiderivative $F$ is continuous, well-defined, and differentiable everywhere on $[0,1]$; also, $F'(x)=f(x)$, so it follows that $F''(x)=f'(x)$. But Okamoto [@Okamoto05] has shown that $f'(x)$ does not exist for any $x\in[0,1]$, so $F''(x)$ also does not exist for any $x\in[0,1]$—in other words, the graph of $F$ is neither concave up nor concave down anywhere on $[0,1]$. This differs from the concavity of a line, as any linear function of the form $l(x)=ax+b$ will have a second derivative of $l''(x)=0$ and thus could be said to be *both* concave up and concave down.
Integral Values
===============
Like $f$, $F$ has no explicit formula, so we must use our identities to predict different values of $F(x)$.
For $i>0$,
1. $\displaystyle\int_0^{1/(3^i+1)}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!f(t)\,dt=\frac{2^{i-1}}{9^i}\ \frac{3^i-1}{3^i+1}\ \frac{1}{1-(2/9)^i}$,
2. $\displaystyle\int_0^{1/(3^i-1)}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!f(t)\,dt=\frac{2^{i-1}}{9^i}\ \frac{3^i+1}{3^i-1}\ \frac{1}{1+2^{i-1}/9^i}$,
3. $\displaystyle\int_0^{2/(3^i+1)}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!f(t)\,dt=\frac{2^{i-1}}{9^i}\ \frac{5\cdot3^i+1}{2\cdot3^i+2}\ \frac{1}{1+2^{i-1}/9^i}$, and
4. $\displaystyle\int_0^{2/(3^i-1)}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!f(t)\,dt=\frac{2^{i-1}}{9^i}\ \frac{5\cdot3^i-1}{2\cdot3^i-2}\ \frac{1}{1-(2/9)^i}$.
Let $i>0$.\
(i) Now, we know by Proposition 4 that for all $x\in[0,1]$ and $i\geq0$, $$\int_0^{x/3^i}\!\!\!\!\!\!f(t)\,dt=\left(\frac{2}{9}\right)^i\int_0^xf(t)\,dt$$ and clearly $$1-\frac{1}{3^i+1}=\frac{3^i}{3^i+1}$$ So keeping this in mind and applying Theorem 3, we have
$$\begin{aligned}
\int_0^{1/(3^i+1)}f(t)\,dt&=\left(\frac{2}{9}\right)^i\int_0^{3^i/(3^i+1)}f(t)\,dt\\
&=\left(\frac{2}{9}\right)^i\left[\frac{1}{2}-\frac{1}{3^i+1}+\int_0^{1/(3^i+1)}f(t)\,dt\right]\\
&=\left(\frac{2}{9}\right)^i\int_0^{1/(3^i+1)}f(t)\,dt+\left(\frac{2}{9}\right)^i\left(\frac{1}{2}-\frac{1}{3^i+1}\right),
\end{aligned}$$
which means that $$\left[1-\left(\frac{2}{9}\right)^i\right]\int_0^{1/(3^i+1)}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!f(t)\,dt=\left(\frac{2^{i-1}}{9^i}\right)\left(\frac{3^i-1}{3^i+1}\right).$$ Since $1-(2/9)^i=0$ when $i=0$, we make the restriction $i>0$ in order to divide on both sides. This gives us $$\int_0^{1/(3^i+1)}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!f(t)\,dt=\frac{2^{i-1}}{9^i}\ \frac{3^i-1}{3^i+1}\ \frac{1}{1-(2/9)^i},\textrm{\ for\ }i>0.\\$$
\(ii) Now, we know by Proposition 5 that for all $x\in[0,1]$ and $i>0$, $$\int_{(2-x)/3^i}^{2/3^i}\!\!\!\!\!\!\!\!\!\!\!\!f(t)\,dt=\frac{2^{i-1}}{9^i}\left[x+\int_0^xf(t)\,dt\right]$$ and clearly $$1-\frac{1}{3^i-1}=\frac{3^i-2}{3^i-1}.$$ Thus, we can see that $$\int_{1/(3^i-1)}^{2/3^i}\!\!\!\!\!\!\!\!\!\!\!\!f(t)\,dt=\frac{2^{i-1}}{9^i}\left[\frac{3^i-2}{3^i-1}+\int_0^{(3^i-2)/(3^i-1)}\!\!\!\!\!\!\!\!\!\!\!\!f(t)\,dt\right],$$ and so $$\int_0^{2/3^i}\!\!\!\!\!\!f(t)\,dt-\int_0^{1/(3^i-1)}\!\!\!\!\!\!\!\!\!\!\!\!f(t)\,dt=\frac{2^{i-1}}{9^i}\left[\frac{3^i-2}{3^i-1}+\int_0^{1/(3^i-1)}\!\!\!\!\!\!\!\!\!\!\!\!f(t)\,dt+\int_{1/(3^i-1)}^{(3^i-2)/(3^i-1)}\!\!\!\!\!\!\!\!\!\!\!\!f(t)\,dt\right].$$ Then
$$\begin{aligned}
\int_0^{1/3^i}f(t)\,dt+\int_{1/3^i}^{2/3^i}f(t)\,dt-\int_0^{1/(3^i-1)}f(t)\,dt&=\frac{2^{i-1}}{9^i}\left(\frac{3^i-2}{3^i-1}\right)\\
&+\frac{2^{i-1}}{9^i}\int_0^{1/(3^i-1)}f(t)\,dt\\
&+\frac{2^{i-1}}{9^i}\left(\frac{1}{2}-\frac{1}{3^i-1}\right).
\end{aligned}$$
Propositions 4 and 5 give us
$$\begin{aligned}
\left(\frac{1}{2}\right)\left(\frac{2}{9}\right)^i+\frac{2^{i-1}}{9^i}\left(1+\frac{1}{2}\right)-\int_0^{1/(3^i-1)}f(t)\,dt&=\frac{2^{i-1}}{9^i}\left(\frac{3^i-2}{3^i-1}\right)\\
&+\frac{2^{i-1}}{9^i}\int_0^{1/(3^i-1)}f(t)\,dt\\
&+\frac{2^{i-1}}{9^i}\left(\frac{1}{2}-\frac{1}{3^i-1}\right).
\end{aligned}$$
So
$$\begin{aligned}
\left(1+\frac{2^{i-1}}{9^i}\right)\int_0^{1/(3^i-1)}f(t)\,dt&=\left(\frac{1}{2}\right)\left(\frac{2}{9}\right)^i+\frac{2^{i-1}}{9^i}\left(1+\frac{1}{2}\right)-\frac{2^{i-1}}{9^i}\left(\frac{3^i-2}{3^i-1}\right)\\
&\quad-\frac{2^{i-1}}{9^i}\left(\frac{1}{2}-\frac{1}{3^i-1}\right)\\
&=\left(\frac{1}{2}\right)\left(\frac{2}{9}\right)^i+\frac{2^{i-1}}{9^i}\left(1-\frac{3^i-2}{3^i-1}+\frac{1}{3^i-1}\right)\\
&=\left(\frac{1}{2}\right)\left(\frac{2}{9}\right)^i+\left(\frac{2}{9}\right)^i\left(\frac{1}{3^i-1}\right)\\
&=\frac{2^{i-1}}{9^i}\ \frac{3^i+1}{3^i-1}.
\end{aligned}$$
Now, $$\int_0^{1/(3^i-1)}\!\!\!\!\!\!\!\!\!\!\!\!f(t)\,dt=\frac{2^{i-1}}{9^i}\ \frac{3^i+1}{3^i-1}\ \frac{1}{1+2^{i-1}/9^i},\textrm{\ for\ }i>0.\\$$
\(iii) We know by Proposition 5 that for all $x\in[0,1]$ and $i>0$, $$\int_{(2-x)/3^i}^{2/3^i}\!\!\!\!\!\!\!\!\!\!\!\!f(t)\,dt=\frac{2^{i-1}}{9^i}\left[x+\int_0^xf(t)\,dt\right]$$ and we can see that clearly $$\frac{2-2/(3^i+1)}{3^i}=\frac{2}{3^i+1}.$$ So
$$\begin{aligned}
\int_{2/(3^i+1)}^{2/3^i}f(t)\,dt&=\int_0^{2/3^i}f(t)\,dt-\int_0^{2/(3^i+1)}f(t)\,dt\\
&=\int_0^{1/3^i}f(t)\,dt+\int_{1/3^i}^{2/3^i}f(t)\,dt-\int_0^{2/(3^i+1)}f(t)\,dt\\
&=\left(\frac{2}{9}\right)^i\left(\frac{1}{2}\right)+\frac{2^{i-1}}{9^i}\left(1+\frac{1}{2}\right)-\int_0^{2/(3^i+1)}f(t)\,dt
\end{aligned}$$
and
$$\begin{aligned}
\int_{2/(3^i+1)}^{2/3^i}f(t)\,dt&=\frac{2^{i-1}}{9^i}\left[\frac{2}{3^i+1}+\int_0^{2/(3^i+1)}f(t)\,dt\right]\\
&=\frac{2^{i-1}}{9^i}\left(\frac{2}{3^i+1}\right)+\frac{2^{i-1}}{9^i}\int_0^{2/(3^i+1)}f(t)\,dt.
\end{aligned}$$
Thus, by substitution, $$\left(\frac{2}{9}\right)^i\left(\frac{1}{2}\right)+\frac{2^{i-1}}{9^i}\left(1+\frac{1}{2}\right)-\int_0^{2/(3^i+1)}\!\!\!\!\!\!\!\!\!\!\!\!f(t)\,dt=\frac{2^{i-1}}{9^i}\left(\frac{2}{3^i+1}\right)+\frac{2^{i-1}}{9^i}\int_0^{2/(3^i+1)}\!\!\!\!\!\!\!\!\!\!\!\!f(t)\,dt,$$ and so
$$\begin{aligned}
\left(1+\frac{2^{i-1}}{9^i}\right)\int_0^{2/(3^i+1)}f(t)\,dt&=\left(\frac{2}{9}\right)^i\left(\frac{1}{2}\right)+\frac{2^{i-1}}{9^i}\left(\frac{3}{2}\right)-\frac{2^{i-1}}{9^i}\left(\frac{2}{3^i+1}\right)\\
&=\frac{2^{i-1}}{9^i}\left(1+\frac{3}{2}-\frac{2}{3^i+1}\right)\\
&=\frac{2^{i-1}}{9^i}\ \frac{5\cdot3^i+1}{2\cdot3^i+2}.
\end{aligned}$$
Therefore, $$\int_0^{2/(3^i+1)}\!\!\!\!\!\!\!\!\!\!\!\!f(t)\,dt=\frac{2^{i-1}}{9^i}\frac{5\cdot3^i+1}{2\cdot3^i+2}\ \frac{1}{1+2^{i-1}/9^i},\textrm{\ for\ }i>0.\\$$
\(iv) We know by Proposition 6 that for all $x\in[0,1]$ and $i>0$, $$\int_{2/3^i}^{(2+x)/3^i}\!\!\!\!\!\!\!\!\!\!\!\!f(t)\,dt=\frac{2^{i-1}}{9^i}x+\left(\frac{2}{9}\right)^i\int_0^xf(t)\,dt$$ and clearly $$\frac{2+2/(3^i-1)}{3^i}=\frac{2}{3^i-1}.$$ The rest of the proof follows steps similar to those in part (iii) of this theorem to give us $$\int_0^{2/(3^i-1)}\!\!\!\!\!\!\!\!\!\!\!\!f(t)\,dt=\frac{2^{i-1}}{9^i}\ \frac{5\cdot3^i-1}{2\cdot3^i-2}\ \frac{1}{1-(2/9)^i},\textrm{\ for\ }i>0.\qedhere$$
Comments on Dimension
=====================
We have established that the graphs of $f$ and $F$ both exhibit self-similarity and pathological behavior. In this section, we will use the self-similarity of the graph of $f$ to prove that it exhibits fractal behavior by having a Hausdorff dimension greater than its topological dimension. To do this, we will first show that the graph of $f$ has box-counting dimension $\log_3 5$, and then we will use the Mass Distribution Principle (see, e.g., [@Falconer03]) to show that the graph’s Hausdorff dimension can be no less than its box-counting dimension.
Although we have shown that the graph of $F$ has no second derivative, we know that $F$ is a $C^1$ function, having a derivative that is continuous everywhere. Because of this, the graph of $F$ constitutes a rectifiable curve, and so it must have Hausdorff dimension 1. In this section, we will show that the arc length of the graph lies between $\sqrt{5}/2$ and $3/2$.
If $\Gamma$ is the graph of $f$, then its box-counting dimension $\dim_B(\Gamma)=\log_3 5$.
Let $\Gamma$ be the graph of $f$. We must look at $f$ as $\displaystyle\lim_{i\to\infty}f_i$ here, so we will define $\Gamma_i$ as the graph of $f_i$. Obviously, $\Gamma=\displaystyle\lim_{i\to\infty}\Gamma_i$.
Now we consider the box-counting dimension $\dim_B(\Gamma_i)$ of $\Gamma_i$. As the affine pieces of $\Gamma_i$ are defined over intervals of length $1/3^i$, we will count how many boxes of side length $\delta_i=1/3^i$ will cover $\Gamma_i$. For $\Gamma_0$, the graph of $y=x$ for $x\in[0,1]$, the number $N_{\delta_0}$ of boxes required for the cover is clearly 1; with $\delta_0=1$, we have a box covering the entire graph. For $\Gamma_1$, we count boxes with $\delta_1=1/3$, and we get $N_{\delta_1}=5$; exactly two boxes cover each of the graph’s “tall” sides, and one box covers the central portion. For $\Gamma_2$ with boxes of side length $\delta_2=1/9$, we have $N_{\delta_2}=25$; we count four boxes for each of the four tallest sections, two boxes for each of the four second-tallest sections, and one for the centermost section (see Fig. 7).
![Box-counting for $\Gamma_1$ and $\Gamma_2$](BourbakiBoxCounting){width="100.00000%"}
This gives us an idea of how $N_{\delta_i}$ varies with $i$, but to obtain a general result, we will take the route of Katsuura [@Katsuura91] again and view $\Gamma$ as the attractor for a three-component IFS. We recall that all three contraction mappings from $\Gamma_i$ to $\Gamma_{i+1}$ shrink $\Gamma_i$ horizontally by a factor of $1/3$, but the first and third shrink $\Gamma_i$ vertically by a factor of $2/3$, while the second does so by a factor of only $1/3$. Now, if we can cover $\Gamma_i$ by $N_{\delta_i}$ boxes, then by necessity, the middle portion of $\Gamma_{i+1}$—the region of the second mapping—could be covered by $N_{\delta_i}$ boxes, as well, since it is $\Gamma_i$ scaled down by a factor of $1/3$ and we are counting how many boxes scaled down by the same factor can cover it. Applying similar logic to the regions of the first and third mappings, we can see that $\Gamma_i$ scaled down horizontally by $1/3$ and vertically by $2/3$ will be covered by $2N_{\delta_i}$ boxes scaled down by a factor of $1/3$. Therefore,
$$\begin{aligned}
N_{\delta_{i+1}}&=2N_{\delta_i}+N_{\delta_i}+2N_{\delta_i}\\
&=5N_{\delta_i}
\end{aligned}$$
and since $N_{\delta_0}$ is 1, we can say that for $i>0$, $$N_{\delta_i}=5^i.$$ Now we consider the formula for box-counting dimension: $$\dim_B(\Gamma)=\lim_{\delta_i\to0}\frac{\log(N_{\delta_i})}{-\log(\delta_i)} \textrm{\ (See e.g., \cite{Falconer03}).}$$ And since $\delta=1/3^i$, the formula becomes
$$\begin{aligned}
\dim_B(\Gamma)&=\lim_{i\to\infty}\frac{\log\left(5^i\right)}{-\log\left(1/3^i\right)}\\
&=\lim_{i\to\infty}\frac{i\log 5}{i\log 3}\\
&=\log_3 5.
\end{aligned}$$
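The recursion $N_{\delta_{i+1}}=5N_{\delta_i}$ can also be checked by a direct count. The Python sketch below is ours; the explicit rule used to build $f_i$ on the ternary grid (interior values at $2/3$ and $1/3$ of each rise) is an assumption consistent with the contraction factors described above and with the counts $N_{\delta_1}=5$ and $N_{\delta_2}=25$.

```python
from fractions import Fraction
from math import log

def f_values(i):
    """Exact values of f_i at k/3^i: each affine piece is replaced by three pieces
    whose rises are 2/3, -1/3 and 2/3 of the original rise (assumed midpoint rule)."""
    vals = [Fraction(0), Fraction(1)]                    # f_0(0) = 0, f_0(1) = 1
    for _ in range(i):
        new = []
        for a, b in zip(vals, vals[1:]):
            d = b - a
            new.extend([a, a + Fraction(2, 3) * d, a + Fraction(1, 3) * d])
        new.append(vals[-1])
        vals = new
    return vals

def box_count(i):
    """Boxes of side 1/3^i needed to cover the graph of f_i, counted per column."""
    vals = f_values(i)
    side = Fraction(1, 3 ** i)
    return sum(max(abs(b - a) // side, 1) for a, b in zip(vals, vals[1:]))

for i in range(1, 7):
    N = box_count(i)
    print(i, N, N == 5 ** i, log(N) / log(3 ** i))       # estimate equals log_3 5 ~ 1.46
```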
Now we will prove that the Hausdorff dimension of $\Gamma$ is equal to its box-counting dimension.
If $\Gamma$ is the graph of $f$, then its Hausdorff dimension $\dim_H(\Gamma)=\log_3 5$.
To begin, we will consider an alternate iterative construction of $\Gamma$ using Katsuura’s mappings. Let $E_0=[0,1]\times[0,1]$ and define further levels of the construction by $E_{i+1}=w_1(E_i)\cup w_2(E_i)\cup w_3(E_i)$ where $i\geq0$ and $w_1$, $w_2$, and $w_3$ are the mappings given in [@Katsuura91]. We see that $E_{i+1}\subset E_i$ for all $i\geq0$, and $\displaystyle\bigcap_{i=0}^\infty E_i=\Gamma$ (See Figs. 8 and 9).
![$E_0$ and $E_1$. Note that we can divide the $i$th level of the construction into $3^i$ rectangles of length $(1/3)^i$.](BourbakiMappings0and1){width="100.00000%"}
![$E_2$ and $\Gamma$. Recall that in Okamoto’s construction, linear segments are constructed “upwards” to a graph with infinite length, whereas in this construction, rectangular regions are constructed “downwards” to a graph with zero area.](BourbakiMappings2andApproximation){width="100.00000%"}
Using methods related to the box-counting process in Theorem 7, it can be shown that the area of $E_{i+1}$ can be expressed as
$$\begin{aligned}
A(E_{i+1})&=\frac{2}{9}A(E_i)+\frac{1}{9}A(E_i)+\frac{2}{9}A(E_i)\\
&=\frac{5}{9}A(E_i),
\end{aligned}$$
and since it is obvious that $A(E_0)=1$, we have $A(E_i)=(5/9)^i$ for all $i\geq0$.
Now, let $\mu$ be the natural mass distribution on $\Gamma$; we start with unit mass on $E_0$ and repeatedly “spread” this mass over the total area of each $E_i$. Also, let $U$ be any set whose diameter $|U|<1$. Then there exists some $i\geq0$ such that $$\left(\frac{1}{3}\right)^{i+1}\leq|U|<\left(\frac{1}{3}\right)^i,$$ an inequality that applies to any $U$ satisfying $0<|U|<1$. From this point, it is clear that for every set $U$ of this type, there is some $i$ such that $U$ is contained in an open square of side length $(1/3)^i$ and $U$ contains points in at most two level-$i$ “sub-rectangles” (See Fig. 10).
![Estimating the Hausdorff dimension of $\Gamma$ using the Mass Distribution Principle. Note that any appropriately-sized set $U$ will “fit” in some corresponding open square at some level $E_i$ of the construction, and as a result, $U$ will share points with at most two sub-rectangles of $E_i$.](BourbakiMassDistribution){width="100.00000%"}
Hence, the area of $U$ is bounded above by the area of the open square containing it; that is, $A(U)\leq(1/9)^i$. In terms of measure, we know that the entire area of $U$ can be contained in $E_i$, so
$$\begin{aligned}
\mu(U)&\leq\frac{A(U)}{A(E_i)}\\
&\leq\frac{(1/9)^i}{(5/9)^i}\\
&=\left(\frac{1}{5}\right)^i.
\end{aligned}$$
And since $\displaystyle\left(\frac{1}{3}\right)^{i+1}\leq|U|$ implies that $\displaystyle\left(\frac{1}{3}\right)^i\leq3|U|$, we have $$\mu(U)\leq\left(\frac{1}{5}\right)^i=\left(\frac{1}{3^i}\right)^{\log_3 5}\leq(3|U|)^{\log_3 5}=5|U|^{\log_3 5},$$ and thus, by the Mass Distribution Principle, $\log_3 5\leq\dim_H(\Gamma)\leq\dim_B(\Gamma)$, and given the upper bound obtained in Theorem 7, we have $\dim_H(\Gamma)=\log_3 5$.
Because the graph of $F$ is a rectifiable curve, we can determine its arc length over $[0,1]$. In the following theorem, we will show that this arc length is finite by bounding it above and below.
If $G$ is the graph of $F$ and $L$ is the arc length of $G$, then $\displaystyle\frac{\sqrt{5}}{2}\leq L\leq\frac{3}{2}$.
Let $G$ be the graph of $F$. We must consider $F$ as the limit of its iterations here, so we will define $G_i$ as the graph of $F_i$. Obviously, $G=\displaystyle\lim_{i\to\infty}G_i$.
Because each $F_i$ is affine on $[0,1/3^i],[1/3^i,2/3^i],\dots,[(3^i-1)/3^i,1]$ we can apply the Triangle Inequality to the linear “piece” of $G_i$ at each of these intervals; for instance, if $l$ is the length of $G_i$ on $[1/3^i,2/3^i]$, we have $|2/3^i-1/3^i|+| F(2/3^i)-F(1/3^i)|\geq l$ (see Fig. 11).
![Applying the Triangle Inequality to $G_1$ and $G_2$](BourbakiAntiderivativeTriangles){width="100.00000%"}
Because the inequality holds over all of $[0,1/3^i],[1/3^i,2/3^i],\dots,[(3^i-1)/3^i,1]$, it also will hold for the sums of the respective sides of each “triangle”; that is, if $L_i$ is the total arc length of $G_i$, then $$L_i\leq\sum_{j=1}^{3^i}\left|\frac{j}{3^i}-\frac{j-1}{3^i}\right|+\sum_{j=1}^{3^i}\left|F_i\left(\frac{j}{3^i}\right)-F_i\left(\frac{j-1}{3^i}\right)\right|.$$ And by taking the limit as $i$ approaches infinity on both sides, we get
$$\begin{aligned}
L&\leq\lim_{i\to\infty}\sum_{j=1}^{3^i}\left|\frac{j}{3^i}-\frac{j-1}{3^i}\right|+\sum_{j=1}^{3^i}\left|F_i\left(\frac{j}{3^i}\right)-F_i\left(\frac{j-1}{3^i}\right)\right|\\
&\leq\lim_{i\to\infty}\sum_{j=1}^{3^i}\frac{1}{3^i}+\sum_{j=1}^{3^i}\left|F_i\left(\frac{j}{3^i}\right)-F_i\left(\frac{j-1}{3^i}\right)\right|\\
&\leq\lim_{i\to\infty}1+\sum_{j=1}^{3^i}\left|F_i\left(\frac{j}{3^i}\right)-F_i\left(\frac{j-1}{3^i}\right)\right|.
\end{aligned}$$
We observed earlier that $F$ is nondecreasing. Now, $F_{i+1}(k/3^i)=F_i(k/3^i)$, so by induction, $F(x)=F_i(x)$ wherever $x=k/3^i$. And since every $F_i$ is affine everywhere between such points, every $F_i$ must be nondecreasing everywhere on $[0,1]$, as well. This means that for all $j\in\{1,2,\dots,3^i\}$, $$F_i\left(\frac{j}{3^i}\right)-F_i\left(\frac{j-1}{3^i}\right)\geq0$$ and thus, we can make the substitution $$\left|F_i\left(\frac{j}{3^i}\right)-F_i\left(\frac{j-1}{3^i}\right)\right|=F_i\left(\frac{j}{3^i}\right)-F_i\left(\frac{j-1}{3^i}\right)$$ which gives us
$$\begin{aligned}
L&\leq\lim_{i\to\infty}1+\sum_{j=1}^{3^i}\left[F_i\left(\frac{j}{3^i}\right)-F_i\left(\frac{j-1}{3^i}\right)\right]\\
&\leq\lim_{i\to\infty}1+F_i\left(\frac{1}{3^i}\right)-F_i(0)+F_i\left(\frac{2}{3^i}\right)-F_i\left(\frac{1}{3^i}\right)\\
&\qquad+\dots+F_i(1)-F_i\left(\frac{3^i-1}{3^i}\right)\\
&\leq\lim_{i\to\infty}1+F_i(1)-F_i(0)\\
&\leq\lim_{i\to\infty}1+\frac{1}{2}-0\\
&\leq\frac{3}{2}.
\end{aligned}$$
For the lower bound of $L$, we need only recall that the shortest distance between two points is a straight line. The endpoints of $G$ are $(0,0)$ and $(1,1/2)$, and the line connecting them has length $\displaystyle\frac{\sqrt{5}}{2}$. Thus, $L\geq\displaystyle\frac{\sqrt{5}}{2}$.
Now, because the affine segments of each $G_{i+1}$ deviate from the straight lines in $G_i$ from which they are constructed, we see that $L_{i+1}>L_i$ for all $i$. Approximating $L$, we have $L_2\approx1.1269$, which is strictly greater than $\displaystyle\frac{\sqrt{5}}{2}$. Thus, $\displaystyle\frac{\sqrt{5}}{2}<L\leq\frac{3}{2}$.
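These bounds are easy to check numerically. The following sketch is ours and repeats the grid construction of $F_i$ used earlier so that it is self-contained; it reproduces $L_2\approx1.1269$ and verifies $\sqrt{5}/2\leq L_i\leq3/2$ for the first few iterations.

```python
from fractions import Fraction
from math import hypot, sqrt

def F_values(i):
    """Exact F(k/3^i) from the rewritten Propositions 4-6 (same sketch as before)."""
    vals = [Fraction(0), Fraction(1, 2)]
    for _ in range(i):
        n = len(vals) - 1
        new = []
        for k in range(3 * n + 1):
            x3 = Fraction(k, n)
            if x3 <= 1:
                new.append(Fraction(2, 9) * vals[int(x3 * n)])
            elif x3 <= 2:
                x = x3 - 1
                new.append(Fraction(1, 9) * (1 + 2 * x - vals[int(x * n)]))
            else:
                x = x3 - 2
                new.append(Fraction(1, 9) * (Fraction(5, 2) + x) + Fraction(2, 9) * vals[int(x * n)])
        vals = new
    return vals

def arc_length(i):
    """Length L_i of the piecewise affine graph G_i of F_i."""
    vals = F_values(i)
    dx = 1.0 / (len(vals) - 1)
    return sum(hypot(dx, float(b - a)) for a, b in zip(vals, vals[1:]))

print(round(arc_length(2), 4))                           # 1.1269, as quoted above
for i in range(1, 9):
    assert sqrt(5) / 2 <= arc_length(i) <= 1.5           # sqrt(5)/2 <= L_i <= 3/2
```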
We conclude that although the graph of $F$ exhibits self-similarity and pathological behavior, it is by definition not a fractal. If we see $F$ as a measure of the area bounded by the graph of $f$ and the $x$-axis, this conclusion makes more sense; a region with a fractal boundary of infinite length still can contain a finite area. (A standard example of this phenomenon is the Koch snowflake; see e.g., [@Edgar08].)
Concluding Remarks
==================
Okamoto [@Okamoto05] shows that Bourbaki’s Function is just one member of a parametrized family of functions $F_a$ with analogous constructions. Using generalizations of expressions (1)–(7), it is possible to prove that given $x\in[0,1]$ and $i>0$, $F_a$ abides by the following rules for all $a\in(0,1)$:
- $F_a(1-x)=1-F_a(x)$\
- $\displaystyle F_a\left(\frac{x}{3^i}\right)=a^iF_a(x)$\
- $\displaystyle F_a\left(\frac{2-x}{3^i}\right)=(2a^i-a^{i-1})F_a(x)+(a^{i-1}-a^i)$\
- $\displaystyle F_a\left(\frac{2+x}{3^i}\right)=a^iF_a(x)+(a^{i-1}-a^i)$\
- $\displaystyle F_a\left(\frac{1}{3^i+1}\right)=\frac{a^i}{1+a^i}$\
- $\displaystyle F_a\left(\frac{1}{3^i-1}\right)=\frac{a^{i}}{1+2a^i-a^{i-1}}$\
- $\displaystyle F_a\left(\frac{2}{3^i+1}\right)=\frac{a^{i-1}-a^i}{1+a^{i-1}-2a^i}$\
- $\displaystyle F_a\left(\frac{2}{3^i-1}\right)=\frac{a^{i-1}-a^i}{1-a^i}$\
- $\displaystyle F_a\left(\frac{1}{3^j-3^i}\right)=\frac{a^{j}}{1+2a^{j-i}-a^{j-i-1}}$ (for $j>i$)
Analogous generalized identities for the antiderivative of any $F_a$ can be derived using methods similar to those used in this paper. It should be noted, however, that not every $F_a$ is nowhere differentiable—a fact that influences several properties of the family of functions, including the dimension of their graphs and the nature of their derivatives almost everywhere in $[0,1]$. We expect more general proofs to shed light on these subjects.
Obviously, the formulas for finding function and integral values in Theorems 2 and 4 do not yield results for every number in $[0,1]$, or even for every rational number in that interval. We do not know of any shortcut for finding $f(1/7)$, for instance, since $1/7$ cannot be expressed in terms of $1/(3^i+1)$, $1/(3^i-1)$, $2/(3^i+1)$, $2/(3^i-1)$, $1/(3^j+3^i)$, or $1/(3^j-3^i)$. We are unsure if a simple algorithm can be found for evaluating $f(1/m)$ for every natural number $m$; while we attempted to do this by parts for $f(1/3m)$, $f(1/[3m-1])$, and $f(1/[3m-2])$, we were unsuccessful.
Acknowledgments
===============
The approximate graphs of $f$ and $F$ were produced using *Dynamical Grapher for Quadratic Maps* [@Basselet].
Many thanks to Professor Daniel Jackson for providing resources, feedback, and encouragement throughout the duration of this project.
[25]{}
Basselet, Hunter, Adam Case, and Daniel Jackson. *Dynamical Grapher for Quadratic Maps*. Computer software. Vers. 1.0. Web. 25 July 2010. $<$http://faculty.umf.maine.edu/daniel.jackson1/public.www/DGrapher/DGrapher.html$>$.
Bourbaki, Nicolas. *Functions of a Real Variable: Elementary Theory*. Trans. from the 1976 French original by Philip Spain. Berlin: Springer, 2004.
Edgar, Gerald. *Measure, Topology, and Fractal Geometry*. New York, NY: Springer, 2008.
Falconer, Kenneth. *Fractal Geometry—Mathematical Foundations and Applications*. Chichester: Wiley, 2003.
Jarník, Vojtêch, and Bernard Bolzano. *Bolzano and the Foundations of Mathematical Analysis*. Prague: Society of Czechoslovak Mathematicians and Physicists, 1981.
Katsuura, Hidefumi. “Continuous Nowhere-Differentiable Functions—an Application of Contraction Mappings.” *American Mathematical Monthly* 98.5 (1991) 411–416.
Okamoto, Hisashi. “A remark on continuous, nowhere differentiable functions.” *Proceedings of the Japan Academy* A 81.3 (2005) 47–50.
|
---
abstract: 'Isotopic effects in the fragmentation of excited target residues following collisions of $^{12}$C on $^{112,124}$Sn at incident energies of 300 and 600 MeV per nucleon were studied with the INDRA 4$\pi$ detector. The measured yield ratios for light particles and fragments with atomic number $Z \leq$ 5 obey the exponential law of isotopic scaling. The deduced scaling parameters decrease strongly with increasing centrality to values smaller than 50% of those obtained for the peripheral event groups. Symmetry term coefficients, deduced from these data within the statistical description of isotopic scaling, are near $\gamma =$ 25 MeV for peripheral and $\gamma <$ 15 MeV for central collisions.'
author:
- 'A. Le F[è]{}vre'
- 'G. Auger'
- 'M.L. Begemann-Blaich'
- 'N. Bellaize'
- 'R. Bittiger'
- 'F. Bocage'
- 'B. Borderie'
- 'R. Bougault'
- 'B. Bouriquet'
- 'J.L. Charvet'
- 'A. Chbihi'
- 'R. Dayras'
- 'D. Durand'
- 'J.D. Frankland'
- 'E. Galichet'
- 'D. Gourio'
- 'D. Guinet'
- 'S. Hudan'
- 'G. Immé'
- 'P. Lautesse'
- 'F. Lavaud'
- 'R. Legrain'
- 'O. Lopez'
- 'J. [Ł]{}ukasik'
- 'U. Lynen'
- 'W.F.J. M[ü]{}ller'
- 'L. Nalpas'
- 'H. Orth'
- 'E. Plagnol'
- 'G. Raciti'
- 'E. Rosato'
- 'A. Saija'
- 'C. Schwarz'
- 'W. Seidel'
- 'C. Sfienti'
- 'B. Tamain'
- 'W. Trautmann'
- 'A. Trzciński'
- 'K. Turz[ó]{}'
- 'E. Vient'
- 'M. Vigilante'
- 'C. Volant'
- 'B. Zwiegliński'
- 'A.S. Botvina'
title: |
Isotopic Scaling and the Symmetry Energy\
in Spectator Fragmentation
---
[^1]
The growing interest in isospin effects in nuclear reactions is motivated by an increasing awareness of the importance of the symmetry term in the nuclear equation of state, in particular for astrophysical applications. Supernova simulations or neutron star models require inputs for the nuclear equation of state at extreme values of density and asymmetry [@lattimer; @lattprak; @botv04]. The demonstration in the laboratory of the effects of the symmetry term at abnormal densities is, therefore, an essential first step within a program aiming at gaining such information experimentally [@bao02; @greco02].
Multifragmentation is generally considered a low-density phenomenon, with a high degree of thermalization believed to be reached. Accepting the concept of a freeze-out volume and the applicability of grand canonical logic, the probability of producing a cluster of a given atomic number $Z$ and mass $A$ at temperature $T$ depends exponentially on the free energy of that cluster, $F(Z,A,T)$. The cluster free energies depend on the strength of the symmetry term $E_{\rm sym} = \gamma (A-2Z)^2/A$ in the liquid-drop energy which, in turn, must depend on the extent of expansion of the fragments. This work makes use of an observable that isolates the symmetry contribution to the cluster free energy to explore the difference in this term for fragments produced in peripheral and central collisions. It is found that, while the sequential decay strongly degrades the quality of this observable, the symmetry energy coefficient does indeed decrease as the collisions producing the fragments become more violent.
In the Copenhagen statistical multifragmentation model (SMM), standard coefficients $\gamma$ = 23 to 25 MeV are used to describe the nascent fragments [@bond95; @botv02; @gamma]. In the freeze-out scenario adopted there, normal-density fragments are considered to be statistically distributed within an expanded volume, and the density is only low on average. An experimental value for $\gamma$ of about standard magnitude has recently been obtained within a statistical description of isotopic scaling in light-ion (p, d, $\alpha$) induced reactions at relativistic energies of up to 15 GeV [@botv02]. This result, however, may not be representative for multi-fragment decays because the data were inclusive and the mean multiplicities of intermediate-mass fragments correspondingly small [@beaulieu]. In the present work, we apply the same method to exclusive data obtained with heavier projectiles, $^{12}$C on $^{112}$Sn and $^{124}$Sn targets at 300 and 600 MeV per nucleon incident energy. Here, according to the established systematics [@schuett96], maximum fragment production occurs at central impact parameters.
Isotopic scaling, also termed isoscaling, has been shown to be a phenomenon common to many different types of heavy ion reactions [@botv02; @tsang01; @soul03; @fried04]. It is observed by comparing product yields from otherwise identical reactions with isotopically different projectiles or targets, and it is constituted by an exponential dependence of the measured yield ratios $R_{21}(N,Z)$ on the neutron number $N$ and proton number $Z$ of the considered product. The scaling expression $$R_{21}(N,Z) = Y_2(N,Z)/Y_1(N,Z) = C \cdot exp(\alpha N + \beta Z)
\label{eq:scalab}$$ describes rather well the measured ratios over a wide range of complex particles and light fragments [@tsang01a].
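For illustration, the parameters $\alpha$ and $\beta$ can be extracted from measured ratios by a simple log-linear least-squares fit of Eq. (\[eq:scalab\]). The sketch below is only schematic: the yield numbers are invented placeholders, not INDRA data, and in the actual analysis the yields are integrated over the chosen energy and angular intervals.

```python
import numpy as np

# Hypothetical yields Y1 (neutron-poor system) and Y2 (neutron-rich system),
# keyed by (N, Z); placeholder numbers only, not measured yields.
yields_1 = {(1, 1): 1.0e5, (2, 1): 3.1e4, (1, 2): 4.0e4, (2, 2): 9.0e4,
            (3, 3): 5.0e3, (4, 3): 4.2e3, (3, 4): 1.1e3, (5, 4): 7.5e2}
yields_2 = {(1, 1): 9.5e4, (2, 1): 4.3e4, (1, 2): 3.2e4, (2, 2): 8.8e4,
            (3, 3): 6.1e3, (4, 3): 7.0e3, (3, 4): 1.0e3, (5, 4): 1.3e3}

# Linear least-squares fit of ln R21 = ln C + alpha*N + beta*Z.
keys = sorted(yields_1)
N = np.array([n for n, z in keys], dtype=float)
Z = np.array([z for n, z in keys], dtype=float)
lnR = np.log([yields_2[k] / yields_1[k] for k in keys])
A = np.column_stack([np.ones_like(N), N, Z])
(lnC, alpha, beta), *_ = np.linalg.lstsq(A, lnR, rcond=None)
print(f"alpha = {alpha:.2f}, beta = {beta:.2f}")
```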
In the grand-canonical approximation, assuming that the temperature $T$ is about the same, the scaling parameters $\alpha$ and $\beta$ are proportional to the differences of the neutron and proton chemical potentials for the two systems, $\alpha = \Delta \mu_{\rm n}/T$ and $\beta = \Delta \mu_{\rm p}/T$. Of particular interest is their connection with the symmetry term coefficient. It has been obtained from the statistical interpretation of isoscaling within the SMM [@botv02] and Expanding-Emitting-Source Model [@tsang01a] and confirmed by an analysis of reaction dynamics [@ono03]. The relation is $$\label{eq:dmunu}
\alpha T = \Delta \mu_{\rm n} = \mu_{\rm n,2} - \mu_{\rm n,1} \approx 4\gamma
(\frac{Z_{1}^2}{A_{1}^2}-\frac{Z_{2}^2}{A_{2}^2})$$ where $Z_{i}$ and $A_{i}$ are the charges and mass numbers of the two systems (the indices 1 and 2 denote the neutron poor and neutron rich system, respectively). With the knowledge of the temperature and the isotopic compositions, the coefficient $\gamma$ of the symmetry term can be obtained from isoscaling.
The data were obtained with the INDRA multidetector [@pouthas] in experiments performed at the GSI. Beams of $^{12}$C with 300 and 600 MeV per nucleon incident energy, delivered by the heavy-ion synchrotron SIS, were directed onto enriched targets of $^{112}$Sn (98.9%) and $^{124}$Sn (99.9%) with areal densities between 1.0 and 1.2 mg/cm$^2$. Light charged particles and fragments ($Z \leq 5$) were detected and isotopically identified with the calibration telescopes of rings 10 to 17 of the INDRA detector which cover the range of polar angles $45^{\circ} \leq \theta_{\rm lab} \leq 176^{\circ}$. These telescopes consist of pairs of an 80-$\mu$m Si detector and a 2-mm Si(Li) detector which are mounted between the ionization chamber and the CsI(Tl) crystal of one of the modules of a ring [@pouthas]. Further experimental details may be found in [@turzo04] and the references given therein. For impact-parameter selection, the charged-particle multiplicity $M_{\rm C}$ measured with the full detector was used, and four bins were chosen for the sorting of the data. For 600 MeV per nucleon, the two most central bins were combined for reasons of counting statistics.
Kinetic energy spectra of light reaction products with $Z \leq 5$, integrated over the impact parameter and the angular range $\theta_{\rm lab} \geq 45^{\circ}$, are shown in Fig. \[fig:spec\]. To reduce preequilibrium contributions, upper limits of 20 MeV and 70 MeV were set for hydrogen and helium isotopes, respectively, which, however, are not crucial. The spectra of Li, Be, and B fragments were integrated above the energy thresholds for isotopic identification which amounted to 28, 40, and 52 MeV, respectively.
The ratios of the fragment yields measured for the two reactions and integrated over the chosen intervals of energy and angle ( $\theta_{\rm lab} \geq 45^{\circ}$) obey the law of isoscaling. This is illustrated in Fig. \[fig:iso\] which shows the scaled isotopic ratios $S(N) = R_{21}(N,Z)/{\rm exp}(\beta Z)$. Their slope parameters change considerably with impact parameter, extending from $\alpha$ = 0.62 to values as low as $\alpha$ = 0.25 for the most central event group at 600 MeV per nucleon (Table \[tab:table1\] and Fig. \[fig:data300600\], top).
----------------- ----------------- ------------------ ----------------- ------------------
300 MeV 600 MeV
$b/b_{\rm max}$ $\alpha$ $\beta$ $\alpha$ $\beta$
0.0 - 0.2 0.28 $\pm$ 0.01 -0.33 $\pm$ 0.03
0.2 - 0.4 0.31 $\pm$ 0.01 -0.32 $\pm$ 0.01 0.25 $\pm$ 0.02 -0.28 $\pm$ 0.04
0.4 - 0.6 0.36 $\pm$ 0.01 -0.39 $\pm$ 0.02 0.32 $\pm$ 0.02 -0.34 $\pm$ 0.03
0.6 - 1.0 0.62 $\pm$ 0.01 -0.68 $\pm$ 0.02 0.52 $\pm$ 0.02 -0.59 $\pm$ 0.03
----------------- ----------------- ------------------ ----------------- ------------------
: \[tab:table1\] Parameters obtained from fitting the measured isotopic yield ratios with the scaling function given in Eq. (\[eq:scalab\]).
Temperature estimates were obtained from the yields of $^{3,4}$He and $^{6,7}$Li isotopes, and the deduced $T_{\rm HeLi}$ contains a correction factor 1.2 for the effects of sequential decay [@poch95; @xi97]. The temperatures are quite similar for the two target cases and increase with centrality from about 6 MeV to 9 MeV (Fig \[fig:data300600\], middle). This is consistent with the results obtained for $^{197}$Au fragmentations [@poch95; @xi97] and with the established dependence on the system mass [@nato02]. The rise of $T_{\rm HeLi}$, however, does not compensate for the decrease of $\alpha$, as it did in the case of light-particle induced reactions [@botv02], and $\Delta \mu_{\rm n}$, consequently, decreases toward the central collisions.
The analytical expression for $\Delta \mu_{\rm n}$ (Eq. \[eq:dmunu\]) contains the isotopic compositions of the sources, more precisely the difference of the squared $Z/A$ values, $\Delta (Z^2/A^2) = (Z_{1}/A_{1})^2 - (Z_{2}/A_{2})^2$. For the target spectators, this quantity is not expected to deviate significantly from its original value [@botv02], in contrast to mean-field dominated reaction systems at intermediate energies [@ono03; @shetty04]. This was confirmed with calculations performed with the Li[è]{}ge-cascade-percolation model [@volant04] and with the Relativistic Mean Field Model (RBUU, Ref. [@gait04]) in which isotopic effects of the nuclear mean field are treated explicitly. The individual $Z/A$ values change slightly but $\Delta (Z^2/A^2)$ remains nearly the same. For central collisions at 600 MeV per nucleon, e.g., the RBUU calculations predict a reduction of $\Delta (Z^2/A^2)$ by 6% for the target-rapidity region after 90 fm/c collision time.
The suggested corrections are small and can thus be temporarily ignored. With the compositions of the original targets, $\Delta (Z^2/A^2)$ = 0.0367, the expression $\gamma = \alpha T/0.147$ is obtained from Eq. \[eq:dmunu\]. It was used to determine an apparent symmetry-term coefficient $\gamma_{\rm app}$, i.e. without sequential decay corrections for $\alpha$, from the data shown in Fig. \[fig:data300600\] (the mean values were used for $T$). The results are close to the normal-density coefficient for peripheral collisions but drop to lower values at the more central impact parameters (Fig. \[fig:data300600\], bottom).
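The numbers entering this estimate are easily reproduced. The following lines use the $\alpha$ values of Table \[tab:table1\] for the most peripheral and the most central bin at 300 MeV per nucleon, together with temperatures of roughly 6 and 9 MeV as quoted above; the output is meant only as an illustration of Eq. (\[eq:dmunu\]) with the original target compositions.

```python
# gamma_app = alpha * T / (4 * Delta(Z^2/A^2)), with Delta evaluated for the original targets.
Z, A1, A2 = 50, 112, 124                       # 112Sn (neutron poor), 124Sn (neutron rich)
delta = (Z / A1) ** 2 - (Z / A2) ** 2          # 0.0367, hence gamma = alpha * T / 0.147
for label, alpha, T in [("peripheral", 0.62, 6.0), ("central", 0.28, 9.0)]:
    print(label, round(alpha * T / (4 * delta), 1), "MeV")
```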
The effects of sequential decay were studied with the microcanonical Markov-chain version of the Statistical Multifragmentation Model [@botv01]. The target nuclei $^{112,124}$Sn with excitation energies of 4, 6, and 8 MeV per nucleon were chosen as inputs, and the symmetry term $\gamma$ was varied between 4 and 25 MeV. The isoscaling coefficient $\alpha$ was determined from the calculated fragment yields before (hot fragments) and after (cold fragments) the sequential decay stage of the calculations for which standard values for the fragment masses were used. The energy balance at freeze-out and during the secondary deexcitation was taken into account as described in [@bond95].
The hot fragments exhibit the linear relation of $\alpha$ with $\gamma$ as expected (Fig. \[fig:symm\], top panel). With $\gamma$ = 25 MeV, the sequential processes cause a slight broadening of the isotopic distributions and the resulting $\alpha$ is lowered by 10% to 20%, similar to what was reported in [@botv02]. For smaller values of $\gamma$, however, larger changes of $\alpha$ are predicted. The decay of the wings of the wider distributions of hot fragments toward the valley of stability causes the resulting cold distributions to be narrower and the isoscaling coefficients to be larger. The overall variation of $\alpha$ with $\gamma$ is weaker, and the decrease of $\gamma$ with centrality should, thus, be stronger than that of $\gamma_{\rm app}$ (Fig. \[fig:data300600\]). For central collisions, this implies $\gamma <$ 15 MeV as an upper limit but much smaller values are more likely to be expected. With the present calculations, for excitation energies up to 8 MeV per nucleon and the original target compositions, the measured $\alpha < 0.3$ is only reproduced with $\gamma \approx$ 6 MeV (Fig. \[fig:symm\], top).
The bottom panel of Fig. \[fig:symm\] shows how the situation changes when the variation of the isotopic compositions of the two systems is again included as a degree of freedom. The shaded band in the plane of $\Delta (Z^2/A^2)$ versus $\gamma$ represents the region consistent with the weighted mean $\alpha = 0.29$ measured for the central bins ($b/b_{\rm max} \leq 0.4$) at the two energies. It was obtained from the predictions $\alpha(\gamma)$ for cold fragments at excitation energies between 6 and 8 MeV per nucleon (Fig. \[fig:symm\], top) according to $\Delta (Z^2/A^2) = 0.0367 \cdot 0.29/ \alpha(\gamma)$, i.e. using that $\alpha$ is, to first order, proportional to the difference of the isotopic compositions, here expressed by $\Delta (Z^2/A^2)$ (see, e.g., [@botv02; @ono03]). If this difference remains close to the cascade and RBUU predictions the resulting symmetry term coefficient for the central reaction channels is very small, $\gamma \leq$ 10 MeV. To restore the compatibility with a standard $\gamma \approx$ 25 MeV would require considerable isotopic asymmetries in the initial reaction phase, much larger than what is expected according to the models.
In conclusion, the observed decrease of the isoscaling parameter $\alpha$ with centrality, which is not compensated by a correspondingly increasing temperature, requires a decreasing symmetry term coefficient $\gamma$ in a statistical description of the fragmentation process. The effect is enhanced if sequential fragment decay is taken into account. Values less than 15 MeV, as obtained from the present analysis, are not necessarily unreasonable in a realistic description of the chemical freeze-out state. Besides the global expansion of the system, the possibly expanded or deformed structure of the forming fragments, as well as their interaction with other fragments and with the surrounding nucleon gas, will also have to be considered. The presented results depend, crucially, on the isotopic evolution of the multi-fragmenting system as it approaches the chemical freeze-out and, more quantitatively, on the treatment of sequential decay in the analysis. These questions deserve further attention.
The authors would like to thank T. Gaitanos for communicating the results of RBUU calculations and for valuable discussions. This work was supported by the European Community under contract ERBFMGECT950083.
[99]{}
J.M. Lattimer [*et al.*]{}, Phys. Rev. Lett. [**66**]{}, 2701 (1991).
J.M. Lattimer and M. Prakash, Phys. Rep. [**333**]{}, 121 (2000).
A.S. Botvina and I.N. Mishustin, Phys. Lett. B [**584**]{}, 233 (2004).
Bao-An Li, Phys. Rev. Lett. [**88**]{}, 192701 (2002).
V. Greco [*et al*]{}, Phys. Lett. B [**562**]{}, 215 (2003).
J.P. Bondorf [*et al*]{}, Phys. Rep. [**257**]{}, 133 (1995).
A.S. Botvina, O.V. Lozhkin, and W. Trautmann, Phys. Rev. C [**65**]{}, 044610 (2002).
the notation follows that originally chosen in Ref. [@bond95]; alternatively $C_{\rm sym}$ is frequently used for the same quantity.
L. Beaulieu [*et al.*]{}, Phys. Lett. B 463, 159 (1999) and Phys. Rev. Lett. [**84**]{}, 5971 (2000).
A. Sch[ü]{}ttauf [*et al.*]{}, Nucl. Phys. [**A607**]{}, 457 (1996).
M.B. Tsang [*et al.*]{}, Phys. Rev. Lett. [**86**]{}, 5023 (2001).
G. A. Souliotis [*et al.*]{}, Phys. Rev. C [**68**]{}, 024605 (2003).
W. A. Friedman, Phys. Rev. C [**69**]{}, 031601(R) (2004).
M.B. Tsang [*et al.*]{}, Phys. Rev. C [**64**]{}, 054615 (2001).
A. Ono [*et al.*]{}, Phys. Rev. C [**68**]{}, 051601(R) (2003).
J. Pouthas [*et al.*]{}, Nucl. Instr. Meth. in Phys. Res. [**A357**]{}, 418 (1995).
K. Turz[ó]{} [*et al.*]{}, Eur. Phys. J. A [**21**]{}, 293 (2004).
J. Pochodzalla [*et al.*]{}, Phys. Rev. Lett. [**75**]{}, 1040 (1995).
Hongfei Xi [*et al.*]{}, Z. Phys. A 359, 397 (1997); Eur. Phys. J. A [**1**]{}, 235 (1998).
J.B. Natowitz [*et al.*]{}, Phys. Rev. C [**65**]{}, 034618 (2002).
D.V. Shetty [*et al.*]{}, Phys. Rev. C [**70**]{}, 011601(R) (2004).
C. Volant [*et al.*]{}, Nucl. Phys. [**A734**]{}, 545 (2004).
T. Gaitanos [*et al.*]{}, Nucl. Phys. [**A732**]{}, 24 (2004), and private communication.
A.S. Botvina and I.N. Mishustin, Phys. Rev. C [**63**]{}, 061601(R) (2001).
[^1]: deceased
|
---
abstract: 'Different techniques from machine learning are applied to the problem of computing line bundle cohomologies of (hypersurfaces in) toric varieties. While a naive approach of training a neural network to reproduce the cohomologies fails in the general case, by inspecting the underlying functional form of the data we propose a second approach. The cohomologies depend in a piecewise polynomial way on the line bundle charges. We use unsupervised learning to separate the different polynomial phases. The result is an analytic formula for the cohomologies. This can be turned into an algorithm for computing analytic expressions for arbitrary (hypersurfaces in) toric varieties.'
author:
- 'Daniel Klaewer, Lorenz Schlechter'
bibliography:
- 'MLCohom.bib'
title: Machine Learning Line Bundle Cohomologies of Hypersurfaces in Toric Varieties
---
Introduction
============
The idea of applying concepts from data science to problems naturally appearing in string phenomenology is of course not new. The emergence of the string landscape, the set of effective field theories arising from some consistent string construction, has quickly led people to consider statistical tools to tackle its enormous size [@Douglas:2003um].
Following early work on genetic algorithms [@Abel:2014xta; @Allanach:2004my], with techniques from data science and machine learning recently becoming important for the solution of many real world problems, there has been an increased interest in applying machine learning wisdom to the exploration of the landscape [@He:2017aed; @He:2017set; @Krefl:2017yox; @Ruehle:2017mzq; @Carifio:2017bov][@Carifio:2017nyb; @Wang:2018rkk; @Bull:2018uow; @Erbin:2018csv].
We want to stress here that while e.g. the number of flux vacua is numerically huge (the famous estimated lower bound being $10^{500}$), we are still dealing with a possibly finite and likely countable set whose members can be described by a vector with integral entries. Often the answer to many interesting questions about the vacua can also be described by a set of integers, as is the case for yes/no questions of the type “Is my vacuum supersymmetric?” or “Does my vacuum contain a tachyon?”, but also questions such as “How many generations of SM-fermions does my vacuum contain?”. We want to address the question of whether such (complicated) mappings between vectors of integers can be naturally modelled by neural networks (NNs). A particular such question is:
*“Given a (hypersurface in a) toric variety $X$, what are the ranks $h^\bullet$ of the line bundle cohomology groups $H^\bullet(\mathcal{O}_X(D))$, for some toric divisor $D$?”*
In many cases the answer to this question is provided by the cohomCalg program [@cohomCalg:Implementation], which supports us with data sets on which neural networks can be trained.
As a first approach, we try to directly train a neural network to reproduce the cohomologies. We study first whether this approach can work for the toric ambient spaces and also hypersurfaces therein. The possibility of interpolating and extrapolating the data from a training set is then investigated. This approach is very similar to the one adopted in [@Ruehle:2017mzq], where genetic algorithms were employed to optimise a neural network for regression of line bundle cohomologies.
Our second approach consists of a two step procedure. First we cluster the cohomology data using unsupervised learning. The resulting clusters turn out to have a simple polynomial formula for their cohomologies. The two steps lead to an analytic expression for the rank of the line bundle cohomology groups.
On the way we solve a shortcoming of the cohomCalg algorithm by implementing some of the mappings in the Koszul complex.
After completion of this work we became aware of [@Constantin:2018hvl], which deals with the similar problem of computing line bundle cohomologies in the case of CICYs in products of projective spaces.
Line Bundles on Hypersurfaces in Toric Varieties
================================================
A vast majority of the Calabi-Yau manifolds that are used in string constructions are obtained as complete intersections in toric varieties, the anticanonical hypersurfaces forming a subset of these. Although our techniques are expected to generalise to the case of complete intersections, we will treat only the case of hypersurfaces as a proof of principle.
Toric varieties can be described in many different ways, one of which is the gauged linear sigma model (GLSM) [@Witten:1993yc]. The GLSM is an $\mathcal{N}=(2,2)$ SUSY gauge theory in two dimensions, with chiral superfields $x_i$, $i=1,\dots,I$, representing homogeneous coordinates of the toric space. The GLSM features $R$ abelian gauge symmetries, and the charge vectors $Q_i^{(r)}$, $r=1,\dots,R$ encode the weights under $(\mathbb{C}^*)^R$ rescalings of the homogeneous coordinates. Analogous to the case of projective spaces, the resulting toric variety $X$ is then formed as a quotient of $\mathbb{C}^I$ by the homogeneous rescalings, after cutting out a suitable fixed point set $F$ $$X=\frac{\mathbb{C}^I-F}{(\mathbb{C}^*)^R}\;.$$ This fixed point set depends on the choice of the FI parameters in the gauge theory. Solvability of the D-terms will result in the constraint that certain subsets $\mathcal{S}_\alpha$ of the full set of coordinates should not vanish simultaneously $$\mathcal{S}_\alpha=\left\{x_{\alpha_1},\dots,x_{\alpha_{|\mathcal{S}_\alpha|}}\right\}\,\qquad,\alpha=1,\dots,N\;.$$ The extracted set then takes the form $$F=\bigcup\limits_{\alpha=1}^N \left\{x_{\alpha_1}=\dots=x_{\alpha_{|\mathcal{S}_\alpha|}}=0\right\}\;.$$ The ring-theoretic way of handling the information in the vanishing set is given by the Stanley-Reisner ideal $$\text{SR}=\left<\tilde{\mathcal{S}}_1,\dots,\tilde{\mathcal{S}}_N\right>\;.$$ Here the generators $\tilde{\mathcal{S}}_\alpha=\prod_{i=1}^{|\mathcal{S}_\alpha|} x_{\alpha_i}$ are monomials constructed out of the coordinates in the sets $\mathcal{S}_\alpha$.
The homogeneous coordinates of a toric variety provide us with a natural open covering in terms of the sets $U_i=\{x|x_i\neq 0\}$ as well as a set of divisors $D_i=\{x|x_i=0\}$. Due to the equivalence between line bundles and divisors, line bundles on a toric variety take the form of tensor products of the $L_i=\mathcal{O}_X(D_i)$ and their inverses. We can also classify line bundles in terms of their GLSM charges as $$L_i=\mathcal{O}_X\left(Q_i^{(1)},\dots,Q_i^{(R)}\right)\;.$$ In a toric variety, the anticanonical hypersurface $H=\sum_iD_i$ has vanishing first Chern class and is thus Calabi-Yau. Line bundles $\mathcal{O}_X(D)$ on the ambient space descend to line bundles on this hypersurface $\mathcal{O}_H(D)$. The two are related by an exact sequence of sheaves, the Koszul sequence $$\label{eq:Koszulseq}
0\to\mathcal{O}_X(D-H)\stackrel{m}{\to}\mathcal{O}_X(D)\stackrel{res}{\to}\mathcal{O}_H(D)\to 0\;.$$ Here $m$ is multiplication with the defining section of $O_X(H)$ of the hypersurface and $res$ is the restriction map to it. Our main interest are the sheaf cohomology groups $H^\bullet(\mathcal{F})$ for the sheaves $\mathcal{F}=\mathcal{O}_X(D),\,\mathcal{O}_H(D)$. In principle the ambient space cohomology can be computed in a brute force way as the Čech cohomology $\check{H}^\bullet(\mathcal{F},\mathcal{U})$ with respect to the open cover $\mathcal{U}$ defined by the $U_i$.
The cohomCalg Algorithm {#sec:cohomcalg}
=======================
A more elegant and fast way to compute the sheaf cohomology is given by the cohomCalg algorithm, which has been conjectured in [@CohomOfLineBundles:Algorithm], proven in [@Rahn:2010fm] and implemented in [@cohomCalg:Implementation]. The algorithm gives generators of the cohomology groups in terms of *rationoms*, which are just monomials of the form $$\label{eq:rationom}
\frac{T(\vec{x})}{(\prod y_i)\cdot W(\vec{y})}\;,$$ where the vectors $\vec{x},\vec{y}$ refer to a splitting of the homogeneous coordinates as follows. The power set of the Stanley-Reisner ideal[^1] is decomposed into its k-element subsets as $$P(\text{SR})=\bigcup\limits_{k=0}^{|\text{SR}|}P_k(\text{SR})\;.$$ One defines index-sets $A=\{\alpha_1,\dots,\alpha_k\}\subset\{1,\dots,|\text{SR}|\}$ which allow us to label the elements of the sets $P_k(\text{SR})$ as $\mathcal{P}^k_A=\{\tilde{\mathcal{S}}_{\alpha_1},\dots,\tilde{\mathcal{S}}_{\alpha_k}\}$. For a given $\mathcal{P}^k_A$, the union of all its associated $\mathcal{S}_{\alpha_i}$ is denoted as $$\mathcal{Q}_A^k=\bigcup\limits_{i=1}^k\mathcal{S}_{\alpha_i}\;,$$ which is just the collection of all coordinates that appear in the set $\mathcal{P}^k_A$. To this set, a degree $N^k_A$ is assigned: $$N^k_A=\left|\mathcal{Q}^k_A\right|-k\,.$$
For a given $\mathcal{Q}=\mathcal{Q}^k_A$ the variables $\vec{y}$ that appear in the denominator of the rationom are now defined to be those that are contained in $\mathcal{Q}$, whereas the $\vec{x}$ coordinates are taken from the complement. For this given $\mathcal{Q}$ we can now construct all possible rationoms that match the GLSM charge of the divisor $D$ that defines the line bundle $\mathcal{O}_X(D)$. Each rationom contributes a generator of the cohomology group $H^N(X,\mathcal{O}_X(D))$, with $N=N^k_A$.
In some cases a single rationom will contribute multiple generators to the cohomology. This is associated with the calculation of a certain remnant cohomology, which has been clarified in [@Rahn:2010fm]. Although these multiplicities are implemented in the cohomCalg program, this complication will not appear in the examples that we study.
Once the sheaf cohomology of $X$ is computed, one can use the fact that the short exact sequence of sheaves induces a long exact sequence of cohomology groups $$\begin{aligned}
\cdots&\stackrel{\delta}{\to} H^i(\mathcal{O}_X(D-H))\stackrel{m_*}{\to}H^i(\mathcal{O}_X(D))\stackrel{res_*}{\to}\\
&\stackrel{res_*}{\to}H^i(\mathcal{O}_H(D))\stackrel{\delta}{\to}H^{i+1}(\mathcal{O}_X(D-H))\stackrel{m_*}{\to}\cdots
\end{aligned}\;,$$ where $\delta$ is the connecting homomorphism, in order to deduce the sheaf cohomology $H^\bullet(H,\mathcal{O}_H(D))$ on the hypersurface.
The reference implementation of the cohomCalg algorithm [@cohomCalg:Implementation] does not implement the maps in the Koszul-sequence and hence relies on the exactness of the sequence in order to derive the ranks of the cohomology groups. This works by first cutting the long sequence into shorter sequences at locations where zeros occur and then using the fact that for an exact sequence $$0\to G_1\to\dots\to G_n\to0$$ the ranks satisfy $\sum_{j=1}^n (-1)^j \text{rk}(G_j)=0$.
The above approach works as long as there are sufficiently many zeros in the sequence. In order to train our classifiers we need the cohomology ranks of *all* line bundles corresponding to a certain interval $[-\delta,+\delta]$ in charge space. Generically only some of those ranks can be solved by the cohomCalg program, whereas a large portion is left undetermined.
We improve the algorithm by cutting the sequences also at the multiplication maps $m_*$ as $$\begin{aligned}
\cdots\to H^i(\mathcal{O}_X(D-H))&\stackrel{m_*}{\to}\text{image}(m_*)\to 0\\
0\to\text{coker}(m_*)&\stackrel{res_*}{\to}H^i(\mathcal{O}_H(D))\to\cdots
\end{aligned}\;.$$
The price for inserting an additional zero is now that we have to compute the (rank of the) image of the map $m_*$. The induced map $m_*$ on the cohomologies is realised in this setting by multiplication of the rationom representatives of the cohomology generators with the defining section $s\in\Gamma(X,\mathcal{O}_X(H))$ of the hypersurface. If a resulting monomial is not contained in the set of rationoms spanning the codomain, it is equivalent to zero in cohomology.
For definiteness we will always consider the hypersurface to be at the large complex structure point of its moduli space. This means that the map $m$ is just multiplication by the monomial $x_1\cdots x_I$.
In all cases studied the resulting exact sequences could now be solved for the cohomologies on the hypersurface. If this would have not been the case, we could have also introduced additional cuts at the restriction maps.
The procedure suggests a natural generalization to the case of CICYs in toric varieties for which there exists a similar Koszul sequence, the mappings of which can be implemented in an analogous way. We leave an implementation of this more general case for future work.
Let us outline the calculation in an example. The anticanonical hypersurface in $\mathbb{P}^3_{1112}$ is a K3 surface. The toric resolution of this is described by the charge vector $$Q=\bordermatrix{
&x_1&x_2&x_3&x_4&x_5\cr
&1&1&1&0&2\cr
&0&0&0&1&1
}\;,$$ with Stanley-Reisner ideal $\text{SR}=\left<x_1 x_2 x_3, x_4 x_5\right>$. We want to compute the image of the map $$H^1\left(\mathcal{O}(-3,-4)\right)\stackrel{m_*}{\to}H^1\left(\mathcal{O}(2,-2)\right)\;,$$ where we have introduced a basis $D_1=\{x_1=0\}\sim\{x_2=0\}\sim\{x_3=0\}$ and $D_2=\{x_4=0\}$ of divisors such that $\{x_5=0\}=2D_1+D_2$ and use the corresponding dual basis for the first cohomology. Using the cohomCalg algorithm we determine the generators of both cohomology groups to be $$\begin{aligned}
H^1\left(\mathcal{O}(-3,-4)\right)&=\left<\frac{(\text{deg } 1 \text{ in }x_{1,2,3})}{x_4^2x_5^2},\frac{(\text{deg } 3 \text{ in }x_{1,2,3})}{x_4x_5^3}\right>\\
H^1\left(\mathcal{O}(2,-2)\right)&=\left<\frac{(\text{deg } 4 \text{ in }x_{1,2,3})}{x_4x_5}\right>\;.
\end{aligned}\;$$ Under the map $m=\cdot\prod_i x_i$ it is clear that only the first class of generators with denominator $x_4^2 x_5^2$ will be mapped to rationoms that exist in $H^1\left(\mathcal{O}(2,-2)\right)$. The second class of generators with denominator $x_4x_5^3$ is mapped to monomials without $x_4$ in the denominator, which do not have the correct singularity structure to be members of $H^1\left(\mathcal{O}(2,-2)\right)$ and hence are cohomologous to zero. As a result we find that $\text{rk}(\text{im}(m_*))=3$.
For an arbitrary point in the complex structure moduli space the map $m_*$ will of course be more complicated. The polynomials that result from multiplication of the rationoms in $H^1\left(\mathcal{O}(-3,-4)\right)$ with the defining polynomial of the hypersurface will have to be reduced modulo the rationoms in the target cohomology. While this is straightforward to implement, it is computationally more expensive and we restrict to the large complex structure point to illustrate our methods.
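The determination of $\text{rk}(\text{im}(m_*))$ in this example is simple bookkeeping with exponent vectors. The sketch below is ours: rationoms are encoded as integer exponent vectors for $(x_1,\dots,x_5)$, with negative entries for the denominator variables, and the helper names are our own choices.

```python
from itertools import combinations_with_replacement

def monomials(deg, positions, n=5):
    """Exponent vectors of total degree `deg` supported on the coordinates `positions`."""
    out = []
    for combo in combinations_with_replacement(positions, deg):
        e = [0] * n
        for p in combo:
            e[p] += 1
        out.append(tuple(e))
    return out

def add(e, f):
    return tuple(a + b for a, b in zip(e, f))

# Generators of H^1(O(-3,-4)): degree-1 numerators over x4^2 x5^2
# and degree-3 numerators over x4 x5^3.
source = ([add(m, (0, 0, 0, -2, -2)) for m in monomials(1, (0, 1, 2))]
          + [add(m, (0, 0, 0, -1, -3)) for m in monomials(3, (0, 1, 2))])

hyp = (1, 1, 1, 1, 1)                # m_* multiplies by x1 x2 x3 x4 x5

def in_target(e):
    """Generator of H^1(O(2,-2)): numerator in x1, x2, x3 over exactly x4 x5."""
    return e[3] == -1 and e[4] == -1 and all(v >= 0 for v in e[:3])

image = {add(e, hyp) for e in source if in_target(add(e, hyp))}
print(len(source), len(image))       # 13 source generators, image of rank 3
```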
Machine Learning Cohomologies
=============================
The aim of this paper is to examine the possible application of neural networks in the computation of line bundle cohomologies of toric varieties and hypersurfaces therein. There are different possible approaches. In [@Ruehle:2017mzq] genetic algorithms were used to evolve neural networks, which were then used to perform a regression on the map between the line bundle charges and cohomologies. The resulting NNs reproduced the cohomology ranks with 72%/83% accuracy after training. On the other hand, the authors of [@Bull:2018uow] used a classification neural network to learn the Hodge numbers of the Kreuzer-Skarke list and achieved an $80\%$ validation rate in predicting the cohomologies. They also used a regressional neural network to solve the same problem with worse results. While these approaches work in their respective areas of application, they require large data sets and fail at the extrapolation of large numbers.
Neural Networks for Classification
----------------------------------
A neural network for a classification problem maps an input vector via several hidden layers, which normally are taken to be ReLU, to a fixed number of output nodes representing the classes. The output is normalised to sum up to $1$ and interpreted as a probability; this is typically implemented by applying a softmax layer. The prediction is the class of highest probability. The loss function has to be proportional to the deviation from the true result and for classification networks is often taken to be the cross entropy. This approach has the severe limitation that one has to a priori fix the possible outcomes, as every possible value of the $h^i$ has a corresponding node. The authors of [@Bull:2018uow] avoided this problem by declaring all $h^i>50$ as large and not attempting to classify these. In the examples we will be discussing, the ranks can become arbitrarily large, and this classification no longer makes sense. While this approach is easy to use, the rather bad results and the limitation to very small ranks render it uninteresting.
Neural Networks for Regression
------------------------------
Another approach is a regressional neural network. Here the input vector is again mapped by several hidden ReLU layers to an output vector. This time the output vector is not normalised but takes any value in $\mathbb{R}^n$ and is interpreted as the ranks by rounding to the nearest integer. The loss function for training is taken to be the mean squared error of the prediction compared to the real ranks. This approach does not put a hard upper bound on possible ranks, but the precision of the result is limited by the number of neurons and the floating point precision used. Most standard implementations of NNs use only single precision, resulting in a precision of the ranks of $10^{-6}$. Thus if the ranks exceed $10^6$, the error becomes order one and the NN predicts wrong numbers.
Moreover, the NN only learns an interpolation of the given data. Therefore, if one trains the network on a data set where the entries of the charge vector are in a certain range, the predictions outside of this range are unreliable.
To illustrate these findings, we take the ambient space $dP_3$ and the hypersurface $\mathbb{P}_{11222}[8]$. We randomly generated $50000$ data points with line bundle charges in the range $[-50,50]$. In the case of $dP_3$, the cohomologies can be learned by an NN consisting of 3 hidden ReLU layers with 500 neurons each to a precision of $99.85\%$ within one hour. In the case of $\mathbb{P}_{11222}[8]$, this approach fails. Even large nets produce only $0.1\%$ correct results. The reasons are the size of the ranks, which already exceed $2\cdot10^7$ in this example, and the high non-linearity of the problem. Sophisticated preprocessing of the data increased this to $55\%$ accuracy after 10 minutes of training, which is still not satisfactory. Thus for this kind of problem another approach is needed.
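For orientation, a regression network of the size quoted above can be set up in a few lines. The sketch below uses Keras merely as an example framework; the file names, the number of epochs and the batch size are placeholders, and the arrays of charges and ranks are assumed to have been produced beforehand with cohomCalg.

```python
import numpy as np
from tensorflow import keras

X = np.load("charges.npy")                     # (n_samples, n_charges), placeholder file
y = np.load("ranks.npy")                       # (n_samples, n_cohomology_groups)

model = keras.Sequential([
    keras.Input(shape=(X.shape[1],)),
    keras.layers.Dense(500, activation="relu"),
    keras.layers.Dense(500, activation="relu"),
    keras.layers.Dense(500, activation="relu"),
    keras.layers.Dense(y.shape[1]),            # linear output for the regression
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=200, batch_size=256, validation_split=0.1)

pred = np.rint(model.predict(X))               # interpret outputs by rounding to integers
print("fraction of exactly reproduced rank vectors:",
      np.mean(np.all(pred == y, axis=1)))
```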
An Algorithm to Determine Analytic Formulas
===========================================
The algorithm described in section \[sec:cohomcalg\] allows the determination of the ranks of the cohomology groups for given values of the line bundle charges. In this section an algorithm using unsupervised learning is presented which allows the identification of analytic expressions.
First a data set $S$ of the cohomologies is calculated for all values of the line bundle charges $\vec{m}$ satisfying $|m_i|\le a \;\forall i$ for a fixed value of $a$. Tests have shown that $a=25$ is sufficient for the algorithm to find the analytic formulas.
The algorithm uses the observation that the $h^i$ have a distinct phase structure. In the interior of one phase the $h^i$ are polynomial functions of the line bundles of maximal degree $d$, where $d$ is the dimension of the variety. If one can identify the phase structure, it is then easy to perform a polynomial fit. This represents a classification problem. As one a priori does not know the phase structure, unsupervised learning has to be applied.
In unsupervised learning one faces the task of grouping data points into different sets without specifying any conditions. This leads to a clustering of similar data. The only input is the data to classify and the maximal number of sets to be used. We applied the pre-implemented ClusterClassify function of Mathematica 11.3 with 200 classes and “Quality” as optimization goal as well as “KMeans” as the method to generate the classifiers and the LinearModelFit function for the polynomial fits.
In the interior of one phase, the $d$-th derivatives of the $h^i$ with respect to $\vec{m}$ are constant and the $(d+1)$-th derivatives vanish. As the $h^i$ are only defined for integer $\vec{m}$, the data forms a lattice. The derivatives are therefore calculated using the central difference scheme with a lattice spacing of one. This leads to a non-vanishing $(d+1)$-th derivative exactly at the phase boundaries. The first step is to remove the boundaries out of the data set $S$. To do so a cluster classifier with a very large number of classes is trained on the data set $$\left\{\vec{m}\;,\;{\partial^{d+1} h^i\over \partial^{d+1} m_1}\;,\;{\partial^{d+1} h^i\over \partial^{d} m_1\partial m_2}\;,.....\;,\;{\partial^{d+1} h^i\over \partial^{d+1} m_R}\right\}\;,$$ where $i=0,\dots,d$ runs over all cohomology groups. This set takes for a point inside a phase the form $$\{\vec{m},0,0,0,.....,0\}\;$$ and for a point at a phase boundary at least one of the latter entries is non-vanishing. This leads to a classification where all data points which lie in the interior of a phase are classified into one set and various sets of boundary points. For large enough line bundle charges the interior will always be the largest set. The boundaries are simply thrown away. Tests show that the classification works better for a small dimensional space. The number of partial derivatives increases with the degree $d$ and the number of line bundle charges. Therefore this step was divided into several classification steps. First one trains one classifier on a subset of the derivatives of degree $d+1$ and removes the boundary. Then a second classifier is trained on the next subset and so on. As the training of one classifier takes only seconds, this is not a huge performance loss but drastically improves the result. In the examples presented in this paper we used a splitting into two randomly chosen subsets of equal size.
With the remaining points forming the interior of the phases the set $$S_3=\left\{\vec{m}\;,\;{\partial^{d} h^i\over \partial^{d} m_1}\;,\;{\partial^{d} h^i\over \partial^{d} m_1\partial m_2}\;,.....\;,\;{\partial^{d} h^i\over \partial^{d} m_R}\right\}$$ is formed and a second classifier trained on this set. The set $S_3$ is, in contrary to the original data set S, not connected in the $\vec{m}$, which improves the classification and is the reason for the two step procedure. This now classifies the phase structure of the problem. The number of allowed classes is again taken to be very large. While it can happen that one phase is grouped into two classes, this does not pose any problem as in this case the polynomials obtained will agree and the phases can be merged later on.
The final step is to perform the polynomial fit of degree $d$ on each set and for each $h^i$. Sets with identical polynomials for all $h^i$ are then merged. This concludes the algorithm; to summarise (a schematic implementation sketch is given after the list):
1. Calculate a set of data points using the extended cohomCalg.
2. Determine the $(d+1)$-th derivatives of these points.
3. Classify the data using these derivatives.
4. Determine the $d$-th derivatives of the remaining data points.
5. Classify the data using these derivatives.
6. Perform a polynomial fit of degree $d$ on each set for each $h^i$.
7. Merge sets with identical polynomials.
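The following is a minimal end-to-end sketch of steps 2–7 in Python, assuming the ranks $h^i(\vec m)$ of step 1 have already been computed on a full rectangular grid of charges. Scikit-learn's KMeans stands in for Mathematica's ClusterClassify and a least-squares fit for LinearModelFit, so all function names and numerical choices below are our own illustrative assumptions rather than the setup used in the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

def fit_phases(M, H, d=2, n_clusters=50, tol=1e-8):
    """M: (N, 2) integer charges on a full grid; H: (N, d+1) ranks h^0..h^d.
    Returns cluster labels of the interior points and a quadratic fit per (phase, h^i)."""
    m_vals, n_vals = np.unique(M[:, 0]), np.unique(M[:, 1])
    shape = (len(m_vals), len(n_vals))
    grids = [H[:, i].reshape(shape).astype(float) for i in range(H.shape[1])]

    def derivs(g, order):                       # all central differences of a given order
        out = [g]
        for _ in range(order):
            out = [x for f in out for x in np.gradient(f, 1.0)]
        return out

    # steps 2-3: non-vanishing (d+1)-th derivatives flag phase boundaries; keep the interior
    boundary = np.zeros(shape, dtype=bool)
    for g in grids:
        for f in derivs(g, d + 1):
            boundary |= np.abs(f) > tol
    interior = ~boundary.reshape(-1)

    # steps 4-5: cluster the interior points on their d-th derivatives
    feats = np.column_stack([f.reshape(-1) for g in grids for f in derivs(g, d)])
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(feats[interior])

    # step 6: fit  h(m, n) = c . (1, m, n, m^2, m n, n^2)  on each cluster and each h^i
    def design(X):
        m, n = X[:, 0].astype(float), X[:, 1].astype(float)
        return np.column_stack([np.ones_like(m), m, n, m**2, m*n, n**2])
    polys = {}
    for lab in np.unique(labels):
        sel = labels == lab
        for i in range(H.shape[1]):
            c, *_ = np.linalg.lstsq(design(M[interior][sel]), H[interior][sel, i], rcond=None)
            polys[(lab, i)] = c          # in practice the coefficients come out (close to) rational
    return labels, polys                 # step 7: merge clusters whose polynomials all agree
```

In practice one then merges clusters whose fitted polynomials agree (step 7) and queries the trained classifier to decide which polynomial to apply to a new $\vec m$.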
We note that this algorithm requires no input besides the geometric data describing the variety and can therefore be completely automated. The only thing which has to be done by hand is to extract the boundaries of the phases, as the classifier does not encode them in closed form. This is quite tedious, but for practical purposes one does not need the boundary functions: one can use the classifier to identify in which phase a given $\vec{m}$ lies and apply the polynomial of that phase. For convenience we have added the phase boundaries to the tables.
As a non-trivial test of the procedure we calculated the Euler characteristic of the examples by taking the alternating sum of the fitted polynomials and compared it to the Euler characteristic as obtained from the Hirzebruch-Riemann-Roch theorem. The two expressions agree in all examples and phases.
In the following sections this algorithm is applied to some examples.
Line Bundles on Toric Varieties
===============================
We start with an example where the analytic expressions are well known, the del Pezzo surface $dP_1$. On the one hand this provides an easy cross-check of the results, and on the other hand it is a simple example with only 3 phases.
Using cohomCalg, we generate a data set of the cohomology ranks with the line bundle charges $(m,n)$ in the range $[-25,25]$. These are shown in figure \[dp1fig1\]. Applying the unsupervised learning to the third derivatives cuts out two phase boundaries, where the underlying function describing the ranks is non-differentiable. The second cluster analysis then classifies the remaining points, using the second derivatives, into $6$ phases, three pairs of which have identical polynomials for $h^1$. The result is shown in figure \[dp1fig2\].
![$h^1\left(\mathcal{O}(m,n)\right)$ of $dP_1$.[]{data-label="dp1fig1"}](dP1fig1.pdf)
![Classification result for $h^1\left(\mathcal{O}(m,n)\right)$ of $dP_1$.[]{data-label="dp1fig2"}](dP1fig2.pdf)
Fitting a polynomial of degree $2$ to the ranks in each of these phases results in the polynomials listed in table \[tabledP1\]. These agree with the known analytic expressions, see e.g. [@CohomOfLineBundles:Algorithm].
Phase Polynomial
----------------------------------------------------------------------------------------------- -----------------------------------------------------
$\begin{aligned}(n\leq -2&\land m\geq 0)\\\lor (n\geq 0&\land m\leq -3)\end{aligned}$ $-1-m-{n\over 2}-mn+{n^2\over 2}$
$\begin{aligned}(n\leq -2&\land n+1\leq m<0)\\\lor (n\geq 0&\land -3<m\leq n-2)\end{aligned}$ ${m\over 2}+{m^2\over 2}-{n\over 2}-mn+{n^2\over2}$
else $0$
: Polynomials for $h^1\left(\mathcal{O}(m,n)\right)$ in the case of $dP_1$.[]{data-label="tabledP1"}
Line Bundles on Hypersurfaces
=============================
We now turn to the more complicated problem of finding analytic expressions for line bundle cohomologies of hypersurfaces in toric varieties. As an example for a hypersurface we take the K3 space $\mathbb{P}^3_{1112}[5]$. This hypersurface has two line bundle charges, so that $\vec{m}=(m,n)$. The expected degree of the polynomials is $d=2$. Figure \[K3fig1\] shows the ranks of the zeroth cohomology for different values of $m$ and $n$.
![$h^0\left(\mathcal{O}(m,n)\right)$ of $\mathbb{P}^3_{1112}[5]$.[]{data-label="K3fig1"}](K3fig1.pdf)
At first glance this seems to consist of 3 phases, but applying the algorithm described in the last section reveals that there are actually 6. Figure \[K3fig2\] shows the result of the second classification, and the fitted polynomials can be found in table \[tablep1112\]. One nicely sees the cut boundaries and phases. The separation between the orange and brown phase seems redundant from the point of view of $h^0$, but is necessary because of the higher cohomology groups. Especially interesting is the subdivision of the yellow/purple and red/green phases into even and odd $m$, which are described by different polynomials. The phase structure is thus not defined by linear functions of $m$ and $n$ alone. If one tried a polynomial fit on the whole of these phases instead of separating into even/odd, one would not obtain rational coefficients. E.g. in the yellow/purple phase the polynomials are $\frac{5 m^2}{4}+2$ for $m$ even and $\frac{5 m^2}{4}+\frac{7}{4}$ for $m$ odd. If one mixes these phases, the interpolating polynomial obtained is $1.80407+0.0131771\, m+1.24945\, m^2$, which obviously does not reproduce any of the cohomologies correctly and cannot be extrapolated.
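As a small numerical illustration of this point (our own check, not taken from the paper), one can fit a single quadratic to the values $\frac{5 m^2}{4}+2$ for $m$ even and $\frac{5 m^2}{4}+\frac{7}{4}$ for $m$ odd and observe that the fitted coefficients are no longer rational:

```python
import numpy as np

m  = np.arange(1, 26)
h0 = np.where(m % 2 == 0, 5*m**2/4 + 2, 5*m**2/4 + 7/4)   # mixed even/odd data

coeffs = np.polyfit(m, h0, deg=2)    # single quadratic over the mixed phase
print(coeffs)                        # roughly [1.25, 0.01, 1.8]: close to, but not exactly, rational
```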
![$h^0\left(\mathcal{O}(m,n)\right)$ of $\mathbb{P}^3_{1112}[5]$ separated into phases.[]{data-label="K3fig2"}](K3fig2.pdf)
Phase Polynomial
------------------------------------ -------------------------------------------------
$m<0,n>{m\over 2}$ $0$
$m<0,n<{m\over 2}$ $\frac{m^2}{2}-2 m n-\frac{3 m}{2}+2 n^2+3 n+1$
$m>0,n>{m\over 2}, m \text{ even}$ $\frac{5 m^2}{4}+2$
$m>0,n>{m\over 2}, m \text{ odd}$ $\frac{5 m^2}{4}+\frac{7}{4}$
$m>0,0<n<{m\over 2}$ $m^2+m n-n^2+2$
$m>0,n<0$ $m^2-2 m n-3 m+2 n^2+3 n+2$
: Polynomials for $h^0\left(\mathcal{O}(m,n)\right)$ in the case of $\mathbb{P}^3_{1112}[5]$.[]{data-label="tablep1112"}
Another interesting example is the octic $\mathbb{P}^4_{11222}[8]$. Here we expect the polynomials to be of degree $d=3$. Figures \[P11222fig1\] and \[P11222fig2\] show again the input data for $h^0$ and the result after classification.
![$h^0\left(\mathcal{O}(m,n)\right)$ of $\mathbb{P}^4_{11222}[8]$.[]{data-label="P11222fig1"}](P11222fig1.pdf)
![Classification result for $h^0\left(\mathcal{O}(m,n)\right)$ of $\mathbb{P}^4_{11222}[8]$.[]{data-label="P11222fig2"}](P11222fig2.png)
The resulting polynomials for $h^0$ are listed in table \[tablep11222\].
--------------------------------------------------------------------------------------------------------------------------------------------
Phase Polynomial
-------------------------------- -----------------------------------------------------------------------------------------------------------
$m<0,n\in \mathbb{Z}$ $0$
$m>0,n<0$                        $\frac{m^3}{3}-2 m^2+\frac{11 m}{3}-1$
$m>0,n>{m\over 2}$ $-\frac{8 m^3}{3}+2 m^2 n+\frac{2 m}{3}+2 n$
$m>0,0<n<{m\over 2}$, $m$ even $\begin{aligned}
&\tfrac{m^3}{3}-2 m^2+\tfrac{11 m}{3}+\tfrac{n^3}{8}+\tfrac{3 n^2}{8}\\&+\tfrac{5 n}{4}-1
\end{aligned}$
$m>0,0<n<{m\over 2}$, $m$ odd $\begin{aligned}
&\tfrac{m^3}{3}-2 m^2+\tfrac{11 m}{3}+\tfrac{n^3}{8}+\tfrac{3 n^2}{8}\\&+\tfrac{7 n}{8}-\tfrac{11}{8}
\end{aligned}$
--------------------------------------------------------------------------------------------------------------------------------------------
: Polynomials for $h^0\left(\mathcal{O}(m,n)\right)$ in the case of $\mathbb{P}^4_{11222}[8]$.[]{data-label="tablep11222"}
We note that the only disadvantage of this procedure is that the boundaries are cut out, so it is not possible to determine the values on the boundaries themselves; this is reflected in the strict inequalities $>$ in the table instead of $\geq$. But as these are only a limited number of points, one can simply compare them with the results from cohomCalg. The tables for the other cohomology groups can be found in appendix \[app:A\].
Discussion
==========
We have presented a method for generating analytic expressions for all line bundle cohomology ranks of toric varieties or hypersurfaces therein. The algorithm takes as input the toric data in the form of GLSM charges and the Stanley-Reisner ideal. For the case of hypersurfaces we also need to specify a point in the complex structure moduli space, in the form of a polynomial that defines a section of $\mathcal{O}_X(H)$ and hence a specific hypersurface. For demonstrative purposes we worked at the large complex structure point, but the method carries over to other generic and special points in the moduli space.
The output is a classifier that separates the space of line bundles into different phases, such that within a phase each cohomology is described by a single polynomial in the line bundle charges. Since the polynomials have coefficients in $\mathbb{Q}$ the result can be considered exact and we obtain a formula for all of the line bundles. As a cross-check we see that the alternating sum of polynomials in each phase reproduces the Euler characteristic as calculated from the Hirzebruch-Riemann-Roch theorem.
It was crucial to realise that the local structure of the data is well understood, so that the problem of patching the local descriptions together into the global structure could be broken down into a simple classification problem.
We expect that our methods carry over to similar problems of this type. For example the case of line bundles on complete intersections in toric varieties should be completely analogous. We leave the interesting case of vector bundles of higher rank in the form of monad bundles for future work.
Acknowledgements {#acknowledgements .unnumbered}
================
We are indebted to Ralph Blumenhagen for discussions about line bundle cohomologies and machine learning which initiated this project as well as contributions in the early stages. We are also grateful to Harold Erbin for illuminating conversations about neural networks.
Line Bundle Cohomologies {#app:A}
========================
Phase $h^0$ $h^1$ $h^2$
-------- ------------------------------------------------- ------------------------------------------------------------- -------------------------------------------------
$I$ 0 0 $-n^2+n m+m^2+2$
$II$ $-n^2+n m+m^2+2$ 0 0
$III$ $\frac{5 m^2}{4}+\frac{7}{4}$ $3 n^2-3 n m-3 n+\frac{3 m^2}{4}+\frac{3 m}{2}+\frac{3}{4}$ $2 n^2-2 n m-3 n+\frac{m^2}{2}+\frac{3 m}{2}+1$
$IV$ $2 n^2-2 n m+3 n+\frac{m^2}{2}-\frac{3 m}{2}+1$ $3 n^2-3 n m+3 n+\frac{3 m^2}{4}-\frac{3 m}{2}+\frac{3}{4}$ $\frac{5 m^2}{4}+\frac{7}{4}$
$V$ $2 n^2-2 n m+3 n+\frac{m^2}{2}-\frac{3 m}{2}+1$ $3 n^2-3 n m+3 n+\frac{3 m^2}{4}-\frac{3 m}{2}+1$ $\frac{5 m^2}{4}+2$
$VI$ $\frac{5 m^2}{4}+2$ $3 n^2-3 n m-3 n+\frac{3 m^2}{4}+\frac{3 m}{2}+1$ $2 n^2-2 n m-3 n+\frac{m^2}{2}+\frac{3 m}{2}+1$
$VII$ $0$ $3 n^2-3 n m-3 n+3 m$ $2 n^2-2 n m-3 n+m^2+3 m+2$
$VIII$ $2 n^2-2 n m+3 n+m^2-3 m+2$ $3 n^2-3 n m+3 n-3 m$ 0
: Polynomials for all $h^i$ in the case of $\mathbb{P}^3_{1112}[5]$.
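As an illustration of the Euler-characteristic cross-check for this table (our own verification script, not part of the original text), one can confirm symbolically that the alternating sum $h^0-h^1+h^2$ of the tabulated polynomials gives the same expression in every phase; a few representative phases suffice to show the pattern:

```python
import sympy as sp

m, n = sp.symbols('m n')
R = sp.Rational
# (h^0, h^1, h^2) for a few phases, copied from the table above
phases = {
    'I':   (0, 0, -n**2 + n*m + m**2 + 2),
    'II':  (-n**2 + n*m + m**2 + 2, 0, 0),
    'III': (R(5, 4)*m**2 + R(7, 4),
            3*n**2 - 3*n*m - 3*n + R(3, 4)*m**2 + R(3, 2)*m + R(3, 4),
            2*n**2 - 2*n*m - 3*n + R(1, 2)*m**2 + R(3, 2)*m + 1),
    'VII': (0, 3*n**2 - 3*n*m - 3*n + 3*m,
            2*n**2 - 2*n*m - 3*n + m**2 + 3*m + 2),
}
for name, (h0, h1, h2) in phases.items():
    print(name, sp.expand(h0 - h1 + h2))   # every phase gives m**2 + m*n - n**2 + 2
```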
Phase $h^0$ $h^1$ $h^2$ $h^3$
-------- ----------------------------------------------------------------------------------------------- ------------------------------------------------------------------------------------ ------------------------------------------------------------------------------------- ------------------------------------------------------------------------------------------------
$I$ 0 0 0 $\frac{8 m^3}{3}-2 m^2 n-\frac{2 m}{3}-2 n$
$II$ $-\frac{8 m^3}{3}+2 m^2 n+\frac{2 m}{3}+2 n$ 0 0 0
$III$    $\frac{m^3}{3}-2 m^2+\frac{11 m}{3}+\frac{n^3}{8}+\frac{3 n^2}{8}+\frac{5 n}{4}-1$    $3 m^3-2 m^2 n-2 m^2+3 m+\frac{n^3}{8}+\frac{3 n^2}{8}-\frac{3 n}{4}-1$    0    0
$IV$     0    0    $-3 m^3+2 m^2 n-2 m^2-3 m-\frac{n^3}{8}+\frac{3 n^2}{8}+\frac{3 n}{4}-1$    $-\frac{m^3}{3}-2 m^2-\frac{11 m}{3}-\frac{n^3}{8}+\frac{3 n^2}{8}-\frac{5 n}{4}-1$
$V$ $\frac{m^3}{3}-2 m^2+\frac{11 m}{3}-2$ $3 m^3-2 m^2 n-2 m^2+3 m-2 n-2$ 0 0
$VI$ $\frac{m^3}{3}-2 m^2+\frac{11 m}{3}+\frac{n^3}{8}+\frac{3 n^2}{8}+\frac{7 n}{8}-\frac{11}{8}$ $3 m^3-2 m^2 n-2 m^2+3 m+\frac{n^3}{8}+\frac{3 n^2}{8}-\frac{9 n}{8}-\frac{11}{8}$ 0 0
$VII$ 0 0 $-3 m^3+2 m^2 n-2 m^2-3 m-\frac{n^3}{8}+\frac{3 n^2}{8}+\frac{9 n}{8}-\frac{11}{8}$ $-\frac{m^3}{3}-2 m^2-\frac{11 m}{3}-\frac{n^3}{8}+\frac{3 n^2}{8}-\frac{7 n}{8}-\frac{11}{8}$
$VIII$ 0 0 $-3 m^3+2 m^2 n-2 m^2-3 m+2 n-2$ $-\frac{m^3}{3}-2 m^2-\frac{11 m}{3}-2$
: Polynomials for all $h^i$ in the case of $\mathbb{P}^4_{11222}[8]$.
[^1]: Here power set means the set of all possible unions of generators of the ideal.
|
---
abstract: 'We explore the feasibility of measuring the atom-wall interaction using atomic clocks based on atoms trapped in engineered optical lattices. The optical lattice is normal to the wall. By monitoring the wall-induced clock shift at individual wells of the lattice, one would measure the dependence of the atom-wall interaction on the atom-wall separation. We rigorously evaluate the relevant clock shifts and show that the proposed scheme may uniquely probe the long-range atom-wall interaction in all three qualitatively-distinct regimes of the interaction: van der Waals (image-charge interaction), Casimir-Polder (QED vacuum fluctuations) and Lifshitz (thermal-bath fluctuations). The analysis is carried out for the atoms Mg, Ca, Sr, Cd, Zn, and Hg, with a particular emphasis on the Sr clock.'
author:
- 'A. Derevianko'
- 'B. Obreshkov'
- 'V. A. Dzuba'
title: ' Mapping out atom-wall interaction with atomic clocks '
---
Atomic clocks define the unit of time, the second. Usually environmental effects (e.g., stray fields) degrade the performance of the clocks. One may turn this around and, by measuring shifts of the clock frequency, characterize the interaction with the environment. The most fundamental experiments of this kind search for a potential variation of fundamental constants [@ForAshBer07], where the “environmental agent” is the fabric of the Universe itself, affecting the rate of ticking of atomic clocks. In this paper, we evaluate the feasibility of using atomic clocks to measure basic laws of atom-wall interactions. We find that a certain class of atomic clocks, the optical lattice clocks, is capable of accurately characterizing the atom-wall interaction. Moreover, this is a unique system where the atom-wall interaction may be probed in all three qualitatively-distinct regimes of the interaction: van der Waals (image-charge interaction), Casimir-Polder (QED vacuum fluctuations) and Lifshitz (thermal bath fluctuations). Understanding the basic atom-wall interaction [@BloDuc05] is important, for example, for probing a hypothetical “non-Newtonian” gravity at a $\mu$m scale (see, e.g., Ref. [@Ran02]). Also, with the miniaturization of atomic clocks, for example using atom chips [@GalHofSch09], the atom-wall interaction will become an important systematic issue.
In optical lattice clocks, ultracold atoms are trapped in the minima (or maxima) of the intensity of a standing laser wave (an optical lattice) operated at a certain “magic” wavelength [@KatTakPal03; @YeKimKat08]. The laser wavelength is tuned so that the differential light perturbation of the two clock levels vanishes exactly. Such ideas were experimentally realized [@TakHonHig05; @LeTBaiFou06; @LudZelCam08etal] for optical-frequency clock transitions in divalent atoms, such as Sr, yielding fractional accuracies at the $10^{-16}$ level [@LudZelCam08etal]. The clock transition is between the ground $^1S_0$ and the lowest-energy excited $^3P_0$ state. The $J=0$ spherical symmetry of the clock states makes the clock insensitive to stray magnetic fields and environmentally-induced decoherence.
An idealized setup for measuring the atom-wall interaction is shown in Fig. \[Fig:setup\]. The conducting surface of interest acts as a mirror for the laser beam normally incident on the surface. The resulting interference of the beams forms an optical lattice. The laser operates at a “magic” wavelength $\lambda_m$ specific to the atom (see Table \[Tab:atoms\]). For all tabulated magic wavelengths, atoms are attracted to maxima of the laser intensity and one could work with 1D optical lattices. The first pancake-shaped atomic cloud would form at a distance $\lambda_m/4$ from the mirror, and subsequent clouds are separated by $\lambda_m/2$. Ref. [@SorAlbFer09] discusses an experimental procedure for loading atoms into sites close to a mirror.
An advantage of working with optical lattices lies in the tight spatial confinement at the lattice sites. Commonly, the accuracy of determination of the atom-wall interaction is limited by the spatial extent of the atomic ensemble [@BloDuc05]. In optical lattices, the size of the ultracold-atom wave function can be reduced to a small fraction of the lattice laser wavelength.
![(Color online) Idealized setup for measuring atom-wall interaction with optical lattice clocks. Clouds of ultracold atoms are trapped in an optical lattice operating at a “magic” wavelength. By monitoring the wall-induced clock shift at individual trapping sites, one measures a dependence of the atom-wall interaction on the atom-wall separation. \[Fig:setup\]](setup-atom-wall.eps)
atom $\nu_{clock}$, Hz $\lambda_m$, nm $\Delta \alpha(0)$, a.u. $\Delta C_3$, a.u. $\beta_{vdW}$ $\beta_{CP}$ $\beta_L$
------ ------------------- ----------------- -------------------------- -------------------- --------------- -------------- -------------
Mg 6.55\[14\] 466 29 0.21 -3.1\[-12\] -7.9\[-13\] -1.0\[-13\]
Ca 4.54\[14\] 739 138 0.17 -8.8\[-13\] -8.6\[-13\] -1.8\[-13\]
Sr 4.29\[14\] 813 261 0.25 -1.1\[-12\] -1.2\[-12\] -2.6\[-13\]
Yb 5.18\[14\] 759 155 0.35 -1.5\[-12\] -7.6\[-13\] -1.6\[-13\]
Zn 9.69\[14\] 416 28 0.30 -4.2\[-12\] -8.2\[-13\] -9.4\[-14\]
Cd 9.03\[14\] 419 28 0.31 -4.6\[-12\] -8.7\[-13\] -1.0\[-13\]
Hg 1.13\[15\] 362 22 0.30 -5.5\[-12\] -9.8\[-13\] -9.8\[-14\]
Two earlier proposals, by the Florence [@SorAlbFer09] and Paris [@WolLemLam07] groups, considered trapping divalent atoms in optical lattices for studying the atom-wall interaction. In both proposals the lattices are oriented vertically and the ultracold atoms experience a combination of the periodic optical potential and the linear gravitational potential. In the Florence proposal [@SorAlbFer09], the atom-wall interaction modifies the Bloch oscillation frequencies of atomic wavepackets in this potential. In the Paris proposal [@WolLemLam07], laser pulses at different frequencies are used to create an interferometer with a coherent superposition of atomic wavepackets at different sites. Here we consider an alternative: by monitoring the clock shift at individual trapping sites, one measures the distance dependence of the atom-wall interaction.
[*Qualitative estimates –* ]{} As the separation $z$ between an atom and a wall increases, the atom-wall interaction evolves through several distinct regimes: (i) the chemical-bond region that extends a few nm from the surface, (ii) the van der Waals region, (iii) the retardation (Casimir-Polder) region, and (iv) the thermal (Lifshitz) zone. The chemical-bond region is beyond the scope of our paper and we focus on the three longer-range regimes of the interaction between a perfectly conducting wall and a spherically-symmetric atom.
Qualitatively, the van der Waals interaction arises due to an interaction of atomic electrons and nucleus with their image charges $$U_{vdW}\left( z\right) =-C_{3}\, z^{-3},\label{Eq:vdWlim}%$$ where the coefficient $C_{3}$ depends on an atomic state. It may be expressed in terms of the electric-dipole dynamic polarizability of the atom as $$C_{3}=\frac{1}{4\pi}\int_{0}^{\infty}\alpha\left( i\omega\right) d\omega \,.
\label{Eq:C3}$$
Eq.(\[Eq:vdWlim\]) assumes instantaneous exchange of virtual photons. A more rigorous consideration in the framework of QED leads to the Casimir-Polder limit [@CasPol48] $$U_{CP}\left( z\right) =-\frac{3}{8\pi} \, \hbar c\,\alpha\left( 0\right) \, z^{-4}.\label{Eq:CPlim}$$ Notice the appearance of the speed of light $c$ in this formula. A transition between the van der Waals and the retardation regions occurs at the length-scale $\hbar c/\Delta E_{a}$, where $\Delta E_{a}$ is a characteristic value of the atomic resonance excitation energy. Compared to the van der Waals interaction, the retardation potential has a steeper, $z^{-4}$, dependence on the atom-wall separation.
The Casimir-Polder interaction, Eq.(\[Eq:CPlim\]), is mediated by vacuum fluctuations of the electromagnetic field. At finite temperatures $T$, the populations of the vacuum modes are modified and a new length-scale, $\hbar c/(k_{B}T)$, appears. As shown by Lifshitz [@Lif56], at large separations the distance dependence of the interaction switches back to the inverse cubic dependence of the van der Waals interaction, Eq.(\[Eq:vdWlim\]), $$U_{L}\left( z\right) =-\frac{1}{4} \, k_{B}T\,\alpha\left( 0\right) \, z^{-3} \, .\label{Eq:Tlim}$$
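To get a feeling for where these regimes set in, here is a back-of-the-envelope evaluation of the two crossover length-scales (our own illustrative numbers; the characteristic Sr excitation energy used below is an assumption):

```python
# crossover length scales of the atom-wall interaction (SI units)
hbar, c, kB, eV = 1.054571817e-34, 2.99792458e8, 1.380649e-23, 1.602176634e-19

dE_Sr = 2.0 * eV                          # assumed characteristic excitation energy for Sr
print(hbar * c / dE_Sr * 1e9, "nm")       # vdW -> Casimir-Polder crossover, ~ 0.1 micron

for T in (77, 300, 600):                  # Casimir-Polder -> Lifshitz crossover
    print(T, "K:", hbar * c / (kB * T) * 1e6, "micron")   # ~ 30, 7.6, 3.8 micron
```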
Due to the interaction with the wall, both clock levels would shift. We may parameterize the resulting fractional clock shifts in the three regimes as $$\frac{\delta\nu}{\nu_\mathrm{clock}}\left( z, T\right) =
\begin{cases}
\beta_{vdW}\, \left( \frac{\lambda_{m}}{z}\right)^{3} \, , & \text{van der Waals,}\\
\beta_{CP} \, \left( \frac{\lambda_{m}}{z}\right)^{4} \, , & \text{Casimir-Polder,}\\
\beta_{L} \, \left( \frac{T}{300\,\mathrm{K}}\right) \left( \frac{\lambda_{m}}{z}\right)^{3} \, , & \text{Lifshitz.}
\end{cases}
\label{Eq:beta}$$ We evaluated the coefficients $\beta$ for the clock transitions in Mg, Ca, Sr, Yb, Zn, Cd, and Hg (see the discussion of the method later on). The results are presented in Table \[Tab:atoms\]. The estimates of Table \[Tab:atoms\] immediately show that the atom-wall interaction is a large effect, corresponding to $10^{-10}$ fractional clock shifts at the first well. This is roughly a million times larger than the demonstrated accuracy of the Sr clock [@LudZelCam08etal].
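As a quick numerical illustration of Eq. (\[Eq:beta\]) (a rough sketch using the Sr entries of Table \[Tab:atoms\]; the choice of which regime applies at a given well is only indicative), the fractional shift at the $j$-th lattice site can be estimated as follows:

```python
# rough fractional clock shifts for Sr at the first few lattice sites (Table values)
lam_m = 813e-9                                       # magic wavelength, m
beta_vdW, beta_CP, beta_L = -1.1e-12, -1.2e-12, -2.6e-13
T = 300.0                                            # surface temperature, K

for j in range(1, 6):
    z = lam_m / 4 + (j - 1) * lam_m / 2              # position of well j from the mirror
    shift_CP = beta_CP * (lam_m / z) ** 4            # Casimir-Polder estimate
    shift_L  = beta_L * (T / 300.0) * (lam_m / z) ** 3   # Lifshitz estimate
    print(j, f"{shift_CP:.1e}", f"{shift_L:.1e}")
# first well: ~ -3e-10 in the CP regime, consistent with the 1e-10-level estimate quoted above
```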
[*Rigorous consideration —* ]{} In general, as the atom-wall separation is varied, there is a smooth transition between the three interaction regimes. To properly describe the crossover regions, we employ an expression by @BabKliMos04, which may be represented as $$U\left( z,T\right) =-\frac{k_{B}T}{4z^{3}}\left[ \alpha\left( 0\right) +\sum_{l=1}^{\infty}\alpha\left( i\xi_{l}\right) I\left( \xi_{l}\frac{2z}{c},\frac{c}{2z\,\omega_{p}}\right) \right] \,, \label{Eq:VBabb}$$ where the atomic dynamic polarizability is convoluted with $$I\left( \zeta ,\chi \right) =\left( 1+\zeta ^{2}\chi ^{2}\right) \Gamma \left( 3,\zeta \right) +\zeta ^{4}\chi \Gamma \left( 0,\zeta \right) -3\zeta ^{2}\chi \Gamma \left( 2,\zeta \right) +2\zeta ^{4}\chi ^{2}\Gamma \left( 1,\zeta \right) -\zeta ^{6}\chi ^{2}\Gamma \left( -1,\zeta \right)$$ at the Matsubara frequencies $$\xi_{l}=\frac{2\pi}{\hbar} \, k_{B}T\,l,~~l=0,1,2,\dots,$$ $\Gamma\left( n,\zeta\right)$ being the incomplete gamma function. In addition to recovering the various limiting cases, Eq.(\[Eq:VBabb\]) also accounts for realistic properties of the conducting wall (described by the plasma frequency $\omega_{p}$).
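For orientation, a rough numerical sketch (our own, in atomic units) of the Matsubara sum in Eq. (\[Eq:VBabb\]) is given below; it assumes a single-oscillator model for $\alpha(i\omega)$ and reads $\Gamma(n,\zeta)$ as the upper incomplete gamma function, and the Sr and gold parameters are illustrative assumptions only.

```python
import numpy as np
from scipy.special import exp1

def Gamma(n, x):
    """Upper incomplete gamma function for the few orders needed in I(zeta, chi)."""
    if n == 3:  return (x**2 + 2*x + 2) * np.exp(-x)
    if n == 2:  return (x + 1) * np.exp(-x)
    if n == 1:  return np.exp(-x)
    if n == 0:  return exp1(x)
    if n == -1: return np.exp(-x) / x - exp1(x)
    raise ValueError(n)

def I(zeta, chi):
    return ((1 + zeta**2 * chi**2) * Gamma(3, zeta) + zeta**4 * chi * Gamma(0, zeta)
            - 3 * zeta**2 * chi * Gamma(2, zeta) + 2 * zeta**4 * chi**2 * Gamma(1, zeta)
            - zeta**6 * chi**2 * Gamma(-1, zeta))

def U(z, T, alpha, omega_p, lmax=2000):
    """Atom-wall potential of Eq. (VBabb), atomic units (hbar = e = m_e = 1)."""
    c, kB = 137.036, 3.16681e-6                     # speed of light, Boltzmann constant (a.u.)
    xi = 2 * np.pi * kB * T * np.arange(1, lmax + 1)
    series = alpha(0.0) + np.sum(alpha(xi) * I(xi * 2 * z / c, c / (2 * z * omega_p)))
    return -kB * T / (4 * z**3) * series

# toy single-oscillator polarizability (illustrative numbers only, not the CI+MBPT result)
alpha_toy = lambda w: 186.0 / (1.0 + (w / 0.1)**2)  # alpha(0) ~ 186 a.u., resonance ~ 0.1 a.u.
print(U(z=500.0, T=300.0, alpha=alpha_toy, omega_p=0.33))   # z in bohr; omega_p ~ 9 eV for gold
```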
Atomic properties enter the atom-wall interaction through the dynamic electric-dipole polarizability at imaginary frequency, $\alpha(i \omega)$. For the two clock levels the perturbation of the clock frequency may be expressed in terms of the difference $\Delta \alpha(i \omega) = \alpha_{^3\!P_0} (i \omega) - \alpha_{^1\!S_0} (i \omega)$. We carried out calculations of $\alpha(i \omega)$ for both clock levels, and thus of $\Delta\alpha(i\omega)$, for Mg, Ca, Sr, Yb, Zn, Cd, and Hg atoms. We used the [*ab initio*]{} relativistic configuration-interaction method coupled with many-body perturbation theory. The summation over intermediate states entering the polarizability was carried out using the Dalgarno-Lewis method. Details of the formalism may be found in Refs. [@BelDzuDer09; @DerPorBab09]. Detailed dynamic polarizabilities $\alpha(i \omega)$ and $C_3$ coefficients for the ground states of alkaline-earth atoms may be found in Ref. [@DerPorBab09].
Dynamic polarizabilities of the Sr atom are shown in Fig. \[Fig:difalphaSr\]. Notice that the individual polarizabilities $\alpha_{^3\!P_0} (i \omega)$ and $\alpha_{^1\!S_0} (i \omega)$ slowly decrease as $\omega$ increases. At large frequencies each polarizability approaches [*the same*]{} asymptotic limit $\alpha(i \omega) \sim N_e/\omega^2$, $N_e$ being the number of atomic electrons. As a result, compared to the individual $\alpha(i \omega)$, the differential polarizability, $\Delta \alpha(i \omega)$, is strongly peaked around $\omega=0$. Only the Matsubara frequencies inside this peak are relevant in Eq. (\[Eq:VBabb\]). In this regard, it is worth noting that when evaluating $C_3$ coefficients for individual levels with Eq. (\[Eq:C3\]), it was found [@DerJohSaf99] that neglecting core excitations while computing $\alpha(i \omega)$ could substantially underestimate $C_3$ for heavy atoms. Here we deal with the [*differential*]{} shift, and a simpler approach of summing over the first few valence excitations does provide the dominant part of the effect. Curiously, $\Delta \alpha(i \omega)$ passes through zero at $\omega \approx 0.05 \, \mathrm{a.u.}$ This is reminiscent of the “magic” frequency for the perturbation of the clock transition by a laser field, which is expressed in terms of the differential polarizability of [*real*]{} argument, $\Delta \alpha(\omega)$.
![(Color online) Dynamic polarizabilities of imaginary frequency $\alpha(i \omega)$ of the Sr clock levels, $5s5p\,^3\!P_0$ (dotted line) and $5s^2\,^1\!S_0$ (dashed line) as a function of frequency. Differential polarizability $\Delta \alpha(i \omega) =
\alpha_{^3\!P_0} (i \omega) - \alpha_{^1\!S_0} (i \omega)$ is shown with a solid line. All quantities are in atomic units. \[Fig:difalphaSr\]](differentialpolarizability.eps)
With the computed $\Delta \alpha(i \omega)$, we evaluate the atom-wall clock shifts, Eq. (\[Eq:VBabb\]). We use the plasma frequency $\omega_p = 9 \, \mathrm{eV}$ (gold wall) and consider several temperatures, $T= 77 \, \mathrm{K}, 300 \, \mathrm{K}$, and $600 \, \mathrm{K}$. Results for the Sr lattice clock are shown in Fig. \[Fig:SrClockShift\]. Individual points represent shifts in individual wells of the optical lattice. The first well is placed at $\lambda_m/4$ and subsequent wells are separated by $\lambda_m/2$. Roughly the first 20 wells produce a fractional clock shift above the already demonstrated $10^{-16}$ accuracy limit [@LudZelCam08etal]. We observe that over 20 wells the clock shift varies by six orders of magnitude. As the temperature of the surface is increased, the clock shifts become more pronounced.
It is worth pointing out that Eq.(\[Eq:VBabb\]) assumes that the temperatures of the environment and the wall are the same (otherwise see Refs. [@AntPitStr05; @ObrWilAnt07]). Moreover, the clock shifts in Fig. \[Fig:SrClockShift\] do not include the conventional black-body-radiation shifts ($\sim T^4$). The corresponding temperature coefficients are tabulated in Ref. [@PorDer06BBR].
![(Color online) Fractional clock shifts for Sr as a function of separation from a gold surface at three temperatures, $T=77\,\mathrm{K}$ (blue dots), $T=300 \,\mathrm{K}$ (red squares), and $T=600 \,\mathrm{K}$ (brown diamonds). Individual points represent shifts in individual trapping sites of the optical lattice. First well is placed at $\lambda_m/4$ and subsequent points are separated by $\lambda_m/2$. \[Fig:SrClockShift\]](fractional_clock_shift.eps)
Lattice clocks are sensitive to the long-range atom-wall interaction in all three regimes: the van der Waals, retardation (Casimir-Polder), and thermal-bath (Lifshitz) regimes. Indeed, in Fig. \[Fig:SrEta\] we plot the ratio $$\eta(z, T) = U(z,T)/U_{CP}(z) \, . \label{Eq:eta}$$ The parameter $\eta$ is equal to one in the region where the Casimir-Polder approximation is valid. From Fig. \[Fig:SrEta\], we observe that the transition between the van der Waals and the CP regimes occurs around well number four. The position of the second transition region, from the CP to the Lifshitz regime, depends on the temperature. For $T=77$ K, this crossover is delayed until well number 25 (not shown in Fig. \[Fig:SrEta\]). $T=600$ K represents the other extreme, as the van der Waals region immediately transforms into the Lifshitz region. The atom-wall interaction at room temperature, $T=300$ K, represents an intermediate case, where the Casimir-Polder approximation is valid over several wells, and all three domains become distinguishable.
![(Color online) Sr clock shift of Fig. \[Fig:SrClockShift\] normalized to the Casimir-Polder limit, Eq. (\[Eq:eta\]) at $T=77\,\mathrm{K}$ (blue dots), $T=300 \,\mathrm{K}$ (red squares), and $T=600 \,\mathrm{K}$ (brown diamonds). \[Fig:SrEta\]](eta.eps)
We have shown that the lattice clocks can be used to detect all three qualitatively-distinct mechanisms of the atom-wall interaction. In this regard, the lattice clocks offer a unique opportunity to map out both the van der Waals$\rightarrow$Casimir-Polder and the Casimir-Polder$\rightarrow$Lifshitz transition regions. This distinguishes the clocks from previous experiments: the former transition was probed in Ref. [@SukBosCho93], while the latter was detected in Ref. [@ObrWilAnt07]. None of the experiments so far has been able to map out both transitions simultaneously.
The accuracy of determination of the atom-wall interaction is affected by how well the position of the clouds with respect to the surface is determined. At each well of the optical lattice, the atomic center-of-mass wavefunction is spread over some distance $\Delta z$. The clock shift then acquires a width, leading to an uncertainty $\delta U(z,T)/ U(z,T) \approx 3 \Delta z/ z \approx 6/N_w \, \eta_{LD} \, (\lambda_{\rm clock}/\lambda_{m})$, where $\eta_{LD}=\lambda_{\rm clock}^{-1} (\hbar/(2M\omega_{ho}))^{1/2}$ is the Lamb-Dicke parameter for an atom of mass $M$, $\omega_{ho}$ is the harmonic-oscillator frequency of the trapping potential along the $z$-axis, and $N_w$ is the site number counting from the surface. For a typical value of $\eta_{LD}\approx 0.1$, the error in the determination of the atom-wall interaction would be of the order of a few per cent.
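For Sr, a back-of-the-envelope evaluation of this error estimate (with our own illustrative numbers) reads:

```python
# relative error in U(z,T) from the spatial spread of the atomic cloud (Sr numbers)
c = 2.998e8
lam_clock = c / 4.29e14          # ~ 698 nm clock-transition wavelength
lam_m, eta_LD = 813e-9, 0.1      # magic wavelength and a typical Lamb-Dicke parameter

for N_w in (1, 5, 10, 20):
    err = 6.0 / N_w * eta_LD * lam_clock / lam_m
    print(N_w, f"{100 * err:.1f}%")   # ~ 52%, 10%, 5%, 2.6%: a few per cent beyond the first wells
```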
The accuracy can potentially be improved by increasing the intensity of the lattice laser, $I_L$, since the size of the atomic cloud scales as $\Delta z \propto 1/I_L^{1/4}$. At the same time, the factors limiting the maximum intensity relate to the performance of the clock itself. The major factors here are the hyperpolarizability (fourth-order AC Stark shift) and the photon scattering rate, which scale as $I_L^2$ and $I_L$, respectively. Depending on the specific trapping site, the intensity of the lattice laser could be optimized to attain a better accuracy. Finally, we note that the present scheme could be extended to the recently proposed microMagic lattice clocks [@BelDerDzu08Clock], operating in the more convenient microwave domain.
We thank J. Babb, J. Weinstein, H. Katori, and A. Cronin for discussions and Issa Beckun for drawing Fig.1. This work was supported in part by the US National Science Foundation, by the Australian Research Council and by the US National Aeronautics and Space Administration under Grant/Cooperative Agreement No. NNX07AT65A issued by the Nevada NASA EPSCoR program.
[24]{} natexlab\#1[\#1]{}bibnamefont \#1[\#1]{}bibfnamefont \#1[\#1]{}citenamefont \#1[\#1]{}url \#1[`#1`]{}urlprefix\[2\][\#2]{} \[2\]\[\][[\#2](#2)]{}
, , , , ****, ().
, in **, edited by (, , ), vol. , pp. .
, ****, ().
, , , (), .
, , , , ****, ().
, , , ****, ().
, , , , ****, ().
, , , , , , , ****, ().
, , , ****, ().
, , , , , , , ****, () , ****, (), , , , , , , ****, ().
, , , , , , ****, (), , ****, ().
, ****, (), .
, , , ****, ().
, , , ****, (), , , (), .
, , , , ****, ().
, , , ****, ().
, , , , , , ****, (), , ****, ().
, , , , , ****, ().
, , , , ****, (),
|
---
abstract: 'This paper is the second installment in a series of papers concerning the boundary behavior of solutions to the $p$-parabolic equation. In this paper we are interested in the short time behavior of the solutions, which is in contrast with much of the literature, where all results require a waiting time. We prove a dichotomy about the decay-rate of non-negative solutions vanishing on the lateral boundary in a cylindrical $C^{1,1}$ domain. Furthermore we connect this dichotomy to the support of the boundary-type Riesz measure related to the $p$-parabolic equation in NTA-domains, which has consequences for the continuation of solutions.'
address: ' Benny Avelin, Department of Mathematics, Uppsala University, S-751 06 Uppsala, Sweden '
author:
- Benny Avelin
title: 'Boundary behavior of solutions to the parabolic $p$-Laplace equation II'
---
Introduction
============
Recently there has been an upsurge in progress concerning the boundary behavior of solutions to the $p$-parabolic equation [@A; @AGS; @AKN; @BBG; @BBGP; @GNL; @GS; @KMN]. Building upon this line of work, specifically as a direct descendant of [@AKN] (hence the title), this work will be concerned with the ‘short time’ boundary behavior of solutions to the $p$-parabolic equation.
The purpose of this paper is a “proof of concept” for a certain decay-rate phenomenon that only occurs in the degenerate regime ($p > 2$). To describe this phenomenon let us first state our equation, $$\label{ppar1}
u_t - \Delta_p u = u_t - \nabla \cdot (|Du|^{p-2} Du) = 0, \quad p > 2,$$ which we call the degenerate $p$-parabolic equation. The phenomenon concerns the short time behavior of solutions vanishing on a relatively smooth boundary, with respect to their decay-rate up to the boundary. The main example of this phenomenon is the following: let $T \in \R$, then in $\R_+^n \times (-\infty,T)$ consider $$\begin{aligned}
\label{examp1}
u_1(x,t)&=C(p)(T-t)^{-\frac{1}{p-2} }x_{n}^{\frac p{p-2}}.\end{aligned}$$ Another solution, obtained trivially, is $u_2 = x_n$. The point is that $u_1$ and $u_2$ behave very differently at the boundary, which shows that a short time boundary Harnack inequality cannot hold; i.e. the ratio $$\begin{aligned}
\frac{u_2}{u_1} \qquad \text{is not bounded from above and below.}\end{aligned}$$ However, herein lies the interesting question: is there anything in between these two solutions in terms of decay-rate?
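Before addressing this question, we record for the reader's convenience a direct one-dimensional check that $u_1$ solves \[ppar1\]; the explicit value of $C(p)$ below is our own computation and is stated only for illustration. Writing $u_1 = C(T-t)^{-1/(p-2)}x_n^{p/(p-2)}$, $$\begin{aligned}
\partial_t u_1 &= \frac{C}{p-2}\,(T-t)^{-\frac{p-1}{p-2}} x_n^{\frac{p}{p-2}}, \qquad
\partial_{x_n} u_1 = \frac{Cp}{p-2}\,(T-t)^{-\frac{1}{p-2}} x_n^{\frac{2}{p-2}}, \\
\Delta_p u_1 &= \partial_{x_n}\!\left(|\partial_{x_n}u_1|^{p-2}\partial_{x_n}u_1\right)
 = \Big(\frac{Cp}{p-2}\Big)^{p-1}\frac{2(p-1)}{p-2}\,(T-t)^{-\frac{p-1}{p-2}} x_n^{\frac{p}{p-2}},\end{aligned}$$ so that $\partial_t u_1 = \Delta_p u_1$ provided $C(p)^{p-2} = \big(\tfrac{p-2}{p}\big)^{p-1}\tfrac{1}{2(p-1)}$.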
In [@AGS] we pointed out that the solution $u_1$ in \[examp1\] is exactly so small that one cannot build Harnack chains up to the boundary, see \[cons:harnack\]. In fact, for the $p$-parabolic equation, the waiting time for $u_1$ dictated by the Harnack inequality at a point $(x_0,t_0)$ is $$\nonumber
C (T-t_0)x_0^{-p}r^p.$$ This implies that if we wish to apply the Harnack inequality with a radius comparable to the distance to the boundary, then the waiting time will be of the same order of magnitude as the distance from $t_0$ to $T$ (the end of existence); in essence, a Harnack chain cannot be performed.
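To see where this expression comes from, recall that the intrinsic waiting time in the Harnack inequality scales like $(c/u)^{p-2}r^p$; plugging in $u_1$ from \[examp1\] at $(x_0,t_0)$ (a sketch of the computation, with all constants absorbed into $C$) gives $$\nonumber
\Big(\frac{c}{u_1(x_0,t_0)}\Big)^{p-2} r^p = c^{p-2}\,C(p)^{-(p-2)}\,(T-t_0)\,x_0^{-p}\, r^p = C\,(T-t_0)\,x_0^{-p}\,r^p,$$ where $x_0$ is understood as the distance from the point to the boundary $\{x_n=0\}$.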
Solutions such as \[examp1\] are inherent to parabolic equations which are non-homogeneous, and they arise when looking for separable solutions (see \[sec:memory\]).
It can be argued that the $p$-parabolic equation has a certain “memory” concerning the decay-rate of the initial data. We highlight this idea with another example from [@AGS]: consider the following supersolution to \[ppar1\] (a similar supersolution exists for the PME) $$\nonumber u(x,t)=C(p)(T-t)^{-\frac1{p-2}}\exp(-\frac1{x}),\qquad x\in (0,1/4).$$
It is now clear that if our domain is $E_T = (0,1) \times (0,T]$ then the Cauchy-Dirichlet problem in $E_T$ has a short time behavior that is heavily dictated by the decay of the initial data. This is in contrast to the linear case, where, no matter what positive initial data we prescribe, we always have $\partial_x u(0,t) > 0$ for all $t \in (0,T)$. We explore this “memory effect” further in \[sec:memory\].
To describe our results regarding this decay-rate phenomenon we first need some definitions; we begin with what we term a degenerate point.
Let $\Omega \subset \R^n$ be a domain and let $u$ be a solution to \[ppar1\] in $\Omega_T = \Omega \times (0,T)$. We call $w \in \partial \Omega$ a degenerate point at $t_0 \in (0,T)$ with respect to $u$ if $$\begin{aligned}
\mathcal{N}^+\left [ \frac{u(x,t_0)}{|x-w|^{\frac{p}{p-2}}} \right ] (w) < \infty,
\end{aligned}$$ where $$\begin{aligned}
\mathcal{N}^+ [f] (w) &= \limsup_{x \in \Gamma (w), x \to w} f(x) \\
\Gamma_\alpha(w) &= {
\left \{ x \in \Omega: \alpha|x-w| < d(x,\partial \Omega) \right \}
}
\end{aligned}$$ for some $\alpha \in (0,1)$.
\[rem:Non-tangential\] The value of $\alpha > 0$ in the above definition does not enter into any of the estimates and is irrelevant as long as $\Gamma(w)$ satisfies $$\begin{aligned}
\label{eq:non-empty-NTA}
\Gamma(w) \cap B_r(w) \neq \emptyset, \quad \forall r > 0.
\end{aligned}$$ Since we will be working in $(M,r_0)$-NTA-domains (see \[NTA\]), we immediately see that if $\alpha < 1/M$ the above property is true, since $$\begin{aligned}
a_\varrho(w) \in \Gamma(w), \quad \forall \varrho < r_0.
\end{aligned}$$ Hence in the rest of this paper we will ignore the value of $\alpha > 0$, but we will always assume that $\alpha > 0$ is chosen such that \[eq:non-empty-NTA\] holds, which according to the above means that in $(M,r_0)$-NTA-domains we assume $\alpha < 1/M$.
Secondly we define what we mean by a non-degenerate point, i.e.
Let $\Omega \subset \R^n$ be a domain and let $u$ be a solution to \[ppar1\] in $\Omega_T = \Omega \times (0,T)$. We call $w \in \partial \Omega$ a non-degenerate point at $t_0 \in (0,T)$ with respect to $u$ if $$\begin{aligned}
\limsup_{\Omega \ni x \to w} \frac{d(x,\partial \Omega)}{u(x,t_0)} < \infty.
\end{aligned}$$
The main result of this paper is that, for NTA-domains satisfying the interior ball condition, for a given $w \in \partial \Omega$ every point $(w,t)$ is either degenerate or non-degenerate, except for possibly a single time $\hat t$ that we call the threshold point, see Figure \[fig:first\].
\[fig:first\]
(Schematic figure: along the time axis $t$, the region below the threshold level $\hat t$ is labelled “Degenerate” and the region above it “Non-degenerate”.)
Outline of paper
----------------
We begin the contents of the paper in \[sec:Definitions\] and \[sec:preliminary\], where we provide all the definitions and results needed for the bulk of the paper. Next, in \[sec:main results\] we state all the main results and prove some of their simple but powerful consequences. The remaining sections are devoted to the proofs of the main results, except for \[sec:cone,sec:interface\]. In \[sec:cone\] we give an example of a solution whose support never reaches the boundary in a conical domain. Finally, in \[sec:interface\] we deal with an example having stationary support for a non-zero time interval; we theorize about the length of that interval and provide some numerical computations concerning it.
[**Acknowledgment**]{} The author was supported by the Swedish Research Council, dnr: 637-2014-6822.
Definitions and notation {#sec:Definitions}
========================
Points in $ \R^{n+1} $ are denoted by $ x = ( x_1, \dots, x_n,t)$. Given a set $E\subset \R^n$, let $ \bar E, \partial E$, $\mbox{diam } E$, $E^c$, $E^\circ $, denote the closure, boundary, diameter, complement and interior of $E$, respectively. Let $ \cdot $ denote the standard inner product on $ \R^{n} $, let $ | x | = (x \cdot x )^{1/2}$ be the Euclidean norm of $ x, $ and let ${\, {\rm d} x}$ be Lebesgue $n$-measure on $ \R^{n}. $ Given $ x \in \R^{n}$ and $r >0$, let $ B_{r} (x) = \{ y \in \R^{n} : | x - y | < r \}$. Given $ E, F \subset \R^{n}, $ let $ d ( E, F ) $ be the Euclidean distance from $ E $ to $ F $. In case $ E = \{y\}, $ we write $ d ( y, F )$. For simplicity, we define $\sup$ to be the essential supremum and $\inf$ to be the essential infimum. If $ O \subset \R^{n} $ is open and $ 1 \leq q \leq \infty, $ then by $ W^{1 ,q} ( O )$ we denote the space of equivalence classes of functions $ f $ with distributional gradient $ \nabla f = ( f_{x_1}, \dots, f_{x_n} ), $ both of which are $ q $-th power integrable on $ O. $ Let $$\begin{aligned}
\| f \|_{ W^{1,q} (O)} = \| f \|_{ L^q (O)} + \| \, | \nabla f | \, \|_{ L^q ( O )}\end{aligned}$$ be the norm in $ W^{1,q} ( O ) $ where $ \| \cdot \|_{L^q ( O )} $ denotes the usual Lebesgue $q$-norm in $O$. $ C^\infty_0 (O )$ is the set of infinitely differentiable functions with compact support in $ O$ and we let $ W^{1 ,q}_0 ( O )$ denote the closure of $ C^\infty_0 (O )$ in the norm $\| \cdot\|_{ W^{1,q} (O)}$. $ W^{1,q}_{\rm loc} ( O ) $ is defined in the standard way. By $ \nabla \cdot $ we denote the divergence operator. Given $t_1<t_2$ we denote by $L^q(t_1,t_2,W^{1,q} ( O ))$ the space of functions such that for almost every $t$, $t_1\leq t\leq t_2$, the function $x\to u(x,t)$ belongs to $W^{1,q} ( O )$ and $$\| u \|_{ L^q(t_1,t_2,W^{1,q} ( O ))}:=\biggl (\int\limits_{t_1}^{t_2}\int\limits_O\biggl (|u(x,t)|^q+|\nabla u(x,t)|^q\biggr ){\, {\rm d} x}{\, {\rm d} t}\biggr )^{1/q} <\infty.$$ The spaces $L^q(t_1,t_2,W^{1,q}_0 ( O ))$ and $L^q_{\rm loc}(t_1,t_2,W^{1,q}_{\rm loc} ( O ))$ are defined analogously. Finally, for $I \subset \R$, we denote $C(I;L^{q} ( O ))$ as the space of functions such that $t\to \| u(t,\cdot) \|_{ L^{q} (O)}$ is continuous whenever $t \in I$. $C_{\rm loc}(I;L^{q}_{\rm loc} ( O ))$ is defined analogously.
Weak solutions
--------------
Let $ \Omega \subset \R^n $ be a bounded domain, i.e., a connected open set. For $t_1<t_2$, we let $\Omega_{t_1,t_2}:= \Omega \times (t_1,t_2)$. Given $p$, $1<p<\infty$, we say that $ u $ is a weak solution to $$\label{Hu}
\partial_t u - \Delta_p u = 0$$ in $ \Omega_{t_1,t_2}$ if $ u \in L_{\rm loc}^p(t_1,t_2,W_{\rm loc}^{1,p} ( \Omega))$ and $$\label{1.1tr} \int_{\Omega_{t_1,t_2} } \left(- u \partial_t \phi + |\nabla u|^{p-2}\nabla u\cdot\nabla \phi \right) {\, {\rm d} x}{\, {\rm d} t}= 0$$ whenever $ \phi \in C_0^\infty(\Omega_{t_1,t_2})$. First and foremost we will refer to equation \[Hu\] as the $p$-parabolic equation, and if $ u $ is a weak solution to \[Hu\] in the above sense, then we will often refer to $u$ as being $p$-parabolic in $\Omega_{t_1,t_2}$. For $p \in (2,\infty)$ we have by the parabolic regularity theory, see [@DB], that any $p$-parabolic function $u$ has a locally H[ö]{}lder continuous representative. In particular, in the following we will assume that $p \in (2,\infty)$ and that any solution $u$ is continuous. If \[1.1tr\] holds with $=$ replaced by $\geq$ ($\leq$) for all $ \phi \in C_0^\infty(\Omega_{t_1,t_2})$, $\phi \geq 0$, then we will refer to $u$ as a weak supersolution (subsolution).
Geometry
--------
We here state the geometrical notions used throughout the paper.
\[NTA\] A bounded domain $\Omega$ is called non-tangentially accessible (NTA) if there exist $M \geq 2$ and $r_0$ such that the following are fulfilled:
1. \[NTA1\] *corkscrew condition:* for any $ w\in \partial\Omega, 0<r<r_0,$ there exists a point $a_r(w) \in \Omega $ such that $$\nonumber M^{-1}r<|a_r(w)-w|<r, \quad d(a_r(w), \partial\Omega)>M^{-1}r ,$$
2. \[NTA2\] $\R^n \setminus \Omega$ satisfies \[NTA1\],
3. \[NTA3\] *uniform condition:* if $ w \in \partial \Omega, 0 < r < r_0, $ and $ w_1, w_2 \in B_r(w) \cap \Omega, $ then there exists a rectifiable curve $ \gamma: [0, 1] \to \Omega $ with $ \gamma ( 0 ) = w_1, \gamma ( 1 ) = w_2, $ such that
1. $H^1 ( \gamma ) \, \leq \, M \, | w_1 - w_2 |,$
2. $\min\{H^1(\gamma([0,t])), \, H^1(\gamma([t,1]))\, \}\, \leq \, M \, d ( \gamma(t), \partial \Omega)$, for all $t \in [0,1]$.
The values $ M $ and $r_0$ will be called the NTA-constants of $ \Omega$. For more on the notion of NTA-domains we refer to [@JK].
\[defBall\] Let $\Omega \subset \R^n$ be a bounded domain. We say that $\Omega$ satisfies the interior ball condition with radius $r_0 > 0$ if for each point $y \in \partial \Omega$ there exist a point $x^+ \in \Omega$ such that $B_{r_0}(x^+) \subset \Omega$ and $\partial B_{r_0}(x^+) \cap \partial \Omega =\{y\}$.
The continuous Dirichlet problem
--------------------------------
Assuming that $\Omega$ is a bounded NTA-domain one can prove, see [@BBGP] and [@KL], that all points on the parabolic boundary $$\partial_p\Omega_T = S_T \cup (\bar \Omega \times \{0\})\,, \qquad S_T = \partial \Omega \times [0,T],$$ of the cylinder $\Omega_T$ are regular for the Dirichlet problem for equation \[Hu\]. In particular, for any $f\in C( \partial_p\Omega_T)$, there exists a unique Perron-solution $u=u_f^{\Omega_T}\in C(\overline \Omega_T)$ to the Dirichlet problem \[Hu\] in $\Omega_T$ and $u =f$ on $\partial_p \Omega_T$.
In the study of the boundary behavior of quasi-linear equations of $p$-Laplace type, certain Riesz measures supported on the boundary and associated to non-negative solutions vanishing on a portion of the boundary are important, see [@LN1; @LN2]. These measures are non-linear generalizations of the harmonic measure relevant in the study of harmonic functions. Corresponding measures can also be associated to solutions to the $p$-parabolic equation. Let $u$ be a non-negative solution in $\Omega_T$, assume that $u$ is continuous on the closure of $\Omega_T$, and that $u$ vanishes on $\partial_p \Omega_T \cap Q$ with some open set $Q$. Extending $u$ to be zero in $Q \setminus \Omega_T$, we see that $u$ is a continuous weak subsolution to \[Hu\] in $Q$. From this one sees that there exists a unique locally finite positive Borel measure $ \mu$, supported on $S_T \cap Q$, such that $$\begin{aligned}
\label{eq:Riesz} & -\int_Q u \partial_t\phi{\, {\rm d} x}{\, {\rm d} t}+\int_Q|\nabla u|^{p-2}\nabla u\cdot\nabla \phi {\, {\rm d} x}{\, {\rm d} t}= -\int_Q \phi {\, {\rm d}}\mu\end{aligned}$$ whenever $ \phi \in C_0^\infty(Q)$. Whenever we have a solution and when there is no danger of confusion we will simply use $\mu$ to denote the corresponding measure, in other cases we will subscript the measure with the solution, i.e. for a solution $v$ we will use the notation $\mu_v$.
Preliminary estimates: Carleson and Backward Harnack chains {#sec:preliminary}
===========================================================
The proofs of this paper rely on the following estimates from [@AKN], which we include for the ease of the reader. The following estimate is a simple Harnack chain lemma for forward-in-time Harnack chains that we developed in [@AKN].
\[lem AKN HChain\] Let $\Omega\subset\R^N$ be a domain and let $T>0$. Let $x,y$ be two points in $\Omega$ and assume that there exists a sequence of balls ${
\left \{ B_{4r}(x_j) \right \}
}_{j=0}^k$ such that $x_0=x$, $x_k=y$, $B_{4r}(x_j)\subset\Omega$ for all $j=0,\ldots,k$ and that $x_{j+1} \in B_{r}(x_j)$, $j=0,\ldots,k-1$. Let $u$ be a non-negative solution to \[Hu\] in $\Omega_T$ and assume that $u(x,t_0)>0$. There exist constants $\bar c_i \equiv \bar c_i(p,n)$, $i \in {
\left \{ 1,2 \right \}
}$ and $\bar c_3\equiv \bar c_3(p,n,k)>1$ such that if $$t_0 - (\bar c_1/u(x,t_0))^{p-2} (4r)^p > 0,\ t_0 + \bar c_3(k) u(x,t_0)^{2-p} r^p <T,$$ then $$u(x,t_0) \leq \bar c_2^{k} \inf_{z \in B_r(y)} u(z,t_0+\bar c_3(k) u(x,t_0)^{2-p} r^p).$$
As we already mentioned in [@AKN], there is a vast difference between Harnack chains performed backwards in time and chains performed forward in time. This is a point of philosophical nature. When building Harnack chains forward in time we solely use the known information at the point of reference. In contrast, when performing backward Harnack chains we are instead considering the question of how large the solution could have been in the past, given that the solution is below a given value at a given reference point. In essence, backward chains do not really rely on the values of the actual solution, but a forward chain is forced to do so. This has the consequence that forward chains have a waiting time that we have no control over, while for backward chains we have fairly fine control over the waiting time; in fact, since we can change the reference value $\Lambda$, we have more control over the waiting time than we have in the linear setting (see [@A]).
We will need the following version of the backward Harnack chain theorem that we proved in [@AKN]; it is an updated version of the results in [@AGS], with more control over the waiting time, and it is also valid in NTA-domains.
\[NTA BChain\] Let $\Omega\subset\R^n$ be an NTA-domain with constants $M$ and $r_0$, let $x_0 \in \partial \Omega$, $T>0$, and let $0 < r < r_0$. Let $x,y$ be two points in $\Omega \cap B_r(x_0)$ such that $$\varrho := d(x,\partial \Omega) \leq r \qquad \mbox{and} \qquad d(y,\partial \Omega) \geq \frac{r}{4}.$$ Assume that $u$ is a non-negative solution to \[Hu\] in $\Omega_T$, and assume that $\Lambda \geq u( y , s)$ is positive. Let $\delta \in (0,1]$. Then there exist positive constants $C_i \equiv C_i(p,n)$ and $c_i \equiv c_i(p,n,M)$, $i\in\{4,5\}$, such that if $s<T$ and $$\nonumber \max\left\{ \left(\frac{c_4^{1/\delta}}{c_h} \left( \frac{r}{\varrho} \right)^{c_5/\delta} \Lambda \right)^{2-p } (\delta \varrho)^p , s - \tau \right\}\, \leq\, t\, \leq\, s - \delta^{p-1} \tau,$$ with $$\tau := C_4 \left[C_5 \Lambda \right]^{2-p} r^p$$ then $$u(x,t) \leq c_4^{1/\delta} \left( \frac{r}{\varrho} \right)^{c_5/\delta} \Lambda.$$ Furthermore, constants $c_i,C_i$, $i \in \{4,5\}$, are stable as $p \to 2^+$.
Rescaling such that $u(y,s) \leq \Lambda = 1$, the proof follows verbatim as in [@AKN].
The above version can then be used to prove the following slightly modified version of the same theorem found in [@AKN]; this estimate is also an updated version of a similar statement found in [@AGS]. The difference lies in the flexibility of its usage and the generality of its validity.
\[thm Carleson\] Let $\Omega \subset \R^n$ be an NTA-domain with constants $M$ and $r_0$. Let $u$ be a non-negative solution to \[Hu\] in $\Omega_T$. Let $(x,t) \in S_T$ and $0 < r < r_0$. Assume that $\Lambda \geq u(a_r(x),t)$ is positive and let $$\tau = \frac{C_4}{4} \left[C_5 \Lambda \right]^{2-p} r^p ,$$ where $C_4$ and $C_5$, both depending on $p,n$, are as in \[NTA BChain\]. Assume that $t > (\delta_1^{p-1}+\delta_2^{p-1} + 2\delta_3^{p-1}) \tau$ for $0< \delta_1\leq \delta_3 \leq 1$, $\delta_2 \in (0,1)$ and that for a given $\lambda \geq 0$, the function $(u - \lambda)_+$ vanishes continuously on $S_T \cap B_r(x) \times (t-(\delta_1^{p-1} + \delta_2^{p-1}+\delta_3^{p-1}) \tau, t-\delta_1^{p-1} \tau)$ from $\Omega_T$. Then there exist constants $c_i \equiv c_i(M,p,n)$, $i \in \{6,7\}$, such that $$\sup_{Q} u \leq \left( c_6 / \delta_3 \right)^{c_7/ \delta_1} \Lambda + \lambda,$$ where $Q := B_r(x) \times ( t - (\delta_1^{p-2}+\delta_2^{p-1})\tau, t - \delta_1^{p-1} \tau)$. Furthermore, constants $c_i$, $i \in \{6,7\}$, are stable as $p \to 2^+$.
By scaling the function $u$ we can assume that $u(a_r(x),t) \leq \Lambda = 1$, and replacing $\lambda$ with its scaled version. The proof now follows verbatim as in [@AKN].
\[NTA FChain\] Let $\Omega\subset\R^n$ be an NTA-domain with constants $M$ and $r_0$, let $x_0 \in \partial \Omega$, $T>0$ and let $0 < r < r_0$. Let $x,y$ be two points in $\Omega \cap B_r(x_0)$ such that $$\varrho := d(x,\partial \Omega) \leq r \qquad \mbox{and} \qquad d(y,\partial \Omega) \geq \frac{r}{4}.$$ Assume that $u$ is a non-negative $p$-parabolic function in $\Omega_T$, and assume that $u(x,t_0)$ is positive. Let $\delta \in (0,1]$. Then there exist constants $c_i \equiv c_i(M,p,n)$, $i\in\{1,2,3\}$, such that if $$t_0 - (c_h/u(x,t_0))^{p-2} (\delta \varrho)^p > 0, \qquad t_0 + \tau < T,$$ with $$\tau := \delta^{p-1} \left( c_2^{-1/\delta} \left( \frac{r}{\varrho} \right)^{- c_3/\delta} u(x,t_0) \right)^{2-p} r^p ,$$ then $$u(x,t_0) \leq c_1^{1/\delta} \left( \frac{r}{\varrho} \right)^{c_3/\delta} \inf_{z \in B_{r/16}(y)} u(z ,t_0 + \tau).$$ Furthermore, constants $c_i$, $i \in \{1,2,3\}$, are stable as $p \to 2^+$.
Main results {#sec:main results}
============
As alluded to in the introduction we will mainly be concerned with the split between the degenerate and non-degenerate boundary points, and the first step in this direction is the below result. It essentially states that as soon as a point $w$ is no longer degenerate at a time $\hat t$ the point is non-degenerate for the following times.
\[thm linearization\] Let $\Omega \subset \R^N$ be a domain satisfying the interior ball condition with radius $r_0 > 0$. Let $u$ be a non-negative solution to \[Hu\] in $\Omega_T$ and assume that $w \in \partial \Omega$, $t_0 \in (0,T)$ and that the following holds $$\label{eq linearization req}
\mathcal{N}^+\left [ \frac{u(x,t_0)}{|x-w|^{\frac{p}{p-2}}} \right ] (w) = \infty.$$ Then for any $t_+ \in (t_0,T)$ we have $$\label{eq linearized}
\limsup_{\Omega \ni x \to w} \frac{d(x,\partial \Omega)}{u(x,t_+)} < \infty.$$
Now that we know that the behavior changes once the threshold has been reached, let us look at the next result. It states that if the solution starts in the degenerate regime, i.e. is smaller than $|x-w|^{\frac{p}{p-2}}$, then it continues to be smaller than $|x-w|^{\frac{p}{p-2}}$ for a small time interval; in essence, degeneracy is an open condition.
\[thm memory full\] Consider a bounded domain $\Omega \subset \R^N$, assume that $w \in \partial \Omega$. Consider the domain $E_T := E \times (0,T) := (\Omega \cap B(w,r)) \times (0,T)$, assume that $u \in C(\overline{E_T})$ is a non-negative solution to \[Hu\] vanishing on $(\partial \Omega \cap B(w,r)) \times [0,T)$, and that $u \leq M$ in $E_T$. If the initial data satisfies $$\nonumber
u_0(x) \leq M \frac{|x-w|^\delta}{r^\delta}, \quad \text{for a $\delta \geq \frac{p}{p-2}$,}$$ then there exists a time $$\hat T := \left [\frac{C(p,\delta)}{M} \right ]^{p-2} r^p$$ and a constant $c_0(p) > 0$ such that for ${\widetilde}T := \min\{\hat T/2, T \}$ the following upper bound holds $$\nonumber
u(x,t) \leq c_0 M |x-w|^{\delta}, \quad (x,t) \in E_{{\widetilde}T}.$$
Classifying boundary points: a dichotomy
----------------------------------------
In this section we take our results about the critical thresholds (\[thm linearization\]) and use them to prove that there are only two different behaviors, i.e. we prove a simple dichotomy for the boundary points. What is more, we prove that they are ordered as intervals dividing the whole interval of existence of the solution.
\[thm dichotomy\] Let $\Omega \subset \R^N$ be a domain satisfying the interior ball condition with radius $r_0 > 0$. Let $u$ be a non-negative solution to \[Hu\] in $\Omega_T$. Let $w \in \partial \Omega$, and define the sets $$\begin{aligned}
{\mathcal{D}_{\mathcal{N}}}(w) &:= {
\bigg \{ t \in (0,T): \mathcal{N}^+\bigg [ \frac{u(x,t_0)}{|x-w|^{\frac{p}{p-2}}} \bigg ] (w) < \infty \bigg \}
}, \\
{\mathcal{D}}(w) &:= {
\bigg \{ t \in (0,T): \limsup_{\Omega \ni x \to w} \frac{d(x,\partial \Omega)}{u(x,t)} < \infty \bigg \}
},
\end{aligned}$$ then ${\mathcal{D}_{\mathcal{N}}}(w)$ and ${\mathcal{D}}(w)$ are disjoint intervals. If ${\mathcal{D}_{\mathcal{N}}}(w)$ and ${\mathcal{D}}(w)$ are both nonempty, then there exists a time $\hat t \in (0,T)$ such that $$\notag (0,T) = ({\mathcal{D}_{\mathcal{N}}}(w))^\circ \cup {
\left \{ \hat t \right \}
} \cup ({\mathcal{D}}(w))^\circ, \quad (0,\hat t) = ({\mathcal{D}_{\mathcal{N}}}(w))^\circ,$$ and the union is disjoint. Otherwise if ${\mathcal{D}_{\mathcal{N}}}(w) = \emptyset$ or ${\mathcal{D}}(w) = \emptyset$ then $$\nonumber
(0,T) = {\mathcal{D}}(w) \cup {\mathcal{D}_{\mathcal{N}}}(w).$$
Assume the first situation, i.e. that ${\mathcal{D}_{\mathcal{N}}}(w),{\mathcal{D}}(w) \neq \emptyset$.
We first note that if $t \in {\mathcal{D}}(w)$ then from \[thm linearization\] we get that $[t,T) \subset {\mathcal{D}}(w)$, which implies that ${\mathcal{D}}(w)$ is a right-open interval. Thus ${\mathcal{D}}(w)$ can be written as either $(\hat t,T)$ or $[\hat t, T)$. From the assumption ${\mathcal{D}_{\mathcal{N}}}(w),{\mathcal{D}}(w) \neq \emptyset$ we have $\hat t \in (0,T)$.
We wish to show that $(0,\hat t) \subset {\mathcal{D}_{\mathcal{N}}}(w)$, which implies that the set ${\mathcal{D}_{\mathcal{N}}}(w)$ is left-open. To do this, assume the contrary, i.e. that there exists a $t \in (0,\hat t)$ such that $t \not \in {\mathcal{D}_{\mathcal{N}}}(w)$. Applying \[thm linearization\] we get that $(t,T) \subset {\mathcal{D}}(w)$, contradicting that ${\mathcal{D}}(w)$ is of the form $(\hat t,T)$ or $[\hat t, T)$. It is now clear that ${\mathcal{D}_{\mathcal{N}}}(w)$ is also an interval, where $\hat t$ may or may not be included in ${\mathcal{D}_{\mathcal{N}}}(w)$.
Lastly, assume that ${\mathcal{D}_{\mathcal{N}}}(w) = \emptyset$; this implies that for any $t \in (0,T)$ we can apply \[thm linearization\] to get that $(t,T) \subset {\mathcal{D}}(w)$. Since $t$ was arbitrary we have $(0,T) = {\mathcal{D}}(w)$. In a similar way, if ${\mathcal{D}}(w) = \emptyset$ then \[thm linearization\] implies that $(0,T) = {\mathcal{D}_{\mathcal{N}}}(w)$.
Note that the interior ball condition only needs to hold in a neighborhood close to $w$.
If the domain, in addition to satisfying the interior ball condition, also satisfies the so-called NTA condition (see [@JK]), we can apply the Carleson estimate developed in [@AKN] (or even [@A]) to conclude that the non-tangential limsup in the definition of a degenerate point can be replaced with the regular limsup from inside $\Omega$. This rules out odd behavior in tangential directions. Furthermore, we obtain that ${\mathcal{D}_{\mathcal{T}}}(w)$ (defined below) is an open interval.
\[thm complete dichotomy\] Let $\Omega \subset \R^N$ be an NTA-domain with constants $M,r_0$, satisfying the interior ball condition with radius $r_0 > 0$. Let $w \in \partial \Omega$ and let $u$ be a non-negative solution to \[Hu\] in $\Omega_T$ vanishing continuously on a neighborhood of ${
\left \{ w \right \}
} \times (0,T)$ in $\partial \Omega \times (0,T)$. Define the sets $$\begin{aligned}
{\mathcal{D}_{\mathcal{T}}}(w) &:= {
\bigg \{ t \in (0,T): \limsup_{\Omega \ni x \to w}\bigg [ \frac{u(x,t)}{|x-w|^{\frac{p}{p-2}}} \bigg ] < \infty \bigg \}
} \\
{\mathcal{D}}(w) &:= {
\bigg \{ t \in (0,T): \limsup_{\Omega \ni x \to w} \frac{d(x,\partial \Omega)}{u(x,t)} < \infty \bigg \}
},
\end{aligned}$$ then ${\mathcal{D}_{\mathcal{T}}}(w)$ and ${\mathcal{D}}(w)$ are disjoint intervals. If ${\mathcal{D}_{\mathcal{T}}}(w)$ and ${\mathcal{D}}(w)$ are both nonempty, then there exists a time $\hat t \in (0,T)$ such that $$\notag (0,T) = {\mathcal{D}_{\mathcal{T}}}(w) \cup {
\left \{ \hat t \right \}
} \cup ({\mathcal{D}}(w))^\circ, \quad (0,\hat t) = {\mathcal{D}_{\mathcal{T}}}(w),$$ and the union is disjoint. Otherwise if ${\mathcal{D}_{\mathcal{T}}}(w) = \emptyset$ or ${\mathcal{D}}(w) = \emptyset$ then $$\nonumber
(0,T) = {\mathcal{D}}(w) \cup {\mathcal{D}_{\mathcal{T}}}(w).$$
Consider now a point $w \in \partial \Omega$. From it follows that, unless $t = \hat t$, we have $t \in {\mathcal{D}_{\mathcal{N}}}(w)$ or $t \in {\mathcal{D}}(w)$. Let us prove that if $t \in {\mathcal{D}_{\mathcal{N}}}(w)$ then $(0,t) \subset {\mathcal{D}_{\mathcal{T}}}(w)$.
Let $r < r_0$ and assume that $$\label{eq large nontan}
\mathcal{N}^+\bigg [ \frac{u(x,t_0)}{|x-w|^{\frac{p}{p-2}}} \bigg ] (w) < \infty$$ for some $t_0 \in (0,T)$. As mentioned in \[rem:Non-tangential\] we know that $a_\varrho(w) \in \Gamma(w)$ for $\varrho < r$. Thus from there exists a constant $\hat C$ such that $$\nonumber
u(a_\varrho(w),t_0) \leq \hat C \varrho^{\frac{p}{p-2}} =: \Lambda_\varrho.$$ With this at hand let us calculate $\tau_\varrho$ from as follows $$\nonumber
\tau_\varrho = \frac{C_4}{4} \left[C_5 \Lambda_\varrho \right]^{2-p} \varrho^p = \frac{C_4}{4} \left[C_5 \hat C \right]^{2-p}.$$ Let now $\epsilon > 0$ be an arbitrary parameter and take $\delta_1^{p-1} = \delta_2^{p-1} = \delta_3^{p-1} = \epsilon/(4\tau_\varrho)$ in . Thus, if $t_0 - \epsilon \in (0,T)$ and $u$ vanishes on the boundary piece $(\partial \Omega \cap B_r(w)) \times (t_0-\epsilon,t_0)$, then from we have $$\nonumber
\sup_{(\Omega \cap B_\varrho(w)) \times (t_0-\epsilon/2,t_0-\epsilon/4)} u \leq c_9(\epsilon) \Lambda_\varrho = c_{10} \varrho^{\frac{p}{p-2}}.$$ Hence we can conclude that for $t \in (t_0-\epsilon/2,t_0-\epsilon/4)$ we have $$\nonumber
\limsup_{\Omega \ni x \to w}\bigg [ \frac{u(x,t)}{|x-w|^{\frac{p}{p-2}}} \bigg ] < \infty.$$ Since $\epsilon$ and $t$ were arbitrary we obtain $({\mathcal{D}_{\mathcal{N}}}(w))^{\circ} \subset {\mathcal{D}_{\mathcal{T}}}(w)$.
Finally we prove that ${\mathcal{D}_{\mathcal{T}}}(w)$ is an open interval. To do this, assume on the contrary that ${\mathcal{D}_{\mathcal{T}}}(w)$ is closed on the right, i.e. that there exists a $\hat t$ such that ${\mathcal{D}_{\mathcal{T}}}(w) = (0,\hat t]$. This implies that $$\nonumber
C_{m} := \sup_{[\hat t/2,\hat t]} \limsup_{\Omega \ni x \to w}\bigg [ \frac{u(x,t)}{|x-w|^{\frac{p}{p-2}}} \bigg ] < \infty.$$ Hence, for each $\epsilon$ there is a $\delta$ such that $$\nonumber
C_m \leq \sup_{[\hat t/2,\hat t]} \sup_{B_\delta(w) \cap \Omega}\bigg [ \frac{u(x,t)}{|x-w|^{\frac{p}{p-2}}} \bigg ] \leq C_m+\epsilon$$ i.e. there is a new constant $C_m$ such that $$\nonumber
u(x,t) \leq C_m |x-w|^{\frac{p}{p-2}}$$ in $(B_\delta(w) \cap \Omega) \times [\hat t/2, \hat t]$. Thus it is easy to see that we can apply \[thm memory full\] to obtain that there is a time $t > \hat t$ such that $t \in {\mathcal{D}_{\mathcal{T}}}(w)$ and therefore we arrive at a contradiction.
Classifying the support of the boundary type Riesz measure
----------------------------------------------------------
There is a true equivalence between the support of the measure $\mu$ and the “non-degenerate” and “degenerate” regions. In effect, if a point becomes non-degenerate, this behavior permeates to the whole domain in finite time and provides support for the boundary type Riesz measure, so a non-degenerate point is a true critical point for the behavior of the equation. However, for simplicity, we will in the following theorem consider the case of degeneracy on a space-time set on the boundary and prove that the measure vanishes there.
\[thm:Riesz\_no\_support\] Let $\Omega \subset \R^N$ be a bounded domain, and let $u$ be a non-negative solution to \[Hu\] in $\Omega_T$. Let $E := B_r(w) \times (t_0-\epsilon,t_0)$, $(t_0-\epsilon,t_0) \subset (0,T)$, assume that $u$ vanishes continuously on $E \cap \partial_p \Omega_T$, and $$\nonumber
\limsup_{\Omega \ni x \to w}\bigg [ \frac{u(x,t)}{|x-w|^{\frac{p}{p-2}}} \bigg ] < \infty, \quad \forall (w,t) \in E \cap \partial_p \Omega_T.$$ Then the measure defined in vanishes on $E$.
\[rem:cont\] The above theorem implies that a solution that has a section of degenerate boundary points can actually be extended across the boundary as a solution. This is a fairly remarkable result and provides an example of solutions with a free boundary that is stationary for a positive time interval. That is, consider $$\begin{aligned}
u_1(x,t)&=C(p)(T-t)^{-\frac{1}{p-2} }\max\{x_{n},0\}^{\frac p{p-2}}
\end{aligned}$$ which is a solution across $x_n = 0$ according to \[thm:Riesz\_no\_support\]. In fact it is an example of an ancient solution with a stationary free boundary.
Another example would be if we had initial datum satisfying $$\begin{aligned}
u_0(x) \leq d(x,\partial \Omega)^{\delta}, \qquad \delta \geq \frac{p}{p-2},
\end{aligned}$$ in an NTA-domain $\Omega$, and consider a solution in $(\Omega \cap B_r(w)) \times (0,T)$, $w \in \partial \Omega$, such that $u \leq M$ and $u$ vanishes on $(B_r(w) \cap \partial \Omega) \times (0,T)$. Then, if $T$ is small enough, depending only on $M,p,\delta$, this solution can be extended across the boundary as a solution (see \[thm memory full,thm:Riesz\_no\_support\]). This solution is a more advanced example of a solution with a stationary free boundary.
\[thm:Riesz\_support\] Let $\Omega \subset \R^N$ be a bounded NTA-domain with constants $M,r_0$, and let $u$ be a non-negative solution to \[Hu\] in $\Omega_T$. Let $E := B_r(w) \times (t_0-\epsilon,t_0)$, $(t_0-\epsilon,t_0) \subset (0,T)$, assume that $u$ vanishes continuously on $E \cap \partial_p \Omega_T$, and $$\begin{aligned}
\label{eq:Riesz_support_upper_bound}
\sup_{E \cap \Omega_T} \frac{d(x,\partial \Omega)}{u(x,t)} = \Lambda^{-1} < \infty,
\end{aligned}$$ then $\mu$ is positive on $E \cap \partial_p \Omega_T$.
Consequences for Harnack chains {#cons:harnack}
-------------------------------
We begin by assuming that we have a solution to the $p$-parabolic equation in $\Omega_T$, where $\Omega \subset \R^n$ is an $(M,r_0)$-NTA-domain. We assume that for a given instant $t_0 \in (0,T)$ and a given point on the boundary $w \in \partial \Omega$ the following estimate holds from below $$\begin{aligned}
\label{cons:lower}
r \leq C u(a_r(w),t_0) \quad \text{for all $r \in (0,r_0)$.}\end{aligned}$$ Let $y \in \Omega$ be any point and assume that $d(y,\partial \Omega) \approx d(a_{r}(w),y) \leq L r$ holds for some $r \in (0,r_0)$ then the following holds: $$\begin{aligned}
u(a_r(w),t_0) \leq c u(y,t_0+C r^2), \quad \text{for some $C > 0$ depending on $L$.}\end{aligned}$$ Since $\Omega$ is an NTA-domain this follows from [@AKN Theorem 3.5] together with the lower bound \[cons:lower\]. The immediate consequence of this is that most Harnack-based estimates from below reduce to a non-intrinsic version and scale exactly as in the linear case (the heat equation). This is yet another reason for calling the estimate \[cons:lower\] a non-degeneracy estimate.
Immediate linearization: Proof of Theorem \[thm linearization\]
===============================================================
We begin this section with a barrier type argument, combined with a rescaling and iteration method, to obtain a “sharp” lower bound for a useful comparison function. A slightly more complicated proof, which is $p$-stable, can be found in [@AKN]. In the proof below we employ the Barenblatt solution, which is given by
$$\begin{aligned}
{\mathcal U}(x,t) = t^{-k} \left ( C_0 - q \left ( \frac{|x|}{t^{k/n}}\right )^{\frac{p}{p-1}} \right )^{\frac{p-1}{p-2}}_+\end{aligned}$$
where $$\begin{aligned}
k = \left ( p-2 + \frac{p}{n} \right )^{-1}, \qquad q = \frac{p-2}{p} \left ( \frac{k}{n}\right )^{\frac{1}{p-1}}\end{aligned}$$ and $C_0$ is a constant depending only on $p,n$. In the following proof we will use some properties of the Barenblatt function: first, its level sets are balls that strictly increase with time; second, at each time slice the maximum of the function is attained at the origin; third, the radial derivative is non-zero as long as the function is positive and we are not at the origin.
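For concreteness, the following short Python sketch (an illustration only, with assumed parameter values; the normalization $C_0 = q$ is the one used later in the paper) evaluates the Barenblatt profile ${\mathcal U}$ and its support radius, illustrating the properties listed above.

```python
import numpy as np

def barenblatt(x, t, p=4.0, n=1, C0=None):
    """Barenblatt profile U(x,t) for the p-parabolic equation (sketch)."""
    k = 1.0 / (p - 2.0 + p / n)
    q = (p - 2.0) / p * (k / n) ** (1.0 / (p - 1.0))
    if C0 is None:
        C0 = q  # normalization choice; it only changes the mass of the solution
    r = np.abs(x)
    core = C0 - q * (r / t ** (k / n)) ** (p / (p - 1.0))
    return t ** (-k) * np.maximum(core, 0.0) ** ((p - 1.0) / (p - 2.0))

def support_radius(t, p=4.0, n=1):
    """Radius of the support of U(., t) when C0 = q, namely t^(k/n)."""
    k = 1.0 / (p - 2.0 + p / n)
    return t ** (k / n)

# The support is a ball whose radius strictly increases with time, and the
# spatial maximum sits at the origin in every time slice.
x = np.linspace(-2.0, 2.0, 401)
for t in (0.5, 1.0, 2.0):
    u = barenblatt(x, t)
    print(t, support_radius(t), x[np.argmax(u)])
```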
\[lower bound1\] Consider a solution to $$\begin{aligned}
\label{initial bastard}
\left \{
\begin{array}{rcll}
u_t - \Delta_p u &=& 0, & \text{ in $B_2 \times (0,\infty)$} \\
u &=& 0, &\text{ on $\partial B_2 \times (0,\infty)$}\\
u &\geq& \chi_{B_1}, &\text{ for $t=0$}\\
\end{array}
\right .
\end{aligned}$$ then there exist constants $c_1,c_2,c_3,c_4 > 0$, all depending only on $p$ and $n$, such that $$\label{inflowerbound}
\inf_{x \in B_1} u(x,t) \geq c_1\bigg (c_2 t+1\bigg )^{\frac{1}{2-p}}, \quad t \geq 0,$$ and $$\label{distancelowerbound_simple}
u(x,t) \geq c_3 (2-|x|) \bigg (c_2 t+1\bigg )^{\frac{1}{2-p}}, \quad t \geq c_4.$$
Let us consider the Barenblatt function ${\mathcal U}$, then consider $t_1$ such that $$\nonumber
\supp {\mathcal U}(\cdot,t_1) = B_1,$$ denote ${\mathcal U}(0,t_1) = \kappa_1$, and consider the rescaled Barenblatt $$\nonumber
\hat {\mathcal U}(x,t) = \frac{{\mathcal U}(x,\kappa_1^{2-p} t+t_1)}{\kappa_1}.$$ Now let $t_2$ be such that $\supp \hat {\mathcal U}(\cdot,t_2) = B_2$, then denote $$\nonumber
\sigma = \inf_{x \in B_1} \hat {\mathcal U}(x,t_2) > 0.$$ Note that $\sigma = \sigma(p,n) \in (0,1)$ is a constant. By the comparison principle we obtain that in $B_2 \times (0,t_2)$ we have that $u \geq \hat {\mathcal U}$ and thus $$\label{lower1}
\inf_{x \in B_1} u(x,t_2) \geq \sigma.$$
We will now use an iterative argument. Assume that we have a function $u$ in $B_2 \times (t_0,\infty)$ such that $$\label{iter1}
\inf_{x \in B_1} u(x,t_0) \geq m$$ then the function $$\label{iterrescale}
v(x,t) = \frac{u(x,m^{2-p} t+t_0)}{m}$$ satisfies and thus from $$\nonumber
\inf_{x \in B_1} v(x,t_2) \geq \sigma.$$ Rescaling back we obtain $$\label{iter2}
\inf_{x \in B_1} u(x,t_0+t_2 m^{2-p}) \geq m \sigma.$$
If we start from $u$ in with $t=0$ and iterate , with each iteration starting at the times $$\label{itersum1}
\tau_k = \sum_{i=1}^k t_2 m_{i-1}^{2-p} = t_2 \sum_{i=0}^{k-1} (\sigma^{2-p})^i = t_2 \frac{(\sigma^{2-p})^{k}-1}{\sigma^{2-p}-1},$$ where $m_i := \sigma^i$, we get $$\label{itersum2}
\inf_{x \in B_1} u(x,t_0+\tau_k) \geq m_{k}.$$ Now, given $t \geq 0$, let $k$ be the integer such that $$\label{tauk}
\tau_{k-1} \leq t \leq \tau_k$$ then by $$\label{tktranslation1}
\sigma^2 \bigg (t \frac{(\sigma^{2-p}-1)}{t_2}+1\bigg )^{\frac{1}{2-p}} \in (\sigma^{k},\sigma^{k-1})$$ and thus, collecting the above estimates, we obtain .
Let us now prove . This we do as follows. Consider again the Barenblatt function, with $t_1$ as before, but this time, let us consider the following rescaled Barenblatt $$\nonumber
{\widetilde}{\mathcal U}(x,t) = \frac{{\mathcal U}(x,(\kappa_1/2)^{2-p}t+t_1)-\kappa_1/2}{\kappa_1/2}.$$ Then find ${\widetilde}t_2$ where ${
\left \{ {\widetilde}{\mathcal U}> 0 \right \}
} = B_2$, and note that ${\widetilde}t_2 > t_2$ and thus there is a constant ${\widetilde}c > 0$ such that $$\label{tildet2lower}
{\widetilde}{\mathcal U}(x,{\widetilde}t_2) \geq {\widetilde}c (2-|x|).$$ Now, by the construction of ${\widetilde}{\mathcal U}$ and the parabolic comparison principle we get that ${\widetilde}{\mathcal U} \leq u$ in $\overline{B_2} \times [0,{\widetilde}t_2]$, and thus holds also for $u$ at ${\widetilde}t_2$.
Going back to , let $k_0 > 0$ be the number such that $$\nonumber
\tau_{k_0} \leq {\widetilde}t_2 \leq \tau_{k_0+1}.$$ Consider $v: B_2 \setminus B_1 \to \R_+$, the solution to $\Delta_p v = 0$ with $v = c_0$ on $\partial B_1$ and $v = 0$ on $\partial B_2$, where $c_0$ is to be fixed. First take $c_0$ to be the largest number such that $$\nonumber
v \leq {\widetilde}c (2-|x|) \quad \text{and}\quad c_0 \leq \sigma^{k_0+1}.$$ This implies that $v(x) \leq u(x,t)$ on $\partial_p [ (B_2 \setminus B_1) \times ({\widetilde}t_2, \tau_{k_0+1})]$ and thus by the parabolic comparison principle we have $v \leq u$ in $(B_2 \setminus B_1) \times ({\widetilde}t_2,\tau_{k_0+1}]$. Moreover there is a new constant $\hat c > 0$ such that $$\nonumber
v \geq \hat c (2-|x|).$$ Now we can apply the same argument iteratively for $(\tau_{k_0+j},\tau_{k_0+j+1})$, $j=1,\ldots$ and obtain $$\nonumber
u(x,t) \geq c_2 (2-|x|) \sigma^j$$ for $t \in [\tau_{k_0+j},\tau_{k_0+j+1}]$, and arguing as in we conclude that holds, which proves .
Due to translation invariance we can assume that $w = 0$. Let $\epsilon > 0$ be a given number such that $t_0-\epsilon > 0$ and $t_0+\epsilon < T$. The condition implies that there is a sequence of points ${
\left \{ x_j \right \}
}_{j=1}^{\infty}$, $x_j \in \Gamma(w)$ and $x_j \to w$ such that for a strictly decreasing function $\eta$ we get $$\label{eq eta definition}
u(x_j,t_0) \geq \eta(|x_j|) |x_j|^{\frac{p}{p-2}},$$ where $\eta(0^+) = \infty$. Start by considering $g(r) = r^{\frac{p}{p-2}} \eta(r)$, and define the “time-lag” function $\theta$ as $$\nonumber
\theta(r):= g(r)^{2-p} r^p = r^{\frac{p}{p-2}(2-p)+p}\eta(r)^{2-p} = \eta(r)^{2-p}$$ which implies that $\theta$ is a strictly increasing function such that $\theta(0^+) = 0$. We construct, for $r \leq r_0$, the sets $$\begin{aligned}
N^r_1 &:= {
\big \{ x \in \Omega: d(x,\partial \Omega \cap B(0,r)) = d(x,\partial \Omega) = r \big \}
}, \\
N^r_2 &:= {
\big \{ x \in \Omega: d(x,N^r_1) \leq r/2 \big \}
}.
\end{aligned}$$ Note that since $\Omega$ is an NTA-domain we see that $N^r_1$ can be covered by $k(M,r_0)$ balls of size $r/4$. Now, consider the unique $r_\epsilon$ such that $$ \theta(r_\epsilon) = \frac{\epsilon}{4 \max {
\left \{ \bar c_1, c_4(\bar c_2^{k})^{p-2}, \bar c_3(k) \right \}
}},$$ where $c_4 > 1$ is from and $\bar c_i$, $i \in {
\left \{ 1,2,3 \right \}
}$ is from . Let $J \in \mathbb{N}$ be the smallest integer such that $|x_J| \leq \min{
\left \{ r_\epsilon,r_0 \right \}
}$. Denote $r = d(x_J,\partial \Omega)$ and note that since $x_J \in \Gamma(w)$, $r \approx |x_J|$. From and the choice of $r_\epsilon$ we can apply the forward Harnack chain () to obtain that in $N^r_2$, at the time $$\nonumber
t_1 = t_0 + \bar c_3(k) u(x_J,t_0)^{2-p} \left (\frac{r}{4} \right )^p,$$ where $t_0 < t_1 < t_0+\epsilon/4$, we have $$\label{eq u larger than g r}
u(x,t_1) \geq \bar c_2^{-k} u(x_J,t_0) \geq \bar c_2^{-k} g(r).$$ Now consider any point $y$ in $N^r_1$, and let $v$ denote a function satisfying \[initial bastard\] but with $v(\cdot,0) = \chi_{B_1}$. Let us translate and scale $v$ as follows $$\nonumber
\bar v(x,t) = (\bar c_2^{-k} g(r)) v\left (\frac{x-y}{r/2}, (t-t_1) \frac{(\bar c_2^{-k})^{p-2} 2^p}{\theta(r)} \right ).$$ The function $\bar v$ now satisfies $$\begin{aligned}
\left \{
\begin{array}{rcll}
\bar v_t - \Delta_p \bar v &=& 0, & \text{ in $B_r(y) \times (t_1,\infty)$} \\
\bar v &=& 0, &\text{ on $\partial B_r(y) \times (t_1,\infty)$}\\
\bar v &=& \bar c_2^{-k} g(r)\chi_{B_{r/2}(y)}, &\text{ for $t=t_1$}.
\end{array}
\right .
\end{aligned}$$ We can now use the parabolic comparison principle together with to conclude $u \geq \bar v$. Applying we get for $$\nonumber
t_2 := t_1 + c_4 \frac{\theta(r)}{(\bar c_2^{-k})^{p-2} 2^p} < t < T$$ that $$\nonumber u(x,t) \geq c_3 \frac{2(r-|x-y|)}{r} \bigg (c_2 (t-t_1) \frac{(\bar c_2^{-k})^{p-2} 2^p}{\theta(r)}+1\bigg )^{\frac{1}{2-p}}.$$ Furthermore our choice of $r_\epsilon$ gives $t_0 < t_2 < t_0 + \epsilon$. In particular since $y$ was an arbitrary point in $N^r_1$ and $\epsilon > 0$ was arbitrary, we can conclude that holds for $t > t_0+\epsilon$.
Memory-effect for degenerate initial data: Proof of Theorem \[thm memory full\] {#sec:memory}
===============================================================================
This next result is a theorem about a certain memory effect of the $p$-parabolic equation; essentially it states that if, at a fixed boundary point, the initial data decays at a rate at least as fast as $r^{\frac{p}{p-2}}$, then the equation will remember this decay for some time forward, dictated by the size of the solution. Another way to look at this is that a solution with this decay-rate does not regularize immediately.
\[thm memory\] Consider a bounded domain $\Omega \subset \R^N$, assume that $w \in \partial \Omega$, and let $\delta \geq \frac{p}{p-2}$. Consider the domain $E_T := E \times (0,T) := (\Omega \cap B(w,2)) \times (0,T)$ and consider a solution to the following Cauchy-Dirichlet problem $$\label{upperproblem}
\left \{
\begin{array}{rcll}
u_t - \Delta_p u &=& 0, &\text{ in $E_T$ }\\
u &=& 0, &\text{ on $\partial \Omega \cap \partial_p E_T$}\\
u(x,t) &\leq& M, &\text{ on $\partial_p E_T$ } \\
u(x,0) &\leq& M |x-w|^\delta,&\text{ for $x \in E$}.
\end{array}
\right .$$ Then there exists a time $$\label{hatT}
\hat T := \left [\frac{C(p,\delta)}{M} \right ]^{p-2}$$ and a constant $c_0(p) > 0$ such that for $t \in (0,\min{
\left \{ \hat T/2,T \right \}
})$ the following upper bound holds $$\nonumber
u(x,t) \leq c_0 M |x-w|^{\delta}, \quad x \in E.$$
In order to be able to work in dimensions higher than $N=1$ we need to construct a radial version of . For this, let us consider a solution of the type $v(x,t):=g(|x|)f(t)$; plugging this into equation \[Hu\] gives us $$\nonumber f_t g - f^{p-1} \Delta_p g = 0.$$ Let us first solve $$\nonumber f_t(t) = f^{p-1}(t),$$ which has as a solution $c_p (T-t)^{-\frac{1}{p-2}}$ for any value of $T$; let us use $T = 1$. Next let us solve $$\nonumber \Delta_p g \leq g,$$ which in radial form, with $|x|=r$, reads $$\label{radialsplit}
|g'|^{p-2} \bigg [(p-1) g'' +\frac{N-1}{r}g' \bigg ] \leq g.$$ Now let $\delta \geq \frac{p}{p-2}$ then for $g = c_o r^\delta$ $$\nonumber
|g'|^{p-2}\bigg [(p-1) g'' +\frac{N-1}{r}g' \bigg ] = c_o^{p-1} \delta^{p-1} \bigg [(p-1) (\delta-1) +(N-1) \bigg ]r^{\delta(p-1)-p}$$ So we only need to choose $c_o$ to be $$\nonumber c_o(p,\delta,N) := \bigg[ \delta^{p-1} \big [(p-1) (\delta-1) +(N-1) \big ] \bigg ]^{\frac{1}{2-p}} 2^{\frac{p}{p-2}-\delta},$$ in order for $g$ to satisfy for $r \leq 2$, since $\delta(p-1)-p \geq \delta$. In fact, for our choice of $c_o$ we see that is solved with equality if $\delta = \frac{p}{p-2}$, just as in the one dimensional case . Specifically $$\nonumber v(x,t) = c_p c_o (1-t)^{-\frac{1}{p-2}} |x|^{\delta}$$ is a supersolution to in $(B_2 \setminus {
\left \{ 0 \right \}
}) \times (-\infty,1)$.
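As a sanity check on this construction (not part of the original proof; the parameter values are chosen purely for illustration), the following small SymPy sketch verifies that, for $p=4$, $N=2$ and the critical exponent $\delta = p/(p-2)$, the profile $g = c_o r^\delta$ with the constant $c_o$ above satisfies the radial inequality with equality.

```python
import sympy as sp

r = sp.symbols('r', positive=True)
p, N = sp.Integer(4), sp.Integer(2)   # assumed concrete choices for illustration
delta = p / (p - 2)                   # critical exponent p/(p-2); here delta = 2

# The constant c_o(p, delta, N) defined in the text
bracket = delta**(p - 1) * ((p - 1) * (delta - 1) + (N - 1))
c_o = bracket**(sp.Integer(1) / (2 - p)) * 2**(p / (p - 2) - delta)

g = c_o * r**delta                    # candidate barrier profile
# g is increasing in r, so |g'| = g'
lhs = g.diff(r)**(p - 2) * ((p - 1) * g.diff(r, 2) + (N - 1) / r * g.diff(r))

# In the critical case delta = p/(p-2) the radial inequality holds with equality
print(sp.simplify(lhs - g))           # expected output: 0
```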
Let now $u$ be as in \[upperproblem\] and assume for simplicity that $w = 0$. Let $v$ be the supersolution established above and consider the rescaled function $\bar v$ as $$\begin{aligned}
\bar v(x,t) = \frac{M}{c_o c_p} v\left ( x, \left ( \frac{c_o c_p}{M} \right )^{2-p} t \right )
\end{aligned}$$ then if we define $C(p,\delta) = c_o c_p$ and use $\hat T$ as in \[hatT\], we get that $\bar v$ satisfies $$\nonumber \left \{
\begin{array}{rcll}
\bar v_t - \Delta_p \bar v &=& 0, &\text{ in $E_{\hat T}$ }\\
\bar v &\geq& 0, &\text{ on $\partial \Omega \cap \partial_p E_{\hat T}$}\\
\bar v(x,t) &\geq& M, &\text{ on $\partial_p E_{\hat T}$ } \\
\bar v(x,0) &=& M |x|^\delta,&\text{ for $x \in E$}.
\end{array}
\right .$$ Thus by the parabolic comparison principle the conclusion of the theorem follows.
The proof of \[thm memory full\] now follows from \[thm memory\] by a simple scaling argument.
The boundary type Riesz measure
===============================
We begin this section with a simplified version of the upper bound on the measure (see \[eq:Riesz\]) that we developed in [@AKN Theorem 5.2]; we have included the proof for the convenience of the reader.
\[MuUpperBound\] Let $\Omega \subset \R^n$ be a bounded domain. Let $r > 0$ and let $u$ be a non-negative solution to \[Hu\] in $\Omega_T$. Fix a point $x_0 \in \partial \Omega$, $\delta \in (0,1)$ and a $\Lambda > 0$ such that $$Q = B_r(x_0) \times (t_0 - \delta \Lambda^{2-p} r^p , t_0),$$ $$\nonumber
\Lambda \geq \sup_{2Q \cap \Omega_T} u,$$ and $t_0 - 2\delta \Lambda^{2-p} r^p > 0$. Now if $u$ vanishes on $\partial_p \Omega_T \cap 2Q$ then the following upper bound holds ($\mu$ is as in \[eq:Riesz\]) $$\begin{aligned}
\frac{\mu(Q)}{r^n} \leq C \Lambda.
\end{aligned}$$
As in the construction of the measure $\mu$ in , we see that, extending $u$ by zero to the entire cylinder $Q$, we obtain a weak subsolution to \[Hu\] in $Q$. Take a cut-off function $\phi \in C^\infty(2Q)$ vanishing on $\partial_p 2Q$ such that $0 \leq \phi \leq 1$, $\phi$ is one on $Q$, $|\grad \phi| < C/r$ and $(\phi_t)_+ < C \frac{\Lambda^{p-2}}{\delta r^p}$. Then by \[eq:Riesz\], the definition of $\phi$ and H[ö]{}lder’s inequality we get $$\begin{aligned}
\int_{2Q} \phi^p {\, {\rm d}}\mu &\leq \int_{2Q} |\grad u|^{p-1} |\grad \phi| \phi^{p-1} {\, {\rm d} x}{\, {\rm d} t}+ \int_{2Q} u (\phi_t)_+ \phi^{p-1}{\, {\rm d} x}{\, {\rm d} t}\\
&\leq \frac{4}{r} \int_{2Q} |\grad u|^{p-1} \phi^{p-1} {\, {\rm d} x}{\, {\rm d} t}+ \int_{2Q} u (\phi_t)_+ \phi^{p-1} {\, {\rm d} x}{\, {\rm d} t}\\
&\leq \frac{4}{r} |2Q|^{1/p} \left ( \int_{2Q} |\grad u|^p \phi^p {\, {\rm d} x}{\, {\rm d} t}\right )^{\frac{p-1}{p}} + \int_{2Q} u (\phi_t)_+ \phi^{p-1} {\, {\rm d} x}{\, {\rm d} t}.
\end{aligned}$$ Now using the standard Caccioppoli estimate $$\begin{aligned}
\mu(Q) &\leq C \frac{|2Q|^{1/p}}{r} \left ( \int_{2Q} u^p |\grad \phi|^p + u^2 (\phi_t)_+ \phi^{p-1} {\, {\rm d} x}{\, {\rm d} t}\right )^{\frac{p-1}{p}} + \int_{2Q} u (\phi_t)_+ \phi^{p-1} {\, {\rm d} x}{\, {\rm d} t}, \\
&\leq C \frac{|2Q|^{1/p}}{r} \left (|2Q| \left (\frac{\Lambda^p}{r^{p}} + \frac{\Lambda^p}{\delta r^p } \right ) \right )^{\frac{p-1}{p}} + C |2Q| \frac{\Lambda^{p-1}}{\delta r^p} \\
&\leq C \frac{|2Q|}{\delta r^p} \Lambda^{p-1} = C \frac{r^n \delta \Lambda^{2-p} r^p}{\delta r^p} \Lambda^{p-1} \leq C r^n \Lambda .
\end{aligned}$$
We are now ready to tackle the proof of \[thm:Riesz\_no\_support\], which utilizes the above estimate to make the radius dependency explicit, so that a covering argument implies that the measure has no support on the set of degeneracy, i.e. its restriction to that set is zero.
Let $\hat E = \hat B \times \hat I$ be a space-time cylinder such that $\hat E \Subset E$. The assumptions on $u$ give that there exists a constant $C$ such that $$\label{eq meas zero 1}
u(x,t) \leq C |x-w|^{\frac{p}{p-2}}, \quad (x,t) \in \hat E, w \in \hat B \cap \partial \Omega.$$ Consider a cube $Q_{\varrho,\sigma}(y,s) = (\partial \Omega \cap B_{\varrho}(y)) \times (s-\sigma,s)$ such that $2Q_{\varrho,\sigma}(y,s) \subset \hat E \cap \partial_p \Omega_T$. Note that gives $$\nonumber
\sup_{2Q_{\varrho,\sigma}(y,s)} u \leq C_0 \varrho^{\frac{p}{p-2}}.$$ Thus setting $\Lambda = C_0 \varrho^{\frac{p}{p-2}}$ and setting $\delta = C_0^{p-2} \sigma$ we get from that $$\nonumber
\mu(Q_{\varrho,\sigma}(y,s)) \leq C(\sigma) \varrho^{n+\frac{p}{p-2}}.$$ Now, since the height is fixed at $\sigma$ irrespective of the radius $\varrho > 0$, the measure decays faster than $\varrho^n$, which implies that it is the zero measure inside $\hat E$.
For the proof of \[thm:Riesz\_support\] we will need the following two lemmas from [@AKN].
\[MuLowerSimple\] Let $\Omega \subset \R^n$ be an NTA-domain with constants $M$ and $r_0=2$. There exists constants $C,T_0$, both depending on $p,n,M$, such that if $v$ is a continuous solution to the problem $$\label{eq:vDirichlet}
\begin{cases}
v_t - \Delta_p v = 0 &\text{ in } (\Omega \cap B_2(0)) \times (0,T_0) \\
v = 0 &\text{ on } \partial (\Omega \cap B_2(0)) \times [0,T_0) \\
v = \chi_{B_{1/(4M)}(a_1(0))} & \text{ on } (\Omega \cap B_2(0)) \times \{0 \},
\end{cases}$$ then $$\mu_v\big(B_2(0) \times (0,T_0)\big) \geq 1/C.$$ Furthermore, the constants $C$ and $T_0$ are stable as $p \to 2^+$.
\[meas order\] Let $\Omega \subset \R^n$ be a domain. Let $u$ and $v$ be weak solutions in $(\Omega \cap B_r(0)) \times (0,T)$ such that $u \geq v \geq 0$ and both vanish continuously on the lateral boundary $(\partial \Omega \cap B_r(0)) \times (0,T)$. Then $$\mu_v \leq \mu_u \qquad \mbox{in } B_r(0) \times (0,T)$$ in the sense of measures.
From \[eq:Riesz\_support\_upper\_bound\] we see that $$\begin{aligned}
u(x,t) \geq \Lambda d(x,\partial \Omega).
\end{aligned}$$ Let us now consider $(t_1,t_2) \Subset (t_0-\epsilon,t_0)$ and a point $a_{\varrho}(y)$ for $y \in \partial \Omega \cap B_r(w)$ such that $B \equiv B_{\varrho/M}(a_{\varrho}(y)) \subset B_r(w) \cap \Omega$ for some $\varrho < r$ then $$\begin{aligned}
u(x,t) \geq C \Lambda \varrho, \qquad (x,t) \in 2^{-1}B \times [t_1,t_2]
\end{aligned}$$ for some constant $C$ not depending on $u$. Let us now consider the rescaled function $w$ defined as follows $$\begin{aligned}
w(x,t) = \frac{1}{C \Lambda \varrho} u \left (y + \frac{\varrho}{2}x, t_1 + (C \Lambda \varrho)^{2-p} \left (\frac{\varrho}{2}\right )^p t \right )
\end{aligned}$$ then $w$ is a solution in $B_2(0) \times [0,T]$ (where $T = (C \Lambda \varrho)^{p-2} \left (\frac{\varrho}{2}\right )^{-p} (t_2-t_1)$) such that $$\begin{aligned}
w(x,t) \geq 1, \qquad (x,t) \in B_{1/(4M)}(a_1(0)) \times [0,T].
\end{aligned}$$ Before using \[MuLowerSimple\] we need to know that $T \geq T_0$ (where $T_0$ is from \[MuLowerSimple\]); first note that $$\begin{aligned}
(C \Lambda \varrho)^{p-2} \left (\frac{2}{\varrho}\right )^p = (C \Lambda)^{p-2} 2^{p} \varrho^{-2}
\end{aligned}$$ thus taking $\varrho$ small enough depending on $t_2-t_1$ and $\Lambda$ we get $$\begin{aligned}
(C \Lambda)^{p-2} 2^{p} \varrho^{-2} (t_2-t_1) \geq T_0.
\end{aligned}$$ Now using \[MuLowerSimple,meas order\] we get that $$\begin{aligned}
\mu_{w}(B_2(0) \times (0,T_0)) \geq 1/C,
\end{aligned}$$ for a constant $C(p,n,M) > 1$. Scaling back to our original variables we obtain that $$\begin{aligned}
\frac{\mu(B_{\varrho}(y) \times [t_1,t_2])}{\varrho^{n+1}} \geq C \Lambda
\end{aligned}$$
What we can learn from the above proof is that if $t_2-t_1 \approx \varrho^2$, then the measure of a parabolic cylinder with the heat-equation scaling $(\varrho,\varrho^2)$ is of size $\varrho^{n+1}$, which is exactly the same as for the caloric measure related to the heat equation. In this sense the non-degeneracy assumption that $u \geq d(x,\partial \Omega)$ ‘linearizes’ the equation.
The next lemma is essentially trivial in its conclusion, given continuity of the gradient and the representation of the Riesz measure as the limit of $|\grad u|^{p-1}$ on the boundary; however, the proof highlights a way of thinking which becomes important when moving to other domains. Moreover, since we are assuming an NTA-domain satisfying the interior ball condition, the circumstances considered in this proof are fairly straightforward.
Let $\Omega \subset \R^n$ be a bounded NTA-domain satisfying the interior ball condition. Furthermore assume that in $Q = B_r(x_0) \cap \Omega \times (0,T)$ the measure $\mu$ vanishes. Then for any $w \in B_r(x_0) \cap \partial \Omega$ we have $$\begin{aligned}
(0,T) \subset {\mathcal{D}_{\mathcal{T}}}(w).
\end{aligned}$$
Assume that there is a point $w \in B_r(x_0) \cap \partial \Omega$ such that there is a time $t \in (0,T)$ for which $$\begin{aligned}
t \in {\mathcal{D}}(w).
\end{aligned}$$ From this we can conclude, as in previous estimates (see the proof of \[thm:Riesz\_support\]), that if we wish to connect to a point $a_\varrho(w)$ using a forward Harnack chain (see \[NTA FChain\]), the waiting time will be of order $\varrho^2$. This implies, via a barrier argument as in the proof of \[thm linearization\], that for any $\hat t > t$ there is a neighborhood of $w$ on which $\hat t \in {\mathcal{D}}(\cdot)$, which implies via \[thm:Riesz\_support\] that the measure $\mu \neq 0$, and thus we have a contradiction.
What happens in non-smooth domains? {#sec:cone}
===================================
In an NTA-domain $\Omega \subset \R^n$ that satisfies the interior ball condition we can use a barrier function as in \[lower bound1\] to obtain that, given a solution's initial data, if the existence time is large enough we will eventually get a linearization effect, i.e. the set ${\mathcal{D}}\neq \emptyset$. In this section we provide an adaptation of a proof by Vázquez to the $p$-parabolic equation, which shows that in a conical domain there are solutions for which the support never reaches the tip of the cone; this implies that we may remain in ${\mathcal{D}_{\mathcal{N}}}$ no matter how long the solution exists.
No support in a cone
--------------------
Let $S$ be an open connected subset of the $(n-1)$-dimensional unit sphere $S^{n-1}$ with a smooth boundary, and define the conical domain $$\begin{aligned}
K(R,S) = \{r \sigma\, |\, \sigma \in S,\ 0 < r < R \},\end{aligned}$$ and we look for non-negative solutions vanishing on the lateral surface of the cone, $$\begin{aligned}
\Sigma(R,S)\times (0,T) = \{r \sigma\, |\, \sigma \in \partial S, 0 \leq r \leq R \} \times (0,T).\end{aligned}$$
The following argument is taken from [@Vaz p. 344-345] with exponents adapted to the parabolic $p$-Laplace equation.
To begin the argument we first need to define a similarity transform and the scaling properties of the support. The similarity transform is given as $$u_{\lambda}(x,t) = (T_{\lambda} u)(x,t) := \lambda^{q}u(x/\lambda,\mu t).$$ Let $f : S \to \R$ be a non-negative function on the spherical cap $S$ such that $f = 0$ on $\partial S$; then the semi-radial function $U(x) = |x|^q f(x/|x|)$ is invariant under this transform. The $p$-parabolic equation is invariant under $T_\lambda$ if $\mu = \lambda^{q(p-2)-p}$, as can be seen from $$(u_\lambda)_t - \Delta_p u_\lambda = \lambda^{q}\mu u_t - \lambda^{q(p-1)-p} \Delta_p u.$$
Let $S_u(t)$ denote the support of a function $u$ at time $t$. Then $S_{u_\lambda}(t) = \lambda S_u(\mu t)$. Define the distance to the origin from the set $S_u$ $$r_u(t) = \inf \{|x|: x \in S_u(t)\}$$ which satisfies the scaling property $r_{u_\lambda}(t) = \lambda r_u(\mu t)$.
Let now $U(x)$ be a function that we will use as initial data, and let us construct a solution which has zero initial value close to the origin and coincides with $U(x)$ outside a ball of size 1. Specifically we will define $\bar u$ as a solution to the $p$-parabolic equation satisfying the initial data as follows $$\begin{aligned}
\bar u(x,t)= U(x) \text{ for $|x| \geq 1$ and $\bar u(x,t) = 0$ for $0 < |x| < 1$, $(x,t) \in \partial_p \Omega_\infty$.}\end{aligned}$$ Let $a < 1$ be a given number, then due to the finite propagation there exists a $\tau > 0$ such that $S_{\bar u}(\tau) \cap B_a = \emptyset$. Furthermore since $U$ is a solution to the $p$-Laplace equation we have by the comparison principle that $$\begin{aligned}
\bar u(x,\tau) \leq U(x).\end{aligned}$$ Using the similarity transform with $\lambda = a$ produces a solution $\overline u_1 = T_a \overline u$ such that $\overline u_1 \leq U(x)$ and $\overline u_1(x,0) = 0$ iff $|x| < a$. We see that $\bar u_1$ satisfies $$\begin{aligned}
\bar u_1(x,0)= U(x) \text{ for $|x| \geq a$ and $\bar u_1(x,t) = 0$ for $0 < |x| < a$, $(x,t) \in \partial_p \Omega_\infty$},\end{aligned}$$ which implies that $\bar u_1(x,0) \geq \bar u(x,\tau)$ and thus by the comparison principle we get $$\begin{aligned}
\bar u_1(x,t) \geq \bar u(x,t+\tau)\end{aligned}$$ for all $t > 0$ which gives the following inequality concerning the supports $$\begin{aligned}
r_{\bar u}(t+\tau) \geq r_{\bar u_1}(t) = a r_{\bar u}(\mu t).\end{aligned}$$ We now use the above to get for $t = \tau/\mu$ $$\begin{aligned}
r_{\bar u}\left (\frac{\tau}{\mu} + \tau \right ) \geq a r_{\bar u}\left(\mu \frac{\tau}{\mu} \right ) = a r_{\bar u}(\tau) \geq a^2,\end{aligned}$$ then an iteration yields $$\begin{aligned}
r_{\bar u}(t_k) \geq a^{k+1} \quad \text{for} \quad t_k = \tau \sum_{j=0}^k \mu^{-j} = \tau \sum_{j=0}^{k} \left [\frac{1}{a} \right ]^{(q(p-2)-p)j}.\end{aligned}$$ We see that if $q(p-2)-p > 0$ then $t_k \to \infty$, i.e. $q > \frac{p}{p-2}$.
In conclusion we can say that if $q > \frac{p}{p-2}$ then the support of $\bar u$ will never reach the vertex of the cone.
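The following small numerical sketch (purely illustrative; the values of $p$, $q$, $a$ and $\tau$ are assumptions, not taken from the argument above) traces the iteration: it evaluates the times $t_k$ for a supercritical and a subcritical value of $q$, showing that $t_k$ diverges exactly when $q > p/(p-2)$.

```python
import numpy as np

p = 4.0                      # assumed value of p > 2
q_crit = p / (p - 2.0)       # critical exponent p/(p-2); equals 2 here
a, tau = 0.5, 1.0            # assumed contraction factor and initial waiting time

def times(q, kmax=20):
    """t_k = tau * sum_{j=0}^{k} (1/a)^{(q(p-2)-p) j}, as in the iteration above."""
    mu_inv = (1.0 / a) ** (q * (p - 2.0) - p)   # this is mu^{-1}
    return np.array([tau * np.sum(mu_inv ** np.arange(k + 1))
                     for k in range(kmax + 1)])

for q in (q_crit + 0.5, q_crit - 0.5):          # supercritical vs. subcritical q
    t = times(q)
    r_lower = a ** (np.arange(len(t)) + 1)      # lower bound a^(k+1) on r_ubar(t_k)
    print(f"q = {q}: t_20 = {t[-1]:.3e}, support-distance bound a^21 = {r_lower[-1]:.3e}")
```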
Stationary interfaces {#sec:interface}
=====================
We end this paper with a section considering the stationarity of the interface, as mentioned in \[rem:cont\]. The setup here is to consider a $p$-parabolic function $u$ satisfying the following initial and boundary conditions, $$\label{eq:numericalproblem}
\left \{
\begin{array}{rcll}
u_t - \Delta_p u &=& 0, &\text{ in $(-1,1) \times (0,T)$ }\\
u &=& 0, &\text{ on $\{-1\} \times (0,T)$}\\
u &=& 1, &\text{ on $\{1\} \times (0,T)$ } \\
u(x,0) &=& x_+^\frac{p}{p-2},&\text{ for $x \in [-1,1]$}.
\end{array}
\right .$$ Considering \[rem:cont\] together with the proof of \[thm memory\] we see that, regarding $u$ as a solution in $(0,1) \times (0,T)$, it vanishes on $\{0\} \times (0,\hat t)$ provided $$\begin{aligned}
\hat t \leq (c_p c_o)^{p-2}.\end{aligned}$$
\[conj\] The critical time $\hat t = (c_p c_o)^{p-2}$ is the largest time such that $u(0,t) = 0$.
Numerical experiment of \[conj\]
--------------------------------
We will explore the contents of \[conj\] via a numerical example of \[eq:numericalproblem\] in one spatial dimension. In this context we will be using MOL (Method of Lines), which amounts to discretizing the equation in space, leaving us with a system of non-linear ODEs (a semi-discretization). To specify our numerical setup, consider a spatial discretization with $N$ steps and $dx \approx 1/N$; the MOL equation then becomes, in a finite difference (FD) context, $$\begin{aligned}
\label{eq:MOL}
u_t^i = (p-1) (D^i_j u^j)^{p-2} H^i_j u^j, \qquad i = 0,\ldots, N,\end{aligned}$$ where $$\begin{aligned}
D^i_j u^j =
\begin{cases}
\frac{u^{i+1} - u^{i-1}}{2 dx}, &\text{ if } 1 \leq i < N\\
0, &\text{ otherwise}
\end{cases}\end{aligned}$$ is a basic FD type difference quotient (central difference quotients) and $$\begin{aligned}
H^i_j u^j =
\begin{cases}
(u^{i+1}-2u^i + u^{i-1})/(dx^2), &\text{ if } 1 \leq i < N\\
0, &\text{ otherwise}
\end{cases}\end{aligned}$$ is a basic second order FD difference quotient. Thus we see that \[eq:MOL\] is a system of $N+1$ nonlinear ODEs. The system \[eq:MOL\] turns out to be stiff and sparse, and we will be using Matlab’s stiff solver *ode15s* to solve it numerically with the data given as in \[eq:numericalproblem\] and with $T = 1.2 \hat t$ ($\hat t$ from \[conj\]). See [@J] for convergence of a FEM semi-discretization.
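For readers without access to Matlab, the following sketch reproduces the same semi-discretization in Python, with SciPy’s stiff BDF solver playing the role of *ode15s*; the grid size, tolerances and the closed-form value of $\hat t$ for $p=4$ are assumptions of this illustration, not part of the original computation.

```python
import numpy as np
from scipy.integrate import solve_ivp

p = 4.0
t_hat = (p - 2.0) ** (p - 1) / (2.0 * p ** (p - 1) * (p - 1))  # conjectured critical time
N = 200
x = np.linspace(-1.0, 1.0, N + 1)
dx = x[1] - x[0]

def rhs(t, u):
    """MOL right-hand side: u_t = (p-1) |u_x|^(p-2) u_xx via central differences."""
    du = np.zeros_like(u)
    ux = (u[2:] - u[:-2]) / (2.0 * dx)
    uxx = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx ** 2
    du[1:-1] = (p - 1.0) * np.abs(ux) ** (p - 2.0) * uxx
    du[0] = du[-1] = 0.0            # Dirichlet data u(-1)=0, u(1)=1 kept fixed
    return du

u0 = np.maximum(x, 0.0) ** (p / (p - 2.0))   # initial data x_+^{p/(p-2)}
sol = solve_ivp(rhs, (0.0, 1.2 * t_hat), u0, method="BDF",
                rtol=1e-8, atol=1e-10, dense_output=True)
print("final time reached:", sol.t[-1])
```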
Since we are interested in the support of our numerical solution, we will consider the equation satisfied by the interface ($u \equiv 0$); more precisely, we will consider the equation of a basic level set. That is, we are looking for a curve $\gamma : \R_+ \to \R$ such that for a given level $M \geq 0$ the following holds $$\begin{aligned}
u(\gamma(t),t) = M.\end{aligned}$$ We consider the above problem with the initial point $\gamma(0)$ taken to be a point where the initial data equals $M$; for $M = 0$ it is the edge of the support, i.e. $x = 0$. Proceeding formally and differentiating the condition for $\gamma$ gives us $$\begin{aligned}
\label{eq:levelseteq}
u_x \gamma' + u_t = 0,\end{aligned}$$ which after inserting the equation \[Hu\] into \[eq:levelseteq\] yields $$\begin{aligned}
-\gamma'(t) = \frac{\Delta_p u}{u_x}(\gamma(t),t) = (p-1) (u_x(\gamma(t),t))^{p-3} u_{xx}(\gamma(t),t).\end{aligned}$$ In \[fig:Interface\] we see the result of the above equation when using the numerical solution of \[eq:MOL\] seen in \[fig:MOL\] as the approximate values for $u$ and approximating the first and second derivative with the respective FD quotients, as described in \[eq:MOL\].
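Continuing the Python sketch above (again an illustration; the level, the Euler time-stepping and the interpolation to the grid are ad hoc choices), the level set can be traced by integrating the ODE for $\gamma$ with FD derivatives of the numerical solution:

```python
# Continuing the previous sketch: trace the level set u(gamma(t), t) = M_level
M_level = 1e-4                            # small positive level approximating the interface
ts = np.linspace(0.0, 1.2 * t_hat, 400)
gamma = np.empty_like(ts)
gamma[0] = M_level ** ((p - 2.0) / p)     # where the initial data x_+^{p/(p-2)} equals M_level

for i in range(1, len(ts)):
    u = sol.sol(ts[i - 1])                # dense-output evaluation of the MOL solution
    j = min(max(int(round((gamma[i - 1] + 1.0) / dx)), 1), N - 1)
    ux = (u[j + 1] - u[j - 1]) / (2.0 * dx)
    uxx = (u[j + 1] - 2.0 * u[j] + u[j - 1]) / dx ** 2
    dgamma = -(p - 1.0) * np.sign(ux) * np.abs(ux) ** (p - 3.0) * uxx
    gamma[i] = gamma[i - 1] + (ts[i] - ts[i - 1]) * dgamma
```

A simpler and often more robust alternative is to record, for each time, the smallest grid point at which the numerical solution exceeds a fixed small threshold.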
Upon visual inspection of \[fig:Interface\] we see the sharp deviation of the support after roughly $t = 0.02$, which coincides quite well with the conjectured value of $0.0208$ for $p = 4$.
Based upon heuristic ideas we can expect that, after the critical time, the profile of the solution towards the edge of the support behaves like the Barenblatt solution, i.e. $u(x) \approx d(x,S(t)^C)^\frac{p-1}{p-2}$, where $S(t)$ is the support of the solution at time $t$. The numerical result for the behavior at $1.2 \hat t$ can be found in \[fig:Profile\], and the coincidence is striking; the edge of the support is estimated using the solution in \[fig:Interface\].
An upper bound on $\hat t$ in \[conj\]
--------------------------------------
To show an upper bound for $\hat t$ we will be building a sequence of barriers from below, based on rescalings of the Barenblatt solution. To begin our construction we first find the time at which the Barenblatt solution has support $B_{1-\delta}$, $\delta \in (0,1)$. We assume that $C_0 = q$ (this changes only the mass of the solution) and do the following computation $$\begin{aligned}
t^{-k} \left ( q - q \left ( \frac{1-\delta}{t^{k/n}}\right )^{\frac{p}{p-1}} \right )^{\frac{p-1}{p-2}} = 0\end{aligned}$$ we get the value of $t$ to be $$\begin{aligned}
(1-\delta)^\frac{n}{k} \equiv t_1(\delta).\end{aligned}$$ At $t_1$ the value of ${\mathcal U}(0,t_1)$ becomes $$\begin{aligned}
{\mathcal U}(0,t_1)=(1-\delta)^{-n} q^{\frac{p-1}{p-2}}.\end{aligned}$$ To allow us the flexibility we need in the following argument we set $$\begin{aligned}
\epsilon \lambda \equiv {\mathcal U}(0,t_1),\end{aligned}$$ for $\epsilon \in (0,1)$ to be chosen depending on $\delta$. Now consider the rescaled Barenblatt solution (still a solution to \[Hu\] due to intrinsic scaling) $$\begin{aligned}
\hat {\mathcal U}(x,t) = \frac{1}{\lambda} {\mathcal U}(x,\lambda^{2-p} t+t_1)\end{aligned}$$ which at $t = 0$ is $$\begin{aligned}
\hat {\mathcal U}(x,0) = \frac{1}{\lambda} (1-\delta)^{-n} \left ( q - q \left ( \frac{|x|}{1-\delta}\right )^{\frac{p}{p-1}} \right )_+^{\frac{p-1}{p-2}}, \end{aligned}$$ where, as before, $$\begin{aligned}
k = \left ( p-2 + \frac{p}{n} \right )^{-1}, \qquad q = \frac{p-2}{p} \left ( \frac{k}{n}\right )^{\frac{1}{p-1}}.\end{aligned}$$ Next, let us assume that $\hat {\mathcal U}(x-1,0) \leq u(x,0)$ for $u$ as in \[eq:numericalproblem\], and let us find $t_2(\delta)$ such that the support of $\hat {\mathcal U}(\cdot,t_2)$ is $B_1$. Using the parabolic comparison principle, this implies that for $t > t_2$ the solution satisfies $u(0,t) > 0$, i.e. the free boundary has moved past $\{x = 0\}$.
To proceed we need to choose $\epsilon$, given the value of $\delta$, such that $\hat {\mathcal U}(x-1,0) \leq u(x,0)$, with $\epsilon$ the largest value for which this inequality holds true. To find this $\epsilon$, we note that we wish to solve $\hat {\mathcal U}(x,0) = |1-x|^{\frac{p}{p-2}}$ for a unique pair $x,\epsilon \in (0,1)$, i.e. we wish to solve $$\begin{aligned}
\frac{1}{\lambda} (1-\delta)^{-n} \left ( q - q \left ( \frac{r}{1-\delta}\right )^{\frac{p}{p-1}} \right )_+^{\frac{p-1}{p-2}} = (1-r)^{\frac{p}{p-2}},\end{aligned}$$ which by some manipulation yields the following $$\begin{aligned}
\label{eq:tomte}
\epsilon^{\frac{p-2}{p-1}} \left ( 1 - \left ( \frac{r}{1-\delta}\right )^{\frac{p}{p-1}} \right ) = (1-r)^{\frac{p}{p-1}}.\end{aligned}$$ We wish to find values of $\epsilon$ and $r$ such that the left-hand side equals the right-hand side, while the left-hand side is smaller than the right-hand side for all other values of $r$. This implies that at the point of contact their derivatives match, and we can thus consider the simplified equation given by the $r$-derivative of \[eq:tomte\] (which has a solution for any $\epsilon$) $$\begin{aligned}
-\epsilon^{\frac{p-2}{p-1}} \frac{p}{p-1} \frac{1}{1-\delta} \left ( \frac{r}{1-\delta}\right )^{\frac{1}{p-1}} = - \frac{p}{p-1} (1-r)^{\frac{1}{p-1}};\end{aligned}$$ after some algebraic manipulation we arrive at $$\begin{aligned}
\label{eq:r_intersect}
r = \frac{(1-\delta)^{p}}{\epsilon^{p-2}+(1-\delta)^{p}}.\end{aligned}$$ Plugging the value of $r$ from \[eq:r\_intersect\] into \[eq:tomte\] gives us the problem of solving $$\begin{aligned}
\epsilon^{\frac{p-2}{p-1}} \left ( 1 - \left ( \frac{(1-\delta)^{p-1}}{\epsilon^{p-2}+(1-\delta)^{p}}\right )^{\frac{p}{p-1}} \right ) = \left (1-\frac{(1-\delta)^{p}}{\epsilon^{p-2}+(1-\delta)^{p}} \right )^{\frac{p}{p-1}}.\end{aligned}$$ As can be shown by a tedious calculation, the above equation is equivalent to the following $$\begin{aligned}
\epsilon^{\frac{p-2}{p-1}} \left ( 1- \frac{1}{((1-\delta)^p + \epsilon^{p-2})^{\frac{1}{p-1}}}\right ) = 0,\end{aligned}$$ which is solved by $$\begin{aligned}
\label{eq:e_intersect}
\epsilon = \left (1 - (1-\delta)^p \right )^\frac{1}{p-2}.\end{aligned}$$ With the values of \[eq:r\_intersect,eq:e\_intersect\] we can calculate the value of $t_2$ for the function $\hat {\mathcal U}$. Considering the definition of $\hat {\mathcal U}$ we see that $t_2$ satisfies the following $$\begin{aligned}
t_2(\delta) &= \lambda^{p-2}(1-t_1) = (1-\delta)^{-n(p-2)} q^{p-1} \epsilon^{2-p} (1-(1-\delta)^{n/k}) \\
&=\frac{q^{p-1}(1-(1-\delta)^{n/k})}{(1-\delta)^{n(p-2)} (1 - (1-\delta)^p )}\end{aligned}$$ which when $\delta \to 0$ becomes (a lengthy calculation shows that $t_2(\delta)$ is decreasing as $\delta \to 0$) $$\begin{aligned}
t_2(0) = q^{p-1}\frac{n}{pk} = \left(\frac{p-2}{p} \right )^{p-1} \frac{k}{n} \frac{n}{pk} = \frac{(p-2)^{p-1}}{p^p}.\end{aligned}$$
This is in contrast to the value of $\hat t$ which is (for $n = 1$, see \[thm memory\]) $$\begin{aligned}
\hat t = \frac{(p-2)^{p-1}}{2 p^{p-1}(p-1)} = q^{p-1}.\end{aligned}$$ From this we see that we have gap between $t_2(0)$ and $\hat t$, in fact $$\begin{aligned}
\frac{t_2(0)}{\hat t} = 2 \frac{p-1}{p} > 1, \qquad p > 2.\end{aligned}$$ In the setting of the numerical experiment with $p = 4$ we have $\frac{t_2(0)}{\hat t} = \frac{3}{2}$.
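As a quick numerical cross-check of these closed-form expressions (an illustration only; the formulas themselves are the ones derived above), one can evaluate $\hat t$, $t_2(0)$ and their ratio for $p = 4$, $n = 1$:

```python
p, n = 4.0, 1.0
k = 1.0 / (p - 2.0 + p / n)
q = (p - 2.0) / p * (k / n) ** (1.0 / (p - 1.0))

t_hat = (p - 2.0) ** (p - 1) / (2.0 * p ** (p - 1) * (p - 1))   # = q^{p-1}
t2_0 = (p - 2.0) ** (p - 1) / p ** p                            # limit of t_2(delta) as delta -> 0

print(t_hat, q ** (p - 1))   # both ~ 0.0208, the conjectured critical time for p = 4
print(t2_0, t2_0 / t_hat)    # ~ 0.03125 and the ratio 2(p-1)/p = 1.5
```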
[AAAA]{}
B. Avelin, On time dependent domains for the degenerate $p$-parabolic equation: Carleson estimate and H[ö]{}lder continuity, *Math. Ann.* **364**(1) (2016), 667–686.
B. Avelin, U. Gianazza and S. Salsa, Boundary estimates for certain degenerate and singular parabolic equations, *J. Eur. Math. Soc.*, **18**(2) (2016), 381–426.
B. Avelin, T. Kuusi and K. Nyström, Boundary behavior of solutions to the parabolic $p$-Laplace equation, Submitted, [arXiv:1510.08313](http://arxiv.org/abs/1510.08313).
A. Björn, J. Björn and U. Gianazza, The Petrovskiĭ criterion and barriers for degenerate and singular $p$-parabolic equations, *Math. Ann.* **368**(3–4) (2017), 885–904.
A. Björn, J. Björn, U. Gianazza and M. Parviainen, Boundary regularity for degenerate and singular parabolic equations, *Calc. Var. Partial Differential Equations* **52**(3) (2015), 797–827.
E. DiBenedetto, *Degenerate parabolic equations*, Springer Verlag, Series Universitext, New York, (1993).
U. Gianazza, N. Liao and T. Lukkari, A Boundary Estimate for Singular Parabolic Diffusion Equations, [arXiv:1703.04907](https://arxiv.org/abs/1703.04907).
U. Gianazza and S. Salsa, On the boundary behaviour of solutions to parabolic equations of $p$-Laplacian type, *Rend. Istit. Mat. Univ. Trieste* **48** (2016), 463–483.
D. Jerison and C. Kenig, Boundary behavior of harmonic functions in non-tangentially accessible domains, *Adv. Math.* **46** (1982), 80–147.
N. Ju, Numerical Analysis of Parabolic p-Laplacian: Approximation of Trajectories, *SIAM Journal on Numerical Analysis* **37**(6) (2000), 1861–1884.
T. Kilpel[ä]{}inen and P. Lindqvist, On the Dirichlet boundary value problem for a degenerate parabolic equation, *SIAM J. Math. Anal.* **27**(3) (1996), 661–683.
T. Kuusi, G. Mingione and K. Nystr[ö]{}m, A boundary Harnack inequality for singular equations of $p$-parabolic type. *Proc. Amer. Math. Soc.* **142**(8) (2014), 2705–2719.
J. Lewis and K. Nystr[ö]{}m, Boundary behavior for $p$-harmonic functions in Lipschitz and starlike Lipschitz ring domains. *Ann. Sci. École Norm. Sup. (4)* **40**(5) (2007), 765–813.
J. Lewis and K. Nystr[ö]{}m, Boundary behavior and the Martin boundary problem for $p$-harmonic functions in Lipschitz domains. *Ann. of Math. (2)* **172**(3) (2010), 1907–1948.
J.L. Vázquez, The porous medium equation. Mathematical theory. Oxford Mathematical Monographs. The Clarendon Press, Oxford University Press, Oxford, 2007. xxii+624 pp.
---
abstract: 'A candidate Tidal Dwarf Galaxy, ce-61, was identified in the merger system IC 1182 in the Hercules supercluster. The multi-wavelength data we obtained so far do not prove, however, that it is kinematically detached from the IC 1182 system and gravitationally bound.'
author:
- 'W. van Driel$^1$'
- 'P.-A. Duc$^2$, P. Amram$^3$, F. Bournaud$^2$, C. Balkowski$^1$, V. Cayatte$^1$, J. Dickey$^4$, H. Hernández$^5$, J. Iglesias-Páramo$^6$, K. O’Neil$^6$, P. Papaderos$^7$, J.M. V[í]{}lchez$^8$'
title: 'ce-61: a Tidal Dwarf Galaxy in the Hercules cluster?'
---
\#1[[*\#1*]{}]{} \#1[[*\#1*]{}]{} =
\#1 1.25in .125in .25in
The Hercules supercluster (D=150 Mpc, H$_0$=75) is one of the most massive structures in the nearby Universe. We studied (Iglesias-Páramo et al. 2003) 22 HI-selected galaxies in this cluster, from the blind VLA survey by Dickey (1997), obtaining: deep CCD $B$, $V$ and $I$-band surface photometry of 10 galaxies, optical spectroscopy of 8 of these, Arecibo observations of all 22 galaxies and H$\alpha$ line Fabry-Perot observations of the IC 1182 merger system.
Based on these multi-wavelength observations, the object ce-61 was identified as a candidate Tidal Dwarf Galaxy (TDG) in a tidal tail of the peculiar IC 1182 system. IC 1182 ($B_T$=15.4, $V$=10,223 km/s) shows several characteristics typical of a merger system, e.g., a blue optical jet-like structure towards the East and tidal debris towards the NW, and an extended HI distribution with two tidal tails, towards the E and the NW. The candidate TDG ce-61 ($M_B= -18.24$ mag) lies at the tip of the eastern optical/HI tail, at about [$1'\,\hspace{-1.7mm}.\hspace{.0mm}5$]{} (65 kpc) projected distance from the centre of the parent system. Its CCD image shows two distinct peaks and the HI maximum in the tail coincides with the easternmost optical peak. Its metallicity (8.41) is on the high side for a dwarf galaxy of its luminosity, but typical for a TDG. It is a very gas-rich system, with an estimated $M_{HI}$/$L_{B}$ ratio of 6 $M_{\odot}$/L$_{\odot,B}$; its HI line width is about 220 km/s. CO line observations (Braine et al. 2001) show about $7 \times 10^9$ $M_\odot$ of H$_2$ in a resolved distribution in IC 1182, but none was detected in ce-61, putting an upper limit on its H$_2$ mass of about $6 \times 10^7$ $M_\odot$.
In order to study whether the TDG candidate is already kinematically detached from the IC 1182 system and gravitationally bound, we obtained and analyzed H$\alpha$ line Fabry-Perot observations and found (Bournaud, Duc & Amram 2003, in prep.) that: (1) the brightest knot at the tip of the tail, coinciding with ce-61, seems to be kinematically detached from the overall velocity field along the tail (which is governed by streaming motions); the offset is about 30 km/s. However, our numerical simulations show that this offset can be consistent with a projection effect along the line of sight (with the tail, seen edge-on, being bent in 3D space), (2) along direction 2 (see Fig.), there is a hint of an internal velocity gradient, of 70 km/s maximum, associated with one of the knots. We lack the spatial resolution to confirm it, however.
Braine J., Duc P.-A., Lisenfeld U., et al. 2001, A&A, 378, 51 Dickey J.M. 1997, AJ, 113, 1939 Iglesias-Páramo J., van Driel W., Duc P.-A. 2003, A&A, 406, 453 van Driel W., O’Neil K., Cayatte V., et al. 2003, A&A, 399, 433
---
abstract: 'Consider a general path planning problem of a robot on a graph with edge costs, and where each node has a Boolean value of success or failure (with respect to some task) with a given probability. The objective is to plan a path for the robot on the graph that minimizes the expected cost [until]{} success. In this paper, it is our goal to bring a foundational understanding to this problem. We start by showing how this problem can be optimally solved by formulating it as an infinite horizon Markov Decision Process, but with an exponential space complexity. We then formally prove its NP-hardness. To address the space complexity, we then propose a path planner, using a game-theoretic framework, that asymptotically gets arbitrarily close to the optimal solution. Moreover, we also propose two fast and non-myopic path planners. To show the performance of our framework, we do extensive simulations for two scenarios: a rover on Mars searching for an object for scientific studies, and a robot looking for a connected spot to a remote station (with real data from downtown San Francisco). Our numerical results show a considerable performance improvement over existing state-of-the-art approaches.'
author:
- 'Arjun Muralidharan and Yasamin Mostofi [^1] [^2]'
bibliography:
- 'ref.bib'
date: 1st April 2017
title: |
Path Planning for Minimizing the\
Expected Cost [until]{} Success
---
Introduction {#sec:intro}
============
Consider the scenario of a rover on Mars looking for an object of interest, for instance a sample of water, for scientific studies. Based on prior information, it has an estimate of the likelihood of finding such an object at any particular location. The goal in such a scenario would be to locate one such object with a minimum expected cost. [Note that there may be multiple such objects in the environment, and that we only care about the expected cost until the first such object is found.]{} In this paper, we tackle such a problem by posing it as a graph-theoretic path planning problem where there is a probability of success in finding an object associated with each node. The goal is then to *plan a path* through the graph that would *minimize the expected cost* [until]{} an object of interest is successfully found. Several other problems of interest also fall into this formulation. For instance, the scenario of a robot looking for a location connected to a remote station can be posed in this setting [@muralidharan2018pconn], [where a connected spot is one where the signal reception quality from/to the remote node/station is high enough to facilitate the communication.]{} The robot can typically have a probabilistic assessment of connectivity all over the workspace, without a need to visit the entire space [@malmirchegini2012spatial]. Then, it is interested in planning a path that gets it to a connected spot while minimizing the total energy consumption. [Success in this example corresponds to the robot getting connected to the remote station.]{} Another scenario would be that of astronomers searching for a habitable exoplanet. Researchers have characterized the probability of finding exoplanets in different parts of space [@molaverdikhani2009mapping]. However, repositioning satellites to target and image different celestial objects is costly and consumes fuel. Thus, a problem of interest in this context is to find an exoplanet while minimizing the expected fuel consumption, based on the prior probabilities. Finally, consider a human-robot collaboration scenario, where an office robot needs help from a human, for instance in operating an elevator [@rosenthal2012someone]. If the robot has an estimate of different people’s willingness to help, perhaps from past observations, it can then plan its trajectory to minimize its energy consumption [until]{} it finds help. Fig. \[fig:scenarios\] showcases a sample of these possible applications.
![Possible applications of the problem of interest: (top left) path planning for a rover, (top right) imaging of celestial objects, (bottom left) human-robot collaboration and (bottom right) path planning to find a connected spot. Image credit:(top left) and (top right) NASA, (bottom left) Noto: <http://www.noto.design/>.[]{data-label="fig:scenarios"}](scenarios){width="1\linewidth"}
Optimal path planning for a robot has received considerable interest in the research community, and several algorithms have been proposed in the literature to tackle such problems, e.g., A\*, RRT\* [@karaman2010incremental; @likhachev2008anytime]. These works are concerned with planning a path for a robot, with a minimum cost, from an initial state to a predefined goal state. However, this is different from our problem of interest in several aspects. For instance, the cost metric is additive in these works, which does not apply to our setting due to its stochastic nature. [In the probabilistic traveling salesman problem [@jaillet1985probabilistic] and the probabilistic vehicle routing problem [@bertsimas1992vehicle], each node is associated with a prior probability of having a demand to be serviced, and the objective is to plan an a priori ordering of the nodes which minimizes the expected length of the tour. A node is visited in a particular realization only if there is a demand to be serviced at it. Thus, each realization has a different tour associated with it, and the expectation is computed over these tours, which is a fundamentally different problem than ours.]{} Another area of active research is in path planning strategies for a robot searching for a target [@chung2012analysis; @hollinger2009efficient; @chung2011search]. For instance, in [@chung2012analysis], a mobile robot is tasked with locating a stationary target in minimum expected time. In [@hollinger2009efficient], there are multiple mobile robots and the objective is to find a moving target efficiently. In general, these papers belong to a body of work known as optimal search theory where the objective is to find a *single* hidden target based on an initial probability estimate, where the probabilities over the graph sum up to one [@bourgault2003optimal; @chung2011search]. [The minimum latency problem [@blum1994minimum] is another problem related to search where the objective is to design a tour that minimizes the average wait time until a node is visited.]{} In contrast, our setting is fundamentally different, and involves an *unknown* number of targets where each node has a probability of containing a target ranging from $0$ to $1$. Moreover, the objective is to plan a path that minimizes the expected cost to the *first target* found. This results in a different analysis and we utilize a different set of tools to tackle this problem. Another related problem is that of satisficing search in the artificial intelligence literature which deals with planning a sequence of nodes to be searched [until]{} the first satisfactory solution is found, which could be the proof of a theorem or a task to be solved [@simon1975optimal]. The objective in this setting is to minimize the expected cost [until]{} the first instance of success. However, in this setting there is no cost associated with switching the search from one node to another. To the best of the authors knowledge, the problem considered in this paper has not been explored before.
**Statement of contribution:** In this paper, we start by showing that the problem of interest, i.e., minimizing the expected cost [until]{} success, can be posed as an infinite horizon Markov Decision Process (MDP) and solved optimally, but with an exponential space complexity. We then formally prove its NP-hardness. To address the space complexity, we then propose an asymptotically $\epsilon$-suboptimal (i.e., within $\epsilon$ of the optimal solution value) path planner for this problem, using a game-theoretic framework. We further show how it is possible to solve this problem very quickly by proposing two sub-optimal but non-myopic approaches. Our proposed approaches provide a variety of tools that can be suitable for applications with different needs. A small part of this work has appeared in its conference version \[1\]. In \[1\], we only considered the specific scenario of a robot seeking connectivity and only discussed a single suboptimal non-myopic path planner. This paper has a considerably more extensive analysis and results.
The rest of the paper is organized as follows. In Section \[sec:problem\_formulation\], we formally introduce the problem of interest and show how to optimally solve it by formulating it in an infinite horizon MDP framework as a stochastic shortest path (SSP) problem. As we shall see, however, the state space requirement for this formulation is exponential in the number of nodes in the graph. In Section \[sec:comp\_complexity\], we formally prove our problem to be NP-hard, demonstrating that the exponential complexity result of the MDP formulation is not specific to it. In Section \[sec:asmpt\_near\_opt\_planner\], we propose an asymptotically $\epsilon$-suboptimal path planner and in Section \[sec:non\_myopic\_planners\] we propose two suboptimal but non-myopic and fast path planners to tackle the problem. Finally, in Section \[sec:numerical\_results\], we confirm the efficiency of our approaches with numerical results in two different scenarios.
Problem Formulation {#sec:problem_formulation}
===================
In this section, we formally define the problem of interest, which we refer to as the Min-Exp-Cost-Path problem. We next show that we can find the optimal solution of Min-Exp-Cost-Path by formulating it as an infinite horizon MDP with an absorbing state, a formulation known in the stochastic dynamic programming literature as the *stochastic shortest path* problem [@bertsekas1995dynamic]. However, we show that this results in a state space requirement that is exponential in the number of nodes of the graph, implying that it is only feasible for small graphs and not scalable when increasing the size of the graph.
Min-Exp-Cost-Path Problem {#subsec:min_exp_cost_path}
-------------------------
Consider an undirected [connected]{} finite graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$, where $\mathcal{V}$ denotes the set of nodes and $\mathcal{E}$ denotes the set of edges. Let $p_v \in [0,1]$ be the probability of success at node $v \in \mathcal{V}$ and let [$l_{uv} > 0$]{} denote the cost of traversing edge $(u,v) \in \mathcal{E}$. We assume that the success or failure of a node is independent of the success or failure of the other nodes in the graph. Let $v_s \in \mathcal{V}$ denote the starting node. The objective is to produce a path starting from node $v_s$ that *minimizes the expected cost incurred [until]{} success*. In other words, the average cost [until]{} success on the optimal path is smaller than the average cost on any other possible path on the graph. Note that the robot may only traverse part of the entire path produced by its planning, as its planning is based on probabilistic prior knowledge and success may occur at any node along the path.
For the expected cost [until]{} success of a path to be well defined, the probability of failure after traversing the entire path must be $0$. This implies that the final node of the path must be one where success is guaranteed, i.e., a $v$ such that $p_{v}=1$. We call such a node a *terminal* node and let $T=\{v \in \mathcal{V}: p_{v}=1\}$ denote the set of terminal nodes. We assume that the set $T$ is non-empty in this subsection. We refer to this as the *Min-Exp-Cost-Path* problem. Fig. \[fig:problem\_setup\] shows a toy example along with a feasible solution path. In Section \[subsec:min\_exp\_cost\_tour\], we will extend our discussion to the setting where the set $T$ is empty.
We next characterize the expected cost for paths where nodes are not revisited, i.e., simple paths, and then generalize it to all possible paths. Let the path, $\mathcal{P} = (v_1, v_2, \cdots, v_m=v_t)$, be a sequence of $m$ nodes such that no node is revisited, i.e., $v_i \neq v_j,\; \forall i \neq j$, and which ends at a terminal node $v_t \in T$. Let $C(\mathcal{P},i)$ represent the expected cost of the path from node $\mathcal{P}[i]=v_i$ onward. $C(\mathcal{P},1)$ is then given as $$\begin{aligned}
C(\mathcal{P},1) & = p_{v_1}\times 0 + (1-p_{v_1})p_{v_2}l_{v_1v_2}
+ \cdots \\
& \;\;\;\;\;+ \Bigg[\prod_{j\leq m-1} (1-p_{v_j})\Bigg] p_{v_{m}}(l_{v_1v_2} + \cdots+ l_{v_{m-1}v_{m}})\\
& = (1-p_{v_1})l_{v_1v_2} + (1-p_{v_1})(1-p_{v_2})l_{v_2v_3} + \cdots\\
& \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;+ \Bigg[\prod_{j\leq m-1} (1-p_{v_j})\Bigg]l_{v_{m-1}v_{m}}\\
& = \sum_{i=1}^{m-1} \left[\prod_{j\leq i} (1-p_{v_j})\right] l_{v_{i}v_{i+1}}.\end{aligned}$$ For a path which contains revisited nodes, the expected cost can then be given by $$\begin{aligned}
C(\mathcal{P},1) = & \sum_{i=1}^{m-1} \left[\prod_{j\leq i: v_j \neq v_k , \forall k<j} (1-p_{v_j})\right] l_{v_{i}v_{i+1}}\\
= & \sum_{e \in \mathcal{E}(\mathcal{P})} \left[\prod_{v \in \mathcal{V}(\mathcal{P}_{e})} (1-p_{v})\right] l_{e},\end{aligned}$$ where $\mathcal{E}(\mathcal{P})$ denotes the set of edges belonging to the path $\mathcal{P}$, and $\mathcal{V}(\mathcal{P}_{e})$ denotes the set of vertices encountered along $\mathcal{P}$ [until]{} the edge $e \in \mathcal{E}(\mathcal{P})$. Note that $\mathcal{C}(\mathcal{P},i)$ can be expressed recursively as $$\begin{aligned}
\label{eq:min_exp_cost_recursion}
C(\mathcal{P},i) = \left\{\begin{array}{lll} (1-p_{v_i})\left(l_{v_iv_{i+1}} + C(\mathcal{P},i+1)\right), \\ \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \text{if } v_i \neq v_k, \forall k<i \\
l_{v_iv_{i+1}} + C(\mathcal{P},i+1), \;\;\;\; \text{else}\end{array}\right. .\end{aligned}$$ The Min-Exp-Cost-Path optimization can then be expressed as $$\label{eq:min_exp_cost_path}
\begin{aligned}
& \underset{\mathcal{P}}{\text{minimize}} & & C(\mathcal{P},1)\\
& \text{subject to } & & \mathcal{P} \text{ is a path of } \mathcal{G} \\
& & & \mathcal{P}[1] = v_s\\
& & & \mathcal{P}[\text{end}] \in T.
\end{aligned}$$
*Fig. \[fig:problem\_setup\]: a toy example graph with seven nodes, where $p_{1}, p_{2} < 1$ and $p_{7}=1$ (a terminal node), dashed lines denote the edges of $\mathcal{G}$ (e.g., edge $(1,2)$ with cost $l_{12}$), and the arrows indicate the feasible solution path $(1,2,4,6,7)$.*
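To make the expressions above concrete, the following is a minimal sketch (illustrative, not the authors' implementation) of the expected cost $C(\mathcal{P},1)$ of an arbitrary path: each edge cost is weighted by the probability that all *distinct* nodes visited before traversing it have failed. The node labels, probabilities and edge costs below correspond to the line graph of Fig. \[fig:counter\_example\] discussed in Section \[subsec:min\_exp\_cost\_simple\_path\].

```python
def expected_cost(path, p, length):
    """Expected cost until success of `path` (a list of nodes), which must end
    at a terminal node; `p` maps nodes to success probabilities and `length`
    maps directed node pairs (u, v) to edge costs (both directions present)."""
    assert p[path[-1]] == 1.0, "path must end at a terminal node"
    cost, fail_prob, seen = 0.0, 1.0, set()
    for v, u in zip(path, path[1:]):
        if v not in seen:              # a node contributes its failure probability
            fail_prob *= 1.0 - p[v]    # only on its first visit
            seen.add(v)
        cost += fail_prob * length[(v, u)]
    return cost

# Line graph 1-2-3-4 with p = (0.9, 0.1, 0.1, 1) and unit edge costs.
p = {1: 0.9, 2: 0.1, 3: 0.1, 4: 1.0}
length = {(1, 2): 1.0, (2, 1): 1.0, (2, 3): 1.0, (3, 2): 1.0, (3, 4): 1.0, (4, 3): 1.0}
print(expected_cost([2, 1, 2, 3, 4], p, length))  # ~1.161: revisiting node 2 pays off
print(expected_cost([2, 3, 4], p, length))        # ~1.71
```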
We next show how to optimally solve the Min-Exp-Cost-Path problem by formulating it as an infinite horizon MDP.
Optimal Solution via MDP Formulation {#subsec:mdp_formulation}
------------------------------------
The stochastic shortest path problem (SSP) [@bertsekas1995dynamic] is an infinite horizon MDP formulation, which is specified by a state space $S$, control/action constraint sets $A_s$ for $s \in S$, state transition probabilities $P_{ss'}(a_s)= \mathrm{P}\left(s_{k+1}=s'|s_{k}=s, a_k = a_s \right)$, an absorbing terminal state $s_{t} \in S$, and a cost function $g(s,a_s)$ for $s \in S$ and $a_s \in A_s$. The goal is to obtain a policy that would lead to the terminal state $s_t$ with a probability $1$ and with a minimum expected cost.
We next show that the Min-Exp-Cost-Path problem formulation of (\[eq:min\_exp\_cost\_path\]) can be posed in an SSP formulation. Utilizing the recursive expression of (\[eq:min\_exp\_cost\_recursion\]), we can see that the expected cost from a node, conditioned on the set of nodes already visited by the path can be expressed in terms of the expected cost from the neighboring node that the path visits next. Thus, the optimal path from a node can be expressed in terms of the optimal path from the neighboring node that the path visits next, conditioned on the set of nodes already visited. This motivates the use of a stochastic dynamic programming framework where a state is given by the current node as well as the set of nodes already visited.
More precisely, we formulate the SSP as follows. Let $\mathcal{V}' = \mathcal{V}\setminus T$ be the set of non-terminal nodes in the graph. A state of the MDP is given by $s=(v,H)$, where $v \in \mathcal{V}'$ is the current node and $H \subseteq \mathcal{V}'$ is the set of nodes already visited (keeping track of the history of the nodes visited), i.e., $u \in H$ if $u$ has been visited. The state space is then given by $S = \left\{(v,H):v \in \mathcal{V}', H \subseteq \mathcal{V}' \right\} \cup \{s_t\}$, where $s_t$ is the absorbing terminal state. In this setting, the state $s_t$ denotes the state of success. The actions/controls available at a given state are the neighbors of the current node, i.e., $A_s = \{u \in \mathcal{V}: (v,u) \in \mathcal{E}\}$ for $s=(v,H)$. The state transition probabilities are denoted by $P_{ss'}(u) = \mathrm{P}\left(s_{k+1}=s'|s_{k}=s, a_k = u \right)$ where $s,s' \in S$ and $u \in A_s$. Then, for $s=(v,H)$ and $u \in A_s$, if $v \in H$ (i.e., $v$ is revisited), we have $$\begin{aligned}
P_{ss'}(u) = \left\{\begin{array}{ll} 1, & \text{if } s'=f(u,H)\\
0, & \text{else}\end{array}\right. ,\end{aligned}$$ and if $v \notin H$, we have $$\begin{aligned}
P_{ss'}(u) = \left\{\begin{array}{lll} 1-p_{v}, & \text{if } s'=f(u,H)\\
p_{v}, & \text{if } s'=s_{t}\\
0, & \text{else}\end{array}\right.,\end{aligned}$$ where $
f(u,H) = \left\{\begin{array}{ll} (u,H\cup\{v\}), & \text{if } u \in \mathcal{V}'\\
s_{t}, & \text{if } u \in T \end{array}\right. $. This implies that at node $v$, the robot will experience success with probability $p_v$ if $v$ has not been visited before, i.e., $v \notin H$. The terminal state $s_t$ is absorbing, i.e., $P_{s_{t}s_{t}}(u) = 1, \forall u \in A_{s_t}$. The cost $g(s,u)$ incurred when action/control $u \in A_{s}$ is taken in state $s\in S$ is given by $
g\left(s=(v,H), u\right) = \left\{\begin{array}{ll} (1-p_{v})l_{uv}, & \text{if } v \notin H\\
l_{uv}, & \text{if } v \in H\end{array} \right.$, representing the expected cost incurred when going from $v$ to $u$ conditioned on the set of already visited nodes $H$.
The optimal (minimum expected) cost incurred from any state $s_1$ is then given by $$\begin{aligned}
J^{*}_{s_1} = \min_{\mu} \underset{\{s_{k}\}}{\mathbb{E}}\left[ \sum_{k=1}^{\infty} g(s_k,\mu_{s_k}) \right],\end{aligned}$$ where $\mu$ is a policy that prescribes which action to take, i.e., which neighboring node to move to next, at a given state, so that $\mu_s$ is the action taken at state $s$. The objective is to find the optimal policy $\mu^{*}$ that would minimize the expected cost from any given state of the SSP formulation. Given the optimal policy $\mu^{*}$, we can then extract the optimal solution path of (\[eq:min\_exp\_cost\_path\]). Let $\left(s_1, \cdots , s_m=s_{t}\right)$ be the sequence of states such that $s_1 = (v_s,H_1=\{\})$ and $s_{k+1}=(v^{*}_{k+1},H_{k+1})$, $k=1,\cdots,m-2$, where $v^{*}_{k+1} = \mu^{*}_{s_{k}}$ and $H_{k+1}=H_{k} \cup \{v_{k}^{*}\}$. This sequence must end at $s_m=s_t$ for some finite $m$, since the expected cost is not well defined otherwise. The optimal path starting from node $v_s$ is then extracted from this solution as $
\mathcal{P}^{*} = (v_s, v^{*}_{2}, \cdots, v^{*}_{m})
$.
In the following Lemma, we show that the optimal solution can be characterized by the Bellman equation.
\[lemma:mdp\_bellman\] The optimal cost function $J^{*}$ is the unique solution of the Bellman equation: $$\begin{aligned}
J^{*}_{s} & = \min_{u \in A_{s}} \left[g(s,u) + \sum_{s'\in S\setminus\{s_t\}}P_{ss'}(u)J^{*}_{s'}\right],\end{aligned}$$ and the optimal policy $\mu^{*}$ is given by $$\begin{aligned}
\mu^{*}_{s} = \operatorname*{arg\,min}_{u \in A_{s}} \left[g(s,u) + \sum_{s'\in S\setminus\{s_t\}}P_{ss'}(u)J^{*}_{s'}\right],\end{aligned}$$ for all $s \in S\setminus\{s_t\}$.
[ Let $J_{s}^{\mu}$ denote the cost of state $s$ for a policy $\mu$. We first review the definition of a *proper policy*. A policy $\mu$ is said to be proper if, when using this policy, there is a positive probability that the terminal state will be reached after at most $|S|$ stages, regardless of the initial state [@bertsekas1995dynamic]. We next show that the MDP formulation satisfies the following properties: 1) there exists at least one proper policy, and 2) for every improper policy $\mu$, there exists at least one state with cost $J_{s}^{\mu} = \infty$. We know that there exists at least one proper policy since the policy corresponding to taking the shortest path to the nearest terminal node, irrespective of the history of nodes visited, is a proper policy. Moreover, since $g(s,u)> 0 $ for all $s\neq s_t$, every cycle in the state space not including the destination has strictly positive cost, which implies that property 2) holds.]{} The result then follows from the corresponding proof in [@bertsekas1995dynamic].
The optimal solution can then be found by the value iteration method. Given an initialization $J_s(0)$, for all $s \in S\setminus\{s_t\}$, value iteration produces the sequence: $$\begin{aligned}
J_s(k+1) = \min_{u \in A_{s}} \left[g(s,u) + \sum_{s'\in S\setminus\{s_t\}}P_{ss'}(u)J_{s'}(k)\right],\end{aligned}$$ for all $s \in S\setminus\{s_t\}$. This sequence converges to the optimal cost $J^{*}_{s}$, for each $s \in S\setminus\{s_t\}$.
\[lemma:value\_iteration\_MDP\] When starting from $J_s(0) = \infty$ for all $s \in S\setminus \{s_t\}$, the value iteration method yields the optimal solution after at most $|S|=|\mathcal{V}'|\times 2^{|\mathcal{V}'|}+1$ iterations.
Let $\mu^{*}$ be the optimal policy. Consider a directed graph with the states of the MDP as nodes, which has an edge $(s,s')$ if $P_{ss'}(\mu^{*}_s) > 0$. We will first show that this graph is acyclic. Note that a state $s=(v,H)$, where $v\notin H$, can never be revisited regardless of the policy used, since a transition from $s$ will occur either to $s_t$ or a state with $H = H\cup \{v\}$. Then, any cycle in the directed graph corresponding to $\mu^{*}$ would only have states of the form $s = (v, H)$ with $v \in H$. Moreover, any state $s=(v,H)$ in the cycle cannot have a transition to state $s_t$ since $v \in H$. Thus, if there is a cycle, the cost of any state $s$ in the cycle will be $J_s^{\mu^{*}} = \infty$, which results in a contradiction. The value iteration method converges in $|S|$ iterations when the graph corresponding to the optimal policy $\mu^{*}$ is acyclic [@bertsekas1995dynamic].
Each stage of the value iteration process has a computational cost of $O(|\mathcal{E}|2^{|\mathcal{V}'|})$ since for each state $s=(v,H)$ there is an associated computational cost of $O(|A_v|)$. Then, from Lemma \[lemma:value\_iteration\_MDP\], we can see that the overall computational cost of value iteration is $O(|\mathcal{V}'||\mathcal{E}|2^{2|\mathcal{V}'|})$, which is exponential in the number of nodes in the graph. Note, however, that the brute force approach of enumerating all paths has a much larger computational cost of $O(|\mathcal{V}'|!)$.
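For concreteness, the following is a compact sketch (illustrative, not the authors' code) of this value iteration over the states $(v,H)$; the exponential number of subsets $H$ makes it feasible only for very small connected graphs, which is precisely the limitation discussed above.

```python
from itertools import combinations

def solve_min_exp_cost_path(V, edges, p, v_s):
    """edges: dict {(u, v): cost} of an undirected connected graph."""
    l = dict(edges)
    l.update({(v, u): c for (u, v), c in edges.items()})          # symmetric costs
    nbrs = {v: [u for u in V if (v, u) in l] for v in V}
    Vp = [v for v in V if p[v] < 1.0]                              # non-terminal nodes V'
    states = [(v, frozenset(H)) for v in Vp
              for r in range(len(Vp) + 1) for H in combinations(Vp, r)]
    J = {s: float("inf") for s in states}                          # J(0) = inf

    def cost_to_go(u, H):                                          # zero once a terminal node is reached
        return 0.0 if p[u] == 1.0 else J[(u, H)]

    for _ in range(len(states) + 1):                               # at most |S| sweeps
        for (v, H) in states:
            factor = 1.0 if v in H else 1.0 - p[v]                 # a revisited node cannot succeed again
            J[(v, H)] = min(factor * (l[(v, u)] + cost_to_go(u, H | {v}))
                            for u in nbrs[v])
    return J[(v_s, frozenset())]

# Line graph 1-2-3-4 with p = (0.9, 0.1, 0.1, 1): the optimal expected cost from
# node 2 (~1.161) corresponds to the revisiting path (2, 1, 2, 3, 4).
print(solve_min_exp_cost_path([1, 2, 3, 4],
                              {(1, 2): 1.0, (2, 3): 1.0, (3, 4): 1.0},
                              {1: 0.9, 2: 0.1, 3: 0.1, 4: 1.0}, v_s=2))
```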
[ The exponential space complexity prevents the stochastic shortest path formulation from providing a scalable solution for solving the problem for larger graphs.]{} A general question then arises as to whether this high computational complexity is an artifact of the Markov Decision Process formulation. In other words, can we optimally solve the Min-Exp-Cost-Path problem with a low computational complexity using an alternate method? We next show that the Min-Exp-Cost-Path problem is inherently computationally complex (NP-hard).
Computational Complexity {#sec:comp_complexity}
========================
In this section, we prove that Min-Exp-Cost-Path is NP-hard. In order to do so, we first consider the extension of the Min-Exp-Cost-Path problem to the setting where there is no terminal node, which we refer to as the Min-Exp-Cost-Path-NT problem (Min-Exp-Cost-Path No Terminal node). We prove that Min-Exp-Cost-Path-NT is NP-hard, a result we then utilize to prove that Min-Exp-Cost-Path is NP-hard.
Motivated by the prohibitive space complexity of our MDP formulation, we then discuss a setting where we restrict ourselves to the class of *simple paths*, i.e., cycle-free paths, and we refer to the minimum expected cost [until]{} success problem in this setting as the Min-Exp-Cost-Simple-Path problem. This serves as the setting for our path planning approaches of Sections \[sec:asmpt\_near\_opt\_planner\] and \[sec:non\_myopic\_planners\]. Furthermore, we show that we can obtain a solution to the Min-Exp-Cost-Path problem from a solution of the Min-Exp-Cost-Simple-Path problem in an appropriately defined complete graph.
Min-Exp-Cost-Path-NT Problem {#subsec:min_exp_cost_tour}
----------------------------
Consider the graph-theoretic setup of the Min-Exp-Cost-Path problem of Section \[subsec:min\_exp\_cost\_path\]. In this subsection, we assume that there is no terminal node, i.e., the set $T = \{v \in \mathcal{V}: p_{v}=1\}$ is empty. There is thus a finite probability of failure for any path in the graph and as a result the expected cost [until]{} success is not well defined. The expected cost of a path then includes the event of failure after traversing the entire path and its associated cost. The objective in *Min-Exp-Cost-Path-NT* is to obtain a path that visits all the vertices with a non-zero probability of success, i.e., $\{v \in \mathcal{V}: p_{v} > 0\}$, such that the expected cost is minimized. This objective finds the minimum expected cost path among all paths that have a minimum probability of failure. More formally, the objective for Min-Exp-Cost-Path-NT is given as $$\label{eq:min_exp_cost_tour}
\begin{aligned}
& \underset{\mathcal{P}}{\text{minimize}} & & \sum_{e \in \mathcal{E}(\mathcal{P})} \left[\prod_{v \in \mathcal{V}(\mathcal{P}_{e})} (1-p_{v})\right] l_{e}\\
& \text{subject to } & & \mathcal{P} \text{ is a path of } \mathcal{G} \\
& & & \mathcal{P}[1] = v_s\\
& & & \mathcal{V}(\mathcal{P}) = \{v \in \mathcal{V}: p_{v} >0\},
\end{aligned}$$ where $\mathcal{V}(\mathcal{P})$ is the set of all vertices in path $\mathcal{P}$.
The Min-Exp-Cost-Path-NT problem is an important problem on its own (to address cases where no prior knowledge is available on nodes with $p_v=1$), even though we have primarily introduced it here to help prove that the Min-Exp-Cost-Path problem is NP-hard.
NP-hardness
-----------
In order to establish that Min-Exp-Cost-Path is NP-hard, we first introduce the decision versions of Min-Exp-Cost-Path (MECPD) and Min-Exp-Cost-Path-NT (MECPNTD).
Given a graph $\mathcal{G}=(\mathcal{V},\mathcal{E})$ with starting node $v_{s} \in \mathcal{V}$, edge weights $l_{e}$, $\forall e \in \mathcal{E}$, probability of success $p_{v} \in [0,1]$, $\forall v \in \mathcal{V}$, such that $T \neq \emptyset$, and budget $B_{\text{MECP}}$, does there exist a path $\mathcal{P}$ from $v_{s}$ such that the expected cost of the path $\mathcal{C}(\mathcal{P},1) \leq B_{\text{MECP}}$?
Given a graph $\mathcal{G}=(\mathcal{V},\mathcal{E})$ with starting node $v_{s} \in \mathcal{V}$, edge weights $l_{e}, \forall e \in \mathcal{E}$, probability of success $p_{v} \in [0,1), \forall v \in \mathcal{V}$ and budget $B_{\text{MECPNT}}$, does there exist a path $\mathcal{P}$ from $v_{s}$ that visits all nodes in $\{v \in \mathcal{V}: p_v>0\}$ such that $\sum_{e \in \mathcal{E}(\mathcal{P})} \left[\prod_{v \in \mathcal{V}(\mathcal{P}_{e})} (1-p_{v})\right] l_{e} \leq B_{\text{MECPNT}}$?
In the following Lemma, we first show that we can reduce MECPNTD to MECPD. This implies that if we have a solver for MECPD, we can use it to solve MECPNTD as well.
\[lemma:mecpntd\_red\_mecpd\] Min-Exp-Cost-Path-NT Decision problem reduces to Min-Exp-Cost-Path Decision problem.
Consider a general instance of MECPNTD with graph $\mathcal{G}=(\mathcal{V},\mathcal{E})$, starting node $v_{s} \in \mathcal{V}$, edge weights $l_{e}, \forall e \in \mathcal{E}$, probability of success $p_{v} \in [0,1), \forall v \in \mathcal{V}$, and budget $B_{\text{MECPNT}}$. We create an instance of MECPD by introducing a new node $v_t$ into the graph with $p_{v_t}=1$. We add edges of cost $l$ between $v_t$ and all the existing nodes of the graph. We next show that if we choose a large enough value for $l$, then the Min-Exp-Cost-Path solution would visit all nodes in $\bar{\mathcal{V}} = \{v \in \mathcal{V}: p_v>0\}$ before moving to the terminal node $v_t$. Let $l = 1.5D/\min_{v \in \bar{\mathcal{V}}}p_{v}$, where $D$ is the diameter of the graph. Then, the Min-Exp-Cost-Path solution, which we denote by $\mathcal{P}^{*}$ must visit all nodes in $\bar{\mathcal{V}}$ before moving to node $v_t$. We show this by contradiction. Assume that this is not the case. Since $\mathcal{P}^{*}$ has not visited all nodes in $\bar{\mathcal{V}}$, there exists a node $w \in \bar{\mathcal{V}}$ that does not belong to $\mathcal{P}^{*}$. Let $\mathcal{Q}^{*}$ be the subpath of $\mathcal{P}^{*}$ that lies in the original graph $\mathcal{G}$ and let $u$ be the last node in $\mathcal{Q}^{*}$. Consider the path $\mathcal{P}$ created by stitching together the path $\mathcal{Q}^{*}$, followed by the shortest path from $u$ to $w$ and then finally the terminal node $v_t$. Let $p_{f} = \prod_{v \in \mathcal{V}(\mathcal{Q}^{*})} (1-p_{v})$ be the probability of failure after traversing path $\mathcal{Q}^{*}$. The expected cost of path $\mathcal{P}$ then satisfies $$\begin{aligned}
\mathcal{C}(\mathcal{P},1) & \leq \sum_{e \in \mathcal{E}(\mathcal{Q}^{*})} \Bigg[\prod_{v \in \mathcal{V}(\mathcal{Q}^{*}_{e})} (1-p_{v})\Bigg] l_{e} + \\
& \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; p_{f} \left(l_{uw}^{\text{min}} + (1-p_{w})l\right)\\
& < \sum_{e \in \mathcal{E}(\mathcal{Q}^{*})} \Bigg[\prod_{v \in \mathcal{V}(\mathcal{Q}^{*}_{e})} (1-p_{v})\Bigg] l_{e}
+ p_{f} l = \mathcal{C}(\mathcal{P}^{*},1),\end{aligned}$$ where $l_{uw}^{\text{min}}$ is the cost of the shortest path between $u$ and $w$. We thus have a contradiction.
Thus, $\mathcal{Q}^{*}$ visits all the nodes in $\bar{\mathcal{V}}$. Moreover, since $\mathcal{P}^{*}$ is a solution of Min-Exp-Cost-Path, we can see that $\mathcal{Q}^{*}$ must also be a solution of Min-Exp-Cost-Path-NT. Thus, setting a budget of $B_{\text{MECP}} = B_{\text{MECPNT}} + p_{f}l$, where $p_{f} = \prod_{v \in \mathcal{V}(\mathcal{Q}^{*})} (1-p_{v}) = \prod_{v \in \bar{\mathcal{V}}} (1-p_{v})$, implies that the general instance of MECPNTD is satisfied if and only if our instance of MECPD is satisfied.
Even though we utilize the above Lemma primarily to analyze the computational complexity of the problems, we will also utilize the construction provided for path planners for Min-Exp-Cost-Path-NT in Section \[sec:numerical\_results\].
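The construction used in the proof above is easy to implement; a possible sketch (with illustrative names, and with the diameter $D$ assumed to be precomputed, e.g., by an all-pairs shortest path routine) is given below.

```python
def add_artificial_terminal(nodes, edges, p, diameter, terminal="v_t"):
    """Augment a graph with no terminal node by an artificial terminal node:
    nodes: list of node ids; edges: dict {(u, v): cost} (undirected);
    p: success probabilities with p[v] < 1 for all v; diameter: graph diameter D."""
    l = 1.5 * diameter / min(pv for pv in p.values() if pv > 0)   # l = 1.5 D / min p_v
    new_edges = dict(edges)
    new_edges.update({(v, terminal): l for v in nodes})           # connect v_t to every node
    new_p = dict(p)
    new_p[terminal] = 1.0                                         # success is guaranteed at v_t
    return list(nodes) + [terminal], new_edges, new_p
```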
We next show that MECPNTD is [NP-complete (NP-hard and in NP)]{}, which together with Lemma \[lemma:mecpntd\_red\_mecpd\], implies that MECPD is NP-hard.
\[theorem:mecpntd\_np\_hard\] Min-Exp-Cost-Path-NT Decision problem is [NP-complete]{}.
[Clearly MECPNTD is in NP, since given a path we can compute its associated expected cost in polynomial time.]{} We [next]{} show that MECPNTD is NP-hard using a reduction from a rooted version of the NP-hard Hamiltonian path problem [@garey2002computers]. Consider an instance of the Hamiltonian path problem $\mathcal{G}=(\mathcal{V}, \mathcal{E})$, where the objective is to determine if there exists a path originating from $v_s$ that visits each vertex only once. We create an instance of MECPNTD by setting the probability of success to a non-zero constant for all nodes, i.e., $p_v = p>0$, $\forall v \in \mathcal{V}$. We create a complete graph and set edge weights as $
l_{e} = \left\{\begin{array}{ll}
1, & \text{if } e \in \mathcal{E} \\
2, & \text{else}
\end{array}\right..
$
A Hamiltonian path $\mathcal{P}$ on $\mathcal{G}$, if it exists, would have an expected distance cost of $$\begin{aligned}
\sum_{e \in \mathcal{E}(\mathcal{P})} \left[\prod_{v \in \mathcal{V}(\mathcal{P}_{e})} (1-p_{v})\right] l_{e}
& = \frac{1-p}{p}\left(1-(1-p)^{|\mathcal{V}|-1}\right).\end{aligned}$$ Any path on the complete graph that is not Hamiltonian on $\mathcal{G}$, would involve either more edges or an edge with a larger cost than $1$ and would thus have a cost strictly greater than that of $\mathcal{P}$. Thus, by setting $B_{\text{MECPNT}} = \frac{1-p}{p}\left(1-(1-p)^{|\mathcal{V}|-1}\right)$, there exists a Hamiltonian path if and only if the specific MECPNTD instance created is satisfied. Thus, the general MECPNTD problem is at least as hard as the Hamiltonian path problem. Since the Hamiltonian path problem is NP-hard, this implies that MECPNTD is NP-hard.
Min-Exp-Cost-Path Decision problem is [NP-complete]{}.
[We can see that MECPD is in NP. The proof of NP-hardness follows directly]{} from Lemma \[lemma:mecpntd\_red\_mecpd\]. [MECPD is thus NP-complete.]{}
Min-Exp-Cost-Simple-Path {#subsec:min_exp_cost_simple_path}
------------------------
We now propose ways to tackle the prohibitive computational complexity (space complexity) of our MDP formulation of Section \[subsec:mdp\_formulation\], which possesses a state space of size exponential in the number of nodes in the graph. If we can restrict ourselves to paths that do not revisit nodes, known as *simple paths* (i.e., cycle free paths), then the expected cost from a node could be expressed in terms of the expected cost from the neighboring node that the path visits next.[^3] We refer to this problem of minimizing the expected cost, while restricted to the space of simple paths, as the *Min-Exp-Cost-Simple-Path* problem. The Min-Exp-Cost-Simple-Path problem is also computationally hard as shown in the following Lemma.
\[lemma:mecspd\_np\_hard\] The decision version of Min-Exp-Cost-Simple-Path is NP-hard.
This follows from Theorem \[theorem:mecpntd\_np\_hard\] and Lemma \[lemma:mecpntd\_red\_mecpd\], since the optimal path considered in the construction of Theorem \[theorem:mecpntd\_np\_hard\] was a simple path that visited all nodes.
Note that the optimal path of Min-Exp-Cost-Path could involve revisiting nodes, implying that the optimal solution to Min-Exp-Cost-Simple-Path on $\mathcal{G}$ could be suboptimal. For instance, consider the toy problem of Fig. \[fig:counter\_example\]. The optimal path starting from node $2$, in this case, is $\mathcal{P}^{*} = (2,1,2,3,4)$: since $p_1$ is much larger than $p_3$, it pays to first check node $1$ and backtrack through node $2$, even though the only terminal node lies in the opposite direction.
*Fig. \[fig:counter\_example\]: a line graph with nodes $1$–$4$, unit edge costs $l_{12}=l_{23}=l_{34}=1$, success probabilities $p_1=0.9$, $p_2=p_3=0.1$, $p_4=1$, and the optimal path $(2,1,2,3,4)$ indicated by arrows.*
Consider Min-Exp-Cost-Simple-Path on the following complete graph. This complete graph $\mathcal{G}_{\text{comp}}$ is formed from the original graph $\mathcal{G}=(\mathcal{V},\mathcal{E})$ by adding an edge between all pairs of vertices of the graph, excluding self-loops. The cost of the edge $(u,v)$ is the cost of the shortest path between $u$ and $v$ on $\mathcal{G}$ which we denote by $l_{uv}^{\text{min}}$. This can be computed by the all-pairs shortest path Floyd-Warshall algorithm in $O(|\mathcal{V}|^3)$ computations. We next show in the following Lemma that the optimal solution of Min-Exp-Cost-Simple-Path on this complete graph can provide us with the optimal solution to Min-Exp-Cost-Path on the original graph.
\[lemma:simple\_path\_complete\_graph\] The solution to Min-Exp-Cost-Simple-Path on $\mathcal{G}_{\text{comp}}$ can be used to obtain the solution to Min-Exp-Cost-Path on $\mathcal{G}$.
See Appendix \[appendix:lemma\_simple\_path\] for the proof.
Lemma \[lemma:simple\_path\_complete\_graph\] is a powerful result that allows us to asymptotically solve the Min-Exp-Cost-Path problem, with $\epsilon$ sub-optimality, as we shall see in the next Section.
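The construction of $\mathcal{G}_{\text{comp}}$ described above is a standard all-pairs shortest path computation; a minimal sketch (illustrative) using the Floyd-Warshall algorithm is given below.

```python
def complete_graph_costs(nodes, edges):
    """edges: dict {(u, v): cost} of an undirected graph G. Returns the edge
    costs of G_comp, i.e., {(u, v): l^min_uv} for all ordered pairs u != v."""
    INF = float("inf")
    d = {(u, v): (0.0 if u == v else INF) for u in nodes for v in nodes}
    for (u, v), c in edges.items():                    # initialize with the direct edges
        d[(u, v)] = min(d[(u, v)], c)
        d[(v, u)] = min(d[(v, u)], c)
    for k in nodes:                                    # Floyd-Warshall relaxation, O(|V|^3)
        for i in nodes:
            for j in nodes:
                if d[(i, k)] + d[(k, j)] < d[(i, j)]:
                    d[(i, j)] = d[(i, k)] + d[(k, j)]
    return {(u, v): d[(u, v)] for u in nodes for v in nodes if u != v}
```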
Asymptotically $\epsilon$-suboptimal Path Planner {#sec:asmpt_near_opt_planner}
=================================================
In this section, we propose a path planner, based on a game theoretic framework, that asymptotically gets arbitrarily close to the optimum solution of the Min-Exp-Cost-Path problem, i.e., it is an *asymptotically $\epsilon$-suboptimal solver*. This is important as it allows us to solve the NP-hard Min-Exp-Cost-Path problem, with near optimality, given enough time. More specifically, we utilize log-linear learning to asymptotically obtain the global potential minimizer of an appropriately defined potential game.
We start with the space of simple paths, i.e., we are interested in the Min-Exp-Cost-Simple-Path problem on a given graph $\mathcal{G}$. A node $v$ will then route to a single other node. Moreover, the expected cost from a node can then be expressed in terms of the expected cost from the neighbor it routes through. The state of the system can then be considered to be just the current node $v$, and the actions available at state $v$, $A_v = \{u\in \mathcal{V}:(v,u) \in \mathcal{E}\}$, are the neighbors of $v$. The policy $\mu$ specifies which node to move to next, i.e., if the current node is $v$, then $\mu_{v}$ is the next node to go to.
We next discuss our game-theoretic setting. So far, we viewed a node $v$ as a state and $A_v$ as the action space for state $v$. In contrast, in this game-theoretic setting, we interpret node $v$ as a player and $A_v$ as the action set of player $v$. Similarly, $\mu$ was viewed as a policy with $\mu_{v}$ specifying the action to take at state $v$. Here, we reinterpret $\mu$ as the joint action profile of the players with $\mu_v$ being the action of player $v$.
We consider a game $\{\mathcal{V}', \{A_v\}, \{\mathcal{J}_{v}\}\}$, where the set of non-terminal nodes $\mathcal{V}'$ are the players of the game and $A_v$ is the action set of node/player $v$. Moreover, $\mathcal{J}_{v}:A\rightarrow \mathbb{R}$ is the local cost function of player $v$, where $A = \prod_{v \in \mathcal{V}'} A_{v}$ is the space of joint actions. Finally, $\mathcal{J}_{v}(\mu)$ is the cost of the action profile $\mu$ as experienced by player $v$.
We first describe the expected cost from a node $v$ in terms of the action profile $\mu$. An action profile $\mu$ induces a directed graph on $\mathcal{G}$, which has the same set of nodes as $\mathcal{G}$ and directed edges from $v$ to $\mu_{v}$ for all $v\in \mathcal{V}'$. We call this the successor graph, using terminology from [@garcia1993loop], and denote it by $\mathcal{SG}(\mu)$. As we shall show, our proposed strategy produces an action profile $\mu$ which induces a directed acyclic graph. This is referred to as an *acyclic successor graph* (ASG) [@garcia1993loop].
*Fig. \[fig:ASG\]: an acyclic successor graph for the toy example of Fig. \[fig:problem\_setup\], with directed edges $1\rightarrow 2$, $2\rightarrow 4$, $4\rightarrow 6$, $6\rightarrow 7$, $5\rightarrow 4$ and $3\rightarrow 5$, so that the terminal node $7$ ($p_7=1$) is the sink.*
[ Node $v$ is said to be *downstream* of $u$ in $\mathcal{SG}(\mu)$ if $v$ lies on the directed path from $u$ to the corresponding sink. Moreover, node $u$ is said to be *upstream* of $v$ in this case, and we denote the set of upstream nodes of $v$ by $U_{v}(\mu_{-v})$, where $\mu_{-v}$ denotes the action profile of all players except $v$. Let $v \in U_v(\mu_{-v})$ by convention. Note that $U_{v}(\mu_{-v})$ is only a function of $\mu_{-v}$ as it does not depend on the action of player $v$.]{}
Let $\mathcal{P}(\mu, v)$ be the path from agent $v$ on this successor graph. We use the shorthand $C_v(\mu) = C(\mathcal{P}(\mu, v),1)$, to denote the expected cost from node $v$ when following the path $\mathcal{P}(\mu, v)$. Since $\mathcal{P}(\mu, v)$ is a path along $\mathcal{SG}(\mu)$, it can either end at some node or it can end in a cycle. If it ends in a cycle or at a node that is not a terminal node, we define the expected cost $C_v(\mu)$ to be infinity. If it does end at a terminal node, we obtain the following recursive relation from (\[eq:min\_exp\_cost\_recursion\]): $$\begin{aligned}
\label{eq:rec_exp_dist}
C_{v}(\mu) = (1-p_{v})\left(l_{v\mu_v} + C_{\mu_{v}}(\mu)\right),\end{aligned}$$ where $C_{v_t}(\mu) = 0$ for all $v_t \in T$.
[ Let $A_{\text{ASG}}$ denote the set of action profiles such that the expected cost $C_{v}(\mu) < \infty$ for all $v \in \mathcal{V}$. This will only happen if the path $\mathcal{P}(\mu, v)$ ends at a terminal node for all $v$. This corresponds to $\mathcal{SG}(\mu)$ being an ASG with terminal nodes as sinks. Specifically, $\mathcal{SG}(\mu)$ would be a forest with the root or sink of each tree being a terminal node. An ASG is shown in Fig. \[fig:ASG\] for the toy example from Fig. \[fig:problem\_setup\]. ]{}
[ $\mu \in A_{\text{ASG}}$ implies that the action of player $v$ satisfies $\mu_{v} \in A_{v}^{c}(\mu_{-v})$, where $A_{v}^{c}(\mu_{-v}) = \{u \in \mathcal{V}: (v,u) \in \mathcal{E}, u \notin U_{v}(\mu_{-v}), C_u(\mu) < \infty\}$ is the set of actions that result in a finite expected cost from $v$. Note that $A_{v}^{c}(\mu_{-v})$ is a function of only $\mu_{-v}$. This is because $u \notin U_{v}(\mu_{-v})$ implies $v \notin \mathcal{P}(\mu, u)$ which in turn implies that $C_{u}(\mu)$ is a function of only $\mu_{-v}$. ]{}
We next define the local cost function of player $v$ to be $$\begin{aligned}
\label{eq:local_cost_func}
\mathcal{J}_{v}(\mu) = \sum_{u \in U_{v}(\mu)}\alpha_{u}C_{u}(\mu),\end{aligned}$$ where $U_v(\mu)$ is the set of upstream nodes of $v$, and $\alpha_u>0 $ are constants such that $\alpha_{v_s} = 1$ and $\alpha_v = \epsilon^{'}$, for all $v\neq v_s$, where $\epsilon^{'} > 0$ is a small constant.
We next show that these local cost functions induce a potential game [over the action space $A_{\text{ASG}}$]{}. In order to do so, we first define a potential game [over $A_{\text{ASG}}$]{}.[^4]
$\{\mathcal{V}', \{A_v\}, \{\mathcal{J}_v\}\}$ is an exact potential game [over $A_{\text{ASG}}$]{} if there exists a function [$\phi: A_{\text{ASG}} \rightarrow \mathbb{R}$]{} such that $$\begin{aligned}
\mathcal{J}_v(\mu_v^{'}, \mu_{-v}) - \mathcal{J}_{v}(\mu_v, \mu_{-v}) = \phi(\mu_v^{'}, \mu_{-v}) - \phi(\mu_v, \mu_{-v}),\end{aligned}$$ for all [$\mu_{v}^{'} \in A_{v}^{c}(\mu_{-v}), \mu=(\mu_v, \mu_{-v})\in A_{\text{ASG}}$]{}, and $v \in \mathcal{V}'$, where $\mu_{-v}$ denotes the action profile of all players except $v$.
The function $\phi$ is called the potential function. In the following Lemma, we show that using local cost functions as described in (\[eq:local\_cost\_func\]), results in an exact potential game.
\[lemma:potential\_game\] The game $\{\mathcal{V}', \{A_v\}, \{\mathcal{J}_v\}\}$, with local cost functions as defined in (\[eq:local\_cost\_func\]), is an exact potential game [over $A_{\text{ASG}}$]{} with potential function $$\begin{aligned}
\label{eq:pot_func}
\phi(\mu) = \sum_{v \in \mathcal{V}'}\alpha_{v}C_{v}(\mu) = C_{v_s}(\mu) + \epsilon^{'} \sum_{v\neq v_s} C_{v}(\mu).\end{aligned}$$
Consider a node $v$ and $\mu = (\mu_v, \mu_{-v})$ and $\mu_{v}^{'}$ such that $C_v(\mu_v^{'}, \mu_{-v}) < C_v(\mu_v, \mu_{-v})$. From (\[eq:rec\_exp\_dist\]), we have that $
C_u(\mu_v^{'}, \mu_{-v}) < C_u(\mu_v, \mu_{-v}), \;\; \forall u \in U_{v}(\mu),
$ where $U_{v}(\mu)$ is the set of upstream nodes from $v$. Furthermore, $
C_u(\mu_v^{'}, \mu_{-v}) = C_u(\mu_v, \mu_{-v}), \;\; \forall u \notin U_{v}(\mu).
$ Thus, we have $$\begin{aligned}
\phi(\mu_v^{'}, \mu_{-v}) - \phi(\mu) & = \sum_{u \in \mathcal{V}'}\alpha_u \left[C_{u}(\mu_v^{'}, \mu_{-v}) - C_{u}(\mu) \right] \\
& = \sum_{u \in U_v(\mu)}\alpha_u \left[C_{u}(\mu_v^{'}, \mu_{-v}) - C_{u}(\mu)\right]\\
& = \mathcal{J}_{v}(\mu_v^{'}, \mu_{-v}) - \mathcal{J}_{v}(\mu),\end{aligned}$$ for all [$\mu_{v}^{'} \in A^{c}_v(\mu_{-v})$, $\mu \in A_{\text{ASG}}$]{}, and $v \in \mathcal{V}'$.
Minimizing $\phi(\mu)$ gives us a solution that can be arbitrarily close to that of Min-Exp-Cost-Simple-Path since we can select the value of $\epsilon^{'}$ appropriately. Let $\mu^{*} = \operatorname*{arg\,min}_{\mu} \phi(\mu)$ and $\mu^{\text{OPT}} = \operatorname*{arg\,min}_{\mu} C_{v_s}(\mu)$. Then, $
C_{v_{s}}(\mu^{*}) + \epsilon^{'} \sum_{u \neq v_s} C_{u}(\mu^{*}) \leq C_{v_s}(\mu^{\text{OPT}}) + \epsilon^{'} \sum_{u \neq v_s} C_{u}(\mu^{\text{OPT}}).
$ Rearranging gives us $$\begin{aligned}
C_{v_s}(\mu^{*}) & \leq C_{v_s}(\mu^{\text{OPT}}) + \epsilon^{'} \left[\sum_{u \neq v_s} C_{u}(\mu^{\text{OPT}}) - \sum_{u \neq v_s} C_{u}(\mu^{*})\right]\\
& \leq C_{v_s}(\mu^{\text{OPT}}) + \epsilon^{'} |\mathcal{V}'|D,\end{aligned}$$ where $D$ is the diameter of the graph. Thus minimizing $\phi(\mu)$ gives us an $\epsilon$-suboptimal solution to the Min-Exp-Cost-Simple-Path problem, where $\epsilon=\epsilon^{'} |\mathcal{V}'|D$.
We next show how to asymptotically obtain the global minimizer of $\phi(\mu)$ by utilizing a learning process known as log-linear learning [@marden2012revisiting].
### Log-linear Learning {#subsubsec:log_linear_learning}
[ Let $\mu_{v} = a_{\emptyset}$ correspond to node $v$ not pointing to any successor node. We refer to this as a null action.]{} Then, the log-linear process utilized in our setting is as follows:
1. The action profile $\mu(0)$ is initialized with a null action, i.e., [$\mu_v(0) = a_{\emptyset}$ for all $v$]{}. The local cost function is thus $\mathcal{J}_{v}(\mu(0)) = \infty$, for all $v \in \mathcal{V}'$.
2. At every iteration $k+1$, a node $v$ is randomly selected [from $\mathcal{V}'$ uniformly]{}. [If $A_{v}^{c}(\mu_{-v}(k))$ is empty, we set $\mu_v(k+1) = a_{\emptyset}$. Else, node $v$ selects action $\mu_v(k+1)=\mu_v \in A_{v}^{c}(\mu_{-v}(k))$ with the following probability: $$\begin{aligned}
\mathrm{Pr}(\mu_v)= \frac{e^{-\frac{1}{\tau}\left(\mathcal{J}_v(\mu_v, \mu_{-v}(k))\right)}}{\sum_{\mu_{v}^{'} \in A_v^{c}(\mu_{-v}(k))}e^{-\frac{1}{\tau}\left(\mathcal{J}_{v}(\mu_v^{'}, \mu_{-v}(k))\right)}},\end{aligned}$$ ]{} where $\tau$ is a tunable parameter known as the temperature. The remaining nodes repeat their action, i.e., $\mu_{u}(k+1) = \mu_{u}(k)$ for $u\neq v$.
We next show that log-linear learning asymptotically obtains an $\epsilon$-suboptimal solution to the Min-Exp-Cost-Path problem. We first show, in the following Theorem, that it asymptotically provides an $\epsilon$-suboptimal solution to the Min-Exp-Cost-Simple-Path problem.
\[theorem:log\_lin\_asympt\] As $\tau \rightarrow 0$, log-linear learning on a potential game with a local cost function defined in (\[eq:local\_cost\_func\]), asymptotically provides an $\epsilon$-suboptimal solution to the Min-Exp-Cost-Simple-Path problem.
See Appendix \[appendix:proof\_log\_lin\_asympt\] for the proof.
As $\tau \rightarrow 0$, log-linear learning on a potential game with a local cost function defined in (\[eq:local\_cost\_func\]) on the complete graph $\mathcal{G}_{\text{comp}}$, asymptotically provides an $\epsilon$-suboptimal solution to the Min-Exp-Cost-Path problem.
From Theorem \[theorem:log\_lin\_asympt\], we know that log-linear learning asymptotically provides an $\epsilon$-suboptimal solution to the Min-Exp-Cost-Simple-Path problem on the complete graph $\mathcal{G}_{\text{comp}}$. Using Lemma \[lemma:simple\_path\_complete\_graph\], we then utilize this solution to obtain an $\epsilon$-suboptimal solution to the Min-Exp-Cost-Path problem on $\mathcal{G}$.
We implement the log-linear learning algorithm by keeping track of the expected cost $C_{v}(\mu(k))$ in memory, for all nodes $v \in \mathcal{V}'$. In each iteration, we compute the set of upstream nodes of the selected node $v$ in order to compute the set $A_v^{c}(\mu_{-v}(k))$. From (\[eq:rec\_exp\_dist\]), we can see that the expected cost of each node upstream of $v$ can be expressed as a linear function of $C_{v}(\mu)$. Then we can compute an expression for $\mathcal{J}_{v}(\mu) = \sum_{u \in U_{v}(\mu_{-v})}\alpha_{u}C_{u}(\mu)$ as a linear function of the expected cost $C_{v}(\mu)$ with a computational cost of $O(|\mathcal{V}'|)$. We can then compute $\mathcal{J}_{v}(\mu_v, \mu_{-v})$ for all $\mu_v \in A_v^{c}(\mu_{-v}(k))$ using this pre-computed expression for $\mathcal{J}_{v}(\cdot)$. Finally, once $\mu_{v}(k+1)$ is selected, we update the expected cost of $v$ and all its upstream nodes using (\[eq:rec\_exp\_dist\]). Thus, the overall computation cost of each iteration is $O(|\mathcal{V}'|)$.
Fast Non-myopic Path Planners {#sec:non_myopic_planners}
=============================
In the previous section, we proposed an approach that finds an $\epsilon$-suboptimal solution to the Min-Exp-Cost-Path problem asymptotically. However, for certain applications, finding a suboptimal but fast solution may be more important. This motivates us to propose two suboptimal path planners that are non-myopic and very fast. We use the term non-myopic here to contrast with myopic approaches that choose the next step based on the immediate or short-term reward (e.g., local greedy search). We shall see an example of such a myopic heuristic in Section \[sec:numerical\_results\].
In this part, we first propose a non-myopic path planner based on a game theoretic framework that finds a directionally local minimum of the potential function $\phi$ of (\[eq:pot\_func\]). We next propose a path planner based on an SSP formulation that provides us with the optimal path among the set of paths satisfying a mild assumption.
We restrict ourselves to simple paths in this section. Lemma \[lemma:simple\_path\_complete\_graph\] can then be used to find an optimal non-simple path with minimal additional computation. Alternatively, the simple path solution can also be utilized directly.
Best Reply Process {#subsec:br_asg}
------------------
Consider the potential game $\{\mathcal{V}', \{A_v\}, \{\mathcal{J}_{v}\}\}$ of Section \[sec:asmpt\_near\_opt\_planner\] with local cost functions $\{\mathcal{J}_{v}\}$ as given in (\[eq:local\_cost\_func\]). We next show how to obtain a directionally local minimum of the potential function $\phi(\mu) = C_{v_{s}}(\mu) + \epsilon^{'}\sum_{v \neq v_s}C_{v}(\mu)$. In order to do so, we first review the definition of a Nash equilibrium.
An action profile $\mu^{\text{NE}}$ is said to be a pure Nash equilibrium if $$\begin{aligned}
\mathcal{J}_{v}(\mu^{\text{NE}}) \leq \mathcal{J}_v(\mu_v, \mu_{-v}^{\text{NE}}), \;\; \forall \mu_v \in A_v, \forall v \in \mathcal{V}'\end{aligned}$$ where $\mu_{-v}$ denotes the action profile of all players except $v$.
It can be seen that an action $\mu^{\text{NE}}$ is a Nash equilibrium of a potential game if and only if it is a directionally local minimum of $\phi$, i.e., $
\phi(\mu_v^{'}, \mu_{-v}^{\text{NE}}) \geq \phi(\mu^{\text{NE}}), \;\; \forall \mu_{v}^{'} \in A_v, \forall v \in \mathcal{V}'.
$ Since we have a potential game, a Nash equilibrium of the game is a directionally local minimum of $\phi(\mu)$. We can find a Nash equilibrium of the game using a learning mechanism such as the best reply process [@marden2012revisiting], which we next discuss.
[ Let $\mu_{v} = a_{\emptyset}$ correspond to node $v$ not pointing to any successor node. We refer to this as a null action. ]{} The best reply process utilized in our setting is as follows:
1. The action profile $\mu(0)$ is initialized with a null action, i.e., [$\mu_v(0) = a_{\emptyset}$ for all $v$]{}. The local cost function is thus $\mathcal{J}_{v}(\mu(0)) = \infty$, for all $v \in \mathcal{V}'$.
2. At iteration $k+1$, a node $v$ is randomly selected from $\mathcal{V}'$ [uniformly]{}. [If $A_{v}^{c}(\mu_{-v}(k))$ is empty, we set $\mu_v(k+1) = a_{\emptyset}$. Else,]{} the action of node $v$ is updated as $$\begin{aligned}
\mu_{v}(k+1) & = \operatorname*{arg\,min}_{\mu_v \in {A_v^{c}(\mu_{-v}(k))}} \mathcal{J}_{v}(\mu_{v}, \mu_{-v}(k))\\
& = \operatorname*{arg\,min}_{\mu_v \in {A_v^{c}(\mu_{-v}(k))}} C_{v}(\mu_{v}, \mu_{-v}(k))\\
& =\operatorname*{arg\,min}_{u \in {A_v^{c}(\mu_{-v}(k))}} \left\{ (1-p_v)\left[l_{vu} + C_{u}(\mu(k))\right] \right\},\end{aligned}$$ where the second and third equality follow from (\[eq:rec\_exp\_dist\]). The actions of the remaining nodes stay the same, i.e., $\mu_{u}(k+1) = \mu_{u}(k)$, $\forall u \neq v$.
The best reply process in a potential game converges to a pure Nash equilibrium [@marden2012revisiting], which is also a directionally local minimum of $\phi(\mu)= C_{v_{s}}(\mu) + \epsilon^{'}\sum_{v \neq v_s}C_{v}(\mu)$.
Since a node is selected at random at each iteration in the best reply process, analyzing its convergence rate becomes challenging. Instead, in the following Theorem, we analyze the convergence rate of the best reply process when the nodes for update are selected deterministically in a cyclic manner. We show that it converges quickly to a directionally local minimum, and is thus an efficient path planner.
\[theorem:BR\_finite\_iters\] Consider the best reply process where we [select the next node for update in a round robin fashion]{}. Then, this process converges after at most $|\mathcal{V}'|^2$ iterations.
See Appendix \[appendix:proof\_br\_conv\] for the proof.
We implement the best reply process by keeping track of the expected cost $C_{v}(\mu(k))$ in memory, for all nodes $v \in \mathcal{V}'$. In each iteration of the best reply process, we compute the set of upstream nodes of the selected node $v$ in order to compute the set $A_v^{c}(\mu_{-v}(k))$. Moreover, we compute $l_{v\mu_{v}} + C_{\mu_v}(\mu(k))$ for all $\mu_v \in A_v^{c}(\mu_{-v}(k))$ to find the action $\mu_v$ that minimizes the expected cost from $v$. Finally, once $\mu_{v}(k+1)$ is selected, we update the expected cost of $v$ as well as all the nodes upstream of it using (\[eq:rec\_exp\_dist\]). Then, the computation cost of each iteration is $O(|\mathcal{V}'|)$. Thus, from Theorem \[theorem:BR\_finite\_iters\], the best reply process in a round robin setting has a computational complexity of $O(|\mathcal{V}'|^{3})$.
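For completeness, a compact sketch (illustrative, not the authors' code) of the round-robin variant analyzed in Theorem \[theorem:BR\_finite\_iters\] is given below; `feasible_actions` and `expected_costs` are assumed helpers implementing $A_{v}^{c}(\mu_{-v})$ and the recursion (\[eq:rec\_exp\_dist\]) (e.g., the routine sketched in Section \[sec:asmpt\_near\_opt\_planner\]).

```python
def best_reply_round_robin(players, terminals, p, length, feasible_actions, expected_costs):
    mu = {v: None for v in players}                       # start from the null profile
    for _ in range(len(players) + 1):                     # at most |V'| full sweeps
        changed = False
        for v in players:                                 # round-robin node selection
            actions = feasible_actions(v, mu)
            if not actions:
                continue
            C = expected_costs(mu, p, length, terminals)  # current costs C_u(mu)
            best = min(actions,
                       key=lambda u: (1 - p[v]) * (length[(v, u)] + C[u]))
            if best != mu[v]:
                mu[v], changed = best, True
        if not changed:                                   # reached a pure Nash equilibrium
            break
    return mu
```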
Imposing a Directed Acyclic Graph {#subsec:idag}
---------------------------------
We next propose an SSP-based path planner. We enforce that a node cannot be revisited by imposing a directed acyclic graph (DAG), $\mathcal{G}_{\text{DAG}}$, on the original graph. The state of the SSP formulation of Section \[subsec:mdp\_formulation\] is then just the current node $v \in \mathcal{V}'$. The transition [probability from state $v$ to state $u$ is then simply given as]{} $P_{vu}(a_v) = \left\{\begin{array}{lll} 1-p_v, & \text{if } u=f(a_v)\\
p_v, & \text{if } u=s_t\\
0, & \text{else}\end{array}\right.$, where $f(a_v) = \left\{\begin{array}{ll} a_v, & \text{if } a_v \in \mathcal{V}'\\
s_{t}, & \text{if } a_v \in T\end{array}\right.$, and the stage cost [of action $u$ at state $v$]{} is given as $g(v,u)= (1-p_v)l_{vu}$. We refer to running value iteration on this SSP as the IDAG (imposing a DAG) path planner.
Imposing a DAG, $\mathcal{G}_{\text{DAG}} = (\mathcal{V}, \mathcal{E}_{\text{DAG}})$, corresponds to modifying the action space of each state $v$ such that only a subset of the neighbors are available actions, i.e., $A_v = \{u:(v,u) \in \mathcal{E}_{\text{DAG}} \}$. For instance, given a relative ordering of the nodes, a directed edge would be allowed from node $u$ to $v$, only if $v\geq u$ with respect to some ordering. As a concrete example, consider the case where a directed edge from node $u$ to $v$ exists only if $v$ is farther away from the starting node $v_s$ on the graph than node $u$ is, i.e., $l^{\text{min}}_{v_sv}>l^{\text{min}}_{v_su}$, where $l^{\text{min}}_{v_sv}$ is the cost of the shortest path from $v_s$ to $v$ on the original graph $\mathcal{G}$. More specifically, the imposed DAG has the same set of nodes $\mathcal{V}$ as the original graph, and the set of edges is given by $\mathcal{E}_{\text{DAG}} = \{(u,v) \in \mathcal{E}: l^{\text{min}}_{v_sv}>l^{\text{min}}_{v_su}\}$, where $(u,v)$ represents a directed edge from $u$ to $v$. For example, consider an $n\times n$ grid graph, where neighboring nodes are limited to $\{\text{left, right, top, down}\}$ nodes. In the resulting DAG, only outward flowing edges from the start node are allowed, i.e., edges that take you further away from the start node. For instance, consider the start node $v_s$ as the center and for each quadrant, form outward moving edges, as shown in Fig. \[fig:SSP\_DAG\]. In the first quadrant only right and top edges are allowed, in the second quadrant only left and top edges and so on. Fig. \[fig:SSP\_DAG\] shows an illustration of this, where several feasible paths from $v_s$ to a terminal node are shown.
![[]{data-label="fig:SSP_DAG"}](ssp_dag.pdf)
Imposing this DAG is equivalent to placing the following requirement that a feasible path must satisfy: *Each successive node on the path must be further away from the starting node $v_s$*, i.e., for a path $\mathcal{P}=(v_1=v_s, v_2, \cdots, v_m)$, the condition $l^{\text{min}}_{v_sv_i} > l^{\text{min}}_{v_sv_{i-1}}$ should be satisfied. In the case of a grid graph with a single terminal node, this implies that a path must always move towards the terminal node, which is a reasonable requirement to impose. We next show that we can obtain the optimal solution among all paths satisfying this requirement using value iteration.
The optimal solution with minimum expected cost on the imposed DAG $\mathcal{G}_{\text{DAG}}$ can be found by running value iteration: $$\begin{aligned}
J_{v}(k+1)& = \min_{u \in A_v} \left\{(1-p_v)l_{vu} + (1-p_v)J_{f(u)}(k)\right\},\end{aligned}$$ with the policy at iteration $k+1$ given by $$\begin{aligned}
\mu_{v}(k+1)& = \operatorname*{arg\,min}_{u \in A_v} \left\{(1-p_v)l_{vu} + (1-p_v)J_{f(u)}(k)\right\},\end{aligned}$$ for all $v \in \mathcal{V}'$, where $J_{s_t}(k) = 0$, for all $k$.
The following lemma shows that we can find this optimal solution efficiently.
\[lemma:idag\_comp\_complexity\] When starting from $J_{v}(0) = \infty$, for all $v \in \mathcal{V}'$, the value iteration method will yield the optimal solution after at most $|\mathcal{V}'|$ iterations.
This follows from the convergence analysis of value iteration on an SSP with a DAG structure [@bertsekas1995dynamic].
Each stage of the value iteration process has a computation cost of $O(|\mathcal{E}_{\text{DAG}}|)$ since for each node we have as many computations as there are outgoing edges. Thus, from Lemma \[lemma:idag\_comp\_complexity\], we can see that the computational cost of value iteration is $O(|\mathcal{V}'||\mathcal{E}_{\text{DAG}}|)$.
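A minimal sketch (illustrative) of the IDAG planner for the outward-pointing DAG described above: build $\mathcal{E}_{\text{DAG}}$ from the shortest-path distances to the start node and run value iteration on the resulting acyclic SSP. `dist_from_start` is an assumed dictionary of shortest-path costs $l^{\text{min}}_{v_s v}$ (e.g., obtained with Dijkstra's algorithm).

```python
def idag_plan(nodes, edges, p, v_s, dist_from_start):
    l = dict(edges)
    l.update({(v, u): c for (u, v), c in edges.items()})            # symmetric costs
    dag_out = {v: [u for u in nodes if (v, u) in l
                   and dist_from_start[u] > dist_from_start[v]]     # E_DAG: edges pointing away from v_s
               for v in nodes}
    Vp = [v for v in nodes if p[v] < 1.0]                           # non-terminal nodes V'
    J = {v: float("inf") for v in Vp}
    val = lambda u: 0.0 if p[u] == 1.0 else J[u]                    # cost-to-go of the next node
    policy = {}
    for _ in range(len(Vp)):                                        # at most |V'| sweeps
        for v in Vp:
            if not dag_out[v]:
                continue                                            # dead end in the DAG: J stays infinite
            best = min(dag_out[v], key=lambda u: l[(v, u)] + val(u))
            J[v] = (1 - p[v]) * (l[(v, best)] + val(best))
            policy[v] = best
    return J, policy
```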
[Log-linear learning, best reply, and IDAG, each have their own pros and cons. For instance, log-linear learning has strong asymptotic optimality guarantees. In contrast, best reply converges quickly to a directionally-local minimum but does not possess similar optimality guarantees. Numerically, for the applications considered in Section \[sec:numerical\_results\], the best reply solver performs better than the IDAG solver. However, the IDAG approach is considerably fast and provides a natural understanding of the solution it produces, being particularly suitable for spatial path planning problems. For instance, as shown in Fig. \[fig:SSP\_DAG\], the solution of IDAG for the imposed DAG is the best solution among all paths that move outward from the start node. More generally, it is the optimal solution among all the paths allowed by the imposed DAG.]{}
Numerical Results {#sec:numerical_results}
=================
In this section, we show the performance of our approaches for Min-Exp-Cost-Path, via numerical analysis of two applications. In our first application, a rover explores Mars, which we refer to as the SamplingRover problem. In our second application, we then consider a realistic scenario of a robot planning a path in order to find a spot that is connected to a remote station. We see that in both scenarios our solvers perform well and outperform the naive and greedy heuristic approaches.
Sampling Rover
--------------
The scenario considered here is loosely inspired by the RockSample problem introduced in [@smith2004heuristic]. A rover on a science exploration mission is exploring an area looking for an object of interest for scientific studies. For instance, consider a rover exploring Mars with the objective of obtaining a sample of water. Based on prior information, which could for instance come from orbital flyovers over the area of interest or from expert estimates, the rover has an a priori probability of finding the object at any location.
An instance of the SamplingRover\[$n$,[$n_t$]{}\] consists of an $n \times n$ grid with [$n_t$]{} locations of guaranteed success, i.e., [$n_t$]{} nodes such that $p_{v}=1$. The probability of success at each node is generated independently and uniformly from $[0,0.1]$. At any node, the actions allowed by the rover are $\{\text{left, right, top, down}\}$. The starting position of the rover is taken to be at the center of the grid, $v_s = \left(\lfloor \frac{n}{2} \rfloor, \lfloor \frac{n}{2} \rfloor \right)$. When the number of points of guaranteed success ([$n_t$]{}) is $1$, we place the node with $p_v=1$ at $(0,0)$.
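A minimal sketch (illustrative) of generating such an instance with [$n_t=1$]{} is shown below; unit edge costs between neighboring grid cells are an assumption.

```python
import random

def sampling_rover_instance(n, seed=0):
    rng = random.Random(seed)
    nodes = [(i, j) for i in range(n) for j in range(n)]
    edges = {}                                             # 4-connected grid, unit edge costs
    for (i, j) in nodes:
        if i + 1 < n:
            edges[((i, j), (i + 1, j))] = 1.0
        if j + 1 < n:
            edges[((i, j), (i, j + 1))] = 1.0
    p = {v: rng.uniform(0.0, 0.1) for v in nodes}          # p_v drawn uniformly from [0, 0.1]
    p[(0, 0)] = 1.0                                        # the single node of guaranteed success
    v_s = (n // 2, n // 2)                                 # rover starts at the center of the grid
    return nodes, edges, p, v_s
```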
![Evolution of the expected traveled distance with time for the log-linear learning approach with $n=25$ and [$n_t=1$]{}.[]{data-label="fig:exp_dist_time_log_lin"}](exp_dist_time_log_lin){width="0.7\linewidth"}
![The expected traveled distance by the various approaches for different grid sizes ($n$) with a single connected point ([$n_t=1$]{}). The results are averaged over $1000$ different probability of success maps. The corresponding standard deviation is also shown in the form of error bars. We can see that the best reply and IDAG approaches outperform the greedy and closest terminal heuristics.[]{data-label="fig:exp_dist_all_approaches"}](exp_dist_averaged_vary_n){width="0.75\linewidth"}
![The ASG produced by the best reply process (left) and by log-linear learning (right).[]{data-label="fig:k_4_path_25_asg"}](k_4_path_25_br_asg){width="0.425\linewidth"} ![image](k_4_path_25_ll_asg){width="0.425\linewidth"}
We found that log-linear learning on a complete graph produces similar results as log-linear learning on the original grid graph, but over longer run-times. Thus, unless explicitly mentioned otherwise, when we refer to the best reply or the log-linear learning approach, it is with respect to finding a simple path on the original grid graph. We set weight $\epsilon^{'} = 10^{-6}$ in $\phi(\mu)= C_{v_{s}}(\mu) + \epsilon^{'}\sum_{v \in \mathcal{V}', v \neq v_s}C_{v}(\mu)$. We use a decaying temperature for log-linear learning. Through experimentation, we found that a decaying temperature of $\tau \propto k^{-0.75}$ (where $k$ is the iteration number) performs well.
We first compare our approach with alternate approaches for solving the Min-Exp-Cost-Path problem. We consider one instance of a probability of success map. We then implement Real Time Dynamic Programming (RTDP) [@barto1995learning], which is a heuristic search method that tries to obtain a good solution quickly for the MDP formulation of Section \[subsec:mdp\_formulation\]. Furthermore, we also implemented simulated annealing, following the traveling salesman implementation of [@kirkpatrick1983optimization], where we modify the cost of a state to be the expected cost from the starting node. Moreover, the starting position of the rover is fixed as the start of the simulated annealing path. Table \[table:exp\_dist\_rtdp\_sim\_ann\] shows the performance of RTDP, simulated annealing and our (asymptotically $\epsilon$-suboptimal) log-linear learning and (non-myopic fast) best reply approaches for various grid sizes ($n$) when [$n_t=1$]{}, where for each approach we impose a computational time limit of an hour. We see that RTDP is unable to produce viable solutions for $n\geq 10$ due to the state explosion problem of the MDP formulation, as discussed in Section \[subsec:mdp\_formulation\]. Moreover, the performance of simulated annealing worsens significantly with increasing values of $n$. On the other hand, the best reply and log-linear learning approaches produce solutions with good performance that outperform simulated annealing considerably (e.g., simulated annealing has $15$ times more expected traveled distance than the best reply approach for $n=20$).
We next show the asymptotically $\epsilon$-suboptimal behavior of the log-linear learning approach of Section \[sec:asmpt\_near\_opt\_planner\]. Fig. \[fig:exp\_dist\_time\_log\_lin\] shows the evolution of the expected distance with time for the solution produced by log-linear learning for an instance of a probability of success map with $n=25$ and [$n_t=1$]{}. [In comparison, the best reply and IDAG approaches converged in $1.75$ s and $0.25$ s respectively.]{}
Based on several numerical experiments, we have observed that the best reply and IDAG approaches produce results very close to those produced by log-linear learning. They thus act as fast and efficient solvers. On the other hand, the log-linear learning approach provides an asymptotic guarantee of optimality (within $\epsilon$). Thus, all three approaches are useful depending on the application requirements.
We next compare our proposed approaches with two heuristics. The first is a heuristic of moving straight towards the closest node with $p_{v}=1$, which we refer to as the *closest terminal* heuristic. The second is a myopic greedy heuristic, where the rover at any time moves towards the node with the highest $p_v$ among its unvisited neighbors. We refer to this as the *nearest neighbor* heuristic. [These are]{} similar to strategies utilized in the optimal search theory literature [@chung2012analysis; @bourgault2003optimal], where myopic strategies with limited lookahead are typically utilized. Fig. \[fig:exp\_dist\_all\_approaches\] shows the performance of the best reply and IDAG approaches, as well as the nearest neighbor and closest terminal heuristics, for various grid sizes ($n$) when [$n_t=1$]{}. We generated $1000$ different probability of success maps, and averaged the expected traveled distance over them to obtain the plotted performance for each $n$. Also, the error bars in the plot represent the standard deviation of each approach. In Fig. \[fig:exp\_dist\_all\_approaches\], we can see that the best reply and IDAG approaches significantly outperform the greedy nearest neighbor heuristic as well as the closest terminal heuristic. Moreover, the best reply approach outperforms the IDAG approach for larger $n$.
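For reference, a minimal sketch (illustrative) of the greedy nearest neighbor baseline is given below; tie-breaking and the handling of dead ends are not specified in the text, so the choices made here are assumptions.

```python
def nearest_neighbor_path(v_s, neighbors, p, max_steps=10**5):
    """Greedy baseline: repeatedly move to the unvisited neighbor with the
    highest success probability, falling back to visited neighbors if needed."""
    path, visited, v = [v_s], {v_s}, v_s
    for _ in range(max_steps):
        if p[v] == 1.0:                                   # reached a terminal node
            break
        candidates = [u for u in neighbors[v] if u not in visited] or neighbors[v]
        v = max(candidates, key=lambda u: p[u])           # move to the highest p_v next
        path.append(v)
        visited.add(v)
    return path
```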
In order to gain more insight into the nature of the solution produced by our proposed approaches, we next consider a scenario where [$n_t=4$]{}, where we place the four nodes of guaranteed success at the four corners of the workspace, i.e., at $(0,0)$, $(0,n-1)$, $(n-1,0)$ and $(n-1,n-1)$. Fig. \[fig:k\_4\_path\_25\_asg\] shows the ASG of the best reply process and log-linear learning for one such sample scenario, where we impose a computational time limit of $1$ hour on the log-linear learning approach. We see that in both cases, the resulting ASG is a forest with $4$ trees, each denoted with a different color in Fig. \[fig:k\_4\_path\_25\_asg\], where the roots of the $4$ trees correspond to the $4$ nodes of guaranteed success. As discussed in Section \[sec:non\_myopic\_planners\], the solution ASG of the best reply process is an equilibrium where no node can improve its expected traveled distance by switching the neighbor it routes to. The route followed by the rover is also plotted on the ASG, and can be seen to visit nodes of higher probability of success. Fig. \[fig:k\_4\_path\_25\_plan\_conn\_path\] shows a plot of the routes traveled by the IDAG and the nearest neighbor approach. In this instance, the paths produced by the best reply and log-linear learning approaches were the same as that of the IDAG approach.
![Path traveled by the IDAG and nearest neighbor approaches for $n=25$, when there are four nodes with $p_v=1$. The solution paths produced by the best reply and log-linear learning approaches are the same as that of the IDAG approach in this instance. The background color plot specifies the probability of success of each node. Readers are referred to the color pdf for better visibility.[]{data-label="fig:k_4_path_25_plan_conn_path"}](k_4_path_25_plan_conn_path){width="0.7\linewidth"}
We next consider the case of [$n_t=0$]{}, which corresponds to no terminal node being present. This is an instance of the Min-Exp-Cost-Path-NT problem. In this setting, the solution we are looking for is a tour of all nodes $\{v \in \mathcal{V}:p_{v}>0\}$ that minimizes $\sum_{e \in \mathcal{E}(\mathcal{P})} \prod_{v \in \mathcal{V}(\mathcal{P}_{e})} (1-p_v) l_{e}$. In order to facilitate the use of our approaches on the Min-Exp-Cost-Path-NT problem, we introduce a terminal node in the grid graph following the construction in the proof of Lemma \[lemma:mecpntd\_red\_mecpd\]. We include an edge weight $l = 1.5 \times \frac{D}{\min_{v} p_{v}}$ between the artificial terminal node and all other nodes, where $D=2n$ is the diameter of the graph. The best reply process was run $100$ times and the best solution was selected among the solutions produced. Moreover, we impose a computational time limit of $1$ hour on the log-linear learning approach. Note that the resulting solution paths may not visit all the nodes in the grid graph, due to the limited computation time. Fig. \[fig:tour\_25\_asg\] shows the ASG for the best reply and log-linear learning processes as well as the path traveled from the starting node for both cases. We can see that the paths produced by both approaches traverse through nodes of high probability of success. Since success is not guaranteed when traversing along a solution path of an approach, expected distance [until]{} success is no longer well defined. In other words, we no longer have a single metric by which to judge the quality of a solution. Instead, we now have two metrics: the probability of failure along a path and the expected distance of traversing the path. Table \[table:tour\_performance\] shows the performance of the best reply and log-linear learning approaches on these metrics for the sample scenario shown in Fig. \[fig:tour\_25\_asg\]. We see that both the best reply and log-linear learning approaches produce solutions with good performance.
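As an illustration of the artificial-terminal construction above, the following sketch augments the grid graph accordingly; the graph representation is an assumption of the sketch, and the minimum is taken over nodes with $p_v > 0$ so that the weight is well defined.

```python
def add_artificial_terminal(nodes, edges, p, n, terminal="t_art"):
    """Reduce Min-Exp-Cost-Path-NT to Min-Exp-Cost-Path: connect every node to an
    artificial terminal node (probability of success 1) by an edge of weight
    l = 1.5 * D / min_v p_v, with D = 2n as in the text."""
    D = 2 * n
    l = 1.5 * D / min(pv for pv in p.values() if pv > 0)
    new_edges = dict(edges)                  # edges: dict (u, v) -> length
    for v in nodes:
        new_edges[(v, terminal)] = l
        new_edges[(terminal, v)] = l
    new_p = dict(p)
    new_p[terminal] = 1.0
    return nodes + [terminal], new_edges, new_p
```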
![Acyclic successor graph (ASG) of (left) best reply process and (right) log-linear learning process for $n=25$ when there is no terminal node. The path traveled from the starting node is also plotted (in blue). The starting position at $(12,12)$ is marked by the orange “x”. The background color plot specifies the probability of success of each node.[]{data-label="fig:tour_25_asg"}](tour_25_merged_asg){width="1.1\linewidth"}
Connectivity seeking robot {#subsec:conn_seeking_robot}
--------------------------
In this section, we consider the scenario of a robot seeking to get connected to a remote station. We say that the robot is connected if it is able to reliably transfer information to the remote station. This would imply satisfying a Quality of Service (QoS) requirement such as a target bit error rate (BER), which would in turn imply a minimum required received channel power given a fixed transmit power. Thus, in order for the robot to get connected, it needs to find a location where the channel power, when transmitting from that location, would be greater than the minimum required channel power. However, the robot’s prior knowledge of the channel is stochastic. Thus, for a robot seeking to do this in an energy efficient manner, its goal would be to plan a path such that it gets connected with a minimum expected traveled distance.
For the robot to plan such a path, it would require an assessment of the channel quality at any unvisited location. In previous work, we have shown how the robot can probabilistically predict the spatial variations of the channel based on a few a priori measurements [@malmirchegini2012spatial]. Moreover, we consider the multipath component to be time varying as in [@muralidharan2018pconn]. See [@malmirchegini2012spatial] for details on this channel prediction as well as performance of this framework with real data and in different environments.
Consider a scenario where the robot is located in the center of a $50$ m $\times$ $50$ m workspace as shown in Fig. \[fig:path\_plan\_conn\_path\], with the remote station located at the origin. The channel is generated using the realistic probabilistic channel model in [@goldsmith2005wireless; @malmirchegini2012spatial], with the following parameters that were obtained from real channel measurements in downtown San Francisco [@smith2004urban]: path loss exponent $n_{\text{PL}} = 4.2$, shadowing power $\sigma_{\text{SH}} = 2.9$ and shadowing decorrelation distance $\beta_{\text{SH}} = 12.92$ m. Moreover, the multipath fading is taken to be uncorrelated Rician fading with the parameter $K_{\text{ric}} = 1.59$. In order for the robot to be connected, we require a minimum received power of $P_{R,\text{th},\text{dBm}} = -80$ dBmW. We take the maximum transmission power of a node to be $P_{0,\text{dBm}} = 27$ dBmW [@lonn2004output].
The robot is assumed to have a priori channel measurements over $5\%$ of the workspace. It utilizes the channel prediction framework described above to predict the channel at any unvisited location. We discretize the workspace of the robot into cells of size $1$ m by $1$ m. A cell is connected if there exists a location in the cell that is connected. Then, the channel prediction framework of [@malmirchegini2012spatial] is utilized to estimate the probability of connectivity of a cell. See [@muralidharan2018pconn] for more details on this estimation. We next construct a grid graph with each cell serving as a node of the graph. This gives us a grid graph of dimension $50 \times 50$ with a probability of connectivity assigned to each node. We also add a new terminal node to the graph with probability of connectivity $1$, which represents the remote station at the origin. We attach the node in the workspace closest to the remote station to this terminal node with an edge cost equal to the expected distance [until]{} connectivity when moving straight towards the remote station from the node. This can be calculated based on the work in [@muralidharan2017fpd].
![Solution paths produced by the best reply and IDAG approaches for a channel realization. Also shown is the first connected node on the respective paths for the true channel realization. The background plot denotes the predicted probability of connectivity, which is used by the robot for path planning.[]{data-label="fig:path_plan_conn_path"}](path_plan_conn_path){width="0.75\linewidth"}
![Histogram of the expected cost of the best reply and closest terminal heuristic over $500$ channel realizations.](hist_approaches){width="0.7\linewidth"}
\[fig:hist\_approaches\]
We next compare our proposed approaches with the greedy nearest neighbor heuristic as well as the closest terminal heuristic of moving straight towards the remote station. We calculate the performance of the approaches based on the true probability of connectivity of each node, computed from the true value of the channel. Fig. \[fig:path\_plan\_conn\_path\] shows the solution paths produced by the best reply and IDAG approaches for a sample channel realization. The background plot denotes the predicted probability of connectivity. We see that the produced paths take detours on the way to the connection point in order to visit areas of good probability of connectivity. Table \[table:performance\_ch\_approach\] shows the expected distance, along with the corresponding standard deviation, for the best reply, IDAG, nearest neighbor and closest terminal approaches, averaged over $500$ channel realizations. We do not include the performance of log-linear learning as it takes longer to arrive at a good solution and is thus impractical to average over $500$ channel realizations. However, in our simulations, we did observe that the performance of the best reply approach was generally similar to that of log-linear learning. We see that the best reply and IDAG approaches outperformed the nearest neighbor and closest terminal heuristics significantly. For instance, the best reply approach provided an overall $35 \%$ and $44 \%$ reduction in the expected traveled distance when compared to the nearest neighbor and closest terminal heuristics respectively. Fig. \[fig:hist\_approaches\] shows the histogram of the expected cost of the best reply and closest terminal heuristics over the $500$ channel realizations. We can see that the expected cost associated with the best reply approach is typically better than that associated with the closest terminal heuristic.
Note that our framework can be extended to the case where the robot updates the probabilities of success as it operates in the environment.
Conclusions [and future work]{}
===============================
In this paper, we considered the problem of path planning on a graph for minimizing the expected cost [until]{} success. We showed that this problem is NP-hard and that it can be posed in a Markov Decision Process framework as a stochastic shortest path problem. We proposed a path planner based on a game-theoretic framework that yields an $\epsilon$-suboptimal solution to this problem asymptotically. In addition, we also proposed two non-myopic suboptimal strategies that find a good solution efficiently. Finally, through numerical results we showed that the proposed path planners outperform naive and greedy heuristics significantly. We considered two scenarios in the simulations: that of a rover on Mars searching for an object for scientific study, and that of a realistic path planning scenario for a connectivity-seeking robot. Our results then indicated a significant reduction in the expected traveled distance (e.g., a $35 \%$ reduction for the path planning for connectivity scenario) when using our proposed approaches.
[There are several open questions and interesting directions to pursue in this area. One such direction is developing algorithms with provable performance guarantees that run in polynomial time ($\alpha$-approximation algorithms) for the Min-Exp-Cost-Path problem. The applicability of the results of this paper to areas such as satisficing search and theorem solving [@simon1975optimal] is another interesting future direction. ]{}
Appendix
========
Proof of Lemma \[lemma:simple\_path\_complete\_graph\] {#appendix:lemma_simple_path}
------------------------------------------------------
We first describe some properties of the solution of Min-Exp-Cost-Path and Min-Exp-Cost-Simple-Path.
Consider a path $\mathcal{P}=(v_1,v_2,\cdots, v_m)$. A node $v_i$ is a *revisited* node in the $i^{\text{th}}$ location of $\mathcal{P}$ if $v_i=v_j$ for some $j<i$. A node $v_i$ is a *first-visit* node in the $i^{\text{th}}$ location of $\mathcal{P}$ if $v_i\neq v_j$ for all $j<i$.
\[property:min\_exp\_cost\_path\] Let $\mathcal{P}^{*}=(v_1,v_2,\cdots, v_m)$ be a solution to Min-Exp-Cost-Path on $\mathcal{G}$. Consider any subpath $(v_{i},v_{i+1},\cdots,v_{j-1},v_{j})$ of $\mathcal{P}^{*}$ such that $v_i$ and $v_j$ are first-visit nodes, and $v_{i+1},\cdots,v_{j-1}$ are revisited nodes. Then, [$(v_{i},v_{i+1},\cdots,v_{j-1}, v_{j})$]{} is the shortest path between $v_i$ and $v_j$.
We show this by contradiction. Assume otherwise, i.e., [$(v_{i},v_{i+1},\cdots,v_{j-1}, v_{j})$]{} is not the shortest path between $v_{i}$ and $v_{j}$. Let $\mathcal{Q}$ be the path produced by replacing this subpath in $\mathcal{P}^{*}$ with the shortest path between $v_i$ and $v_j$. [ Let us denote this shortest path by $(v_i, u_{i+1}, \cdots, u_{\tilde{j}-1}, u_{\tilde{j}})$ where $u_{\tilde{j}} = v_{j}$. Then, $$\begin{aligned}
C(\mathcal{Q}, i) & = (1-p_{v_i})\Bigg[ l_{v_{i}u_{i+1}} + \Big[\prod_{k \in \mathcal{K}_{i+1}}(1-p_{u_k})\Big]l_{u_{i+1}u_{i+2}} \\
& \;\;+ \cdots + \Big[\prod_{k \in \mathcal{K}_{\tilde{j}-1}}(1-p_{u_k})\Big]\Big[l_{u_{\tilde{j}-1}u_{\tilde{j}}} + C(\mathcal{Q},\tilde{j})\Big] \Bigg]\\
& \leq (1-p_{v_i})\Bigg[ l_{v_{i}v_{j}}^{\text{min}} + \Big[\prod_{k \in \mathcal{K}_{\tilde{j}-1}}(1-p_{u_k})\Big]C(\mathcal{Q},\tilde{j})\Bigg],\end{aligned}$$ where $\mathcal{K}_{m} = \{k\in \{i+1,\cdots,m\}: u_k \text{ is a first visit node of } \mathcal{Q}\}$. ]{} The nodes [$(u_{i+1}, \cdots, u_{\tilde{j}})$]{} could be first visit nodes of $\mathcal{Q}$ or repeated nodes. We next show that in either scenario the expected cost of $\mathcal{Q}$ would be smaller than that of $\mathcal{P}^{*}$. [ If they are all revisited nodes or if they are first-visit nodes that are not revisited after node $u_{\tilde{j}}$, then $C(\mathcal{Q},\tilde{j}) = C(\mathcal{P}^{*}, j)$. If some or all of $(u_{i+1}, \cdots, u_{\tilde{j}})$ are first-visit nodes of $\mathcal{Q}$ that are visited later on, then $[\prod_{k \in \mathcal{K}_{\tilde{j}-1}}(1-p_{u_k})]C(\mathcal{Q},\tilde{j}) \leq C(\mathcal{P}^{*}, j)$, since success at a first visit node $u_k$ can occur earlier in path $\mathcal{Q}$ in comparison to $\mathcal{P}^{*}$ (which discounts the cost of all following edges). Thus, in either case, we have the inequality $$\begin{aligned}
C(\mathcal{Q}, i) & \leq (1-p_{v_{i}}) \left[l_{v_{i}v_{j}}^{\text{min}}+C(\mathcal{P}^{*},j)\right]\\
& < (1-p_{v_i})\left[ l_{v_{i}v_{i+1}} + \cdots + l_{v_{j-1}v_{j}} + C(\mathcal{P}^{*},j) \right]\\
& = C(\mathcal{P}^{*}, i).\end{aligned}$$ This implies that $C(\mathcal{Q},1) < C(\mathcal{P}^{*},1)$ resulting in a contradiction. ]{}
\[property:min\_exp\_cost\_simple\_path\] Let $\mathcal{P}^{*}=(v_1,v_2,\cdots, v_m)$ be a solution of Min-Exp-Cost-Simple-Path on complete graph $\mathcal{G}_{\text{comp}}$. Consider any two consecutive nodes $v_i$ and $v_{i+1}$. The shortest path between $v_i$ and $v_{i+1}$ in $\mathcal{G}$ would only consist of nodes that have been visited earlier in $\mathcal{P}^{*}$.
Suppose this is not true for consecutive nodes $v_i$ and $v_{i+1}$. Then there exists at least a single node $u$ that lies on the shortest path between $v_i$ and $v_{i+1}$, and that has not been visited earlier in $\mathcal{P}^{*}$. Let $\mathcal{Q}$ be the path formed from $\mathcal{P}^{*}$ when $u$ is added between $v_{i}$ and $v_{i+1}$. The expected cost of $\mathcal{Q}$ from the $i^{\text{th}}$ node onwards is given by $$\begin{aligned}
C(\mathcal{Q},i) & = (1-p_{v_{i}}) \left[l_{v_{i}u} + (1-p_{u})\left[l_{uv_{i+1}} + C(\mathcal{Q},i+2)\right]\right] \\
& < (1-p_{v_{i}}) \left[l_{v_{i}v_{i+1}}+C(\mathcal{P}^{*},i+1)\right].\end{aligned}$$ This implies that the expected cost of $\mathcal{Q}$ would be less than that of $\mathcal{P}^{*}$, resulting in a contradiction.
Let $\mathcal{P}$ be the solution to Min-Exp-Cost-Path on $\mathcal{G}$ and let $\mathcal{Q}$ be the solution of Min-Exp-Cost-Simple-Path on $\mathcal{G}_{\text{comp}}$. From Property \[property:min\_exp\_cost\_path\], we know that the path produced by removing revisited nodes in $\mathcal{P}$ will be a feasible solution to Min-Exp-Cost-Simple-Path on $\mathcal{G}_{\text{comp}}$ with the same cost as $\mathcal{P}$. Thus, the cost of $\mathcal{P}$ is greater than or equal to that of $\mathcal{Q}$. Similarly, from Property \[property:min\_exp\_cost\_simple\_path\], we know that the path produced by expanding the shortest path between adjacent nodes in $\mathcal{Q}$ will be a feasible solution to Min-Exp-Cost-Path on $\mathcal{G}$ with the same cost as $\mathcal{Q}$. Thus, this path produced from $\mathcal{Q}$ will be an optimal solution to Min-Exp-Cost-Path on $\mathcal{G}$.
Proof of Theorem \[theorem:log\_lin\_asympt\] {#appendix:proof_log_lin_asympt}
---------------------------------------------
Log-linear learning induces a Markov process on the action profile space $A_{\text{ASG}} \cup A_{\emptyset}$, where $A_{\emptyset} = \{\mu: \mu_{v}=a_{\emptyset} \text{ for some } v\}$. In the following lemma, we first show that $A_{\text{ASG}}$ is a closed communicating recurrent class.
\[lemma:A\_ASG\_closed\_comm\_class\] $A_{\text{ASG}}$ is a closed communicating recurrent class.
[ We first show that $A_{\text{ASG}}$ is a communicating class, i.e., there is a finite transition sequence from $\mu^{s}$ to $\mu^{f}$ with non-zero probability for all $\mu^{s}, \mu^{f} \in A_{\text{ASG}}$. Consider the sets $R_0,R_1,\cdots$ defined by the recursion $R_{k+1} = \{v:\mu_{v}^{f} \in R_{k}\}$, where $ R_0 = T$, i.e., $R_{k}$ is the set of all nodes that are $k$ hops away from the set of terminal nodes $T$ in the ASG $\mathcal{SG}(\mu^{f})$. Let $\bar{k}$ be the last of the sets that is non-empty. Since $\mu^{f} \in A_{\text{ASG}}$, we have $\bar{k} \leq |\mathcal{V}|$ and $\cup_{m=0}^{\bar{k}}R_{m} = \mathcal{V}$. We transition from $\mu^{s}$ to $\mu^{f}$ by sequentially switching from $\mu_{v} = \mu_{v}^{s}$ to $\mu_v = \mu_{v}^{f}$, for all $v \in R_{k}$, starting at $k=1$ and incrementing $k$ until $k=\bar{k}$, i.e., we first change the action of nodes in $R_1$, and then $R_2$ and so on until $R_{\bar{k}}$. We next show that this transition sequence has a non-zero probability by showing that each component transition has a non-zero probability. At stage $k+1$, consider the transition where we switch the action of a node $v \in R_{k+1}$, and let $\mu$ be the current action. At this stage we have already changed the action of players in $R_1, \cdots, R_k$, and for the current graph $\mathcal{SG}(\mu)$, there is a path leading from $\mu_{v}^{f} \in R_k$ to a terminal node in $R_0$. Moreover, $\mu_{v}^{f}$ is not upstream of $v$ since the intermediate nodes of the path are in $R_{k-1}, \cdots, R_{1}$. Then, $\mu_{v}^{f} \in A_v^{c}(\mu_{-v})$, which implies that the transition $(\mu_v^{s},\mu_{-v}) \rightarrow (\mu_{v}^{f}, \mu_{-v})$ has a non-zero probability. Thus, $A_{\text{ASG}}$ is a communicating class. ]{}
[ We next show that $A_{\text{ASG}}$ is closed. Consider a state $\mu \in A_{\text{ASG}}$, and a node $v \in \mathcal{V'}$. Then, $A_v^{c}(\mu_{-v})$ is not empty, since $\mu_v \in A_{v}^{c}(\mu_{-v})$. This implies that $\mu_v$ can not be set as the null action $a_{\emptyset}$. Thus, $A_{\text{ASG}}$ is closed. Since $A_{\text{ASG}}$ is a closed communicating class, every action profile $\mu \in A_{\text{ASG}}$ is a recurrent state. ]{}
We next show, in the following lemma, that all states in $A_{\emptyset}$ are transient states.
\[lemma:null\_set\_transience\] Any state $\mu \in A_{\emptyset}$ is a transient state.
[ Consider a state $\mu^{s} \in A_{\emptyset}$ and a state $\mu^{f} \in A_{\text{ASG}}$. We can design a transition sequence of non-zero probability from $\mu^{s}$ to $\mu^{f}$ similar to how we did so in the proof of Lemma \[lemma:A\_ASG\_closed\_comm\_class\], as the sequence designed did not depend on $\mu^{s}$. Moreover, from Lemma \[lemma:A\_ASG\_closed\_comm\_class\], we know that $A_{\text{ASG}}$ is a closed class. Thus, there is a finite non-zero probability that the state $\mu^{s} \in A_{\emptyset}$ will never be revisited. ]{}
[ From Lemma \[lemma:A\_ASG\_closed\_comm\_class\] and Lemma \[lemma:null\_set\_transience\], we know that there is exactly one closed communicating recurrent class. Thus, the stationary distribution of the Markov chain induced by log-linear learning is unique. The transition probability from state $\mu$ to $\mu^{'}=(\mu_{v}^{'}, \mu_{-v})$ for $\mu, \mu^{'} \in A_{\text{ASG}}$ is given as $$\begin{aligned}
P_{\mu\mu^{'}} = \frac{1}{|\mathcal{V}^{'}|}\frac{e^{-\frac{1}{\tau}\left(\mathcal{J}_v(\mu_v^{'}, \mu_{-v})\right)}}{\sum_{\mu_{v}^{''} \in A_v^{c}(\mu_{-v})}e^{-\frac{1}{\tau}\left(\mathcal{J}_{v}(\mu_v^{''}, \mu_{-v})\right)}},\end{aligned}$$ where $\tau > 0$ denotes the temperature. We can reformulate this as $$\begin{aligned}
P_{\mu\mu^{'}} = \frac{1}{|\mathcal{V}^{'}|}\frac{e^{-\frac{1}{\tau}\left(\phi(\mu_v^{'}, \mu_{-v})\right)}}{\sum_{\mu_{v}^{''} \in A_v^{c}(\mu_{-v})}e^{-\frac{1}{\tau}\left(\phi(\mu_v^{''}, \mu_{-v})\right)}},\end{aligned}$$ using $\mathcal{J}_v(\mu_v^{'}, \mu_{-v}) - \mathcal{J}_{v}(\mu_v, \mu_{-v}) = \phi(\mu_v^{'}, \mu_{-v}) - \phi(\mu_v, \mu_{-v})$ from Lemma \[lemma:potential\_game\]. Then, we can see that the probability distribution $\Pi \in \Delta (A_{\text{ASG}})$ given by $$\begin{aligned}
\Pi(\mu) = \frac{e^{-\frac{1}{\tau}\phi(\mu)}}{\sum_{\mu^{''} \in A_{\text{ASG}}}e^{-\frac{1}{\tau}\phi(\mu^{''})}},\end{aligned}$$ satisfies the detailed balance equation $\Pi_{\mu}P_{\mu \mu^{'}} = \Pi_{\mu^{'}}P_{\mu^{'} \mu}$. Thus, $\Pi$ is the unique stationary distribution. As temperature $\tau \rightarrow 0$, the weight of the stationary distribution will be on the global minimizers of the potential function [@marden2012revisiting]. In other words, $
\lim_{\tau \rightarrow 0} \sum_{\mu \in \operatorname*{arg\,min}_{\mu^{'} \in A_{\text{ASG}}}\phi(\mu^{'})} \Pi(\mu) = 1.
$ Thus, asymptotically, log-linear learning provides us with the global minimizer of $\phi(\mu)= C_{v_{s}}(\mu) + \epsilon^{'}\sum_{v \neq v_s}C_{v}(\mu)$, an $\epsilon$-suboptimal solution to the Min-Exp-Cost-Simple-Path problem.]{}
Proof of Theorem \[theorem:BR\_finite\_iters\] {#appendix:proof_br_conv}
----------------------------------------------
[ We first show that there exists a $k_l$ such that $\mu(k) \in A_{\text{ASG}}$ for all $k \geq k_l$. Let $A_{\emptyset} = \{\mu: \mu_{v}=a_{\emptyset} \text{ for some } v\}$ denote the set of action profiles with at least one player playing a null action. Consider an action profile $\mu \in A_{\emptyset}$. Then there must exist a node $v \in \{u: \mu_{u} = a_{\emptyset}\}$ which has a neighbor in $T \cup \{u: \mu_{u} \neq a_{\emptyset}\}$, since otherwise $\{u:\mu_{u} = a_{\emptyset}\}$ and $T \cup \{u: \mu_{u} \neq a_{\emptyset}\}$ are not connected, contradicting the assumption that the graph is connected. Then, $A_{v}^{c}(\mu_{-v})$ is non-empty, and when node $v$ is selected in the round robin iteration it will play a non-null action. Moreover, $\mu_{v}\neq a_{\emptyset}$ for all subsequent iterations, since its current action at any iteration $k$ will always belong to $A_{v}^{c}(\mu_{-v}(k))$. We can apply this reasoning repeatedly to show that eventually at some iteration $k_l$ the set $\{u: \mu_{u}(k_l) = a_{\emptyset}\}$ will be empty, i.e., $\mu(k_l) \in A_{\text{ASG}}$. Furthermore, $\mu(k) \in A_{\text{ASG}}$ for all $k \geq k_l$. ]{}
[We next]{} prove that $C_{v}(\mu(k+1)) \leq C_{v}(\mu(k))$ for all $v \in \mathcal{V}'$ and for all $k$. Let $v$ be the node selected at stage $k+1$. [ Clearly, if $\mu_{v}(k) = a_{\emptyset}$ this is true. Else,]{} $$\begin{aligned}
C_{v}(\mu(k+1)) & = \min_{{u \in A_v^{c}(\mu_{-v}(k))}} \left\{ (1-p_v)\left[l_{vu} + C_{u}(\mu(k))\right] \right\} \nonumber\\
& \leq (1-p_v)\left[l_{v\mu_{v}(k)} + C_{\mu_{v}(k)}(\mu(k))\right] \label{eq:1}\\
& = C_{v}(\mu(k))\nonumber, \end{aligned}$$ where (\[eq:1\]) follows since $\mu_{v}(k) \in A_v^{c}(\mu_{-v}(k))$. From (\[eq:rec\_exp\_dist\]), we have that $ C_u(\mu(k+1)) \leq C_u(\mu(k)), \;\; \forall u \in U_{v}(\mu)$, where $U_{v}(\mu)$ is the set of upstream nodes from $v$. Furthermore, $ C_u(\mu(k+1)) = C_u(\mu(k)), \;\; \forall u \notin U_{v}(\mu)$. Thus, $C_{v}(\mu(k+1)) \leq C_{v}(\mu(k))$ for all $v \in \mathcal{V}'$.
Since $\{C_{v}(\mu(k))\}_{k}$ is a monotonically non-increasing sequence, bounded below by $0$, we know that the limit exists. Moreover, since $\mu$ belongs to a finite space, we know that convergence must occur in a finite number of iterations. It should be noted, however, that the limit can be different based on the order [of the nodes in the round robin]{}. Let [$\mu^{*} \in A_{\text{ASG}}$]{} denote the solution at convergence for the particular order of nodes. [We assume that, when selecting $\mu_{v}$, ties are broken using a consistent set of rules, since otherwise we may cycle repeatedly through action profiles having the same expected costs $\{C_{v}(\mu)\}_{v}$.]{}
We next show that we converge to this limit in $|\mathcal{V}'|^2$ iterations. Let $n=|\mathcal{V}'|$. Consider the sets $R_0,R_1,\cdots$ defined by the recursion $R_{k+1} = \{v:\mu_{v}^{*} \in R_{k}\}$, where $ R_0 = T$, i.e., $R_{k}$ is the set of all nodes that are $k$ hops away from the set of terminal nodes $T$ in the ASG $\mathcal{SG}(\mu^{*})$. Let $\bar{k}$ be the last of the sets that is non-empty. Since [$\mu^{*} \in A_{\text{ASG}}$]{}, we have $\bar{k} \leq n$ and $\cup_{m=0}^{\bar{k}}R_{m} = \mathcal{V}$. We next show by induction that $ \mu_v(nk) = \mu_v^{*}, \;\; \forall v \in \cup_{m=0}^{k}R_m$, for $k=0,1,\cdots,\bar{k}$. This is true for $k=0$. Assume that it holds true at stage $k$, i.e., $\mu_{v}(nk) = \mu_{v}^{*}$ for all $v \in \cup_{m=0}^{k}R_{m}$. Since $\{C_{v}(\mu(k))\}_{k}$ is monotonically non-increasing, we have $C_{v}(\mu^{*}) \leq C_{v}(\mu(k+1))$. Moreover, since any node $v \in \cup_{m=0}^{k+1}R_m$ would be selected once in [round]{} $k+1$ of the [round robin]{} process, we have $$\begin{aligned}
C_{v}(\mu(n(k+1))) &= \min_{{u \in A_{v}^{c}(\mu_{-v}(n(k+1)-1))}} \Big\{ (1-p_v) \times \nonumber \\
& \;\;\;\;\;\;\;\;\;\;\;\;\; \big[l_{vu} + C_{u}(\mu(n(k+1)-1))\big] \Big\} \nonumber\\
& \leq (1-p_v)\left[l_{v\mu_{v}^{*}} + C_{\mu_{v}^{*}}(\mu^{*})\right] \label{eq:2} \\
& = C_{v}(\mu^{*}). \nonumber\end{aligned}$$ where (\[eq:2\]) follows based on the induction hypothesis, since $\mu_{v}^{*}$ leads to a direct path to a terminal node, and is not an upstream node of $v$. Thus, $\mu_{v}(n(k+1)) = \mu_{v}^{*}$ for all $v \in \cup_{m=0}^{k+1}R_m$. This implies that the best reply process, when we cycle through the nodes [in a round robin]{}, converges within at most $n^2$ iterations.
Relation to [the Discounted-Reward Traveling Salesman Problem]{} {#appendix:pc_tsp_relation}
----------------------------------------------------------------
In this section, we show [the relationship between the Min-Exp-Cost-Path-NT problem of Section \[subsec:min\_exp\_cost\_tour\] and the Discounted-Reward-TSP, a path planning problem studied in the theoretical computer science community [@blum2007approximation]. Note that this section is merely pointing out the relationship between the objectives/constraints of the two problems, and is not claiming that one is reducible to the other.]{} In [Discounted-Reward-TSP]{}, each node $v$ has a prize $\pi_{v}$ associated with it and each edge $(u, v)$ has a cost $l_{uv}$ associated with it. The goal is to find a path $\mathcal{P}$ that visits all nodes and that maximizes the discounted reward collected $\sum_{v} \gamma^{l^{\mathcal{P}}_{v}} \pi_{v}$, where $\gamma<1$ is the discount factor, and $l^{\mathcal{P}}_{v} = \sum_{e \in \mathcal{E}(\mathcal{P}_{v})}l_e$ is the cost incurred along path $\mathcal{P}$ until node $v$.
[In the setting of our Min-Exp-Cost-Path-NT problem]{}, the prize of a node $v$ [is taken as]{} $\pi_{v} = \log_{\gamma}(1-p_{v})$ for a value of $\gamma < 1$. Our Min-Exp-Cost-Path-NT objective can [then]{} be reformulated as $\sum_{e \in \mathcal{E}(\mathcal{P})} \gamma ^{\pi^{\mathcal{P}}_{e}} l_{e}$, where $\pi^{\mathcal{P}}_{e} = \sum_{v \in \mathcal{V}(\mathcal{P}_{e})} \pi_{v}$ is the reward collected along path $\mathcal{P}$ until edge $e$ is encountered. We can refer to this problem as the Discounted-Cost-TSP problem, drawing a parallel to the Discounted-Reward-TSP problem described above. [However, note that our problem is not the same as the Discounted-Reward-TSP problem. Rather, we simply illustrated a relationship between the two problems, which can lead to further future explorations in this area.]{}
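The reformulation follows directly from this choice of prizes; a one-line verification: $$\gamma^{\pi^{\mathcal{P}}_{e}}
= \gamma^{\sum_{v \in \mathcal{V}(\mathcal{P}_{e})} \log_{\gamma}(1-p_{v})}
= \prod_{v \in \mathcal{V}(\mathcal{P}_{e})} (1-p_{v}),
\quad \text{so} \quad
\sum_{e \in \mathcal{E}(\mathcal{P})} \gamma^{\pi^{\mathcal{P}}_{e}}\, l_{e}
= \sum_{e \in \mathcal{E}(\mathcal{P})} \prod_{v \in \mathcal{V}(\mathcal{P}_{e})} (1-p_{v})\, l_{e},$$ which is exactly the Min-Exp-Cost-Path-NT objective.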
Formulation as Stochastic Shortest Path Problem with Recourse {#appendix:ssp_recourse}
-------------------------------------------------------------
In this section, we show that we can formulate the Min-Exp-Cost-Path problem as a special case of the stochastic shortest path problem with recourse [@polychronopoulos1993stochastic]. The terminology of stochastic shortest path here is different from its usage in Section \[subsec:mdp\_formulation\]. The stochastic shortest path problem with recourse consists of a graph where the edge weights are random variables taking values from a finite range. As the graph is traversed, the realization of the cost of an edge is learned when one of its end nodes is visited. The goal is to find a policy that minimizes the expected cost from a source node $v_s$ to a destination node $v_{t}$. The best policy would determine where to go next based on the currently available information.
Consider the Min-Exp-Cost-Path problem on a graph $\mathcal{G}=(\mathcal{V},\mathcal{E})$, with probability of success $p_{v}\in [0,1]$, for all $v \in \mathcal{V}$. We can formulate this as a special case of the stochastic shortest path problem with recourse by adding a node $v_t$ which acts as the destination node. Each node in $\mathcal{G}$ is connected to $v_t$ with an edge of random weight. The edge from $v$ to $v_t$ has weight $
l_{vv_{t}} = \left\{\begin{array}{ll} 0, & \text{w.p. } p_{v}\\
\infty, & \text{w.p. } 1-p_{v} \end{array}\right. .
$ The remaining edges in $\mathcal{E}$ have deterministic weights. The solution to the shortest path problem from $v_s$ to $v_t$ with recourse would provide a policy that would give us the solution to the Min-Exp-Cost-Path problem. The policy in this special case would produce a path from $v_{s}$ to a node in the set of terminal nodes $T$. However, the general stochastic shortest path problem with recourse is a much harder problem to solve than the Min-Exp-Cost-Path problem, and the heuristics utilized for stochastic shortest path with recourse are not particularly suited to our specific problem. For instance, in the open loop feedback certainty equivalent heuristic [@gao2006optimal], at each iteration, the uncertain edge costs are replaced with their expectation and the next node is chosen according to the deterministic shortest path to the destination. In our setting this would correspond to the heuristic of moving along the deterministic shortest path to the closest terminal node. Such a heuristic would ignore the probability of success $p_{v}$ of the nodes.
[^1]: This work is supported in part by NSF RI award 1619376 and NSF CCSS award 1611254.
[^2]: The authors are with the Department of Electrical and Computer Engineering, University of California Santa Barbara, Santa Barbara, CA 93106, USA email: $\{$arjunm, ymostofi$\}$@ece.ucsb.edu.
[^3]: Note that depending on how we impose a simple path, we may need to keep track of the visited nodes. However, as we shall see, this keeping track of the history will not result in an exponential memory requirement, as was the case for the original MDP formulation. We further note that it is also possible to impose simple paths without a need to keep track of the history of the visited nodes, as we shall see in Section \[subsec:idag\].
[^4]: [This differs from the usual definition of a potential game in that the joint action profiles are restricted to lie in $A_{\text{ASG}}$.]{}
|
---
abstract: |
In this paper, we reduce the logspace shortest path problem to biconnected graphs; in particular, we present a logspace shortest path algorithm for general graphs which uses a logspace shortest path oracle for biconnected graphs. We also present a linear time logspace shortest path algorithm for graphs with bounded vertex degree and biconnected component size, which does not rely on an oracle. The asymptotic time-space product of this algorithm is the best possible among all shortest path algorithms.
[**Keywords:**]{} logspace algorithm, shortest path, biconnected graph, bounded degree
author:
- Boris Brimkov
title: A reduction of the logspace shortest path problem to biconnected graphs
---
Introduction
============
The logspace computational model entails algorithms which use a read-only input array and $O(\log n)$ working memory. For general graphs, there is no known deterministic logspace algorithm for the shortest path problem. In fact, the shortest path problem is NL-complete, so the existence of a logspace algorithm would imply that L=NL [@jakoby1]. In this paper, we reduce the logspace shortest path problem to biconnected graphs, and present a linear time logspace shortest path algorithm for parameter-constrained graphs.
An important result under the logspace computational model which is used in the sequel is Reingold’s deterministic polynomial time algorithm [@reingold] for the undirected $st$-connectivity problem (USTCON) of determining whether two vertices in an undirected graph belong to the same connected component. There are a number of randomized logspace algorithms for USTCON (see, for example, [@barnes_feige; @feige; @kosowski]) which perform faster than Reingold’s algorithm but whose output may be incorrect with a certain probability. There are also a number of logspace algorithms for the shortest path problem and other graph problems on special types of graphs (see [@asano6; @brimkov2; @ktrees; @jakoby1; @munro_ramirez]). As a rule, due to the time-space trade-off, improved space efficiency is achieved at the cost of higher time complexity. Often the trade-off is rather large, yielding time complexities of $O(n^c)$ “for some constant $c$ significantly larger than 1” [@jakoby2]. In particular, the time complexity of Reingold’s USTCON algorithm remains largely uncharted but is possibly of very high order. The linear time logspace shortest path algorithm presented in this paper avoids this shortcoming, at the expense of some loss of generality. In fact, its time (and space) complexity is the best possible, since a hypothetical sublinear-time algorithm would fail to print a shortest path of length $\Omega(n)$.
This paper is organized as follows. In the next section, we recall some basic definitions and introduce a few concepts which will be used in the sequel. In Section 3, we present a reduction of the logspace shortest path algorithm to biconnected graphs. In Section 4, we present a linear time logspace algorithm for parameter-constrained graphs. We conclude with some final remarks in Section 5.
Preliminaries
=============
A *logspace algorithm* is an algorithm which uses $O(\log n)$ working memory, where $n$ is the size of the input. In addition, the input and output are respectively read-only and write-only, and do not count toward the space used. The *shortest path problem* requires finding a path between two given vertices $s$ and $t$ in a graph $G$, such that the sum of the weights of the edges constituting the path is as small as possible. In general, if $s$ and $t$ are not in the same connected component, or if the connected component containing $s$ and $t$ also contains a negative-weight cycle, the shortest path does not exist. For simplicity, we will assume there are no negative-weight cycles in $G$, although the proposed algorithms can be easily modified to detect (and terminate at) such cycles without any increase in overall complexity. We will also assume that $G$ is encoded by its adjacency list, where vertices are labeled with the first $n$ natural numbers. The $j^{\text{th}}$ neighbor of vertex $i$ is accessed with $Adj(i,j)$ in $O(1)$ time, and *degree*$(i)=|Adj(i)|$.
An *articulation point* in $G$ is a vertex whose deletion increases the number of connected components of $G$. A *block* is a maximal subgraph of $G$ which has no articulation points; if $G$ has a single block, then $G$ is *biconnected*. The *block tree* $T$ of $G$ is the bipartite graph with parts $A$ and $B$, where $A$ is the set of articulation points of $G$ and $B$ is the set of blocks of $G$; $a\in A$ is adjacent to $b \in B$ if and only if $b$ contains $a$. We define the *id* of a block in $G$ to be $(\emph{largest, smallest})$, where *largest* and *smallest* are the largest and smallest vertices in the block with respect to their labeling from 1 to $n$. Clearly, each block in $G$ has a unique *id*. Note also that it is possible to lexicographically compare the $id$s of two or more blocks, i.e., if $id_1=(\ell_1,s_1)$ and $id_2=(\ell_2,s_2)$, then $id_1>id_2$ if $\ell_1> \ell_2$ or if $\ell_1= \ell_2$ and $s_1>s_2$.
Given numbers $a_1$, $a_2$, and $p$, we define the *next* number after $p$ as follows:
$$next(a_1,a_2,p)=\begin{cases}
a_1&\text{ if }a_2\leq p<a_1\text{ or }p<a_1\leq a_2\text{ or }a_1 \leq a_2 \leq p\\
a_2&\text{ otherwise.}
\end{cases}$$
We extend this definition to a list $L$ of not necessarily distinct numbers by defining the *next* number in $L$ after $p$ to be a number in $L$ larger than $p$ by the smallest amount, or if no such number exists, to be the smallest number in $L$. The *next* number in $L$ can be found with logspace and $O(|L|)$ time, given sequential access to the elements of $L$, by repeatedly applying the *next* function.
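A minimal sketch of the *next* function and of its extension to a list, assuming sequential (read-only) access to $L$; only two working values are kept, in line with the logspace requirement.

```python
def next_after(a1, a2, p):
    """next(a1, a2, p) as defined above: prefer the smaller number exceeding p;
    if neither a1 nor a2 exceeds p, prefer the smaller of the two (ties go to a1)."""
    if (a2 <= p < a1) or (p < a1 <= a2) or (a1 <= a2 <= p):
        return a1
    return a2

def next_in_list(L, p):
    """Smallest element of L larger than p, or the smallest element of L if none
    exceeds p; computed in one pass by repeatedly applying next_after."""
    cur = L[0]
    for a in L[1:]:
        cur = next_after(cur, a, p)
    return cur
```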
Reducing the logspace shortest path problem to biconnected graphs
=================================================================
Let *connected*$(H;v_1,v_2)$ be an implementation of Reingold’s USTCON algorithm which takes in two vertices of a graph $H$ and returns *true* if they belong to the same connected component, and *false* otherwise. Let *pathInBlock*$(H;v_1,v_2)$ be a polynomial time, logspace oracle which takes in two vertices of a biconnected graph $H$ and prints the shortest path between them.
Clearly, the encoding of a graph $H$ can be reduced with logspace and polynomial time to the encoding of some induced subgraph $H[S]$. Thus, by transitivity and closure of reductions, the functions *connected*$(H[S];v_1,v_2)$ and *pathInBlock*$(H[B];v_1,v_2)$ can be used with logspace and polynomial time, where $S$ and $B$ are sets of vertices computed at runtime and $H[B]$ is biconnected.
The *connected* function reduces the logspace shortest path problem to connected graphs. In this section, we will further reduce this problem to biconnected graphs, by presenting a logspace algorithm for finding the shortest path between two vertices in an arbitrary graph using the oracle *pathInBlock*.
Constructing a logspace traversal function
------------------------------------------
Let $G$ be a graph of order $n$, and $v_1$ and $v_2$ be two vertices that belong to the same block; the set of all vertices in this block will be referred to as *block*$(v_1,v_2)$. Using the *connected* function, it is easy to construct a logspace function *isInBlock*$(v_1,v_2,v)$ which returns *true* when $v$ is part of *block*$(v_1,v_2)$ and *false* otherwise; see Table 1 for pseudocode. This procedure can be used to access the vertices in $block(v_1,v_2)$ sequentially. A similar procedure *areInBlock*$(u,v)$ can be defined which returns *true* when $u$ and $v$ are in the same block, and *false* otherwise.
A vertex of $G$ is an articulation point if and only if two of its neighbors are not in the same block. Thus, using the *isInBlock* function, we can construct a function *isArticulation*$(v)$ which returns *true* when $v$ is an articulation point and *false* otherwise; see Table 1 for pseudocode. We also define the function $id(v_1,v_2)$, which goes through the vertices of *block*$(v_1,v_2)$ and returns (*largest, smallest*), where *largest* and *smallest* are respectively the largest and smallest vertices in *block*$(v_1,v_2)$ according to their labeling.
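Since the pseudocode table is not reproduced here, the following is a rough functional sketch of *areInBlock*, *isInBlock* and *isArticulation*. The BFS-based `connected` is only a stand-in for Reingold's logspace USTCON routine, the adjacency-dictionary representation is an assumption of the sketch, and no attempt is made to model the logspace bookkeeping; *isInBlock* additionally assumes, as above, that $v_1$ and $v_2$ share a block.

```python
from collections import deque

def connected(G, u, w):
    """USTCON check; a plain BFS stand-in for Reingold's logspace algorithm."""
    seen, q = {u}, deque([u])
    while q:
        x = q.popleft()
        if x == w:
            return True
        for y in G[x]:
            if y not in seen:
                seen.add(y)
                q.append(y)
    return False

def without(G, v):
    """Adjacency dictionary of G with vertex v deleted."""
    return {x: [y for y in nbrs if y != v] for x, nbrs in G.items() if x != v}

def are_in_block(G, u, w):
    """u and w share a block iff they are connected and no third vertex separates them."""
    if u == w:
        return True
    if not connected(G, u, w):
        return False
    return all(connected(without(G, x), u, w) for x in G if x not in (u, w))

def is_in_block(G, v1, v2, v):
    """v lies in block(v1, v2), assuming v1 and v2 already share a block."""
    return are_in_block(G, v, v1) and are_in_block(G, v, v2)

def is_articulation(G, v):
    """v is an articulation point iff two of its neighbors become disconnected
    (i.e., lie in different blocks) once v is removed."""
    H = without(G, v)
    nbrs = list(G[v])
    return any(not connected(H, nbrs[i], nbrs[j])
               for i in range(len(nbrs)) for j in range(i + 1, len(nbrs)))
```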
Let $p$ be an articulation point[^1] in *block*$(v_1,v_2)$. To find the *next* articulation point in *block*$(v_1,v_2)$ after $p$, we can create a function *nextArticulation*$(v_1,v_2,p)$ which uses each articulation point in *block*$(v_1,v_2)$ as a member of list $L$ and applies the *next* function. Note that the vertices in $L$ do not have to be stored, but can be generated one at a time; see Table 1 for pseudocode. Similarly, to identify the block containing $p$ and having the *next* $id$ after $id(v_1,v_2)$, we can create a function *nextBlock*$(v_1,v_2,p)$ which uses the *id*s of the blocks identified by $p$ and each of its neighbors as members of a list $L$ and applies the *next* function. Note that the *id*s in $L$ do not have to be stored but can be computed one at a time; see Table 1 for pseudocode.
Finally, given articulation point $p$ and vertex $v$ in the same block, we will call the component of $G-\{block(v,p)\backslash\{p\}\}$ which contains $p$ the *subgraph of G rooted at block$(v,p)$ containing p*, or *subgraph*$(v,p)$. This subgraph can be traversed with logspace by starting from $p$ and repeatedly moving to the *next* block and to the *next* articulation point until the starting block is reached again. This procedure indeed gives a traversal, since it corresponds to visiting the *next* neighbor in the block tree $T$ of $G$, which generates an Euler subtour traversal (cf. [@tarjan_vishkin]). In addition, during the traversal of *subgraph*$(v,p)$, each vertex can be compared to a given vertex $t$, in order to determine whether the subgraph contains $t$. Thus, we can create a function *isInSubgraph*$(v,p,t)$ which returns *true* if $t$ is in *subgraph*$(v,p)$ and *false* otherwise; see Table 1 for pseudocode.
Main Algorithm
--------------
Using the subroutines outlined in the previous section and the oracle *pathInBlock*, we propose the following logspace algorithm for finding the shortest path in a graph $G$. The main idea is to print the shortest path one block at a time by locating $t$ in one of the subgraphs rooted at the current block.
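Since the listing of Algorithm 1 is not reproduced here, the following high-level sketch conveys the structure of its main loop, using the helpers sketched earlier; `articulation_points_in_blocks_of`, `is_in_subgraph` and `path_in_block` are placeholder names for the corresponding subroutines and oracle, and the logspace realization of their calls is omitted.

```python
def shortest_path_sketch(G, s, t):
    """Print the shortest path from s to t one block at a time (sketch only)."""
    if not connected(G, s, t):
        return                                       # no s-t path exists
    p = s
    while not are_in_block(G, p, t):                 # t is not yet in the current block
        # locate the unique articulation point q whose rooted subgraph contains t
        q = next(a for a in articulation_points_in_blocks_of(G, p)
                 if a != p and is_in_subgraph(G, p, a, t))
        path_in_block(G, p, q)                       # oracle prints the subpath p ... q
        p = q
    path_in_block(G, p, t)                           # final segment inside block(p, t)
```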
Algorithm 1 finds the correct shortest path between vertices $s$ and $t$ in graph $G$ with logspace and polynomial time, using a shortest path oracle for biconnected graphs.
Let $p_0=s$ and $p_{\ell+1}=t$; the shortest path between $p_0$ and $p_{\ell+1}$ is $P=p_0 P_0 p_1 P_1\ldots p_{\ell}P_{\ell} p_{\ell+1}$, where $p_1,\ldots,p_{\ell}$ are articulation points and $P_0,\ldots,P_{\ell}$ are (possibly empty) subpaths which contain no articulation points. Let $b_i=block(p_i,p_{i+1})$ for $0\leq i \leq \ell$, so that *pathInBlock*$(G[b_i];p_i,p_{i+1})=p_iP_ip_{i+1}$.
Suppose the subpath $p_0 P_0 \ldots p_i$, $i\geq 0$, has already been printed and that the vertex $p_i$ is stored in memory. In each iteration of the main loop, the function *isInSubgraph*$(p_i,p,t)$ returns *true* only for $p=p_{i+1}$ when run for all articulation points $p$ in all blocks containing $p_i$. The function *pathInBlock*$(G[b_i],p_i,p_{i+1})$ is then used to print $P_i$ and $p_{i+1}$. Finally, $p_i$ is replaced in memory by $p_{i+1}$, and this procedure is repeated until $p_{\ell+1}$ is reached. Since the main loop is entered only if the shortest path is of finite length, the algorithm terminates, and since each subpath printed is between two consecutive articulation points of $P$, the output of Algorithm 1 is the correct shortest path between $s$ and $t$.
Since the *connected* function is logspace, the *isInBlock*, *isArticulation* and *isInSubgraph* functions are each logspace. Only a constant number of variables, each of size $O(\log n)$, are simultaneously stored in Algorithm 1, and every function call is to a logspace function (assuming the *pathInBlock* oracle is logspace); thus, the space complexity of Algorithm 1 is $O(\log n)$. Note that since the vertices in *block*$(v_1,v_2)$ cannot be stored in memory simultaneously, a call to the function *pathInBlock*$(G[block(v_1,v_2)],v_1,v_2)$ needs to be realized by a logspace reduction, i.e., the vertices $v_1$ and $v_2$ are stored, and whenever the function *pathInBlock* needs to access an entry of the adjacency list of $G[V(block(v_1,v_2))]$, it recomputes it by going through the vertices of $G$ and using the function *isInBlock*.
Similarly, since the *connected* function uses polynomial time, the *isInBlock*, *isArticulation* and *isInSubgraph* functions each use polynomial time. The main loop is executed at most $O(n)$ times, and each iteration calls a constant number of polynomial time functions (assuming the *pathInBlock* oracle uses polynomial time); thus, the time complexity of Algorithm 1 is $O(n^c)$ for some constant $c$. $\square$
Linear time logspace algorithm for parametrically constrained graphs
====================================================================
Let *BellmanFord*$(H;v_1,v_2)$ be an implementation of the Bellman-Ford shortest path algorithm [@bellman_ford] which takes in two vertices of a graph $H$ and prints out the shortest path between them. Let *HopcroftTarjan*$(H)$ be an implementation of Hopcroft and Tarjan’s algorithm [@hopcroft_tarjan] which returns all blocks and articulation points of a graph $H$. If the size of $H$ is bounded by a constant, *BellmanFord* and *HopcroftTarjan* can each be used with constant time and a constant number of memory cells.
Let $G$ be a graph of order $n$ with maximum vertex degree $\Delta$ and maximum biconnected component size $k$. We will regard $\Delta$ and $k$ as fixed constants, independent of $n$. Using these constraints and some additional computational techniques, we will reformulate Algorithm 1 as a linear-time logspace shortest path algorithm which does not rely on an oracle. Asymptotically, both the time and space requirements of this algorithm are the best possible and cannot be improved; see Corollary 1 for more information.
Constructing a linear time logspace traversal function
------------------------------------------------------
By the assumption on the structure of $G$, the number of vertices at distance at most $k$ from a specified vertex $v$ is bounded by $\lfloor \frac{\Delta^{k+1}-1}{\Delta-1}\rfloor$. Thus, any operations on a subgraph induced by such a set of vertices can be performed with constant time and a constant number of memory cells, each with size $O(\log n)$; note that since each vertex of $G$ has a bounded number of neighbors, $G[S]$ can be found in constant time for any set $S$ of bounded size. In particular, we can construct a function *blocksContaining*$(v)$ which uses *HopcroftTarjan* to return all blocks containing a given vertex $v$ and all articulation points in these blocks; see below for pseudocode.
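In place of the omitted pseudocode, a rough sketch of *blocksContaining* under the bounded degree and block size assumption; `hopcroft_tarjan` stands for the implementation mentioned above and is assumed to return the blocks and articulation points of its (constant-size) input graph.

```python
def ball(G, v, k):
    """Vertices within graph distance k of v (BFS to depth k); of constant size
    when the degree and k are bounded."""
    dist, frontier = {v: 0}, [v]
    for d in range(k):
        nxt = []
        for x in frontier:
            for y in G[x]:
                if y not in dist:
                    dist[y] = d + 1
                    nxt.append(y)
        frontier = nxt
    return set(dist)

def induced(G, S):
    """Adjacency dictionary of the subgraph of G induced by the vertex set S."""
    return {x: [y for y in G[x] if y in S] for x in S}

def blocks_containing(G, v, k):
    """Blocks of G containing v, together with the articulation points of G that
    lie in them.  Every block containing a vertex u has at most k vertices and
    hence lies inside ball(G, u, k), so constant-size local computations suffice."""
    blocks, _ = hopcroft_tarjan(induced(G, ball(G, v, k)))
    blocks_v = [set(B) for B in blocks if v in B]

    def is_cut(a):
        # a is an articulation point of G iff it lies in at least two blocks of G,
        # all of which are visible inside ball(G, a, k)
        local_blocks, _ = hopcroft_tarjan(induced(G, ball(G, a, k)))
        return sum(1 for B in local_blocks if a in B) >= 2

    arts = {a for B in blocks_v for a in B if is_cut(a)}
    return blocks_v, arts
```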
Using the set of blocks and articulation points given by the *blocksContaining* function, we can define functions *isInBlock*$(v_1,v_2,v)$, *areInBlock*$(u,v)$, *isArticulation*$(v)$, *id*$(v_1,v_2)$, *nextArticulation*$(v_1,v_2,p)$, and *nextBlock*$(v_1,v_2,p)$ analogous to the ones described in Section 3, each of which uses $O(\log n)$ space and $O(1)$ time. We can also construct an analogue of *isInSubgraph*$(v,p,t)$, which uses time proportional to the size of *subgraph*$(v,p)$; in particular, the time for traversing the entire graph $G$ via an Euler tour of its block tree is $O(n)$ (provided $G$ is connected) since there are $O(n)$ calls to the *nextArticulation* function and $O(n)$ calls to the *nextBlock* function.
Finally, it will be convenient to define the following functions: *adjacentPoints*$(v_1,v_2)$ which returns the set of articulation points belonging to blocks containing $v_2$ but not $v_1$ if $v_1\neq v_2$ and the set of articulation points belonging to blocks containing $v_2$ if $v_1=v_2$ (this function is a slight modification of *blocksContaining*); *last*$(L)$ which returns the last element of a list $L$; *traverseComponent*$(s,t)$ which traverses the component containing a vertex $s$ and returns *true* if $t$ is in the same component and *false* otherwise (this function is identical to *isInSubgraph*, with a slight modification in the stopping condition).
Linear time logspace shortest path algorithm
--------------------------------------------
We now present a modified version of Algorithm 1, which uses the subroutines outlined in the previous section as well as some additional computational techniques such as “simulated parallelization" (introduced by Asano et al. [@asano6]) aimed at reducing its runtime.
Algorithm 2 finds the correct shortest path between vertices $s$ and $t$ in graph $G$ with bounded degree and biconnected component size using logspace and linear time.
Using the notation in the proof of Theorem 1, suppose the subpath $p_0P_0\ldots p_i$ has already been printed; $p_{\ell+1}$ cannot be in *subgraph*$(p_i,p_{i-1})$, so there is no need to run *isInSubgraph*$(p_i,p_{i-1},t)$. Thus, *adjacentPoints*$(p_{i-1},p_i)$ is the set of feasible articulation points. Moreover, if $p_{\ell+1}$ is not in *subgraph*$(p_i,p)$ for all but one of the feasible articulation points, then the remaining one must be $p_{i+1}$ and there is no need to run *isInSubgraph*$(p_i,p_{i+1},t)$. Finally, two subgraphs rooted at $block(p_{i-1},p_i)$ can be traversed concurrently with the technique of simulated parallelization: instead of traversing the feasible subgraphs one after another, we maintain two copies of the *isInSubgraph* function and use them to simultaneously traverse two subgraphs. We do this in serial (without the use of a parallel processor) by iteratively advancing each copy of the function in turn; if one subgraph is traversed, the corresponding copy of the function terminates and another copy is initiated to traverse the next unexplored subgraph. Thus, Algorithm 2 is structurally identical to Algorithm 1[^2] and prints the correct shortest path between $s$ and $t$.
Only a constant number of variables, each of size $O(\log n)$, are simultaneously used in Algorithm 2, and every function call is to a logspace function; moreover, keeping track of the internal states of two logspace functions can be done with logspace, so the space complexity of Algorithm 2 is $O(\log n)$.
Finally, to verify the time complexity, note that by traversing two subgraphs at once, we can deduce which subgraph contains $t$ in the time it takes to traverse all subgraphs which do *not* contain $t$ or $s$. Thus, each subgraph rooted at *block*$(p_i,p_{i+1})$, $0\leq i\leq \ell$, which does not contain $t$ or $s$ will be traversed at most once, so the time needed to print the shortest path is of the same order as the time needed to traverse $G$ once. $\square$
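The simulated parallelization used in the proof amounts to simple bookkeeping over two concurrently advanced traversals; a small sketch follows (the iterators stand in for copies of *isInSubgraph*, and the logspace encoding of their internal state is not modeled):

```python
def subgraph_containing(t, traversals):
    """Advance two candidate subgraph traversals at a time, one vertex per turn.
    `traversals` is a list of (label, iterator) pairs, one per feasible articulation
    point; each iterator yields the vertices of the corresponding rooted subgraph.
    Exactly one candidate subgraph is assumed to contain t."""
    remaining = list(traversals)
    while len(remaining) > 1:
        for label, it in list(remaining[:2]):       # the two currently active copies
            v = next(it, None)
            if v == t:
                return label
            if v is None:                           # exhausted without seeing t
                remaining.remove((label, it))       # replaced by the next candidate
    return remaining[0][0]                          # the last candidate must contain t
```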
The time and space complexity of Algorithm 2 is the best possible for the class of graphs considered.
Let $G$ be a graph of order $n$; the shortest path between two vertices in $G$ may be of length $\Omega(n)$, so any shortest path algorithm will require at least $\Omega(n)$ time to print the path. Moreover, a pointer to an entry in the adjacency list of $G$ has size $\Omega(\log n)$, so printing each edge of the shortest path requires at least $\Omega(\log n)$ space. $\square$
Conclusion
==========
We have reduced the logspace shortest path problem to biconnected graphs using techniques such as computing instead of storing, transitivity of logspace reductions, and Reingold’s USTCON result. We have also proposed a linear time logspace shortest path algorithm for graphs with bounded degree and biconnected component size, using techniques such as simulated parallelization and constant-time and -space calls to functions over graphs with bounded size.
Future work will be aimed at further reducing the logspace shortest path problem to triconnected graphs using SPQR-tree decomposition, and to $k$-connected graphs using branch decomposition or the decomposition of Holberg [@decomposition]. Another direction for future work will be to generalize Algorithm 2 by removing or relaxing the restrictions on vertex degree and biconnected component size.
Acknowledgements {#acknowledgements .unnumbered}
================
This material is based upon work supported by the National Science Foundation under Grant No. 1450681.
[99]{}
Asano, T., Mulzer, W., Wang, Y., Constant work-space algorithms for shortest paths in trees and simple polygons. J. of Graph Algorithms and Applications 15(5), 569–586 (2011)
Barnes, G., Feige, U., Short random walks on graphs. Proc. 25th Annual ACM Symposium of the Theory of Computing (STOC), 728–737 (1993)
Bellman, R., On a routing problem. Quarterly of Applied Mathematics 16 87–90 (1958)
Brimkov, B., Hicks, I.V., Memory efficient algorithms for cactus graphs and block graphs. Discrete Applied Math. In Press, doi:10.1016/j.dam.2015.10.032 (2015)
Das, B., Datta, S., Nimbhorkar, P., Log-space algorithms for paths and matchings in k-trees. Theory of Computing Systems 53 (4) 669–689 (2013)
Feige, U., A randomized time-space trade-off of $\tilde{O}(mR)$ for USTCON. Proc. 34th Annual Symposium on Foundations of Computer Science (FOCS), 238–246 (1993)
Holberg, W., The decomposition of graphs into $k$-connected components. Discrete Mathematics 109(1), 133–145 (1992)
Hopcroft, J. and Tarjan, R., Algorithm 447: efficient algorithms for graph manipulation. Communications of the ACM 16(6), 372–378 (1973)
Jakoby, A., Tantau, T., Logspace algorithms for computing shortest and longest paths in series-parallel graphs. FSTTCS 2007: Foundations of Software Technology and Theoretical Computer Science. Springer Berlin Heidelberg, 216–227 (2007)
Jakoby, A., Liskiewicz, M., Reischuk, R., Space efficient algorithms for series-parallel graphs. J. of Algorithms 60, 85–114 (2006)
Kosowski, A., Faster walks in graphs: a $O(n^2)$ time-space trade-off for undirected st connectivity. Proceedings of the Twenty-Fourth Annual ACM-SIAM Symposium on Discrete Algorithms. SIAM (2013)
Munro, I.J., Ramirez, R. J., Technical note–reducing space requirements for shortest path problems. Operations Research 30.5, 1009-1013 (1982)
Reingold, O., Undirected connectivity in log-space. J. ACM 55(4), Art. 17 (2008)
Tarjan, R.E. and Vishkin, U., Finding biconnected components and computing tree functions in logarithmic parallel time. Proceedings of FOCS, 12–20 (1984)
[^1]: The subsequent definitions and functions remain valid when $p$ is not an articulation point, and can be used in special cases, e.g., when $G$ only has one block.
[^2]: Indeed each of the described modifications can be implemented in Algorithm 1 as well, but would not make a significant difference in its time complexity.
|
---
abstract: |
We obtain necessary and sufficient existence conditions for solutions of the boundary value problem $$\Delta_p u = f
\quad
\mbox{on } M,
\quad
\left.
\left|
\nabla u
\right|^{p - 2}
\frac{\partial u}{\partial \nu}
\right|_{
\partial M
}
=
h,$$ where $p > 1$ is a real number, $M$ is a connected oriented complete Riemannian manifold with boundary, and $\nu$ is the external normal vector to $\partial M$.
address:
- 'Department of Differential Equations, Faculty of Mechanics and Mathematics, Moscow Lomonosov State University, Vorobyovy Gory, Moscow, 119992 Russia'
- 'Department of Differential Equations, Faculty of Mechanics and Mathematics, Moscow Lomonosov State University, Vorobyovy Gory, Moscow, 119992 Russia'
author:
- 'V. V. Brovkin'
- 'A. A. Kon’kov'
title: ' On the existence of solutions of the second boundary value problem for $p$-Laplacian on Riemannian manifolds '
---
Introduction {#sec1}
============
Let $M$ be a connected oriented complete Riemannian manifold with boundary. We consider the problem $$\Delta_p u = f
\quad
\mbox{on } M,
\quad
\left.
\left|
\nabla u
\right|^{p - 2}
\frac{\partial u}{\partial \nu}
\right|_{
\partial M
}
=
h,
\label{1.1}$$ where $
\Delta_p u
=
\nabla_i
(
g^{ij}
|\nabla u|^{p - 2}
\nabla_j u
),
$ $p > 1$, is the $p$-Laplace operator, $\nu$ is the external normal vector to $\partial M$, and $f$ and $h$ are distributions from ${\mathcal D}' (M)$ with $\operatorname{supp} h \subset \partial M$.
As customary, by $g_{ij}$ we denote the metric tensor consistent with the Riemannian connection and by $g^{ij}$ we denote the dual metric tensor, i.e. $g_{ij} g^{jk} = \delta_i^k$. In so doing, $|\nabla u| = (g^{ij} \nabla_i u \nabla_j u)^{1/2}$. Following [@LU], by $W_{p, loc}^1 (\omega)$, where $\omega$ is an open subset of $M$, we mean the space of measurable functions belonging to $W_p^1 (\omega' \cap \omega)$ for any open set $\omega' \subset M$ with compact closure. The space $L_{p, loc} (\omega)$ is defined analogously.
A function $u \in W_{p, loc}^1 (M)$ is called a solution of problem \[1.1\] if $$-
\int_M
g^{ij}
|\nabla u|^{p - 2}
\nabla_j u
\nabla_i \varphi
\,
dV
=
(f - h, \varphi)$$ for all $\varphi \in C_0^\infty (M)$, where $dV$ is the volume element of the manifold $M$.
As a condition at infinity, we require that solutions of \[1.1\] satisfy the relation $$\int_M
|\nabla u|^p
\,
dV
<
\infty.
\label{1.2}$$
Denote for brevity $$F = f - h.
\label{1.3}$$ In the particular case of $f \in L_{1, loc} (M)$ and $h \in L_{1, loc} (\partial M)$, we obviously have $$(F, \varphi)
=
\int_M
f
\varphi
\,
dV
-
\int_{\partial M}
h
\varphi
\,
dS$$ for all $\varphi \in C_0^\infty (M)$, where $dV$ is the volume element of $M$ and $dS$ is the volume element of $\partial M$.
\[d1.1\] The capacity of a compact set $K \subset \omega$ relative to an open set $\omega \subset M$ is defined by $$\operatorname{cap}_p (K, \omega)
=
\inf_\varphi \int_\omega
|\nabla \varphi|^p
\,
dx,$$ where the infimum is taken over all functions $\varphi \in C_0^\infty (\omega)$ that are identically equal to one in a neighborhood of $K$. In the case of $\omega = M$, we write $\operatorname{cap}_p (K)$ instead of $\operatorname{cap}_p (K, M)$. For an arbitrary closed set $H \subset M$, we put $$\operatorname{cap}_p (H)
=
\sup_K
\operatorname{cap}_p (K),$$ where the supremum is taken over all compact sets $K \subset H$. The capacity of the empty set is assumed to be equal to zero.
In the case of $M = {\mathbb R}^n$, $n \ge 3$, the capacity $\operatorname{cap}_2 (K)$ coincides with the well-known Wiener capacity [@Landkoff]. It can be easily shown that $\operatorname{cap}_p (K, \omega)$ has the following natural properties.
1. [*Monotonicity.*]{} If $K_1 \subset K_2$ and $\omega_2 \subset \omega_1$, then $$\operatorname{cap}_p (K_1,\omega_1) \le \operatorname{cap}_p (K_2,\omega_2).$$
2. [*Semi-additivity.*]{} If $K_1$ and $K_2$ are compact subsets of an open set $\omega$, then $$\operatorname{cap}_p (K_1 \cup K_2, \omega)
\le
\operatorname{cap}_p (K_1,\omega)
+
\operatorname{cap}_p (K_2,\omega).$$
\[d1.2\] A manifold $M$ is called $p$-hyperbolic if its capacity is positive, i.e. $\operatorname{cap}_p (M) > 0$; otherwise this manifold is called $p$-parabolic.
If $M$ is a compact manifold, it is obviously $p$-parabolic. It can also be shown that ${\mathbb R}^n$ is a $p$-parabolic manifold for $p \ge n$ and a $p$-hyperbolic manifold for $p < n$.
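For the reader's convenience, we recall the standard radial computation behind the last statement (this sketch is ours and serves only as an illustration; it is not used elsewhere in the argument). For concentric balls $\overline{B}_r \subset B_R$ in ${\mathbb R}^n$, minimizing the $p$-energy over radial test functions gives $$\operatorname{cap}_p (\overline{B}_r, B_R)
=
\omega_{n-1}
\left(
\int_r^R
s^{-\frac{n-1}{p-1}}
\,
ds
\right)^{1-p},$$ where $\omega_{n-1}$ is the area of the unit sphere. Letting $R \to \infty$, the right-hand side tends to a positive constant of order $r^{n-p}$ when $p < n$, behaves like $(\ln (R/r))^{1-n} \to 0$ when $p = n$, and like $R^{-(p-n)} \to 0$ when $p > n$, in agreement with the dichotomy stated above.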
By $L_p^1 (\omega)$, where $\omega$ is an open subset of $M$, we denote the space of distributions $u \in {\mathcal D}' (\omega)$ for which $\nabla u \in L_p (\omega)$. The seminorm in $L_p^1 (\omega)$ is defined as $$\| u \|_{
L_p^1 (\omega)
}
=
\left(
\int_\omega
|\nabla u|^p
\,
dV
\right)^{1 / p}.$$ It is known [@Mazya] that ${L_p^1 (\omega) \subset L_p (K)}$ for any compact set $K \subset \omega$. It can also be shown that $L_p^1 (\omega) / \langle 1 \rangle$, the quotient by the constant functions, is a uniformly convex and therefore reflexive Banach space. By $\stackrel{\rm \scriptscriptstyle o}{L}\!\!{}_p^1 (\omega)$ we denote the closure of $C_0^\infty (\omega)$ in $L_p^1 (\omega)$. By $\stackrel{\rm \scriptscriptstyle o}{L}\!\!{}_p^1 (\omega)^*$ we mean the dual space to $\stackrel{\rm \scriptscriptstyle o}{L}\!\!{}_p^1 (\omega)$ or, in other words, the space of linear continuous functionals on $\stackrel{\rm \scriptscriptstyle o}{L}\!\!{}_p^1 (\omega)$. The norm of a functional $l \in {\stackrel{\rm \scriptscriptstyle o}{L}\!\!{}_p^1 (\omega)^*}$ is defined as $$\| l \|_{
\stackrel{\rm \scriptscriptstyle o}{L}{}_p^1 (\omega)^*
}
=
\sup_{
\varphi \in C_0^\infty (\omega),
\;
\| \varphi \|_{
L_p^1 (\omega)
}
=
1
}
|(l, \varphi)|.$$
For the solvability of problem \[1.1\], \[1.2\], it is necessary and sufficient that the functional $F$ defined by \[1.3\] is continuous in the space ${\stackrel{\rm \scriptscriptstyle o}{L}\!\!{}_p^1 (M)}$. Indeed, if $u$ is a solution of \[1.1\], \[1.2\], then $$-
\int_M
g^{ij}
|\nabla u|^{p - 2}
\nabla_j u
\nabla_i \varphi
\,
dV
=
(F, \varphi)
\label{1.4}$$ for all $\varphi \in C_0^\infty (M)$, whence in accordance with the Hölder inequality we obtain $$|(F, \varphi)|
\le
\| u \|_{
L_p^1 (M)
}^{p - 1}
\| \varphi \|_{
L_p^1 (M)
}.$$ Defining $F$ by continuity to the whole space $\stackrel{\rm \scriptscriptstyle o}{L}\!\!{}_p^1 (M)$, we complete the proof of necessity. To prove sufficiency, let us take a sequence $\varphi_i \in C_0^\infty (M)$, $i = 1,2,\ldots$, such that $$\lim_{i \to \infty}
J (\varphi_i)
=
\inf_{
\varphi \in C_0^\infty (M)
}
J (\varphi),$$ where $$J (\varphi)
=
\frac{1}{p}
\int_M
| \nabla \varphi |^p
\,
dV
+
(F, \varphi).$$ Since $F$ is a continuous functional in $\stackrel{\rm \scriptscriptstyle o}{L}\!\!{}_p^1 (M)$, the sequence $\{ \varphi_i \}_{i=1}^\infty$ is bounded in the seminorm of the space $L_p^1 (M)$, in particular, $$\lim_{i \to \infty}
J (\varphi_i)
=
\inf_{
\varphi \in C_0^\infty (M)
}
J (\varphi)
>
- \infty.$$ We select from the sequence $\varphi_i + \langle 1 \rangle \in {L_p^1 (M) / \langle 1 \rangle}$, $i = 1,2,\ldots$, a subsequence $\varphi_{i_j} + \langle 1 \rangle$, $j = 1,2,\ldots$, converging weakly to some element $u + \langle 1 \rangle$ of the space $L_p^1 (M) / \langle 1 \rangle$. In view of the reflexivity of $L_p^1 (M) / \langle 1 \rangle$, such a sequence exists. We denote by $R_m$ the convex hull of the set $\{ \varphi_{i_j} \}_{j \ge m}$. By Mazur’s theorem, there is a sequence $r_m \in R_m$, $m = 1,2,\ldots$, such that $$\| u - r_m \|_{
L_p^1 (M)
}
\to
0
\quad
\mbox{as } m \to \infty.$$ Since $r_m \in C_0^\infty (M)$, $m = 1,2,\ldots$, this implies the inclusion $u \in {\stackrel{\rm \scriptscriptstyle o}{L}\!\!{}_p^1 (M)}$. We can also assert that $$J (r_m)
\le
\sup_{j \ge m}
J (\varphi_{i_j}),
\quad
m = 1,2,\ldots,$$ as $J$ is a convex functional. Passing to the limit as $m \to \infty$ in the last inequality, we obtain $$J (u)
\le
\inf_{
\varphi \in C_0^\infty (M)
}
J (\varphi).$$ Since the converse inequality is obvious, this yields $$J (u)
=
\inf_{
\varphi \in C_0^\infty (M)
}
J (\varphi),$$ whence \[1.4\] follows according to the variational principle. Thus, $u$ is a solution of problem \[1.1\], \[1.2\].
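For completeness, we indicate the first-variation computation hidden behind the words “variational principle” (a standard step, included here only as a sketch). If $u$ minimizes $J$ over $\stackrel{\rm \scriptscriptstyle o}{L}\!\!{}_p^1 (M)$, then for every $\varphi \in C_0^\infty (M)$ the function $t \mapsto J (u + t \varphi)$ attains its minimum at $t = 0$, so that $$0
=
\left.
\frac{d}{dt}
J (u + t \varphi)
\right|_{t = 0}
=
\int_M
g^{ij}
|\nabla u|^{p - 2}
\nabla_j u
\nabla_i \varphi
\,
dV
+
(F, \varphi),$$ which is exactly \[1.4\].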
Exterior boundary value problems traditionally attract the attention of mathematicians \[1–7\]. In the present paper, we give necessary and sufficient conditions for the solvability of the Neumann problem on Riemannian manifolds. These conditions are different for $p$-hyperbolic and $p$-parabolic manifolds. For example, in the simple case that $M = {\mathbb R}^n \setminus B_1$, where $B_1$ is a unit ball in ${\mathbb R}^n$, the exterior Neumann problem $$\Delta_p u = 0
\quad
\mbox{on } {\mathbb R}^n \setminus B_1,
\quad
\left.
\left|
\nabla u
\right|^{p - 2}
\frac{\partial u}{\partial \nu}
\right|_{
\partial B_1
}
=
h,
\quad
\int_{{\mathbb R}^n \setminus B_1}
| \nabla u|^p
\,
dx
<
\infty$$ has a solution for any function $h \in L_{p / (p - 1)} (\partial B_1)$ if $n \ge 3$. On the other hand, in the case of $n = 2$, for a solution to exist it is necessary and sufficient that $$\int_{\partial B_1}
h
\,
dS
=
0.$$ In this sense, $p$-parabolic manifolds are similar to bounded domains in ${\mathbb R}^n$ (see Corollaries \[c2.1\] and \[c2.2\]).
The case of the functional $F$ with a compact support {#PartialCase}
=====================================================
\[t2.1\] Let $M$ be a $p$-hyperbolic manifold and the functional $F$ defined by \[1.3\] have a compact support. Then for problem \[1.1\], \[1.2\] to have a solution, it is necessary and sufficient that $F$ is a continuous functional in $\stackrel{\rm \scriptscriptstyle o}{L}\!\!{}_p^1 (\omega)$ for some open set $\omega$ such that $\operatorname{supp} F \subset \omega$.
\[t2.2\] Let $M$ be a $p$-parabolic manifold and the functional $F$ defined by \[1.3\] have a compact support. Then for problem \[1.1\], \[1.2\] to have a solution, it is necessary and sufficient that $F$ is a continuous functional in $\stackrel{\rm \scriptscriptstyle o}{L}\!\!{}_p^1 (\omega)$ for some open set $\omega$ such that $\operatorname{supp} F \subset \omega$ and, moreover, $$\lim_{s \to \infty}
(F, \eta_s)
=
0
\label{t2.2.1}$$ for some sequence $\eta_s \in C_0^\infty (M)$ such that $$\lim_{s \to \infty}
\| \eta_s \|_{
L_p^1 (M)
}
=
0
\quad
\mbox{and}
\quad
\left.
\eta_s
\right|_{
K
}
=
1,
\quad
s = 1,2,\ldots,
\label{t2.2.2}$$ where $K$ is a compact set of positive measure.
\[c2.1\] Let $M$ be a $p$-hyperbolic manifold with compact boundary and $h$ be a functional from ${\mathcal D}' (M)$ such that $\operatorname{supp} h \subset \partial M$. Then the problem $$\Delta_p u = 0
\quad
\mbox{on } M,
\quad
\left.
\left|
\nabla u
\right|^{p - 2}
\frac{\partial u}{\partial \nu}
\right|_{
\partial M
}
=
h,
\label{c2.1.1}$$ has a solution satisfying condition \[1.2\] if and only if $h$ is a continuous functional in $\stackrel{\rm \scriptscriptstyle o}{L}\!\!{}_p^1 (\omega)$ for some open set $\omega$ such that $\partial M \subset \omega$.
\[c2.2\] Let $M$ be a $p$-parabolic manifold with compact boundary and $h$ be a functional from ${\mathcal D}' (M)$ such that $\operatorname{supp} h \subset \partial M$. Then for problem \[c2.1.1\], \[1.2\] to have a solution, it is necessary and sufficient that $h$ is a continuous functional in $\stackrel{\rm \scriptscriptstyle o}{L}\!\!{}_p^1 (\omega)$ for some open set $\omega$ such that $\partial M \subset \omega$ and, moreover, $$(h, 1) = 0.
\label{c2.2.1}$$
Corollaries \[c2.1\] and \[c2.2\] immediately follow from Theorems \[t2.1\] and \[t2.2\]. Assuming without loss of generality that $\overline{\omega}$ is a compact set, we only note that condition \[c2.2.1\] implies \[t2.2.1\] for any sequence $\eta_s \in C_0^\infty (M)$ satisfying \[t2.2.2\], where $K = \overline{\omega}$. In the case of a $p$-parabolic manifold, such a sequence obviously exists. On the other hand, if $h \in {\stackrel{\rm \scriptscriptstyle o}{L}\!\!{}_p^1 (M)^*}$, then \[t2.2.1\] is valid for any sequence $\eta_s \in C_0^\infty (M)$ satisfying \[t2.2.2\], where $K = \overline{\omega}$. This in turn implies the validity of \[c2.2.1\].
To prove Theorems \[t2.1\] and \[t2.2\], we need the following lemmas.
\[l2.1\] Let $\operatorname{cap}_p (K) = 0$ for some compact set $K$ of positive measure. Then $M$ is a $p$-parabolic manifold.
\[l2.2\] Let $M$ be a $p$-hyperbolic manifold. Then for any compact set $K$ the space $\stackrel{\rm \scriptscriptstyle o}{L}\!\!{}_p^1 (M)$ is continuously embedded in $L_p (K)$. In other words, $$\| \varphi \|_{
L_p (K)
}
\le
C
\| \varphi \|_{
L_p^1 (M)
}
\label{l2.2.1}$$ for all $\varphi \in C_0^\infty (M)$, where the constant $C > 0$ does not depend on $\varphi$.
In the case of $p = 2$, Lemmas \[l2.1\] and \[l2.2\] are proved in [@KonkovPhD Chapter 3, §1]. For $p > 1$, they are proved in a similar way. For the convenience of the reader, we give the proofs in full.
Let $H$ be a compact set containing $K$ and $\omega$ be a domain with compact closure such that $H \subset \omega$. We represent $M$ as a union of Lipschitz domains $\omega_i$ with compact closures such that $\overline{\omega} \subset \omega_i \subset \omega_{i+1}$, $i = 1,2,\ldots$. Let us denote by $u_i$ the solution of the problem $$\Delta_p u_i = 0
\quad
\mbox{on } \omega_i \setminus K,
\quad
\left.
u_i
\right|_K
=
1,
\quad
\left.
u_i
\right|_{\partial \omega_i}
=
0,
\quad
\left.
\left|
\nabla u
\right|^{p - 2}
\frac{\partial u}{\partial \nu}
\right|_{
\omega_i \cap \partial M
}
=
0.$$ By the maximum principle, we have $0 \le u_i (x) \le u_{i+1} (x) \le 1$ for all $x \in \omega_i$, $i = 1,2,\ldots$. At the same time, $$\| u_i \|_{
L_p^1 (\omega_i)
}^p
=
\operatorname{cap}_p (K, \omega_i)
\to
0
\quad
\mbox{as } i \to \infty.$$ Hence, $\{ u_i \}_{i=1}^\infty$ is a fundamental sequence in $W_p^1 (\omega)$. Since $\operatorname{mes} K > 0$, we obviously obtain $u_i \to 1$ in $W_p^1 (\omega)$ as $i \to \infty$. It is known [@LU] that the sequence $\{ u_i \}_{i=1}^\infty$ is bounded in a Hölder norm in some neighborhood of the set $\partial \omega$; therefore, it has a subsequence converging to one uniformly on $\partial \omega$. By the maximum principle, on the set $\omega$ the function $u_i$ does not exceed one and is not less than its infimum on $\partial \omega$. Thus, $u_i \to 1$ uniformly on $\omega$ as $i \to \infty$. Further, let us take a function $\eta \in C^\infty ({\mathbb R})$ which is equal to zero on $(- \infty, 1 / 4]$ and is equal to one on $[3 / 4, \infty)$. It is easy to see that $\eta \circ u_i \in \stackrel{\rm \scriptscriptstyle o}{W}\!\!{}_p^1 (\omega_i)$ and, moreover, $\eta \circ u_i = 1$ on $\omega$ for all sufficiently large $i$. Since $$\|\eta \circ u_i \|_{
L_p^1 (\omega_i)
}
\le
\|\eta' \|_{
C ({\mathbb R})
}
\|u_i \|_{
L_p^1 (\omega_i)
}
\to
0
\quad
\mbox{as } i \to \infty,$$ this allows us to assert that $$\operatorname{cap}_p (H)
\le
\lim_{i \to \infty}
\|\eta \circ u_i \|_{
L_p^1 (\omega_i)
}^p
=
0.$$
We take a Lipschitz domain $\omega$ with compact closure such that $K \subset \omega$. Assume to the contrary that there is a sequence of functions $\varphi_i \in C_0^\infty (M)$, satisfying the conditions $$\lim_{i \to \infty}
\| \varphi_i \|_{
L_p^1 (M)
}
=
0
\quad
\mbox{and}
\quad
\| \varphi_i \|_{
L_p (\omega)
}
=
\operatorname{mes} \omega > 0,
\quad
i = 1,2,\ldots.
\label{pl2.2.1}$$ Since $W_p^1 (\omega)$ is completely continuously embedded in $L_p (\omega)$, there is a subsequence of the sequence $\{ \varphi_i \}_{i = 1}^\infty$ converging in $L_p (\omega)$. In order not to clutter up indices, we denote this subsequence also by $\{ \varphi_i \}_{i = 1}^\infty$. In view of \[pl2.2.1\], we have $\varphi_i \to 1$ in $W_p^1 (\omega)$ as $i \to \infty$; therefore, some subsequence of this sequence, which we again denote by $\{ \varphi_i \}_{i = 1}^\infty$, converges to one almost everywhere on $\omega$. According to Egorov’s theorem, there is a set $E \subset \omega$ of positive measure such that $\{ \varphi_i \}_{i = 1}^\infty$ tends to one uniformly on $E$. Since $\varphi_i$ are continuous functions, the sequence $\{ \varphi_i \}_{i = 1}^\infty$ also tends to one uniformly on the compact set $\overline{E}$. Thus, taking the same function $\eta$ as in the proof of Lemma \[l2.1\], we have $$\operatorname{cap}_p (\overline{E})
\le
\lim_{i \to \infty}
\|\eta \circ \varphi_i \|_{
L_p^1 (M)
}^p
=
0,$$ whence in accordance with Lemma \[l2.1\] it follows that $M$ is a $p$-parabolic manifold. This contradiction proves the inequality $$\| \varphi \|_{
L_p (\omega)
}
\le
C
\| \varphi \|_{
L_p^1 (M)
}$$ for all $\varphi \in C_0^\infty (M)$, where the constant $C > 0$ does not depend on $\varphi$, from which \[l2.2.1\] follows at once.
The necessity is obvious. Indeed, from the continuity of the functional $F$ in the space $\stackrel{\rm \scriptscriptstyle o}{L}\!\!{}_p^1 (M)$, it follows that $F$ is also continuous in $\stackrel{\rm \scriptscriptstyle o}{L}\!\!{}_p^1 (\omega)$ for any open subset $\omega$ of the manifold $M$. We prove the sufficiency. Assume that $F \in {\stackrel{\rm \scriptscriptstyle o}{L}\!\!{}_p^1 (\omega)^*}$ for some open set $\omega$ with $\operatorname{supp} F \subset \omega$. Let us show that $F \in {\stackrel{\rm \scriptscriptstyle o}{L}\!\!{}_p^1 (M)^*}$. Take a function $\psi \in C_0^\infty (\omega)$ equal to one in a neighborhood of $\operatorname{supp} F$. It is easy to see that $$|(F, \psi \varphi)|
\le
\| F \|_{
\stackrel{\rm \scriptscriptstyle o}{L}{}_p^1 (\omega)^*
}
\| \psi \varphi \|_{
L_p^1 (\omega)
}$$ for all $\varphi \in C_0^\infty (M)$, whence in accordance with the fact that $(F, \varphi) = (F, \psi \varphi)$ and $$\| \psi \varphi \|_{
L_p^1 (\omega)
}
\le
\| \psi\|_{
C (\omega)
}
\|\varphi \|_{
L_p^1 (\omega)
}
+
\|\nabla \psi\|_{
C (\omega)
}
\|\varphi \|_{
L_p (\operatorname{supp} \psi)
}$$ we obtain $$|(F, \varphi)|
\le
\| F \|_{
\stackrel{\rm \scriptscriptstyle o}{L}{}_p^1 (\omega)^*
}
\left(
\| \psi\|_{
C (\omega)
}
\|\varphi \|_{
L_p^1 (\omega)
}
+
\|\nabla \psi\|_{
C (\omega)
}
\|\varphi \|_{
L_p (\operatorname{supp} \psi)
}
\right)$$ for all $\varphi \in C_0^\infty (M)$. At the same time, Lemma \[l2.2\] implies that $$\|\varphi \|_{
L_p (\operatorname{supp} \psi)
}
\le
C
\| \varphi \|_{
L_p^1 (M)
}$$ for all $\varphi \in C_0^\infty (M)$, where the constant $C > 0$ does not depend on $\varphi$.
Thus, to complete the proof, it remains to combine the last two estimates.
As in the case of Theorem \[t2.1\], we need only to prove the sufficiency as the necessity is obvious. Let $\Omega$ be a Lipschitz domain with compact closure containing $K$ and $\operatorname{supp} F$. Without loss of generality, it can be assumed that the norms $
\| \eta_s \|_{
W_p^1 (\Omega)
}
$ are bounded by a constant independent of $s$. If the last condition is not valid, we replace $\eta_s$ with $$\tilde \eta_s (x)
=
\left\{
\begin{aligned}
&
0,
&
&
\eta_s (x) \le 0,
\\
&
\eta_s (x),
&
&
0 < \eta_s (x) < 1,
\\
&
1,
&
&
1 \le \eta_s (x),
\end{aligned}
\right.
\quad
s = 1,2,\dots.
\label{pt2.2.1}$$ Since $W_p^1 (\Omega)$ is completely continuously embedded in $L_p (\Omega)$, there exists a subsequence of the sequence $\{ \eta_s \}_{s = 1}^\infty$ converging in ${L_p (\Omega)}$. Denote this subsequence also by $\{ \eta_s \}_{s = 1}^\infty$. In view of \[t2.2.2\], we obtain $$\| 1 - \eta_s \|_{
W_p^1 (\Omega)
}
\to
0
\quad
\mbox{as } s \to \infty.
\label{pt2.2.2}$$
Assume that $\varphi \in {C_0^\infty (M)}$. By the Poincaré inequality, $$\int_\Omega
|\varphi - \alpha|^p
\,
dV
\le
C
\int_\Omega
|\nabla \varphi|^p
\,
dV,
\label{pt2.2.3}$$ where $$\alpha
=
\frac{
1
}{
\operatorname{mes} \Omega
}
\int_\Omega
\varphi
\,
dV.
\label{pt2.2.4}$$ Hereinafter in the proof of Theorem \[t2.2\], by $C$ we mean various positive constants depending only on $p$, $\omega$, $\Omega$, and the support of the functional $F$. Take a function $\psi \in C_0^\infty (\omega \cap \Omega)$ equal to one in a neighborhood of $\operatorname{supp} F$. Let us denote $$\varphi_j'
=
(\varphi - \alpha \eta_j)
(1 - \psi)
\quad
\mbox{and}
\quad
\varphi_j''
=
(\varphi - \alpha \eta_j)
\psi,
\quad
j = 1,2,\ldots.
\label{pt2.2.5}$$ We have $\varphi = \varphi_j' + \varphi_j'' + \alpha \eta_j$; therefore, $$|(F, \varphi)|
\le
|(F, \varphi_j')|
+
|(F, \varphi_j'')|
+
|\alpha| |(F, \eta_j)|,
\quad
j = 1,2,\ldots,
\label{pt2.2.6}$$ Since $(F, \varphi_j') = 0$ and $\varphi_j'' \in C_0^\infty (\omega)$, this obviously implies the estimate $$|(F, \varphi)|
\le
\| F \|_{
\stackrel{\rm \scriptscriptstyle o}{L}{}_p^1 (\omega)^*
}
\| \varphi_j'' \|_{
L_p^1 (\omega)
}
+
|\alpha| |(F, \eta_j)|,
\quad
j = 1,2,\ldots.$$ Combining it with the inequality $$\| \varphi_j'' \|_{
L_p^1 (\omega)
}
\le
\| \psi \|_{
C (\Omega)
}
\| \varphi - \alpha \eta_j \|_{
L_p^1 (\Omega)
}
+
\|\nabla \psi \|_{
C (\Omega)
}
\| \varphi - \alpha \eta_j \|_{
L_p (\Omega)
},$$ we obtain $$\begin{aligned}
|(F, \varphi)|
\le
{}
&
C
\| F \|_{
\stackrel{\rm \scriptscriptstyle o}{L}{}_p^1 (\omega)^*
}
\left(
\| \varphi - \alpha \eta_j \|_{
L_p^1 (\Omega)
}
+
\| \varphi - \alpha \eta_j \|_{
L_p (\Omega)
}
\right)
\\
&
{}
+
|\alpha| |(F, \eta_j)|,
\quad
j = 1,2,\ldots.\end{aligned}$$ Passing to the limit in the last formula as $j \to \infty$, taking into account the relations $$\| \varphi - \alpha \eta_j \|_{
L_p^1 (M)
}
\le
\| \varphi \|_{
L_p^1 (M)
}
+
|\alpha|
\| \eta_j \|_{
L_p^1 (M)
}
\to
\| \varphi \|_{
L_p^1 (M)
}
\quad
\mbox{as } j \to \infty
\label{pt2.2.7}$$ and $$\begin{aligned}
&
\| \varphi - \alpha \eta_j \|_{
L_p (\Omega)
}
=
\| \varphi - \alpha + \alpha (1 - \eta_j) \|_{
L_p (\Omega)
}
\le
\| \varphi - \alpha \|_{
L_p (\Omega)
}
+
|\alpha|
\| 1 - \eta_j \|_{
L_p (\Omega)
}
\nonumber
\\
&
\qquad
{}
\le
C
\| \varphi \|_{
L_p^1 (\Omega)
}
+
|\alpha|
\| 1 - \eta_j \|_{
L_p (\Omega)
}
\to
C
\| \varphi \|_{
L_p^1 (\Omega)
}
\quad
\mbox{as } j \to \infty,
\label{pt2.2.8}\end{aligned}$$ we have $$|(F, \varphi)|
\le
C
\| F \|_{
\stackrel{\rm \scriptscriptstyle o}{L}{}_p^1 (\omega)^*
}
\| \varphi \|_{
L_p^1 (M)
}.$$ The proof is completed.
The case of the general functional $F$ {#GeneralCase}
======================================
We assume that the manifold $M$ admits a locally finite cover $$M
=
\bigcup_{i = 1}^\infty
\Omega_i
\label{3.1}$$ of multiplicity $k < \infty$, where $\Omega_i$ are Lipschitz domains with compact closure such that $\Omega_i \cap \Omega_{i + 1} \ne \emptyset$, $i = 1,2,\dots$. Further, let $\gamma : M \to (0, \infty)$ be a measurable function bounded away from zero and infinity on every compact subset of the manifold $M$, and let $\psi_i \in C_0^\infty (\Omega_i)$ be a partition of unity on $M$ such that $$|\nabla \psi_i (x)|^p
\le
\gamma (x),
\quad
x \in \Omega_i,
\quad
i = 1,2,\dots.
\label{3.5}$$
We need the following well-known assertion.
\[l3.1\] Let $\omega$ be a Lipschitz domain with compact closure. Then $$\int_\omega
\gamma
|u - \bar u|
\,
dV
\le
C
\int_\omega
\gamma^{1 - 1 / p}
|\nabla u|
\,
dV
\label{l3.1.1}$$ for any function $u \in W_1^1 (\omega)$, where $$\bar u
=
\frac{
\int_\omega
u
\,
dV
}{
\int_\omega
\gamma
\,
dV
}$$ and the constant $C > 0$ does not depend on $u$.
We also assume that $$\sup_{
i \in {\mathbb N}
}
C_i
\left(
\frac{
1
}{
\int_{\Omega_i}
\gamma
\,
dV
}
+
\frac{
1
}{
\int_{\Omega_{i + 1}}
\gamma
\,
dV
}
\right)
\sum_{j = 1}^i
\int_{\Omega_j}
\gamma
\,
dV
<
\infty
\label{3.2}$$ in the case of the $p$-hyperbolic manifold $M$ and $$\sup_{
i \in {\mathbb N}
}
C_i
\left(
\frac{
1
}{
\int_{\Omega_i}
\gamma
\,
dV
}
+
\frac{
1
}{
\int_{\Omega_{i + 1}}
\gamma
\,
dV
}
\right)
\sum_{j = i + 1}^\infty
\int_{\Omega_j}
\gamma
\,
dV
<
\infty
\label{3.3}$$ in the case of the $p$-parabolic manifold $M$, where $\mathbb N$ is the set of positive integers and $C_i > 0$ is the constant in \[l3.1.1\] for $\omega = \Omega_i \cup \Omega_{i + 1}$.
\[t3.1\] Let $M$ be a $p$-hyperbolic manifold. Then for problem \[1.1\], \[1.2\] to have a solution, it is necessary and sufficient that $$\sum_{i = 1}^\infty
\| F \|_{
\stackrel{\rm \scriptscriptstyle o}{L}{}_p^1 (\Omega_i)^*
}^{p / (p - 1)}
<
\infty,
\label{t3.1.1}$$ where $F$ is defined by .
\[t3.2\] Let $M$ be a $p$-parabolic manifold. Then for problem \[1.1\], \[1.2\] to have a solution, it is necessary and sufficient that \[t3.1.1\] holds and, moreover, conditions \[t2.2.1\] and \[t2.2.2\] are valid for some sequence $\eta_s \in C_0^\infty (M)$.
The proof of Theorems \[t3.1\] and \[t3.2\] relies on Lemmas \[l3.2\] – \[l3.4\].
\[l3.2\] Let $\omega_1$ and $\omega_2$ be measurable subsets of a Lipschitz domain $\omega \Subset M$ such that $$\gamma_i
=
\int_{\omega_i}
\gamma
\,
dV
>
0,
\quad
i = 1,2.$$ Then $$\frac{
1
}{
\gamma_1
}
\int_{\omega_1}
\gamma
|u|
\,
dV
\le
C
\left(
\frac{
1
}{
\gamma_1
}
+
\frac{
1
}{
\gamma_2
}
\right)
\int_\omega
\gamma^{1 - 1 / p}
|\nabla u|
\,
dV
+
\frac{
1
}{
\gamma_2
}
\int_{\omega_2}
\gamma
|u|
\,
dV$$ for any function $u \in W_1^1 (\omega)$, where $C > 0$ is the constant in \[l3.1.1\].
Taking into account \[l3.1.1\], we have $$\int_{\omega_1}
\gamma
|u - \bar u|
\,
dV
\le
C
\int_\omega
\gamma^{1 - 1 / p}
|\nabla u|
\,
dV.$$ By the inequality $|u| - |\bar u| \le |u - \bar u|$, this implies that $$\int_{\omega_1}
\gamma
|u|
\,
dV
-
\int_{\omega_1}
\gamma
|\bar u|
\,
dV
\le
C
\int_\omega
\gamma^{1 - 1 / p}
|\nabla u|
\,
dV$$ or, in other words, $$\frac{1}{\gamma_1}
\int_{\omega_1}
\gamma
|u|
\,
dV
\le
\frac{C}{\gamma_1}
\int_\omega
\gamma^{1 - 1 / p}
|\nabla u|
\,
dV
+
|\bar u|.
\label{pl3.2.1}$$ By \[l3.1.1\], we also have $$\int_{\omega_2}
\gamma
|u - \bar u|
\,
dV
\le
C
\int_\omega
\gamma^{1 - 1 / p}
|\nabla u|
\,
dV,$$ whence in accordance with the inequality $|\bar u| - |u| \le |u - \bar u|$ we immediately obtain $$\int_{\omega_2}
\gamma
|\bar u|
\,
dV
-
\int_{\omega_2}
\gamma
|u|
\,
dV
\le
C
\int_\omega
\gamma^{1 - 1 / p}
|\nabla u|
\,
dV.$$ This is obviously equivalent to $$|\bar u|
\le
\frac{C}{\gamma_2}
\int_\omega
\gamma^{1 - 1 / p}
|\nabla u|
\,
dV
+
\frac{1}{\gamma_2}
\int_{\omega_2}
\gamma
|u|
\,
dV.$$ Combining the last formula with \[pl3.2.1\], we complete the proof.
\[l3.3\] Let the cover \[3.1\] satisfy condition \[3.2\]. Then $$\int_M
\gamma
|\varphi|^p
\,
dV
\le
C
\int_M
|\nabla \varphi|^p
\,
dV
\label{l3.3.1}$$ for any function $\varphi \in C_0^\infty (M)$, where the constant $C > 0$ depends only on $p$, the multiplicity of the cover \[3.1\], and the left-hand side of \[3.2\].
We denote by $C_i > 0$ the constant in \[l3.1.1\] for $\omega = \Omega_i \cup \Omega_{i + 1}$. Put $$S_i
=
\sum_{j = 1}^i
\int_{\Omega_j}
\gamma
\,
dV
\quad
\mbox{and}
\quad
\gamma_i
=
\int_{\Omega_i}
\gamma
\,
dV,
\quad
i = 1,2,\ldots.$$ Let us also assume that $S_0 = 0$. We have $$\begin{aligned}
&
\int_M
\gamma
|\varphi|^p
\,
dV
\le
\sum_{i = 1}^\infty
\int_{\Omega_i}
\gamma
|\varphi|^p
\,
dV
=
\sum_{i = 1}^\infty
\frac{
S_i - S_{i - 1}
}{
\gamma_i
}
\int_{\Omega_i}
\gamma
|\varphi|^p
\,
dV
\\
&
\qquad
{}
=
\sum_{i = 1}^\infty
S_i
\left(
\frac{1}{\gamma_i}
\int_{\Omega_i}
\gamma
|\varphi|^p
\,
dV
-
\frac{1}{\gamma_{i + 1}}
\int_{\Omega_{i + 1}}
\gamma
|\varphi|^p
\,
dV
\right),\end{aligned}$$ whence in accordance with the inequality $$\begin{aligned}
\frac{
1
}{
\gamma_i
}
\int_{\Omega_i}
\gamma
|\varphi|^p
\,
dV
\le
{}
&
C_i
\left(
\frac{
1
}{
\gamma_i
}
+
\frac{
1
}{
\gamma_{i + 1}
}
\right)
p
\int_{\Omega_i \cup \Omega_{i + 1}}
\gamma^{1 - 1 / p}
|\varphi|^{p - 1} |\nabla \varphi|
\,
dV
\\
&
{}
+
\frac{
1
}{
\gamma_{i + 1}
}
\int_{\Omega_{i + 1}}
\gamma
|\varphi|^p
\,
dV\end{aligned}$$ which follows from Lemma \[l3.2\] we arrive at the estimate $$\int_M
\gamma
|\varphi|^p
\,
dV
\le
\sum_{i = 1}^\infty
S_i
C_i
\left(
\frac{
1
}{
\gamma_i
}
+
\frac{
1
}{
\gamma_{i + 1}
}
\right)
p
\int_{\Omega_i \cup \Omega_{i + 1}}
\gamma^{1 - 1 / p}
|\varphi|^{p - 1} |\nabla \varphi|
\,
dV.
\label{pl3.3.1}$$ At the same time, from Jensen’s inequality, it follows that $$\begin{aligned}
&
S_i
C_i
\left(
\frac{
1
}{
\gamma_i
}
+
\frac{
1
}{
\gamma_{i + 1}
}
\right)
p
\int_{\Omega_i \cup \Omega_{i + 1}}
\gamma^{1 - 1 / p}
|\varphi|^{p - 1} |\nabla \varphi|
\,
dV
\le
\varepsilon
\int_{\Omega_i \cup \Omega_{i + 1}}
\gamma
|\varphi|^p
\,
dV
\\
&
\qquad
{}
+
A
S_i^p
C_i^p
\left(
\frac{
1
}{
\gamma_i
}
+
\frac{
1
}{
\gamma_{i + 1}
}
\right)^p
\int_{\Omega_i \cup \Omega_{i + 1}}
|\nabla \varphi|^p
\,
dV\end{aligned}$$ for any $\varepsilon > 0$, where the constant $A > 0$ depends only on $\varepsilon$ and $p$; therefore, \[pl3.3.1\] allows us to assert that $$\int_M
\gamma
|\varphi|^p
\,
dV
\le
2
k
\varepsilon
\int_M
\gamma
|\varphi|^p
\,
dV
+
B
\int_M
|\nabla \varphi|^p
\,
dV$$ for any $\varepsilon > 0$, where $k$ is the multiplicity of the cover \[3.1\] and $B > 0$ is a constant depending only on $\varepsilon$, $p$, $k$, and the left-hand side of \[3.2\]. Thus, taking sufficiently small $\varepsilon > 0$ in the last inequality, we complete the proof.
\[l3.4\] Let the cover \[3.1\] satisfy condition \[3.3\]. Then inequality \[l3.3.1\] is valid for any function $\varphi \in C^\infty (M)$ equal to zero on $\Omega_1$, where the constant $C > 0$ depends only on $p$, the multiplicity of the cover \[3.1\], and the left-hand side of \[3.3\].
We put $$S_i
=
\sum_{j = i + 1}^\infty
\int_{\Omega_j}
\gamma
\,
dV
\quad
\mbox{and}
\quad
\gamma_i
=
\int_{\Omega_i}
\gamma
\,
dV,
\quad
i = 1,2,\ldots.$$ Condition \[3.3\], in particular, means that $S_i < \infty$ for all positive integers $i$. As before, by $C_i > 0$ we denote the constant in inequality \[l3.1.1\] for $\omega = \Omega_i \cup \Omega_{i + 1}$.
It can be seen that $$\begin{aligned}
&
\int_M
\gamma
|\varphi|^p
\,
dV
\le
\sum_{i = 1}^\infty
\int_{\Omega_{i + 1}}
\gamma
|\varphi|^p
\,
dV
=
\sum_{i = 1}^\infty
\frac{
S_i - S_{i + 1}
}{
\gamma_{i + 1}
}
\int_{\Omega_{i + 1}}
\gamma
|\varphi|^p
\,
dV
\\
&
\qquad
{}
=
\sum_{i = 1}^\infty
S_i
\left(
\frac{1}{\gamma_{i + 1}}
\int_{\Omega_{i + 1}}
\gamma
|\varphi|^p
\,
dV
-
\frac{1}{\gamma_i}
\int_{\Omega_i}
\gamma
|\varphi|^p
\,
dV
\right),\end{aligned}$$ whence in accordance with the inequality $$\begin{aligned}
\frac{
1
}{
\gamma_{i + 1}
}
\int_{\Omega_{i + 1}}
\gamma
|\varphi|^p
\,
dV
\le
{}
&
C_i
\left(
\frac{
1
}{
\gamma_i
}
+
\frac{
1
}{
\gamma_{i + 1}
}
\right)
p
\int_{\Omega_i \cup \Omega_{i + 1}}
\gamma^{1 - 1 / p}
|\varphi|^{p - 1} |\nabla \varphi|
\,
dV
\\
&
{}
+
\frac{
1
}{
\gamma_i
}
\int_{\Omega_i}
\gamma
|\varphi|^p
\,
dV\end{aligned}$$ which follows from Lemma \[l3.2\], we obtain \[l3.3.1\]. To conclude, it remains to repeat the arguments given in the proof of Lemma \[l3.3\].
\[c3.1\] If the cover \[3.1\] satisfies condition \[3.2\], then the manifold $M$ is $p$-hyperbolic.
Indeed, let $K$ be a compact set of positive measure. Using Lemma \[l3.3\], we have $$0
<
\int_K
\gamma
\,
dV
\le
C
\int_M
|\nabla \varphi|^p
\,
dV$$ for any function $\varphi \in C_0^\infty (M)$ equal to one in a neighborhood of $K$, where the constant $C > 0$ does not depend on $\varphi$. Thus, $\operatorname{cap}_p (K) > 0$.
We shall follow the idea given in [@MP]. Assume that problem \[1.1\], \[1.2\] has a solution. In this case, $F$ is a continuous functional in $\stackrel{\rm \scriptscriptstyle o}{L}\!\!{}_p^1 (M)$. Let us show the validity of \[t3.1.1\]. We take functions $\varphi_i \in C_0^\infty (\Omega_i)$ such that $$\| \varphi_i \|_{
L_p^1 (\Omega_i)
}
=
1
\quad
\mbox{and}
\quad
(F, \varphi_i)
\ge
\frac{1}{2}
\| F \|_{
\stackrel{\rm \scriptscriptstyle o}{L}{}_p^1 (\Omega_i)^*
},
\quad
i = 1,2,\ldots.$$ Putting $$\Phi_j (x)
=
\sum_{i=1}^j
\| F \|_{
\stackrel{\rm \scriptscriptstyle o}{L}{}_p^1 (\Omega_i)^*
}^{1 / (p - 1)}
\varphi_i (x),
\quad
j = 1,2,\ldots,$$ we have $$(F, \Phi_j)
\ge
\frac{1}{2}
\sum_{i=1}^j
\| F \|_{
\stackrel{\rm \scriptscriptstyle o}{L}{}_p^1 (\Omega_i)^*
}^{p / (p - 1)},
\quad
j = 1,2,\ldots.
\label{pt3.1.1}$$ On the other hand, $$(F, \Phi_j)
\le
\| F \|_{
\stackrel{\rm \scriptscriptstyle o}{L}{}_p^1 (M)^*
}
\| \Phi_j \|_{
L_p^1 (M)
},
\quad
j = 1,2,\ldots.$$ Therefore, taking into account the relation $$\| \Phi_j \|_{
L_p^1 (M)
}^p
=
\int_M
|\nabla \Phi_j|^p
\,
dV
\le
k^p
\sum_{i=1}^j
\| F \|_{
\stackrel{\rm \scriptscriptstyle o}{L}{}_p^1 (\Omega_i)^*
}^{p / (p - 1)}
\int_{\Omega_i}
|\nabla \varphi_i|^p
\,
dV
=
k^p
\sum_{i=1}^j
\| F \|_{
\stackrel{\rm \scriptscriptstyle o}{L}{}_p^1 (\Omega_i)^*
}^{p / (p - 1)},$$ where $k$ is the multiplicity of the cover \[3.1\], we obtain $$(F, \Phi_j)
\le
k
\| F \|_{
\stackrel{\rm \scriptscriptstyle o}{L}{}_p^1 (M)^*
}
\left(
\sum_{i=1}^j
\| F \|_{
\stackrel{\rm \scriptscriptstyle o}{L}{}_p^1 (\Omega_i)^*
}^{p / (p - 1)}
\right)^{1 / p},
\quad
j = 1,2,\ldots.$$ Combining the last inequality with , we conclude that $$\left(
\sum_{i=1}^j
\| F \|_{
\stackrel{\rm \scriptscriptstyle o}{L}{}_p^1 (\Omega_i)^*
}^{p / (p - 1)}
\right)^{1 - 1 / p}
\le
2
k
\| F \|_{
\stackrel{\rm \scriptscriptstyle o}{L}{}_p^1 (M)^*
},
\quad
j = 1,2,\ldots,$$ whence \[t3.1.1\] follows in the limit as $j \to \infty$.
Now, we show that \[t3.1.1\] implies the continuity of the functional $F$ in ${\stackrel{\rm \scriptscriptstyle o}{L}\!\!{}_p^1 (M)}$. Indeed, let $\varphi \in C_0^\infty (M)$. Since $\operatorname{supp} \varphi$ is a compact set and \[3.1\] is a locally finite cover, the support of $\varphi$ can intersect only a finite number of the domains $\Omega_i$. Consequently, $$\varphi
=
\sum_{i=1}^\infty
\psi_i \varphi,$$ where almost all terms in the right-hand side are equal to zero, whence we have $$\begin{aligned}
|(F, \varphi)|
&
{}
\le
\sum_{i=1}^\infty
|(F, \psi_i \varphi)|
\le
\sum_{i=1}^\infty
\| F \|_{
\stackrel{\rm \scriptscriptstyle o}{L}{}_p^1 (\Omega_i)^*
}
\| \psi_i \varphi \|_{
L_p^1 (\Omega_i)
}
\nonumber
\\
&
{}
\le
\left(
\sum_{i=1}^\infty
\| F \|_{
\stackrel{\rm \scriptscriptstyle o}{L}{}_p^1 (\Omega_i)^*
}^{p / (p - 1)}
\right)^{(p - 1) / p}
\left(
\sum_{i=1}^\infty
\| \psi_i \varphi \|_{
L_p^1 (\Omega_i)
}^p
\right)^{1 / p}.
\label{pt3.1.2}\end{aligned}$$ It is easy to see that $$\| \psi_i \varphi \|_{
L_p^1 (\Omega_i)
}^p
=
\int_{\Omega_i}
|\nabla (\psi_i \varphi)|^p
\,
dV
\le
2^p
\int_{\Omega_i}
|\nabla \psi_i|^p |\varphi|^p
\,
dV
+
2^p
\int_{\Omega_i}
\psi_i^p |\nabla \varphi|^p
\,
dV,$$ whence in accordance with \[3.5\] and the fact that $0 \le \psi_i^p \le \psi_i \le 1$ on $\Omega_i$ we obtain $$\| \psi_i \varphi \|_{
L_p^1 (\Omega_i)
}^p
\le
2^p
\int_{\Omega_i}
\gamma |\varphi|^p
\,
dV
+
2^p
\int_{\Omega_i}
\psi_i |\nabla \varphi|^p
\,
dV,
\quad
i = 1,2,\ldots.$$ Therefore, $$\sum_{i=1}^\infty
\| \psi_i \varphi \|_{
L_p^1 (\Omega_i)
}^p
\le
2^p
k
\int_M
\gamma |\varphi|^p
\,
dV
+
2^p
\int_M
|\nabla \varphi|^p
\,
dV,$$ where $k$ is the multiplicity of the cover \[3.1\]. By Lemma \[l3.3\], this implies the estimate $$\sum_{i=1}^\infty
\| \psi_i \varphi \|_{
L_p^1 (\Omega_i)
}^p
\le
C
\int_M
|\nabla \varphi|^p
\,
dV,$$ where the constant $C > 0$ depends only on $p$, $k$, and the left-hand side of \[3.2\]. Thus, relation \[pt3.1.2\] allows us to assert that $$|(F, \varphi)|
\le
C^{1/p}
\left(
\sum_{i=1}^\infty
\| F \|_{
\stackrel{\rm \scriptscriptstyle o}{L}{}_p^1 (\Omega_i)^*
}^{p / (p - 1)}
\right)^{(p - 1) / p}
\left(
\int_M
|\nabla \varphi|^p
\,
dV
\right)^{1 / p}.
\label{pt3.1.3}$$
Theorem \[t3.1\] is completely proved.
The necessity is proved in the same way as in the case of Theorem \[t3.1\]. We only note that since the manifold $M$ is $p$-parabolic, there is a sequence $\eta_s \in C_0^\infty (M)$ satisfying \[t2.2.2\]. This sequence also satisfies \[t2.2.1\], as the existence of a solution of problem \[1.1\], \[1.2\] implies that $F$ is a continuous functional in the space ${\stackrel{\rm \scriptscriptstyle o} {L}\!\!{}_p^1 (M)}$.
We prove the sufficiency. Let $\eta_s \in C_0^\infty (M)$ be a sequence satisfying conditions \[t2.2.1\], \[t2.2.2\]. Take a Lipschitz domain $\Omega$ with compact closure such that $K \subset \Omega$ and $\overline{\Omega}_1 \subset \Omega$. Without loss of generality, it can be assumed that the norms $
\| \eta_s \|_{
W_p^1 (\Omega)
}
$ are bounded by a constant independent of $s$; otherwise we replace $\eta_s$ with \[pt2.2.1\]. Since $W_p^1 (\Omega)$ is completely continuously embedded in $L_p (\Omega)$, there exists a subsequence of the sequence $\{ \eta_s \}_{s = 1}^\infty$ converging in ${L_p (\Omega)}$. For this subsequence we keep the same notation $\{ \eta_s \}_{s = 1}^\infty$. Taking into account \[t2.2.2\], one can assert that \[pt2.2.2\] is valid.
We agree to denote by $C$ various positive constants depending only on $p$, the cover \[3.1\], the partition of unity $\{ \psi_i \}_{i=1}^\infty$, the set $\Omega$, and the left-hand side of \[3.3\]. Let $\varphi \in {C_0^\infty (M)}$ and, moreover, let $\alpha$ be the real number defined by \[pt2.2.4\]. In view of the Poincaré inequality, estimate \[pt2.2.3\] holds. Also assume that $\varphi_j'$ and $\varphi_j''$ are defined by \[pt2.2.5\], where $\psi \in C_0^\infty (\Omega)$ is some function equal to one on $\Omega_1$. For any positive integer $j$ we have $$\varphi_j'
=
\sum_{i=1}^\infty
\varphi_j'
\psi_i,$$ where almost all terms in the right-hand side are equal to zero; therefore, $$\begin{aligned}
|(F, \varphi_j')|
&
{}
\le
\sum_{i=1}^\infty
|(F, \psi_i \varphi_j')|
\le
\sum_{i=1}^\infty
\| F \|_{
\stackrel{\rm \scriptscriptstyle o}{L}{}_p^1 (\Omega_i)^*
}
\| \psi_i \varphi_j' \|_{
L_p^1 (\Omega_i)
}
\\
&
{}
\le
\left(
\sum_{i=1}^\infty
\| F \|_{
\stackrel{\rm \scriptscriptstyle o}{L}{}_p^1 (\Omega_i)^*
}^{p / (p - 1)}
\right)^{(p - 1) / p}
\left(
\sum_{i=1}^\infty
\| \psi_i \varphi_j' \|_{
L_p^1 (\Omega_i)
}^p
\right)^{1 / p}.\end{aligned}$$ Thus, repeating the arguments used to obtain estimate \[pt3.1.3\] with the function $\varphi$ replaced by $\varphi_j'$ and Lemma \[l3.3\] replaced by Lemma \[l3.4\], we arrive at the inequality $$|(F, \varphi_j')|
\le
C
\left(
\sum_{i=1}^\infty
\| F \|_{
\stackrel{\rm \scriptscriptstyle o}{L}{}_p^1 (\Omega_i)^*
}^{p / (p - 1)}
\right)^{(p - 1) / p}
\left(
\int_M
|\nabla \varphi_j'|^p
\,
dV
\right)^{1 / p}.
\label{pt3.2.2}$$ It is not difficult to verify that $$\left(
\int_M
|\nabla \varphi_j'|^p
\,
dV
\right)^{1 / p}
\le
\| 1 - \psi \|_{
C (\Omega)
}
\| \varphi - \alpha \eta_j \|_{
L_p^1 (M)
}
+
\|\nabla \psi \|_{
C (\Omega)
}
\| \varphi - \alpha \eta_j \|_{
L_p (\Omega)
}$$ and, moreover, \[pt2.2.7\] and \[pt2.2.8\] are valid; therefore, \[pt3.2.2\] implies the estimate $$\limsup_{j \to \infty}
|(F, \varphi_j')|
\le
C
\left(
\sum_{i=1}^\infty
\| F \|_{
\stackrel{\rm \scriptscriptstyle o}{L}{}_p^1 (\Omega_i)^*
}^{p / (p - 1)}
\right)^{(p - 1) / p}
\| \varphi \|_{
L_p^1 (M)
}.
\label{pt3.2.5}$$ Since $\operatorname{supp} \psi \subset \Omega$, the function $\varphi_j''$ can be represented as $$\varphi_j''
=
\sum_{
\Omega \cap \Omega_i \ne \emptyset
}
(\varphi - \alpha \eta_j)
\psi
\psi_i.$$ We note that the family of domains $\Omega_i$ satisfying the condition $\Omega \cap \Omega_i \ne \emptyset$ is finite, as $\overline{\Omega}$ is a compact set and the cover \[3.1\] is locally finite. Hence, $$|(F, \varphi_j'')|
\le
\sum_{
\Omega \cap \Omega_i \ne \emptyset
}
|(F, (\varphi - \alpha \eta_j) \psi \psi_i)|
\le
\sum_{
\Omega \cap \Omega_i \ne \emptyset
}
\| F \|_{
\stackrel{\rm \scriptscriptstyle o}{L}{}_p^1 (\Omega_i)^*
}
\| (\varphi - \alpha \eta_j) \psi \psi_i \|_{
L_p^1 (\Omega_i)
}.$$ At the same time, $$\| (\varphi - \alpha \eta_j) \psi \psi_i \|_{
L_p^1 (\Omega_i)
}
\le
\| \psi \psi_i \|_{
C (\Omega)
}
\| \varphi - \alpha \eta_j \|_{
L_p^1 (\Omega)
}
+
\|\nabla (\psi \psi_i) \|_{
C (\Omega)
}
\| \varphi - \alpha \eta_j \|_{
L_p (\Omega)
},$$ whence in accordance with \[pt2.2.7\] and \[pt2.2.8\] we obtain $$\limsup_{j \to \infty}
\| (\varphi - \alpha \eta_j) \psi \psi_i \|_{
L_p^1 (\Omega_i)
}
\le
C
\| \varphi \|_{
L_p^1 (M)
}$$ for all $i$ such that $\Omega \cap \Omega_i \ne \emptyset$. Thus, one can assert that $$\limsup_{j \to \infty}
|(F, \varphi_j'')|
\le
C
\sum_{
\Omega \cap \Omega_i \ne \emptyset
}
\| F \|_{
\stackrel{\rm \scriptscriptstyle o}{L}{}_p^1 (\Omega_i)^*
}
\| \varphi \|_{
L_p^1 (M)
}.$$ By the Hölder inequality, $$\sum_{
\Omega \cap \Omega_i \ne \emptyset
}
\| F \|_{
\stackrel{\rm \scriptscriptstyle o}{L}{}_p^1 (\Omega_i)^*
}
\le
N^{1 / p}
\left(
\sum_{
\Omega \cap \Omega_i \ne \emptyset
}
\| F \|_{
\stackrel{\rm \scriptscriptstyle o}{L}{}_p^1 (\Omega_i)^*
}^{p / (p - 1)}
\right)^{(p - 1) / p},$$ where $N$ is the number of domains $\Omega_i$ satisfying the condition $\Omega \cap \Omega_i \ne \emptyset$; therefore, $$\limsup_{j \to \infty}
|(F, \varphi_j'')|
\le
C
\left(
\sum_{
\Omega \cap \Omega_i \ne \emptyset
}
\| F \|_{
\stackrel{\rm \scriptscriptstyle o}{L}{}_p^1 (\Omega_i)^*
}^{p / (p - 1)}
\right)^{(p - 1) / p}
\| \varphi \|_{
L_p^1 (M)
}.$$ Combining this with \[pt2.2.6\], \[pt3.2.5\], and \[t2.2.1\], we have $$|(F, \varphi)|
\le
C
\left(
\sum_{i=1}^\infty
\| F \|_{
\stackrel{\rm \scriptscriptstyle o}{L}{}_p^1 (\Omega_i)^*
}^{p / (p - 1)}
\right)^{(p - 1) / p}
\| \varphi \|_{
L_p^1 (M)
}.$$ Theorem \[t3.2\] is completely proved.
\[e3.1\] Let $M$ be a subset of ${\mathbb R}^n$ of the form $
\{
x = (x', x_n)
:
|x'| \le x_n^\lambda,
\:
x_n \ge 0
\}
$ with a smoothed boundary near zero, where $n \ge 2$ and $\lambda \ge 0$ is some real number.
The manifold $M$ is $p$-hyperbolic if and only if $$n > p
\quad
\mbox{and}
\quad
\lambda > (p - 1) / (n - 1).
\label{e3.1.1}$$ Indeed, if at least one of the inequalities in \[e3.1.1\] is not valid, then taking $$\varphi_{r, R} (x)
=
\varphi
\left(
\frac{
\ln \frac{R}{|x|}
}{
\ln \frac{R}{r}
}
\right),
\quad
0 < r < R,$$ where $\varphi \in C^\infty ({\mathbb R})$ is some function equal to zero in a neighborhood of $(-\infty, 0]$ and to one in the neighborhood of $[1, \infty)$, we immediately obtain $$\operatorname{cap}_p (\overline{B}_r)
\le
\int_M
\left|
\nabla \varphi_{r, R}
\right|^p
\,
dV
\to
0
\quad
\mbox{as } R \to \infty$$ for all $r > 0$, where $B_r = \{ x \in M : |x| < r \}$. Thus, $\operatorname{cap}_p (M) = 0$.
On the other hand, if both inequalities in \[e3.1.1\] are valid, then taking $
\Omega_1
=
\{
x \in M
:
|x| < 4
\},
$ $
\Omega_i
=
\{
x \in M
:
2^{i - 1} < |x| < 2^{i + 1}
\},
$i = 2,3,\ldots$, and $\gamma (x) = c (1 + |x|)^{- p}$, where $c > 0$ is a sufficiently large real number, we can construct a partition of unity $\psi_i \in C_0^\infty (\Omega_i)$ satisfying condition \[3.5\]. Since \[3.2\] holds, Corollary \[c3.1\] implies that $M$ is a $p$-hyperbolic manifold.
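To illustrate condition \[3.5\] in this example (our own verification sketch; $C_0$ denotes a constant determined by the chosen cutoff profile and is not part of the original construction), one may take the usual dyadic cutoffs. On $\Omega_i$ one has $1 + |x| \le 2^{i + 2}$, so a partition of unity built from functions varying on the scale $2^i$ satisfies $$|\nabla \psi_i (x)|
\le
C_0
\,
2^{-i}
\le
4 C_0
\,
(1 + |x|)^{-1},
\quad
x \in \Omega_i,$$ and hence $|\nabla \psi_i (x)|^p \le (4 C_0)^p (1 + |x|)^{-p} \le \gamma (x)$ as soon as $c \ge (4 C_0)^p$.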
\[e3.2\] Let $M$ be the manifold from Example \[e3.1\]. We assume that $h$ is a measure on $\partial M$ with the density $(1 + |x|)^\sigma$. If $M$ is a $p$-hyperbolic manifold or, in other words, inequalities \[e3.1.1\] are fulfilled, then in accordance with Theorem \[t3.1\], problem \[1.1\], \[1.2\] has a solution if and only if $$\sigma
<
\left\{
\begin{aligned}
&
-
\frac{\lambda n (p - 1)}{p}
-
(1 - \lambda)
\left(
2
-
\frac{1}{p}
\right),
&
&
\lambda < 1,
\\
&
- \frac{n (p - 1)}{p},
&
&
1 \le \lambda.
\end{aligned}
\right.$$ Indeed, by estimates based on the embedding theorems, we can show that $$\| h \|_{
\stackrel{\rm \scriptscriptstyle o}{L}{}_p^1 (\Omega_i)^*
}
\asymp
\left\{
\begin{aligned}
&
2^{
i
(
\sigma
+
\lambda n (p - 1) / p
+
(1 - \lambda)
(2 - 1 / p)
)
},
&
&
\lambda < 1,
\\
&
2^{
i
(
\sigma
+
n (p - 1) / p
)
},
&
&
1 \le \lambda,
\end{aligned}
\right.$$ where $\Omega_i$, $i = 1,2,\ldots$, is the cover constructed in Example \[e3.1\].
Let us note that, for a $p$-parabolic manifold $M$, problem \[1.1\], \[1.2\] has no solutions for any $\sigma$, as condition \[t2.2.1\] is not fulfilled.
[99]{}
R.R. Gadyl’shin, G.A. Chechkin, The boundary value problem for the Laplacian with rapidly changing type of boundary conditions in a multidimensional domain, Sib. Math. J. 40 (1999) 229–244.
A.A. Grigor’yan, Dimension of spaces of harmonic functions, Math. Notes, 48:5 (1990) 1114–1118.
V.N. Denisov, Necessary and sufficient conditions of stabilization of solutions of the first boundary-value problem for a parabolic equation, J. Math. Sci. 19 (2014) 303–324.
V.A. Kondratiev, O.A. Oleinik, Time-periodic solutions of a second-order parabolic equation in exterior domains, Vestnik Moskov. Univ. Ser. 1. Mat. Mekh., 1985, no. 4, 38–47.
A.A. Kon’kov, Uniqueness theorems for elliptic equations in unbounded domains, PhD Thesis, Moscow Lomonosov State University, 1988.
A.A. Kon’kov, On the dimension of the solution space of elliptic systems in unbounded domains, Sbornik: Mathematics 80:2 (1995) 411–434.
L.D. Kudryavtsev, The solution of the first boundary-value problem for self-adjoint elliptic equations in the case of an unbounded region, Math. USSR-Izv. 1:5 (1967) 1131–1151.
N.S. Landkof, Foundations of modern potential theory, Grundlehren Math. Wiss., vol. 180, Springer-Verlag, New York–Heidelberg, 1972.
O.A. Ladyzhenskaya, N.N. Ural’tseva, Linear and quasilinear elliptic equations, Academic Press, New York-London, 1968.
V.G. Maz’ya, Sobolev spaces, Springer Ser. Soviet Math., Springer-Verlag, Berlin 1985.
V.G. Maz’ya, S.V. Poborchii, On solvability of the Neumann problem in domains with peak, St. Petersburg Math. J. 20:5 (2009) 757–790.
---
abstract: 'It is suggested that the distribution of orbital eccentricities for extrasolar planets is well-described by the Beta distribution. Several properties of the Beta distribution make it a powerful tool for this purpose. For example, the Beta distribution can reproduce a diverse range of probability density functions (PDFs) using just two shape parameters ($a$ and $b$). We argue that this makes it ideal for serving as a parametric model in Bayesian comparative population analysis. The Beta distribution is also uniquely defined over the interval zero to unity, meaning that it can serve as a proper prior for eccentricity when analysing the observations of bound extrasolar planets. Using nested sampling, we find that the distribution of eccentricities for 396 exoplanets detected through radial velocity with high signal-to-noise is well-described by a Beta distribution with parameters $a = 0.867_{-0.044}^{+0.044}$ and $b = 3.03_{-0.16}^{+0.17}$. The Beta distribution is shown to be 3.7 times more likely to represent the underlying distribution of exoplanet eccentricities than the next best model: a Rayleigh + exponential distribution. The same data are also used in an example population comparison utilizing the Beta distribution, where we find that the short- and long-period planets are described by distinct Beta distributions at a confidence of 11.6$\sigma$ and display a signature consistent with the effects of tidal circularization.'
author:
- |
David M. Kipping$^{1,2}$[^1]\
$^{1}$Harvard-Smithsonian Center for Astrophysics, 60 Garden St., Cambridge, MA 02138, USA\
$^{2}$Carl Sagan Fellow
date: 'Accepted 2013 June 3. Received 2013 May 22; in original form 2013 April 12'
title: Parametrizing the exoplanet eccentricity distribution with the Beta distribution
---
\[firstpage\]
methods: statistical — planets and satellites: general
Introduction {#sec:intro}
============
Thanks to the tireless efforts of observers in recent years, there now exists a sizeable library of orbital eccentricities ($e$) for extrasolar planets. Although photometric techniques are starting to emerge for measuring $e$, such as Multibody Asterodensity Profiling (MAP) [@map:2012], this quantity has historically been determined precisely from the radial velocity variations (RV) of the host stars.
This library of $e$ values has several uses and we focus on two particularly useful applications here. The first is that the distribution can be exploited to test and refine theories of planet formation and evolution and offers a window into the possible scattering history of planetary systems [@rasio:1996; @juric:2008; @chatterjee:2008]. Such tests typically operate by taking a theoretical prediction for the distribution of various exoplanet parameters, in particular $e$, and comparing to the measured distribution from, say, RV surveys. This comparison of distributions can also be extended to subpopulations of exoplanets, such as seeking evidence of tidal circularization by comparing the distribution of $e$ between short- and long-period planets. To make a quantitative comparison, one may use the popular non-parametric and frequentist Kolmogorov-Smirnov (KS) test between the two populations. Alternatively, a parametric approach (useful for Bayesian analyses) would be to regress one or more analytic distributions to the observed one. The parameters describing the analytic distribution may then be compared to test for statistically significant differences, or lack thereof.
A second useful application of an observed eccentricity distribution is that it can be used to derive an informative prior on eccentricities in general. Before the availability of this information, observers have been forced to adopt uninformative priors, typically being a uniform prior over $0\leq e<1$, but an informative prior can be preferable in many situations. Some examples we consider are fitting RV data with phase gaps (which can lead to spurious eccentricities), non-detection radial velocities used to place upper limits on $M_P\sin i$ (e.g. Kepler-22b; @borucki:2012), blend analyses of transits requiring some eccentricity prior [@fressin:2011], and fitting transit light curves with an absence of any empirical eccentricity constraints. Using an informative prior naturally includes an observer’s experience of the known distribution, taking into account whether a particular solution is a surprisingly rare answer or a very typical one. Any prior of course requires a parametrization of the observed eccentricity distribution. Furthermore, for use as a prior, the distribution should not reproduce negative eccentricities or hyperbolic orbits (since any periodic transit, RV, astrometric, etc. signal cannot result from such an orbit) and should integrate to unity over the range $0\leq e<1$ to be defined as a proper prior.
From the aforementioned two major applications of the eccentricity distribution, we identify the following key requirements for any such parametrized probability density function (PDF), $\mathrm{P}(e)$:
- $\mathrm{P}(e)$ should be defined over the range $0\leq e<1$ only i.e. no hyperbolic orbits or negative eccentricities
- For a proper prior we require $\int_{e=0}^{1}\mathrm{P}(e)\,\mathrm{d}e=1$ i.e. the distribution is normalized over the defined range
- We require $\mathrm{P}(e)$ to be able to reproduce a wide range of plausible distributions and be as efficient as possible i.e. use few parameters
- The inverse of the cumulative distribution function (CDF) may be easily computed to serve as a practical (i.e. computationally efficient) prior for direct sampling
The Beta Distribution {#sec:beta}
=====================
Properties
----------
The Beta distribution, $\mathrm{P}_{\beta}(e;a,b)$, is a member of the exponential family defined over the range $0\leq e<1$ and satisfies all of the desired criteria described in the previous section. The functional form is expressed in terms of either Gamma functions, or equivalently the Beta function, as
$$\begin{aligned}
\mathrm{P}_{\beta}(e;a,b) &= \frac{ \Gamma(a+b) }{ \Gamma(a) \Gamma(b) } e^{a-1}
(1-e)^{b-1},\nonumber \\
\qquad&= \frac{ 1 }{ \mathrm{B}(a,b) } e^{a-1} (1-e)^{b-1}.\end{aligned}$$
![*Examples of the Beta probability density function, $\mathrm{P}_{\beta}(e;a,b)$, demonstrating the diverse range of distributions the function can produce. Going through red to purple and finally black, we explore from $a=1$ to $a=10$ in unity steps. For each colour we show 10 lines for $b=1$ to $b=10$ in unity steps.*[]{data-label="fig:betaexamples"}](beta_examples.eps){width="8.4"}
The first advantage of this form is that despite being described by just two parameters, $\mathrm{P}_{\beta}(e;a,b)$ is able to produce a wide and diverse range of probability distributions, as illustrated in Fig. \[fig:betaexamples\]. Secondly, the fact that the distribution is defined over the range zero to unity means it is suitable as a proper prior and it is trivial to show that
$$\begin{aligned}
\int_{e=0}^1 \mathrm{P}_{\beta}(e;a,b)\,\mathrm{d}e = 1.\end{aligned}$$
Thirdly, $\mathrm{P}_{\beta}(e;a,b)$ is clearly efficient given that only two parameters ($a$ and $b$) reproduce the wide range of distributions illustrated in Fig. \[fig:betaexamples\]. Finally, it may be shown that the CDF may be inverted as a stable function, which is a requirement for using the Beta distribution as a prior via direct sampling. The CDF is given by
$$\begin{aligned}
\mathrm{C}_{\beta}(e;a,b) &= \frac{\mathrm{B}(e;a,b)}{\mathrm{B}(a,b)} = I_e(a,b).\end{aligned}$$
The inverse function is simply expressed $e=I_z^{-1}(a,b)$. A Beta distribution prior can therefore be invoked by generating $z$ as a random uniform number between zero and unity and computing $e$, thus directly sampling from the prior distribution. This inverse function is widely available in standard programming libraries. We note that @hogg:2010 used a Beta distribution to model the eccentricity distribution of a synthetic population but did not discuss how well the distribution matches the observed distribution nor its potential as a prior.
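As a practical illustration of this direct-sampling prescription (a sketch of ours, not part of the original analysis; it assumes standard NumPy/SciPy routines and uses the shape parameters fitted later in §\[sub:fitall\]):

```python
import numpy as np
from scipy import stats

# Beta shape parameters fitted to the RV sample in Section 3.1
a, b = 0.867, 3.03

rng = np.random.default_rng(seed=2013)

# Direct sampling from the Beta prior: draw z ~ U(0,1) and invert the CDF.
# stats.beta.ppf is the inverse regularized incomplete Beta function,
# i.e. e = I_z^{-1}(a, b).
z = rng.uniform(0.0, 1.0, size=10_000)
e = stats.beta.ppf(z, a, b)

# All samples lie in the physically allowed range 0 <= e < 1
assert (e >= 0.0).all() and (e < 1.0).all()
```

In an MCMC or nested-sampling code, the same inversion can be applied to each unit-cube sample in order to impose the Beta prior on $e$.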
Comparison to other commonly used distributions
-----------------------------------------------
One of the most commonly used PDFs for modelling the distribution of exoplanet eccentricities is a mixture between a Rayleigh distribution and an exponential distribution (e.g. @steffen:2010 [@wang:2011; @map:2012]). The appeal of this mixture is that the Rayleigh component reflects the effects of planet-planet scattering and the exponential component reflects the effects of tidal dissipation [@rasio:1996]. The associated PDF is
$$\begin{aligned}
\mathrm{P}_{\mathrm{Rayleigh}}(e;\alpha,\lambda,\sigma) &= \alpha \lambda \exp\Big[-\lambda e\Big] +
\frac{e (1-\alpha)}{\sigma^2} \exp\Big[-\frac{e^2}{2\sigma^2}\Big],
\label{eqn:rayleigh}\end{aligned}$$
where $\alpha$ gives the relative contributions of the two PDFs, $\lambda$ is the width parameter of the exponential distribution and $\sigma$ is the scale parameter of the Rayleigh distribution.
A major problem with the distribution of equation \[eqn:rayleigh\] is that hyperbolic orbits ($e\geq1$) have a non-zero probability. This is true for both the Rayleigh and exponential components taken individually too. Hyperbolic orbits (i.e. ejected planets) surely do naturally result from planet-planet scattering and planet-synthesis simulations may benefit from using this distribution [@rasio:1996]. However, it is not appropriate to use such a distribution as a prior for fitting, say, the RV time series of an exoplanet. This is because the very fact that a periodic planet signal has been observed precludes $e\geq1$.
@wang:2011 also used a uniform + exponential distribution to serve as a null-hypothesis against the presence of a Rayleigh + exponential distribution. As before, for the purpose of serving as a prior in fitting, the exponential component will reproduce unobservable scenarios.
Another example of a model used recently for exoplanet eccentricities comes from @shen:2008 (hereafter ST08), who used a PDF requiring two shape parameters, $k$ and $a$.
$$\begin{aligned}
\mathrm{P}_{\mathrm{ST08}}(e;k,a) = \frac{1}{k} \Big(\frac{1}{(1+e)^a} - \frac{e}{2^a}\Big).
\label{eqn:shen}\end{aligned}$$
It is easily shown that this distribution is not uniquely defined over the interval $0\leq e<1$.
Example Regressions {#sec:regressions}
===================
Regressing all planets {#sub:fitall}
----------------------
Regressing a PDF to a histogram of eccentricities is precarious in that the results are sensitive to the chosen bin sizes. A more robust approach is to regress to the CDF, which can be calculated at the smallest step sizes possible i.e. the steps between each entry of the sorted list of eccentricities. As an example, we downloaded the eccentricities for all planets (413) discovered via RV from www.exoplanets.org [@wright:2011] on April $4^{\mathrm{th}}$ 2013. We make a cut in RV semi-amplitude of $(K/\sigma_K)>5$ in order to eliminate low signal-to-noise detections, leaving 396 exoplanets.
These eccentricities represent the maximum likelihood estimates of $e$ for each planet. @hogg:2010 argue that using the actual posteriors of $e$ for each planet allows for a more accurate determination of the underlying distribution. Unfortunately, a large, homogeneous and comprehensive database of such posteriors is not available and would require a global reanalysis, which is outside the scope of this short letter. Therefore, we proceed to use the maximum likelihood estimators of $e$ but acknowledge the possibility that this may be a biased indicator [@hogg:2010]. Despite this, we still argue that using the Beta distribution with the fitted parameters presented in this section is a better description of reality than the other suggested distributions, for the reasons described in §\[sec:beta\].
The 396-length vector of eccentricities is first sorted from low to high. Duplicate entries are removed to create a vector representing the minimum step sizes in the CDF. For each entry in this vector, we then count the number of entries in the original eccentricity vector which have a value less than or equal to this. Normalizing by the total number of entries provides the probability and thus the CDF array. For this example, we elected the simple approach of computing errors for each array entry using Poisson counting statistics.
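A minimal sketch of this construction (our illustration; the function and variable names are placeholders rather than the code actually used in this work) is:

```python
import numpy as np

def empirical_cdf(ecc):
    """Empirical CDF of an eccentricity sample, with Poisson counting errors."""
    ecc = np.sort(np.asarray(ecc, dtype=float))          # sort from low to high
    n = ecc.size
    steps = np.unique(ecc)                               # duplicates removed: minimum CDF steps
    counts = np.searchsorted(ecc, steps, side="right")   # number of entries <= each step value
    cdf = counts / n                                     # normalise by the total number of entries
    cdf_err = np.sqrt(counts) / n                        # simple Poisson counting statistics
    return steps, cdf, cdf_err
```

The resulting `(steps, cdf, cdf_err)` arrays are what one would then pass to the regression code.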
For the regression, we used the [[MultiNest]{}]{} package [@feroz:2008; @feroz:2009], which is a multimodal nested sampling algorithm [@skilling:2004]. [[MultiNest]{}]{} not only finds the maximum likelihood shape parameters and their associated posterior distributions, but also computes the Bayesian evidence of each model regressed. This latter functionality obviates the need for using the frequently employed KS test, since Bayesian model selection can be easily performed using the evidences. A major benefit of using a Bayesian approach is that we essentially penalise models for using unnecessary complexity i.e. a built-in Occam’s razor.
For the parameter priors, we adopt modified Jeffrey’s priors for $a$ and $b$ over the range $0$ to $10^2$ with an inflection point at unity to aid in quickly scanning parameter space. After performing the regression, we derive $a =
0.867_{-0.044}^{+0.044}$ and $b = 3.03_{-0.16}^{+0.17}$ (see Fig. \[fig:cdffit\]), where we quote median values and the 68.3% credible intervals.
For comparison, other models were attempted starting with a simple uniform distribution with two free parameters, $e_{\mathrm{min}}$ and $e_{\mathrm{max}}$. We directly sample from uniform priors in $e_{\mathrm{min}}$-$e_{\mathrm{max}}$ parameter space, except those cases where $e_{\mathrm{min}}>e_{\mathrm{max}}$. Next, we regressed the popular Rayleigh + exponential distribution (equation \[eqn:rayleigh\]) using a modified Jeffrey’s prior on $\lambda$ and $\sigma$ between $0$ and $10^2$ with an inflection point at unity. The prior for $\alpha$ was uniform over the interval zero to unity. We also tried a uniform + exponential, where we fixed $e_{\mathrm{min}} = 0$ and fitted $e_{\mathrm{max}}$ as a uniform prior over the interval zero to unity. $\alpha$ and $\sigma$ were treated as before. Finally, we tried the intuitive model of ST08 provided in equation \[eqn:shen\]. For both $a$ and $k$, we used a modified Jeffrey’s prior between $0$ and $10^2$ with an inflection point at unity.
As the results show in Table \[tab:fits\], the preferred model we regressed to the data was that of a Beta distribution. The Beta distribution is favoured over the next best model (the Rayleigh + exponential distribution) with an odds ratio of 3.7 i.e. the Beta distribution is 3.7 times more likely to represent the underlying distribution. As already mentioned, the Beta distribution is defined over the interval zero to unity, unlike the other distributions attempted and is therefore favourable for use as a prior in subsequent analyses too.
Using the maximum likelihood parameters of $a$ and $b$, we generated a synthetic population of $10^5$ exoplanet eccentricities, which one would hope reproduces the observed distribution. Indeed, in Fig. \[fig:pdffit\], this can be seen to be true, with each bin of the observed PDF falling within $\sim1$$\sigma$ of the synthetic one. The Beta distribution is therefore an excellent description of the observed exoplanet eccentricity distribution.
------------------------------------------------------------ -------------------- ------------------------------------- ------------------------------ ---------------------------
**Distribution** **Evidence** **Parameter 1** **Parameter 2** **Parameter 3**
\[0.5ex\] Uniform\[$e_{\mathrm{min}}$,$e_{\mathrm{max}}$\] $-664.761\pm0.053$ $0.90_{-0.66}^{+1.48}\times10^{-4}$ $0.6071_{-0.0037}^{+0.0037}$ -
Beta\[$a$,$b$\] $+374.705\pm0.046$ $0.867_{-0.044}^{+0.044}$ $3.03_{-0.16}^{+0.17}$ -
Rayleigh+Exp\[$\alpha$,$\sigma$,$\lambda$\] $+373.400\pm0.049$ $0.781_{-0.132}^{+0.083}$ $0.272_{-0.036}^{+0.021}$ $5.12_{-0.61}^{+1.44}$
Uniform+Exp\[$\alpha$,$\sigma$,$e_{\mathrm{max}}$\] $+332.506\pm0.054$ $0.1292_{-0.070}^{+0.069}$ $0.2229_{-0.0048}^{+0.0051}$ $0.559_{-0.035}^{+0.037}$
ST08\[$k$,$a$\] $+371.475\pm0.051$ $0.2431_{-0.0059}^{+0.0060}$ $4.33_{-0.18}^{+0.18}$ -
\[1ex\]
------------------------------------------------------------ -------------------- ------------------------------------- ------------------------------ ---------------------------
\[tab:fits\]
![image](row.eps){width="18.0"}
![*Probability density distribution of $e$ for 396 extrasolar planets (black bars), taken from www.exoplanets.org. The error bars shown are computed using Poisson counting statistics. The red-dashed histogram shows a PDF of a synthetic population generated using the maximum likelihood parameters of a Beta distribution regressed to the observed sample. Using just two shape parameters, the fitted Beta distribution is fully consistent with the observed distribution.*[]{data-label="fig:pdffit"}](pdffit.eps){width="8.4"}
Population comparison example {#sub:fitlocal}
-----------------------------
Here, we show how population comparison may be achieved in a Bayesian sense, without the use of the frequentist KS test, by modelling each population with the Beta distribution. In this example, we consider two possible hypotheses which describe the underlying distribution of exoplanet eccentricities:
- $\mathcal{H}_1$: The eccentricity of all exoplanets is described by a single Beta distribution, $\mathrm{P}_{\beta}(a_{\mathrm{global}},b_{\mathrm{global}};e)$
- $\mathcal{H}_2$: The eccentricity of the short-period exoplanets is described by a Beta distribution, $\mathrm{P}_{\beta}(a_{\mathrm{short}},b_{\mathrm{short}};e)$, and that of long-period planets by $\mathrm{P}_{\beta}(a_{\mathrm{long}},b_{\mathrm{long}};e)$
We define “short period” and “long period” planets by computing the median period of the 396 exoplanets analysed in the previous subsection. Two separate CDFs are generated, split by this median period (382.3 days). The CDFs are computed using the same method described in §\[sub:fitall\]. The CDFs are then fitted with global shape parameters for hypothesis $\mathcal{H}_1$ and local shape parameters for hypothesis $\mathcal{H}_2$.
The results of this exercise are shown in Table \[tab:betadual\]. We note that the global fit retrieves slightly different parameters than those found when using a single CDF function. Parameter $a$ is found to differ by 2.4$\sigma$ and $b$ by 1.8$\sigma$. We attribute this difference to the binning procedure where the number of unique eccentricities defines the maximum resolution possible when constructing a CDF. As a result, the combined CDF result will have the higher resolution and thus greater reliability.
The Bayesian evidence yields an 11.6$\sigma$ preference for hypothesis $\mathcal{H}_2$. We therefore conclude that there is a significant difference between the eccentricity distributions of short- and long-period exoplanets. Furthermore, the short-period planets show a larger fraction of low-eccentricity planets relative to the flatter distribution found for long-period planets (see Fig. \[fig:betadual\]). This is consistent with the effects of tidal circularization [@rasio:1996].
--------------------------- -------------------------------------------------------------------------------------------- --------------------- --------------------------- ------------------------ --------------------------- ------------------------
**Hypothesis** **Distribution** **Evidence** **Parameter 1** **Parameter 2** **Parameter 3** **Parameter 4**
\[0.5ex\] $\mathcal{H}_1$ Beta\[$a_{\mathrm{global}}$,$b_{\mathrm{global}}$\] $264.528 \pm 0.044$ $0.711_{-0.044}^{+0.049}$ $2.57_{-0.17}^{+0.19}$ - -
$\mathcal{H}_2$ Beta’\[$a_{\mathrm{long}}$,$b_{\mathrm{long}}$,$a_{\mathrm{short}}$,$b_{\mathrm{short}}$\] $334.654 \pm 0.060$ $1.12_{-0.10}^{+0.11}$ $3.09_{-0.29}^{+0.32}$ $0.697_{-0.481}^{+0.066}$ $3.27_{-0.32}^{+0.35}$
\[1ex\]
--------------------------- -------------------------------------------------------------------------------------------- --------------------- --------------------------- ------------------------ --------------------------- ------------------------
\[tab:betadual\]
![image](betadual.eps){width="16.8"}
Discussion & Conclusions {#sec:discussion}
========================
We have shown how the Beta distribution is a useful tool for parametrizing the distribution of exoplanet orbital eccentricities. The Beta distribution is well suited for this purpose thanks to its diverse range of PDFs using just two shape parameters ($a$ and $b$), its strictly defined interval between $0$ and $1$ as expected for bound exoplanets, and its easily invertible CDF for the purpose of sampling from a Beta distribution prior.
By regressing the known CDF of orbital eccentricities from exoplanets detected through the RV technique at www.exoplanets.org [@wright:2011], we have shown how the Beta distribution is 3.7 times more likely to represent the underlying distribution of orbital eccentricities than the next best competing model: that of a Rayleigh + exponential distribution (see Table \[tab:fits\]). We find that the parameters $a = 0.867_{-0.044}^{+0.044}$ and $b = 3.03_{-0.16}^{+0.17}$ provide an excellent match to the data and are able to reproduce the observed distribution (see Fig. \[fig:pdffit\]). We suggest that observers may use these shape parameters to define an informative eccentricity prior. Sampling from this prior will not only naturally include an observer’s previous experience, but is also more computationally efficient since the distribution is skewed to lower eccentricities where Kepler’s transcendental equation is more expediently evaluated.
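As an illustration of how such a prior can be used in practice, the sketch below (our own; the use of scipy and the variable names are assumptions, not part of the letter) samples eccentricities from the quoted shape parameters via the inverse CDF, which is the operation needed inside a nested-sampling prior transform:

```python
import numpy as np
from scipy import stats

# Fitted shape parameters quoted in the text (median values).
A_SHAPE, B_SHAPE = 0.867, 3.03
ecc_prior = stats.beta(A_SHAPE, B_SHAPE)

def prior_transform(u):
    """Map a unit-cube variable u in [0, 1] to an eccentricity (inverse CDF)."""
    return ecc_prior.ppf(u)

# Direct draws from the prior, e.g. for a synthetic population.
rng = np.random.default_rng(0)
e_samples = ecc_prior.rvs(size=100_000, random_state=rng)

# For illustration: prior probability of a nearly circular orbit (e < 0.1).
p_low = ecc_prior.cdf(0.1)
```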
Finally, we have shown how the Beta distribution may be used for comparing populations of exoplanet eccentricities, with an example application to comparing short- and long-period planets. Here, we find that a two-population model is strongly favoured at more than $11$$\sigma$, and that short-period planets have a higher proportion of low-eccentricity planets whereas long-period planets exhibit a flatter distribution, consistent with tidal circularization (see Fig. \[fig:betadual\]).
Acknowledgements {#acknowledgements .unnumbered}
================
DMK has been supported by the NASA Carl Sagan Fellowships. Thanks to Joel Hartman and Kevin Schlaufman for useful discussions in preparing this manuscript. This research has made use of the Exoplanet Orbit Database and the Exoplanet Data Explorer at exoplanets.org.
[99]{}
Borucki, W. J. et al., 2012, ApJ, 745, 120
Chatterjee, S., Ford, E. B., Matsumura, S. & Rasio, F. A., 2008, ApJ, 686, 580
Jurić, M. & Tremaine, S., 2008, ApJ, 686, 603
Feroz, F. & Hobson, M. P., 2008, MNRAS, 384, 449
Feroz, F., Hobson, M. P. & Bridges, M., 2009, MNRAS, 398, 1601
Fressin, F. et al., 2011, ApJS, 197, 5
Hogg, D. W., Myers, A. & Bovy, J., 2010, ApJ, 725, 2166
Kipping, D. M., Dunn, W. R., Jasinski, J. M. & Manthri, V. P., 2012, MNRAS, 421, 1166
Rasio, F. A. & Ford, E. B., 1996, Science, 274, 954
Shen, Y. & Turner, E. L., 2008, ApJ, 685, 553
Skilling, J., 2004, in Fischer R., Preuss R., Toussaint U. V., eds, American Institute of Physics Conference Series, Vol. 735, Nested Sampling, pp 395–405
Steffen, J. H. et al., 2010, ApJ, 725, 1226
Wang, J. & Ford, E. B., 2011, MNRAS, 418, 1822
Wright, J. T. et al., 2011, PASP, 123, 412
\[lastpage\]
[^1]: E-mail: dkipping@cfa.harvard.edu
|
---
abstract: 'Fix a finite semigroup $S$ and let $a_1,\ldots,a_k, b$ be tuples in a direct power $S^n$. The subpower membership problem ([[SMP]{.nodecor}]{}) asks whether $b$ can be generated by $a_1,\ldots,a_k$. If $S$ is a finite group, then there is a folklore algorithm that decides this problem in time polynomial in $n k$. For semigroups this problem always lies in [[PSPACE]{.nodecor}]{}. We show that the ${\textnormal{SMP}}{}$ for a full transformation semigroup on $3$ or more letters is actually [[PSPACE]{.nodecor}]{}-complete, while on $2$ letters it is in [[P]{.nodecor}]{}. For commutative semigroups, we provide a dichotomy result: if a commutative semigroup $S$ embeds into a direct product of a Clifford semigroup and a nilpotent semigroup, then ${\textnormal{SMP}}(S)$ is in P; otherwise it is [[NP]{.nodecor}]{}-complete.'
address:
- 'School of Computing Science, Simon Fraser University, Burnaby BC, Canada'
- 'Theoretical Computer Science, Faculty of Mathematics and Computer Science, Jagiellonian University, Poland'
- 'Institute for Algebra, Johannes Kepler University Linz, Austria & Department of Mathematics, CU Boulder, USA'
- 'Institute for Algebra, Johannes Kepler University Linz, Austria & Department of Mathematics, CU Boulder, USA'
author:
- Andrei Bulatov
- Marcin Kozik
- Peter Mayr
- Markus Steindl
title: The subpower membership problem for semigroups
---
Introduction
============
Deciding membership is a basic problem in computer algebra. For permutation groups given by generators, it can be solved in polynomial time using Sims’ stabilizer chains [@Furst1980]. For transformation semigroups, membership is ${\textnormal{PSPACE}}$-complete by a result of Kozen [@Ko:LBNP].
In this paper we study a particular variation of the membership problem that was proposed by Willard in connection with the study of constraint satisfaction problems (CSP) [@IM:TLAA; @Willard2007]. Fix a finite algebraic structure ${S}$ with finitely many basic operations. Then the *subpower membership problem* ([[SMP]{.nodecor}]{}) for ${S}$ is the following decision problem:

**SMP($S$)**

Input: $\{a_1,\ldots,a_k\} \subseteq S^n$, $b \in S^n$.

Problem: Is $b$ in the subalgebra ${ {\langle a_1,\ldots,a_k \rangle} }$ of $S^n$ generated by $\{a_1,\ldots,a_k\}$?

For example, for a one-dimensional vector space $S$ over a field $F$, ${\textnormal{SMP}}(S)$ asks whether a vector $b \in F^n$ is spanned by vectors $a_1,\ldots,a_k \in F^n$.
Note that ${\textnormal{SMP}}(S)$ has a positive answer iff there exists a $k$-ary term function $t$ on $S$ such that $t( a_1,\ldots,a_k ) = b$, that is $$\label{eq:tabi}
t( a_{1i},\ldots,a_{ki} ) = b_i \quad\text{for all} \quad i\in\{1,\dots,n\}.$$ Hence ${\textnormal{SMP}}(S)$ is equivalent to the following problem: Is the partial operation that is defined on an $n$-element subset of $S^k$ by (\[eq:tabi\]) the restriction of a term function on ${S}$?
Note that the input size of ${\textnormal{SMP}}(S)$ is essentially $n(k+1)$. Since the size of ${ {\langle a_1,\ldots,a_k \rangle} }$ is limited by $|S|^n$, one can enumerate all elements in time exponential in $n$ using a straightforward closure algorithm. This means that ${\textnormal{SMP}}(S)$ is in EXPTIME for each algebra $S$. Kozik constructed a class of algebras which actually have EXPTIME-complete subpower membership problems [@Kozik2008].
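A minimal Python sketch of such a closure computation, assuming the finite semigroup is given by its multiplication table; this illustrates the exponential-time baseline only and is not an algorithm from the paper:

```python
def smp_by_closure(mult, generators, target):
    """Naive closure algorithm for SMP(S).

    mult       : dict, mult[(x, y)] = x*y in the finite semigroup S
    generators : list of tuples a_1,...,a_k in S^n
    target     : tuple b in S^n
    Time and space may grow up to |S|^n, i.e. exponential in n.
    """
    def prod(u, v):  # componentwise product in S^n
        return tuple(mult[(x, y)] for x, y in zip(u, v))

    closure = set(generators)
    frontier = list(closure)
    while frontier:
        u = frontier.pop()
        for g in generators:          # every element of <A> arises by right-multiplying
            w = prod(u, g)            # some generator repeatedly by generators
            if w not in closure:
                if w == target:
                    return True
                closure.add(w)
                frontier.append(w)
    return target in closure
```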
Still for certain structures the [[SMP]{.nodecor}]{} might be considerably easier. For $S$ a vector space, the [[SMP]{.nodecor}]{} can be solved by Gaussian elimination in polynomial time. For groups the [[SMP]{.nodecor}]{} is in [[P]{.nodecor}]{} as well by an adaptation of permutation group algorithms [@Furst1980; @Zweckinger2013]. Even for certain generalizations of groups and quasigroups the [[SMP]{.nodecor}]{} can be shown to be in [[P]{.nodecor}]{} [@Mayr2012].
In the current paper we start the investigation of algorithms for the [[SMP]{.nodecor}]{} of finite semigroups and its complexity. We will show that the [[SMP]{.nodecor}]{} for arbitrary semigroups is in ${\textnormal{PSPACE}}$ in Theorem \[thm:pspace\]. For the full transformation semigroups $T_n$ on $n$ letters we will prove the following in Section \[se:sg\].
\[thm:tn\] ${\textnormal{SMP}}(T_n)$ is [[PSPACE]{.nodecor}]{}-complete for all $n\geq 3$, while ${\textnormal{SMP}}(T_2)$ is in [[P]{.nodecor}]{}.
This is the first example of a finite algebra with [[PSPACE]{.nodecor}]{}-complete [[SMP]{.nodecor}]{}. As a consequence we can improve a result of Kozen from [@Ko:LBNP] on the intersection of regular languages in Corollary \[co:automata\].
Moreover the following is the smallest semigroup and the first example of an algebra with ${\textnormal{NP}}$-complete [[SMP]{.nodecor}]{}.
Let $Z_2^1 := \{ 0,a,1 \}$ denote the 2-element null semigroup adjoined with a $1$, i.e., $Z_2^1$ has the following multiplication table: $$\begin{array}{r|lll}
Z_2^1 & 0 & a & 1 \\
\hline
0 & 0 & 0 & 0 \\
a & 0 & 0 & a \\
1 & 0 & a & 1 \\
\end{array}$$ Then ${\textnormal{SMP}}( Z_2^1 )$ is [[NP]{.nodecor}]{}-complete. ${\textnormal{NP}}$-hardness follows from Lemma \[lma:not\_clifford\_nilpotent\_sg\_0\] by encoding the exact cover problem. That the problem is in ${\textnormal{NP}}$ for commutative semigroups is proved in Lemma \[lma:smp\_commutative\_semigroups\].
Generalizing from this example we obtain the following dichotomy for commutative semigroups.
\[thm:tfae\_clifford\_nilpotent\_commut\] Let $S$ be a finite commutative semigroup. Then ${\textnormal{SMP}}({S})$ is in [[P]{.nodecor}]{} if one of the following equivalent conditions holds:
1. \[lma:tfae\_clifford\_nilpotent\_list\_2\] $S$ is an ideal extension of a [Clifford semigroup]{} by a nilpotent semigroup;
2. \[lma:tfae\_clifford\_nilpotent\_list\_3\] the ideal generated by the idempotents of ${S}$ is a [Clifford semigroup]{};
3. \[lma:tfae\_clifford\_nilpotent\_list\_3b\] for every idempotent $e \in S$ and every $a \in S$ where $ea = a$ the element $a$ generates a group;
4. \[lma:tfae\_clifford\_nilpotent\_list\_1b\] $S$ embeds into the direct product of a Clifford semigroup and a nilpotent semigroup.
Otherwise ${\textnormal{SMP}}({S})$ is [[NP]{.nodecor}]{}-complete.
Theorem \[thm:tfae\_clifford\_nilpotent\_commut\] is proved in Section \[sec:SMP\_commut\]. Our way towards this result starts with describing a polynomial time algorithm for the [[SMP]{.nodecor}]{} for Clifford semigroups in Section \[sec:clifford\_nilpotent\]. In fact in Corollary \[thm:clifford\_nilpotent\] we will show that ${\textnormal{SMP}}({S})$ is in [[P]{.nodecor}]{} for every (not necessarily commutative) ideal extension of a [Clifford semigroup]{} by a nilpotent semigroup.
Throughout the rest of the paper, we write ${[n]}:=\{1,\dots,n\}$ for $n\in{\mathbb{N}}$. Also a tuple $a \in S^n$ is considered as a function $a\colon [n] \rightarrow S$. So the $i$-th coordinate of this tuple is denoted by $a(i)$ rather than $a_i$.
Full transformation semigroups {#se:sg}
==============================
First we give an upper bound on the complexity of the subpower membership problem for arbitrary finite semigroups.
\[thm:pspace\] The [[SMP]{.nodecor}]{} for any finite semigroup is in [[PSPACE]{.nodecor}]{}.
Let $S$ be a finite semigroup. We show that $$\label{eq:NPS}
{\textnormal{SMP}}(S) \text{ is in nondeterministic linear space.}$$ To this end, let $A\subseteq S^n,\,b\in S^n$ be an instance of ${\textnormal{SMP}}(S)$. If $b\in{ {\langle A \rangle} }$, then there exist $a_1,\dots,a_m\in A$ such that $b = a_1\cdots a_m$.
Now we pick the first generator $a_1 \in A$ nondeterministically and start with $c := a_1$. Pick the next generator $a \in A$ nondeterministically, compute $c := c\cdot a$, and repeat until we obtain $c = b$. Clearly all computations can be done in space linear in $n\cdot |A|$. This proves (\[eq:NPS\]). By a result of Savitch [@Savitch1970] this implies that ${\textnormal{SMP}}(S)$ is in deterministic quadratic space.
The first part of Theorem \[thm:tn\] follows from the next result since $T_3$ embeds into $T_n$ for all $n\geq 3$.
\[thm:t3\] ${\textnormal{SMP}}(T_3)$ is [[PSPACE]{.nodecor}]{}-complete.
Kozen [@Ko:LBNP] showed that the following decision problem is [[PSPACE]{.nodecor}]{}-complete: given $n$ and functions $f,f_1,\dotsc,f_m:{[n]}\rightarrow{[n]}$ as input, decide whether $f$ can be obtained as a composition [^1] of the $f_i$’s. The size of the input for this problem is $(m+1)n\log n$.
To encode this problem into ${\textnormal{SMP}}(T_3)$ let $T_3$ be the full transformation semigroup of $0,1$, and $\infty$. Transformations act on their arguments from the right. We identify $g$, an element of $T_3$, with the triple $(0^g,1^g,\infty^g)$ and name a number of elements of $T_3$:
- ${\bm 0}= (0,0,\infty)$ and ${\bm 1}= (1,1,\infty)$ are used to encode the functions ${[n]}\rightarrow{[n]}$;
- ${\operatorname{\textbf{id}}}= (0,1,\infty),\,{{\bm{0\mapsto0}}}= (0,\infty,\infty),\,{{\bm{0\mapsto1}}}= (1,\infty,\infty)$, and ${{\bm{1\mapsto0}}}= (\infty,0,\infty)$ are used to model the composition.
We call an element of $T_3$ [*bad*]{} if it sends $0$ or $1$ to $\infty$; and we call a tuple of elements [*bad*]{} if it is bad on at least one position. Note that all the named elements send $\infty$ to $\infty$. So multiplying a bad element on the right by any of the named elements yields a bad element again.
Let $n$ and $f,f_1,\dots, f_m$ be an input to Kozen’s composition problem. We will encode it as [[SMP]{.nodecor}]{} on $n^2+mn$ positions. We start with an auxiliary notation. Every function $g\colon{[n]}\rightarrow {[n]}$ can be encoded by a [*mapping tuple*]{} $m_g\in T_3^{n^2+mn}$ as follows: $$m_g(x) := \begin{cases} {\bm 1}& \text{if } x\in\{ 1^g, n+2^g,\dots,(n-1)n+n^g \}, \\
{\bm 0}& \text{otherwise}. \end{cases}$$ Hence the first $n$ positions encode the image of $1$, the next $n$ positions the image of $2$, and so on. The final $mn$ positions are used to distinguish mapping tuples from other tuples that we will define shortly. Note that mapping tuples are never bad.
We introduce the generators of the subalgebra of $T_3^{n^2+mn}$ gradually. The first generator is the mapping tuple $m_1$ for the identity on ${[n]}$.
Next, for each $f_i$ we add the [*choice tuple*]{} $c_i$ defined as $$c_i(x) := \begin{cases} {\operatorname{\textbf{id}}}& \text{if } x\in{[n^2]}, \\
{{\bm{0\mapsto1}}}& \text{if } x\in\{n^2+(i-1)n + 1,\dotsc,n^2+(i-1)n+n\}, \\
{{\bm{0\mapsto0}}}& \text{otherwise}. \end{cases}$$ Multiplying the mapping tuple for $g$ on the right by the choice tuple for $f_i$ corresponds to deciding that $g$ will be composed with $f_i$.
Finally, for each $f_i$ and $j,k\in{[n]}$ we add the [*application tuple*]{} $a_{ijk}$ with the semantics $$\text{apply $f_i$ on coordinate $j$ to $k$.}$$ If $k\neq k^{f_i}$, then $$a_{ijk}(x) := \begin{cases} {{\bm{1\mapsto0}}}& \text{if } x\in\{(j-1)n+k,n^2+(i-1)n+j\}, \\
{{\bm{0\mapsto1}}}& \text{if } x=(j-1)n+k^{f_i}, \\
{\operatorname{\textbf{id}}}& \text{otherwise}. \end{cases}$$ If $k = k^{f_i}$, then $$a_{ijk}(x) := \begin{cases} {{\bm{1\mapsto0}}}& \text{if } x=n^2+(i-1)n+j, \\
{\operatorname{\textbf{id}}}& \text{otherwise}. \end{cases}$$ Multiplication by the application tuples computes the composition decided by the choice tuples. More precisely, for $g\in T_n$ and $f_i$ we have $$\label{eq:mgfi}
m_{gf_i} = m_g c_i a_{i11^g}\cdots a_{inn^g}.$$ Here multiplying $m_g$ by $c_i$ turns the $i$-th block of $n$ positions among the last $nm$ positions of $m_g$ to ${\bm 1}$. The following multiplication with $a_{i11^g}\cdots a_{inn^g}$ resets these $n$ positions to ${\bm 0}$ again. At the same time, in the first $n$ positions of $m_gc_i$ the ${\bm 1}$ gets moved from position $1^g$ to $(1^g)^{f_i}$, in the next $n$ positions the ${\bm 1}$ gets moved from $n+2^g$ to $n+(2^g)^{f_i}$, and so on. Hence we obtain the mapping tuple of $gf_i$, and (\[eq:mgfi\]) is proved.
It remains to choose an element which will be generated by all these tuples iff $f$ is a composition of $f_i$’s. This final element is the mapping tuple for $f$. We claim $$\label{eq:fiffmf}
f\in\langle f_1,\dots, f_m\rangle \text{ iff } m_f\in\langle m_1, c_1,\dots,c_m, a_{111},\dots,a_{mnn}\rangle.$$ The implication from left to right is immediate from our observation (\[eq:mgfi\]). For the converse we analyze a minimal product of generator tuples which yields $m_f$ and show that it essentially follows the pattern from (\[eq:mgfi\]). Recall that no partial product starting in the leftmost element of the product can be bad. In particular the leftmost element itself needs to be $m_1$ – the only generator which is not bad. If $m_1$ occurs anywhere else, then the product could be shortened as any tuple which is not bad multiplied by $m_1$ yields $m_1$ again. So we can disregard this case.
The second element from the left cannot be an application tuple as the ${{\bm{1\mapsto0}}}$ on one of the last $mn$ positions would turn the result bad. Thus the only meaningful option is the choice tuple for some function $f_i$. Multiplying $m_1$ by $c_i$ turns $n$ positions (among the last $mn$ positions) of $m_1$ to ${\bm 1}$.
The third element from the left cannot be a choice tuple: a multiplication by a choice tuple produces a bad result unless the last $mn$ positions of the left tuple are all ${\bm 0}$. So before any more choice tuples occur in our product, all $n$ ${\bm 1}$’s in the last $mn$ positions have to be reset to ${\bm 0}$. This can only be achieved by multiplying with $n$ application tuples of the form $a_{ijk_j}$ for $j\in{[n]}$. Focusing on the first $n^2$ positions of $m_1c_i$, we see that necessarily $k_j = j$ for all $j$. Hence the first $n+2$ factors of our product are $$m_1 c_i a_{i11}\cdots a_{inn} = m_{f_i}.$$ Note that the order of the application tuples does not matter.
Continuing this reasoning with the mapping tuple for $f_i$ (instead of the identity), we see that the next $n+1$ factors of our product are some $c_j$ followed by $n$ application tuples $a_{j11^{f_i}},\dots,a_{jnn^{f_i}}$. Invoking (\[eq:mgfi\]) we then get the mapping tuple for $f_i f_j$. In the end we get a mapping tuple for $f$ iff $f$ can be obtained as a composition of the $f_i$’s and the identity. This proves (\[eq:fiffmf\]).
The number of tuples we input into [[SMP]{.nodecor}]{} is $mn^2+m + 2$, so the total size of the input is $\mathcal{O}((mn^2+m+2)(n^2+mn))$, that is, polynomial with respect to the size of the input of the original problem. Thus Kozen’s composition problem has a polynomial time reduction to ${\textnormal{SMP}}(T_3)$ and the latter is [[PSPACE]{.nodecor}]{}-hard as well. Together with Theorem \[thm:pspace\] this yields the result.
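The bookkeeping in this encoding is easy to get wrong, so a small numerical check can be reassuring. The sketch below (our own illustration, not part of the reduction or its analysis) represents elements of $T_3$ as triples of images of $(0,1,\infty)$, builds mapping, choice and application tuples for a toy instance with 0-based indices, and verifies the identity (\[eq:mgfi\]) used in the proof:

```python
import random

INF = 2                                        # encode infinity as 2; T3 acts on {0, 1, INF}

def compose(g, h):                             # right action: x^(g*h) = (x^g)^h
    return tuple(h[g[x]] for x in range(3))

ZERO, ONE = (0, 0, INF), (1, 1, INF)           # the elements bold-0 and bold-1
IDT = (0, 1, INF)
Z2Z, Z2O, O2Z = (0, INF, INF), (1, INF, INF), (INF, 0, INF)   # 0->0, 0->1, 1->0

def tuple_prod(u, v):                          # componentwise product in T3^N
    return tuple(compose(a, b) for a, b in zip(u, v))

n, m = 3, 2                                    # toy sizes; indices are 0-based below
fs = [[random.randrange(n) for _ in range(n)] for _ in range(m)]   # f_1,...,f_m
N = n * n + m * n

def mapping_tuple(g):                          # g : [n] -> [n]
    marked = {j * n + g[j] for j in range(n)}
    return tuple(ONE if x in marked else ZERO for x in range(N))

def choice_tuple(i):
    out = [IDT] * (n * n) + [Z2Z] * (m * n)
    for x in range(n * n + i * n, n * n + i * n + n):
        out[x] = Z2O
    return tuple(out)

def application_tuple(i, j, k):                # "apply f_i on coordinate j to k"
    out = [IDT] * N
    out[n * n + i * n + j] = O2Z
    if fs[i][k] != k:
        out[j * n + k] = O2Z
        out[j * n + fs[i][k]] = Z2O
    return tuple(out)

g = [random.randrange(n) for _ in range(n)]
for i in range(m):                             # check m_{g f_i} = m_g c_i a_{i,1,g(1)} ... a_{i,n,g(n)}
    prod = tuple_prod(mapping_tuple(g), choice_tuple(i))
    for j in range(n):
        prod = tuple_prod(prod, application_tuple(i, j, g[j]))
    assert prod == mapping_tuple([fs[i][g[j]] for j in range(n)])
```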
Next we show the second part of Theorem \[thm:tn\].
\[thm:t2\] ${\textnormal{SMP}}(T_2)$ is in [[P]{.nodecor}]{}.
Let the underlying set of $T_2$ be $\{0,1\}$ and the constants of $T_2$ be denoted by ${\bm 0}$ and ${\bm 1}$ and the non-constants by ${\operatorname{\textbf{id}}}$ and ${\textbf{not}}$. For a tuple $a\in T_2^n$ the [*constant part*]{} (or [**cp**]{}) of $a$ is the set of indices $i\in{[n]}$ such that $a(i)\in T_2$ is a constant, the [*non-constant part*]{} (or [**ncp**]{}) are the remaining $i$’s.
Let $a_1,\dotsc,a_k,b\in T_2^n$ be an instance of ${\textnormal{SMP}}(T_2)$. Before starting the algorithm we preprocess the input by removing all the $a_i$’s with [**cp**]{} not included in [**cp**]{} of $b$. It is clear that the removed tuples cannot occur in a product that yields $b$. Next we call the function $\textbf{SMP}(a_1,\dotsc,a_k,b)$ from Algorithm \[alg:t2\].
let $a_1,\dotsc,a_\ell$ be the $a_i$’s with empty [**cp**]{}\
and $a_{\ell+1},\dotsc,a_k$ with non-empty [**cp**]{} \[t2:z2\] \[t2:z2true\] \[t2:z2fi\] \[t2:mainloop\] let $a'_1,\dotsc,a'_\ell$ be projections of $a_1,\dotsc,a_\ell$ to [**cp**]{}of $a_i$ let $b'$ (defined on [**cp**]{}of $a_i$) be $b'(j) = {\operatorname{\textbf{id}}}$ if $a_i(j)=b(j)$ and $b'(j)={\textbf{not}}$ else \[t2:innerif\] assume $b' = a'_{j_1}\dotsb a'_{j_m}$ for $j_1,\dots,j_m\in{[\ell]}$ \[t2:innerif2\] set $c:=ba_{j_1}\dotsb a_{j_m}$ let $a''_1,\dotsc,a''_k,c''$ be projections of $a_1,\dotsc,a_k,c$ to [**ncp**]{}of $a_i$ \[t2:recursion\] \[t2:false\]
We show the correctness of Algorithm \[alg:t2\] by induction on the size of [**cp**]{} of $b$. Note that if $b$ has empty [**cp**]{}, then, by the preprocessing, each $a_i$ has empty [**cp**]{} as well and the problem reduces to SMP over ${\mathbb Z}_2$ (which is solvable in polynomial time by Gaussian elimination). This is the essence behind lines \[t2:z2\]–\[t2:z2fi\] of the algorithm.
If $b$ has non-empty [**cp**]{}, we first assume that $b = a_{j_1}\dotsb a_{j_m}$, and let $a_{j_{p}}$ be the last element of the product with non-empty [**cp**]{}. The suffix $a_{j_{(p+1)}}\dotsb a_{j_m}$ consists of elements of empty [**cp**]{} which multiply $a_{j_{p}}$, on its [**cp**]{}, to $b$. This means that the condition on line \[t2:innerif\] will be satisfied for some $i$ (maybe with $i=j_{p}$, but maybe with some other $i$). Since $b$ is generated by $a_1,\dots,a_k$ by assumption, so is $c = ba_{j_1}\dotsb a_{j_m}$ (for any sequence computed in a successful test in line \[t2:innerif\]). Now $c''$ is just a projection of $c$, and the recursive call in line \[t2:recursion\] will return the correct answer **TRUE** by the induction assumption.
Next assume that $b$ is not generated by $a_1,\dots,a_k$. Seeking a contradiction we suppose that the algorithm returns **TRUE**. That is, the recursive call in line \[t2:recursion\], in the loop iteration at some $i$, answers **TRUE**. Consequently $b' = a'_{j_1}\dotsb a'_{j_m}$ for some $j_1,\dots,j_m\in{[\ell]}$ by line \[t2:innerif2\] and $c''= a''_{i_1}\dots a''_{i_p}$ for some $i_1,\dots,i_p\in{[k]}$ by the induction assumption. We claim that $$\label{eq:bai}
b = a_{i_1}\dotsb a_{i_{p}}a_ia_ia_{j_1}\dotsb a_{j_m}.$$ Indeed on indices from the [**cp**]{} of $a_i$ only the last $m+1$ elements matter and they provide proper values by the choice of the sequence $j_1,\dotsc,j_m$ computed by the algorithm. For the [**ncp**]{} of $a_i$ the recursive call provides $c$. Since $a_ia_i$ is ${\operatorname{\textbf{id}}}$ on [**ncp**]{} of $a_i$ and $a_{j_1}\dotsb a_{j_m}a_{j_1}\dotsb a_{j_m}$ is a tuple of ${\operatorname{\textbf{id}}}$’s (since all the tuples in the product have empty [**cp**]{}’s) we obtain $b$ on [**ncp**]{} of $a_i$ as well. This proves (\[eq:bai\]) and contradicts our assumption that $b$ is not generated by $a_1,\dots,a_k$. Hence the algorithm returns **FALSE** in this case.
The complexity of the algorithm is clearly polynomial: The function $\textbf{SMP}$ works in polynomial time, and the depth of recursion is bounded by $n$ as during each recursive call we lose at least one coordinate.
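The base case of this proof (empty [**cp**]{}, reduction to SMP over $\mathbb{Z}_2$) can be made concrete with a few lines of Gaussian elimination over GF(2). The sketch below is our own illustration, identifying ${\operatorname{\textbf{id}}}$ with $0$ and ${\textbf{not}}$ with $1$, so that a product of tuples becomes a sum of 0/1 vectors modulo 2:

```python
import numpy as np

def in_gf2_span(vectors, target):
    """Decide whether `target` lies in the GF(2) span of `vectors`.

    vectors : list of equal-length 0/1 arrays (id -> 0, not -> 1 per coordinate)
    target  : 0/1 array of the same length
    For a nonempty generating set this span equals the generated subsemigroup
    of (Z_2)^n, since any generator used twice cancels to the zero vector.
    """
    if not vectors:
        return False                      # empty generating set generates nothing
    M = np.array(vectors, dtype=np.uint8) % 2
    t = np.array(target, dtype=np.uint8) % 2
    rows, n = M.shape
    row = 0
    for col in range(n):
        pivot = next((r for r in range(row, rows) if M[r, col]), None)
        if pivot is None:
            continue
        M[[row, pivot]] = M[[pivot, row]]  # move the pivot row into place
        for r in range(rows):
            if r != row and M[r, col]:
                M[r] ^= M[row]             # clear column `col` everywhere else
        if t[col]:
            t ^= M[row]                    # reduce the target by the pivot row
        row += 1
    return not t.any()                     # zero remainder iff target is in the span
```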
For proving that membership for transformation semigroups is [[PSPACE]{.nodecor}]{}-complete, Kozen first showed that the following decision problem is [[PSPACE]{.nodecor}]{}-complete [@Ko:LBNP].

**Automata Intersection Problem**

Input: deterministic finite state automata $F_1,\dots,F_n$ with common alphabet $\Sigma$.

Problem: Is there a word in $\Sigma^*$ that is accepted by all of $F_1,\dots,F_n$?

Using the well-known connection between automata and transformation semigroups we obtain the following stronger version of Kozen’s result.
\[co:automata\] The Automata Intersection Problem restricted to automata with $3$ states is [[PSPACE]{.nodecor}]{}-complete.
The Automata Intersection Problem is in ${\textnormal{PSPACE}}$ by [@Ko:LBNP]. For [[PSPACE]{.nodecor}]{}-hardness we reduce ${\textnormal{SMP}}(T_3)$ to our problem. Let $T_3$ act on $\{0,1,\infty\}$, and let $a_1,\dots,a_k,b\in T_3^n$ be the input of ${\textnormal{SMP}}(T_3)$.
For each position $i\in [n]$ we introduce three automata $F_i^0,\,F_i^1$, and $F_i^{\infty}$ each with the set of states $\{0,1,\infty\}$. These automata are responsible for storing the image of $0,\,1$, and $\infty$, respectively, under the transformation on position $i$. The initial state of $F_i^j$ is $j$, its accepting state $j^{b(i)}$. The alphabet of the automata is $\{a_1,\dots,a_k\}$. For the automaton $F_i^j$ the letter $a_\ell$ maps the state $x$ to $x^{a_\ell(i)}$.
Now all the $3n$ automata accept a common word $a_{i_1}\dots a_{i_p}$ over $\{a_1,\dots,a_k\}$ iff $j^{a_{i_1}\dots a_{i_p}(i)} = j^{b(i)}$ for all $i\in [n], j\in \{0,1,\infty\}$. The latter is equivalent to $b\in\langle a_1,\dots,a_k\rangle$. Thus ${\textnormal{SMP}}(T_3)$ reduces to the Automata Intersection Problem for automata with $3$ states which is then [[PSPACE]{.nodecor}]{}-hard by Theorem \[thm:t3\].
Nilpotent semigroups
====================
A semigroup ${S}$ is called *$d$-nilpotent* for $d \in \mathbb{N}$ if $$\forall x_1,\ldots,x_d, y_1,\ldots,y_d \in S \colon x_1 \dotsm x_d = y_1 \dotsm y_d.$$ It is called *nilpotent* if it is $d$-nilpotent for some $d \in \mathbb{N}$. We let $0 := x_1 \dotsm x_d$ denote the zero element of a $d$-nilpotent semigroup ${S}$.
An *ideal extension* of a semigroup $I$ by a semigroup $Q$ with zero is a semigroup $S$ such that $I$ is an ideal of $S$ and the Rees quotient semigroup $S/I$ is isomorphic to $Q$.
$A \subseteq T^n,\, b \in T^n$. \[alg:nilpotent\_l01\] \[alg:nilpotent\_l02\] \[alg:nilpotent\_l03\] \[alg:nilpotent\_l04\] \[alg:nilpotent\_l05\] \[alg:nilpotent\_l08\] \[alg:nilpotent\_l09\] $B := \{ a_1 \cdots a_k \in S^n \mid k < 2d, a_1, \dots, a_k\in A \}$ \[alg:nilpotent\_l11\] \[alg:nilpotent\_l12\]
\[thm:nilpextension\] Let $T$ be an ideal extension of a semigroup $S$ by a $d$-nilpotent semigroup $N$. Then Algorithm \[alg:nilpotent\] reduces ${\textnormal{SMP}}(T)$ to ${\textnormal{SMP}}(S)$ in polynomial time.
*Correctness of Algorithm \[alg:nilpotent\]*. Let $A\subseteq T^n,\, b\in T^n$ be an instance of ${\textnormal{SMP}}(T)$.
Case $b\not\in S^n$. Since $T/S$ is $d$-nilpotent, a product that is equal to $b$ cannot have more than $d-1$ factors. Thus Algorithm \[alg:nilpotent\] verifies in lines \[alg:nilpotent\_l02\] to \[alg:nilpotent\_l08\] whether there are $\ell < d$ and $a_1,\ldots,a_\ell \in A$ such that $b = a_1 \dotsm a_\ell$. In line \[alg:nilpotent\_l05\], Algorithm \[alg:nilpotent\] returns true if such factors exist. Otherwise false is returned in line \[alg:nilpotent\_l09\].
Case $b\in S^n$. Let $B$ be as defined in line \[alg:nilpotent\_l11\]. We claim that $$\label{eq:nilpotent}
b\in{ {\langle A \rangle} } \text{ iff } b\in{ {\langle B \rangle} }.$$ The “if”-direction is clear. For the converse implication assume $b\in{ {\langle A \rangle} }$. Then we have $\ell\in{\mathbb{N}}$ and $a_1,\dots,a_\ell\in A$ such that $b = a_1\cdots a_\ell$. If $\ell < 2d$, then $b\in B$ and we are done. Assume $\ell \geq 2d$ in the following. Let $q\in{\mathbb{N}}$ and $r\in\{0,\dots,d-1\}$ such that $\ell = qd+r$. For $0\leq j \leq q-2$ define $b_j := a_{jd+1}\cdots a_{jd+d}$. Further $b_{q-1} := a_{(q-1)d+1}\dotsm a_{\ell}$. Since $T/S$ is $d$-nilpotent, any product of $d$ or more elements from $A$ is in $S^n$. In particular $b_0,\dots, b_{q-1}$ are in $B$. Since $$b = b_0 \cdots b_{q-1},$$ we obtain $b\in{ {\langle B \rangle} }$. Hence (\[eq:nilpotent\]) is proved.
Since Algorithm \[alg:nilpotent\] returns $b\in{ {\langle B \rangle} }$ in line \[alg:nilpotent\_l12\], its correctness follows from (\[eq:nilpotent\]).
*Complexity of Algorithm \[alg:nilpotent\]*. In lines \[alg:nilpotent\_l02\] to \[alg:nilpotent\_l08\], the computation of each product $a_1 \dotsm a_\ell$ requires $n(\ell-1)$ multiplications in $S$. There are $|A|^\ell$ such products of length $\ell$. Thus the number of multiplications in $S$ is at most $\sum_{\ell=2}^{d-1} n(\ell-1) |A|^\ell$. This expression is bounded by a polynomial of degree $d-1$ in the input size $n(|A|+1)$.
Similarly the size of $B$ and the effort for computing its elements is bounded by a polynomial of degree $2d-1$ in $n(|A|+1)$. Hence Algorithm \[alg:nilpotent\] runs in polynomial time.
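A compact way to phrase this reduction in code: check short products directly when $b \notin S^n$, and otherwise replace the generating set by the products of fewer than $2d$ generators that land in $S^n$ before calling a decision procedure for ${\textnormal{SMP}}(S)$. The sketch below is our own rendering under these assumptions; `smp_S` stands for an unspecified SMP solver for $S$, passed in as a callable:

```python
from itertools import product as cartesian

def reduce_nilpotent_smp(mult, in_S, d, A, b, smp_S):
    """Reduce SMP(T) to SMP(S) for an ideal extension T of S by a d-nilpotent semigroup.

    mult  : dict, mult[(x, y)] = x*y in T
    in_S  : predicate, in_S(x) is True iff x lies in the ideal S
    A     : list of generator tuples in T^n;  b : target tuple in T^n
    smp_S : callable smp_S(generators, target) deciding SMP(S)
    """
    def prod(factors):
        out = factors[0]
        for t in factors[1:]:
            out = tuple(mult[(x, y)] for x, y in zip(out, t))
        return out

    if not all(in_S(x) for x in b):
        # b lies outside S^n: any product equal to b has fewer than d factors.
        return any(prod(c) == b
                   for l in range(1, d)
                   for c in cartesian(A, repeat=l))

    # b in S^n: products of fewer than 2d generators that lie in S^n
    # generate the same elements of S^n as A does.
    B = set()
    for l in range(1, 2 * d):
        for c in cartesian(A, repeat=l):
            p = prod(c)
            if all(in_S(x) for x in p):
                B.add(p)
    return smp_S(list(B), b)
```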
The [[SMP]{.nodecor}]{} for every finite nilpotent semigroup is in ${\textnormal{P}}$.
Immediate from Theorem \[thm:nilpextension\].
Clifford semigroups {#sec:clifford_nilpotent}
===================
Clifford semigroups are also known as semilattices of groups. In this section we show that their [[SMP]{.nodecor}]{} is in [[P]{.nodecor}]{}. First we state some well-known facts on Clifford semigroups and establish some notation.
In a finite semigroup $S$, each $s \in S$ has an *idempotent power* $s^m$ for some $m \in \mathbb{N}$, i.e., $(s^m)^2 = s^m$.
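In code, an idempotent power can be found by iterating powers of $s$ and checking idempotency; since the cyclic subsemigroup ${ {\langle s \rangle} }$ has at most $|S|$ elements, at most $|S|$ steps are needed. A small helper (our own illustration, written against a multiplication table):

```python
def idempotent_power(mult, s, size):
    """Return the idempotent power e = s^m of s, with m <= size = |S|.

    mult : dict, mult[(x, y)] = x*y in the finite semigroup S
    """
    power = s                                  # power = s^1, s^2, ... in turn
    for _ in range(size):
        if mult[(power, power)] == power:      # found the idempotent power
            return power
        power = mult[(power, s)]
    raise ValueError("no idempotent power found; is `mult` a semigroup table?")
```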
\[dfn:clifford\] A semigroup ${S}$ is *completely regular* if every $s \in S$ is contained in a subsemigroup of ${S}$ which is also a group. A semigroup ${S}$ is a *Clifford semigroup* if it is completely regular and its idempotents are central. The latter condition may be expressed by $$\label{formula:idempotents_commute}
\forall e,s \in S \colon ( e^2 = e \Rightarrow es = se ) \text{.}$$
\[dfn:strong\_sl\_of\_grps\] Let $\langle I, \wedge \rangle$ be a semilattice. For $i \in I$ let $\langle G_i, \cdot \rangle$ be a group. For $i, j, k \in I$ with $i \geq j \geq k$ let $\phi_{i,j} \colon {G}_i \rightarrow {G}_j$ be group homomorphisms such that $\phi_{j,k} \circ \phi_{i,j} = \phi_{i,k}$ and $\phi_{i,i} = \operatorname{id}_{G_i}$. Let $S := \dot{\bigcup}_{i \in I} G_i$, and $$\text{for} \quad x \in G_i ,\, y \in G_j \quad \text{let} \quad
x*y := \phi_{i,i \wedge j}(x) \cdot \phi_{j,i \wedge j}(y).$$ Then we call $\langle S, * \rangle$ a *strong semilattice of groups*.
\[thm:clifford\_decomposition\] A semigroup is a strong semilattice of groups iff it is a Clifford semigroup.
Note that the operation $*$ extends the multiplication of $G_i$ for each $i \in I$. It is easy to see that $\{ G_i \mid i \in I \}$ are precisely the maximal subgroups of $S$. Moreover, each Clifford semigroup inherits a preorder $\leq$ from the underlying semilattice.
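A tiny concrete instance of Definition \[dfn:strong\_sl\_of\_grps\] may help: take the two-element semilattice $I=\{0,1\}$ with $0 \le 1$, the groups $G_1 = \mathbb{Z}_4$ and $G_0 = \mathbb{Z}_2$, and the reduction-mod-2 homomorphism $\phi_{1,0}$. The sketch below is our own toy example, not taken from the paper; it implements $x*y := \phi_{i,i \wedge j}(x)\cdot\phi_{j,i\wedge j}(y)$ and checks associativity and centrality of the idempotents by brute force:

```python
from itertools import product

# Elements are pairs (i, g): i in the semilattice {0, 1}, g in G_i.
# G_1 = Z_4 (integers mod 4), G_0 = Z_2 (integers mod 2); phi_{1,0}(g) = g mod 2.
def phi(i, j, g):                 # homomorphism G_i -> G_j for i >= j
    return g if i == j else g % 2

def meet(i, j):                   # semilattice operation on {0, 1}
    return min(i, j)

def mul(x, y):
    (i, g), (j, h) = x, y
    k = meet(i, j)
    mod = 4 if k == 1 else 2      # the group G_k we land in
    return (k, (phi(i, k, g) + phi(j, k, h)) % mod)

S = [(1, g) for g in range(4)] + [(0, g) for g in range(2)]
assert all(mul(mul(x, y), z) == mul(x, mul(y, z)) for x, y, z in product(S, repeat=3))
# The idempotents (1, 0) and (0, 0) are central, as required of a Clifford semigroup.
assert all(mul(e, s) == mul(s, e) for e in [(1, 0), (0, 0)] for s in S)
```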
Let ${S}$ be a Clifford semigroup constructed from a semilattice ${I}$ and disjoint groups ${G}_i$ for $i \in I$ as in Definition \[dfn:strong\_sl\_of\_grps\]. For $x, y \in S$ define $$x \leq y \quad\text{if}\quad \exists i,j \in I \colon i \leq j, x \in G_i, y \in G_j.$$
\[prp:clifford\_preorder\] Let ${S}$ be a Clifford semigroup and $x,y,z \in S$. Then
1. \[list:clifford\_preorder\_1\] $x \leq yz$ iff $x \leq y$ and $x \leq z$,
2. \[list:clifford\_preorder\_2\] $xyz \leq y$, and
3. \[list:clifford\_preorder\_3\] $x \leq y$ and $y \leq x$ iff $x$ and $y$ are in the same maximal subgroup of $S$.
Straightforward.
The following mapping will help us solve the [[SMP]{.nodecor}]{} for Clifford semigroups.
Let ${S}$ be a finite Clifford semigroup constructed from a semilattice ${I}$ and disjoint groups ${G}_i$ for $i \in I$ as in Definition \[dfn:strong\_sl\_of\_grps\]. Let $$\gamma \colon
S \rightarrow \prod_{i \in I} G_i \quad\text{such that}\quad \gamma(s)(i) :=
\begin{cases}
s & \text{if } s \in G_i \text{,} \\
1_{G_i} & \text{otherwise}
\end{cases}$$ for $s \in S$ and $i \in I$.
Here $\prod$ denotes the direct product and $1_{G_i}$ the identity of the group $G_i$ for $i \in I$. Note that the mapping $\gamma$ is not necessarily a homomorphism.
$A \subseteq S^n,\, b \in S^n$. Set $\{ a_1,\ldots,a_k \} :=
\{ a \in A \mid \forall i \in {[n]} \colon a(i) \geq b(i) \}$ \[alg:clifford:a1ak\] Set $e$ to the idempotent power of $b$. \[alg:clifford:l2\] \[alg:clifford:exist\_e\] \[alg:clifford:ret1\] \[alg:clifford:l06\]
\[thm:smp\_clifford\] Let ${S}$ be a finite Clifford semigroup with maximal subgroups $G_i$ for $i \in I$. Then Algorithm \[alg:clifford\] reduces ${\textnormal{SMP}}({S})$ to ${\textnormal{SMP}}(\prod_{i \in I} G_i)$ in polynomial time. The latter is the [[SMP]{.nodecor}]{} of a group.
*Correctness of Algorithm \[alg:clifford\]*. Assume ${S} = { {\langle \dot{\bigcup}_{i \in I} G_i , \cdot \rangle} }$ as in Definition \[dfn:strong\_sl\_of\_grps\]. Fix an instance $A \subseteq S^n,\, b \in S^n$ of ${\textnormal{SMP}}({S})$. Let $a_1,\ldots,a_k$ be as defined in line \[alg:clifford:a1ak\] of Algorithm \[alg:clifford\].
First we claim that $$\label{eq:claim_alg_cliff}
b \in { {\langle A \rangle} }
\quad \text{iff} \quad
b \in { {\langle a_1,\ldots,a_k \rangle} }.$$ To this end, assume that $b = c_1 \dotsm c_m$ for $c_1,\ldots,c_m \in A$. Fix $j \in {[m]}$. Lemma \[prp:clifford\_preorder\][(\[list:clifford\_preorder\_1\])]{} implies that $b(i) \leq c_j(i)$ for all $i \in {[n]}$. Thus $c_j \in \{ a_1,\ldots,a_k \}$. Since $j$ was arbitrary, we have $c_1,\ldots,c_m \in \{ a_1,\ldots,a_k \}$ and (\[eq:claim\_alg\_cliff\]) follows.
Let $e$ be the idempotent power of $b$. If the condition in line \[alg:clifford:exist\_e\] of Algorithm \[alg:clifford\] is fulfilled, then neither $e$ nor $b$ are in ${ {\langle a_1,\ldots,a_k \rangle} }$. In this case false is returned in line \[alg:clifford:ret1\]. Now assume the condition in line \[alg:clifford:exist\_e\] is violated, i.e., $$\forall i \in {[n]} \colon e(i) \in { {\langle a_1(i),\ldots,a_k(i) \rangle} }.$$ We claim that $$\label{thm:smp_clifford_formula_0}
e \in { {\langle a_1,\ldots,a_k \rangle} }.$$ For each $i \in {[n]}$ let $d_i \in { {\langle a_1,\ldots,a_k \rangle} }$ such that $d_i(i) = e(i)$. Further let $f$ be the idempotent power of $d_1 \dotsm d_n$. We show $f = e$. Fix $i \in {[n]}$. Since $d_i(i) = e(i)$, we have $f(i) \leq e(i)$ by Lemma \[prp:clifford\_preorder\][(\[list:clifford\_preorder\_2\])]{}. On the other hand, $e(i) \leq b(i) \leq a_j(i)$ for all $j \leq k$. Hence $e(i) \leq f(i)$ by multiple applications of Lemma \[prp:clifford\_preorder\][(\[list:clifford\_preorder\_1\])]{}. Thus $f(i)$ and $e(i)$ are idempotent and are in the same group by Lemma \[prp:clifford\_preorder\][(\[list:clifford\_preorder\_3\])]{}. So $e(i) = f(i)$. This yields $e = f$ and thus (\[thm:smp\_clifford\_formula\_0\]) holds.
Next we show $$\label{thm:smp_clifford_formula_1}
b \in { {\langle a_1,\ldots,a_k \rangle} }
\quad \text{iff} \quad
b \in { {\langle a_1e,\ldots,a_ke \rangle} }.$$ If $b = c_1 \dotsm c_m$ for $c_1,\ldots,c_m \in \{ a_1,\ldots,a_k \}$, then $b = be = c_1 \dotsm c_m e = (c_1e) \dotsm (c_me)$ since idempotents are central in Clifford semigroups. Conversely, each $a_je$ lies in ${ {\langle a_1,\ldots,a_k \rangle} }$ because $e \in { {\langle a_1,\ldots,a_k \rangle} }$ by (\[thm:smp\_clifford\_formula\_0\]). This proves (\[thm:smp\_clifford\_formula\_1\]).
Next we claim that $$\label{thm:smp_clifford_formula_2}
b \in { {\langle a_1e,\ldots,a_ke \rangle} }
\quad \text{iff} \quad
\gamma(b) \in { {\langle \gamma(a_1e),\ldots,\gamma(a_ke) \rangle} }.$$ Fix $i \in {[n]}$. By Lemma \[prp:clifford\_preorder\][(\[list:clifford\_preorder\_3\])]{} the elements $a_1e(i),\ldots,a_ke(i)$, and $b(i)$ all lie in the same group, say ${G}_l$. Note that $\gamma|_{G_l} \colon {G}_l \rightarrow \prod_{i \in I} {G}_i$ is a semigroup monomorphism. This means that the componentwise application of $\gamma$ to ${ {\langle a_1e,\ldots,a_ke, b \rangle} }$, namely $$\gamma |_{ {\langle a_1e,\ldots,a_ke, b \rangle} } \colon { {\langle a_1e,\ldots,a_ke, b \rangle} }
\rightarrow ( \prod_{i \in I} {G}_i )^n,$$ is also a semigroup monomorphism. This implies (\[thm:smp\_clifford\_formula\_2\]).
In line \[alg:clifford:l06\], the question whether $\gamma(b) \in { {\langle \gamma(a_1e),\ldots,\gamma(a_ke) \rangle} }$ is an instance of ${\textnormal{SMP}}( \prod_{i \in I} {G}_i )$, which is the [[SMP]{.nodecor}]{} of a group. By (\[eq:claim\_alg\_cliff\]), (\[thm:smp\_clifford\_formula\_1\]), and (\[thm:smp\_clifford\_formula\_2\]), Algorithm \[alg:clifford\] returns true iff $b \in { {\langle A \rangle} }$.
*Complexity of Algorithm \[alg:clifford\]*. Line \[alg:clifford:a1ak\] requires at most $\mathcal{O}( n|A| )$ calls of the relation $\leq$. For line \[alg:clifford:l2\], let $(s_1,\ldots,s_{|S|})$ be a list of the elements of $S$ and let $v \in \mathbb{N}$ be minimal such that $(s_1,\ldots,s_{|S|})^v$ is idempotent. Then $e = b^v$. Since $v$ only depends on ${S}$ but not on $n$ or $|A|$, computing $e$ takes $\mathcal{O}( n )$ steps. Line \[alg:clifford:exist\_e\] requires $\mathcal{O}( n|A| )$ steps. Altogether the time complexity of Algorithm \[alg:clifford\] is $\mathcal{O}( n|A| )$.
\[clr:clifford\] The [[SMP]{.nodecor}]{} for finite Clifford semigroups is in [[P]{.nodecor}]{}.
Let $S$ be a finite Clifford semigroup. Fix an instance $A \subseteq S^n,\, b \in S^n$ of ${\textnormal{SMP}}({S})$. Algorithm \[alg:clifford\] converts this instance into an instance of the [[SMP]{.nodecor}]{} of a group of size at most $|S|^{|S|}$ in $\mathcal{O}( n|A| )$ time. Both instances have input size $n(|A| + 1)$. The latter can be solved by Willard’s modification [@Willard2007] of the concept of strong generators, known from the permutation group membership problem [@Furst1980]. This requires $\mathcal{O}( n^3 + n|A| )$ time according to [@Zweckinger2013 p. 53, Theorem 3.4]. Hence ${\textnormal{SMP}}({S})$ is decidable in $\mathcal{O}( n^3 + n|A| )$ time.
\[thm:clifford\_nilpotent\] Let $S$ be a finite ideal extension of a Clifford semigroup by a nilpotent semigroup. Then ${\textnormal{SMP}}(S)$ is in [[P]{.nodecor}]{}.
By Theorem \[thm:nilpextension\] and Corollary \[clr:clifford\].
In the next lemma we give some conditions equivalent to the fact that a semigroup is an ideal extension of a Clifford semigroup by a nilpotent semigroup.
\[lma:tfae\_clifford\_nilpotent\] Let ${S}$ be a finite semigroup. Then the following are equivalent:
1. \[lma:tfae\_clifford\_nilpotent\_list\_2\] $S$ is an ideal extension of a Clifford semigroup $C$ by a nilpotent semigroup $N$;
2. \[lma:tfae\_clifford\_nilpotent\_list\_3\] the ideal $I$ generated by the idempotents of ${S}$ is a Clifford semigroup;
3. \[lma:tfae\_clifford\_nilpotent\_list\_3b\] all idempotents in $S$ are central, and for every idempotent $e \in S$ and every $a \in S$ where $ea = a$ the element $a$ generates a group;
4. \[lma:tfae\_clifford\_nilpotent\_list\_1b\] $S$ embeds into the direct product of a Clifford semigroup ${C}$ and a nilpotent semigroup $N$.
$(\ref{lma:tfae_clifford_nilpotent_list_2}) \Rightarrow
(\ref{lma:tfae_clifford_nilpotent_list_3})$: We show $I = C$. Since $S \setminus C$ cannot contain idempotent elements (the Rees quotient $S/C$ is nilpotent, so its only idempotent is its zero), all idempotents are in the ideal $C$. Thus we have $I \subseteq C$. Now let $c \in C$. Let $e \in I$ be the idempotent power of $c$. Then $c = ce \in I$. So $C \subseteq I$.
$(\ref{lma:tfae_clifford_nilpotent_list_3}) \Rightarrow (\ref{lma:tfae_clifford_nilpotent_list_3b})$: First we claim that all idempotents are central in $S$. To this end, let $e \in S$ be idempotent and $a \in S$. Then $$\begin{aligned}
ae
&= (ae)e \\
&= e(ae) \qquad \text{since $e,ae \in I$ and $e$ is central in $I$,} \\
&= (ea)e \\
&= e(ea) \qquad \text{since $e,ea \in I$ and $e$ is central in $I$,} \\
&= ea \text{.}\end{aligned}$$ Next assume that $ea = a$. Since $ea \in I$, we have that ${ {\langle a \rangle} } = { {\langle ea \rangle} }$ is a group.
$(\ref{lma:tfae_clifford_nilpotent_list_3b}) \Rightarrow
(\ref{lma:tfae_clifford_nilpotent_list_1b})$: Let $k \in \mathbb{N}$ such that $x^k$ is idempotent for each $x \in S$. For $x \in S$ and an idempotent $e \in S$ we have $$\label{eq:thm:tfae_cn_eq1}
ex = (ex)^{k+1} = ex^{k+1}$$ since ${ {\langle ex \rangle} }$ is a group and idempotents are central. We claim that $$\label{eq:thm:tfae_cn_eq1.5}
\alpha \colon S \rightarrow S,\, x \mapsto x^{k+1}
\quad
\text{is a homomorphism with}
\quad
\alpha^2 = \alpha \text{.}$$ For $x,y \in S$,
$$\begin{array}{rll}
\label{eq:thm:tfae_cn_eq2}
(xy)^{k+1} \!\!\!\!\!
&= (xy)^kxy \\
&= (xy)^kx^{k+1}y &\quad \text{by \eqref{eq:thm:tfae_cn_eq1} since $(xy)^k$ is idempotent,} \\
&= (xy)^kx^{k+1}y^{k+1} &\quad \text{by \eqref{eq:thm:tfae_cn_eq1} since $x^k$ is idempotent,} \\
&= (xy)^{k+1}x^ky^k &\quad \text{since $x^k, y^k$ are central,} \\
&= xyx^ky^k &\quad \text{by \eqref{eq:thm:tfae_cn_eq1} since $x^k$ is idempotent,} \\
&= x^{k+1}y^{k+1} &\quad \text{since $x^k, y^k$ are central.} \\
\end{array}$$
Also, $$\label{eq:thm:tfae_cn_eq3}
(x^{k+1})^{k+1} = x^{k^2+2k+1} = x^{k+1} \text{.}$$ This proves (\[eq:thm:tfae\_cn\_eq1.5\]). Let $C := \alpha( S )$. We claim that $C$ is an ideal. For $x, y \in S \cup \{ 1 \}$ and $z^{k+1} \in C$, $$\begin{array}{rll}
\label{eq:thm:tfae_cn_eq4}
xz^{k+1}y \!\!\!\!\!
&= xzyz^k &\quad \text{since $z^k$ is central,} \\
&= (xzy)^{k+1}z^k &\quad \text{by \eqref{eq:thm:tfae_cn_eq1},} \\
&=( xz^{k+1}y )^{k+1} &\quad \text{since $z^k$ is central and idempotent,} \\
&\in C \text{.}
\end{array}$$
Now consider the Rees quotient $N := S/C$. We claim that $$\label{eq:thm:tfae_cn_eq5}
\text{$N$ is $|N|$-nilpotent.}$$ Let $n_1,\ldots,n_{|N|} \in S$. First assume $$\label{eq:N-N-nilpotent}
\exists i,j \in \{ 1,\ldots,|N| \},\, i < j \colon n_1 \cdots n_i = n_1 \cdots n_j \text{.}$$ Then $n_{i+1} \cdots n_j$ is a right identity of $n_1 \cdots n_i$. Thus $$n_1 \cdots n_i = n_1 \cdots n_i (n_{i+1} \cdots n_j)^{k+1} \in C$$ since $C$ is an ideal. So $n_1 \cdots n_{|N|} \in C$.
If (\[eq:N-N-nilpotent\]) does not hold, then $n_1, n_1 n_2,\ldots,n_1 \cdots n_{|N|}$ are $|N|$ distinct elements and at least one of them is in $C$. Again $n_1 \cdots n_{|N|} \in C$ by the ideal property of $C$. This proves (\[eq:thm:tfae\_cn\_eq5\]). Now let $$\beta \colon S \rightarrow C \times N,\, s \mapsto ( \alpha(s), s/C ).$$ Apparently $\beta$ is a homomorphism. It remains to prove that $\beta$ is injective. Assume $\beta(x) = \beta(y)$ for $x,y \in S$. If $x \notin C$, then also $y \notin C$. Now $x/C = y/C$ implies $x = y$. Assume $x \in C$. Then $x = \alpha(x) = \alpha(y) = y$ since $\alpha^2 = \alpha$. We proved item (\[lma:tfae\_clifford\_nilpotent\_list\_1b\]) of Lemma \[lma:tfae\_clifford\_nilpotent\].
$(\ref{lma:tfae_clifford_nilpotent_list_1b}) \Rightarrow
(\ref{lma:tfae_clifford_nilpotent_list_2})$: Assume $S \leq C \times N$. Then $J := S \cap ( C \times \{ 0 \} )$ is an ideal of $S$. At the same time $J$ is a subsemigroup of a Clifford semigroup. By Definition \[dfn:clifford\] also $J$ is a Clifford semigroup. It is easy to see that the Rees quotient $N_1 := S / J$ is nilpotent. Thus $S$ is an ideal extension of the Clifford semigroup $J$ by the nilpotent semigroup $N_1$.
Commutative semigroups {#sec:SMP_commut}
======================
The main result of Section \[sec:clifford\_nilpotent\] was that ideal extensions of Clifford semigroups by nilpotent semigroups have the [[SMP]{.nodecor}]{} in [[P]{.nodecor}]{}. In this section we show that if a commutative semigroup does not have this property, then its [[SMP]{.nodecor}]{} is [[NP]{.nodecor}]{}-complete. This will complete the proof of our dichotomy result, Theorem \[thm:tfae\_clifford\_nilpotent\_commut\].
First we give an upper bound on the complexity of the [[SMP]{.nodecor}]{} for commutative semigroups.
\[lma:smp\_commutative\_semigroups\] The [[SMP]{.nodecor}]{} for a finite commutative semigroup is in [[NP]{.nodecor}]{}.
Let $\{a_1,\ldots,a_k\}\subseteq S^n,\,b \in S^n$ be an instance of ${\textnormal{SMP}}({S})$. Let $x := ( s_1,\ldots,s_{|S|} )$ be a list of all elements of $S$, and $r := | { {\langle x \rangle} } |$. Now ${ {\langle x \rangle} } = \{ x^1,\ldots,x^r \}$, and for each $\ell \in \mathbb{N}$ there is some $m \in {[r]}$ such that $x^\ell = x^m$. Since $x$ contains all elements of $S$, we have $$\forall y \in S^n \, \forall \ell \in \mathbb{N} \,
\exists m \in {[r]} \colon y^\ell = y^m.$$ If $b \in { {\langle a_1,\ldots,a_k \rangle} }$, then there is a witness $(\ell_1,\ldots,\ell_k) \in \{ 0,\ldots,r \}^k$ such that $b = {a_1}^{\ell_1} \dotsm {a_k}^{\ell_k}$. The size of this witness is $\mathcal{O}( k \log(r) )$. Note that $r$ depends only on ${S}$ and not on the input size $n(k+1)$. Given $\ell_1,\ldots,\ell_k$ we can verify $b = {a_1}^{\ell_1} \dotsm {a_k}^{\ell_k}$ in time polynomial in $n(k+1)$. Hence ${\textnormal{SMP}}({S})$ is in [[NP]{.nodecor}]{}.
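The witness described in this proof is easy to verify in time polynomial in the input; a short sketch (our own illustration), using fast exponentiation in the semigroup:

```python
def semigroup_pow(mult, x, e):
    """Compute x^e (e >= 1) for a tuple x in S^n, given the multiplication table of S."""
    result, base = None, x
    while e:
        if e & 1:
            result = base if result is None else tuple(
                mult[(p, q)] for p, q in zip(result, base))
        base = tuple(mult[(p, q)] for p, q in zip(base, base))
        e >>= 1
    return result

def check_witness(mult, A, b, exponents):
    """Verify b = a_1^{l_1} * ... * a_k^{l_k} componentwise (commutative S).

    Terms with l_i = 0 are skipped; an all-zero exponent vector is rejected,
    since a witness must encode a nonempty product.
    """
    acc = None
    for a, l in zip(A, exponents):
        if l == 0:
            continue
        term = semigroup_pow(mult, a, l)
        acc = term if acc is None else tuple(
            mult[(p, q)] for p, q in zip(acc, term))
    return acc == b
```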
\[lma:not\_clifford\_nilpotent\_sg\_0\] Let ${S}$ be a finite semigroup, $e \in S$ be idempotent, and $a \in S$. Assume that $ea = ae = a$ and ${ {\langle a \rangle} }$ is not a group. Then ${\textnormal{SMP}}(S)$ is [[NP]{.nodecor}]{}-hard.
We reduce EXACT COVER to ${\textnormal{SMP}}( {S} )$. The former is one of Karp’s 21 [[NP]{.nodecor}]{}-complete problems [@Karp1972].

**Exact Cover**

Input: $n \in \mathbb{N}$, sets $C_1,\ldots,C_k \subseteq {[n]}$.

Problem: Are there disjoint sets $D_1,\ldots,D_m \in \{ C_1,\ldots,C_k \}$ such that $\bigcup_{i=1}^{m} D_i = {[n]}$?

Fix an instance $n, C_1,\ldots,C_k$ of EXACT COVER. Now we define characteristic functions $c_1,\ldots,c_k, b \in S^n$ for $C_1,\ldots,C_k, {[n]}$, respectively. For $j \in{[k]},\, i \in {[n]}$, let $$b(i) := a
\quad
\text{and}
\quad
c_j(i) :=
\begin{cases}
a & \text{if } i \in C_j \text{,} \\
e & \text{otherwise.}
\end{cases}$$ Now let $\{c_1,\ldots,c_k\} \subseteq S^n,\,b \in S^n$ be an instance of ${\textnormal{SMP}}({S})$. We claim that $$\begin{aligned}
&b \in { {\langle c_1,\ldots,c_k \rangle} } \quad\text{iff}\quad
\exists \text{ disjoint } D_1,\ldots,D_m \in \{ C_1,\ldots,C_k \} \colon
\bigcup_{i=1}^{m} D_i = {[n]}.\end{aligned}$$
“$\Rightarrow$”: Let $d_1,\ldots,d_m \in \{ c_1,\ldots,c_k \}$ such that $b = d_1 \cdots d_m$. Let $D_1,\ldots,D_m$ be the sets corresponding to $d_1,\ldots,d_m$, respectively. Then $\bigcup_{i=1}^{m} D_i = {[n]}$. The union is disjoint since $a \notin \{ a^2, a^3,\ldots \}$.
“$\Leftarrow$”: Fix $D_1,\ldots,D_m$ whose disjoint union is ${[n]}$. Let $d_1,\ldots,d_m \in \{ c_1,\ldots,c_k \}$ be the characteristic functions of $D_1,\ldots,D_m$, respectively. Then $b = d_1 \dotsm d_m$.
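The reduction in this proof is purely syntactic and easy to state in code; the sketch below (our own rendering) builds the SMP instance from an EXACT COVER instance, with $a$ and $e$ passed in as the two semigroup elements from the hypothesis of Lemma \[lma:not\_clifford\_nilpotent\_sg\_0\]:

```python
def exact_cover_to_smp(n, cover_sets, a, e):
    """Build the SMP(S) instance (generators, target) encoding EXACT COVER.

    n          : the ground set is {1,...,n}
    cover_sets : list of subsets C_1,...,C_k of {1,...,n}
    a, e       : elements of S with e idempotent, ea = ae = a,
                 and <a> not a group (as in the lemma).
    """
    target = tuple(a for _ in range(n))                        # b(i) = a
    generators = [tuple(a if i in C else e for i in range(1, n + 1))
                  for C in cover_sets]                          # c_j(i)
    return generators, target
```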
\[clr:not\_clifford\_nilpotent\_sg\] Let ${S}$ be a finite commutative semigroup that does not fulfill one of the equivalent conditions of Lemma \[lma:tfae\_clifford\_nilpotent\]. Then ${\textnormal{SMP}}({S})$ is [[NP]{.nodecor}]{}-hard.
The semigroup $S$ violates condition (\[lma:tfae\_clifford\_nilpotent\_list\_3b\]) of Lemma \[lma:tfae\_clifford\_nilpotent\]. Since the idempotents are central in $S$, there are $e \in S$ idempotent and $a \in S$ such that $ea = ae = a$ and ${ {\langle a \rangle} }$ is not a group. Now the result follows from Lemma \[lma:not\_clifford\_nilpotent\_sg\_0\].
Now we are ready to prove our dichotomy result for commutative semigroups.
The conditions in Theorem \[thm:tfae\_clifford\_nilpotent\_commut\] are the ones from Lemma \[lma:tfae\_clifford\_nilpotent\] adapted to the commutative case. Thus they are equivalent. If one of them is fulfilled, then ${\textnormal{SMP}}({S})$ is in [[P]{.nodecor}]{} by Corollary \[thm:clifford\_nilpotent\].
Now assume the conditions are violated. Then ${\textnormal{SMP}}({S})$ is [[NP]{.nodecor}]{}-complete by Lemma \[lma:smp\_commutative\_semigroups\] and Corollary \[clr:not\_clifford\_nilpotent\_sg\].
Conclusion {#sec:conclusion}
==========
We showed that the [[SMP]{.nodecor}]{} for finite semigroups is always in ${\textnormal{PSPACE}}$ and provided examples of semigroups $S$ for which ${\textnormal{SMP}}(S)$ is in [[P]{.nodecor}]{}, [[NP]{.nodecor}]{}-complete, or [[PSPACE]{.nodecor}]{}-complete, respectively. For the [[SMP]{.nodecor}]{} of commutative semigroups we obtained a dichotomy between the ${\textnormal{NP}}$-complete and polynomial time solvable cases. Further we showed that the [[SMP]{.nodecor}]{} for finite ideal extensions of a Clifford semigroup by a nilpotent semigroup is in [[P]{.nodecor}]{}. For non-commutative semigroups there are several open problems.
Is the [[SMP]{.nodecor}]{} for every finite semigroup either in [[P]{.nodecor}]{}, [[NP]{.nodecor}]{}-complete, or [[PSPACE]{.nodecor}]{}-complete?
Bands (idempotent semigroups) are well-studied. Still we do not know the following:
What is the complexity of the [[SMP]{.nodecor}]{} for finite bands?[^2] More generally, what is the complexity in case of completely regular semigroups?
[10]{}
M. Furst, J. Hopcroft, and E. Luks. Polynomial-time algorithms for permutation groups. In [*Foundations of Computer Science, 1980., 21st Annual Symposium on*]{}, pages 36–41, Oct 1980.
J. Howie. Fundamentals of Semigroup Theory. Clarendon Press, Oxford University Press, 1995.
P. Idziak, P. Markovi[ć]{}, R. McKenzie, M. Valeriote, and R. Willard. Tractability and learnability arising from algebras with few subpowers. SIAM J. Comput., 39(7):3023–3037, 2010.
R. M. Karp. Reducibility among combinatorial problems. In R. E. Miller, J. W. Thatcher, and J. D. Bohlinger, editors, [ *Complexity of Computer Computations*]{}, The IBM Research Symposia Series, pages 85–103. Springer US, 1972.
D. Kozen. Lower bounds for natural proof systems. In [*18th [A]{}nnual [S]{}ymposium on [F]{}oundations of [C]{}omputer [S]{}cience ([P]{}rovidence, [R]{}.[I]{}., 1977)*]{}, pages 254–266. IEEE Comput. Sci., Long Beach, Calif., 1977.
M. Kozik. A finite set of functions with an [EXPTIME]{}-complete composition problem. Theoret. Comput. Sci., 407(1–3):330–341, 2008.
P. Mayr. The subpower membership problem for [Mal’cev]{} algebras. Internat. J. Algebra Comput., 22(7):1250075, 2012.
C. H. Papadimitriou. Computational Complexity. Addison-Wesley Publishing Company, Reading, MA, 1994.
W. J. Savitch. Relationships between nondeterministic and deterministic tape complexities. J. Comput. System Sci., 4:177–192, 1970.
M. Steindl. The subpower membership problem for bands. Available at `http://arxiv.org/pdf/1604.01014v1.pdf`.
R. Willard. Four unsolved problems in congruence permutable varieties. Talk at International Conference on Order, Algebra, and Logics, Vanderbilt University, Nashville (June 12–16, 2007), 2007.
S. Zweckinger. Computing in direct powers of expanded groups. Master’s thesis, Johannes Kepler Universität Linz, Austria, 2013.
[^1]: We will assume that the identity function can be obtained even from an empty set of functions. This little twist does not change the complexity of the problem.
[^2]: While this paper was under review, Markus Steindl showed that [[SMP]{.nodecor}]{} for any finite band is either in [[P]{.nodecor}]{} or ${\textnormal{NP}}$-complete [@smpbands].
|
---
abstract: 'We investigate the binding nature of the endohedral sodium atoms with the density functional theory methods, presuming that the clathrate I consists of a sheaf of one-dimensional connections of Na@Si$_{24}$ cages interleaved in three perpendicular directions. Each sodium atom loses 30% of the 3$s^1$ charge to the frame, forming an ionic bond with the cage atoms; the rest of the electron contributes to the covalent bond between the nearest Na atoms. The presumption is proved to be valid; the configuration of the two Na atoms in the nearest Si$_{24}$ cages is more stable by $0.189$ eV than that in the Si$_{20}$ and Si$_{24}$ cages. The energy of the beads of the two distorted Na atoms is more stable by $0.104$ eV than that of the two infinitely separated Na atoms. The covalent bond explains both the preferential occupancies in the Si$_{24}$ cages and the low anisotropic displacement parameters of the endohedral atoms in the Si$_{24}$ cages in the \[100\] directions of the clathrate I.'
author:
- Hidekazu Tomono
- Haruki Eguchi
- Kazuo Tsumuraya
bibliography:
- 'tomono.bib'
title: |
Binding between endohedral Na atoms in Si clathrate I;\
a first principles study
---
Introduction
============
Understanding the mechanism of cohesion of condensed matter is essential for solid state physics. Silicon clathrates are compounds with endohedral atoms in the cages of a host frame network, and they are expanded phases of the diamond-type silicon crystal. Cros [*et al*]{}. [@bib:Cros1965], inspired by the structure of the clathrate natural gas hydrates, first synthesized silicon clathrate I containing Na atoms. Group 14 clathrates I have been successfully synthesized only when alkali metal atoms [@bib:Bobev2000; @bib:Gryko1996; @bib:Nolas2001], alkaline earth metal atoms [@bib:Cordier1991], or group 17 atoms (Cl, Br, or I) [@bib:Reny2000] are encapsulated in the clathrate cages. The electro-negativity differences between these host and guest atoms are smaller than those in ionic crystals. If the host and guest atoms had large differences, the induced electron transfer would form ionic compounds with simple structures such as the NaCl or CsCl type.
To date we have found few reports on the role of the endohedral atoms in the cohesion of the group 14 clathrates. Electron charge transfer from the endohedral Na atom to the frame silicon atoms has been predicted in clathrates I [@bib:Zhao1999; @bib:Gatti2003], and a partial transfer in a Ba@Si$_{20}$ cluster [@bib:Nagano2001]. In clathrate II, a displacement of the guest atoms of $0.17$ Å from the center of the Si$_{28}$ cage has been predicted and explained as due to a combination of the Jahn-Teller effect and a Mott transition [@bib:Demkov1994]. Brunet [*et al.*]{} [@bib:Brunet2000] have observed the displacement using EXAFS (extended x-ray absorption fine structure) analysis; the Na atom was displaced away from the Si$_{28}$ cage center toward the center of a hexagonal ring by $0.9 \pm 0.02$ Å [@bib:Brunet2000]. Libotte [*et al.*]{} [@bib:Libotte2003] have calculated the displacements of the endohedral Na atoms in clathrate II and found them to be $0.456$ Å from an [*ab initio*]{} calculation and $0.91$ Å from a tight-binding calculation. Tournus [*et al.*]{} have observed the displacements to be $1$ Å in the Si$_{28}$ cage of the clathrate II Na$_{2}$@Si$_{34}$ and $2$ Å in clathrate II Na$_{6}$@Si$_{34}$ [@bib:Tournus2004]. They also calculated the displacement of the Na atoms in the Si$_{28}$ cage to be $0.65$ Å from a supercell calculation of the Na$_{2}$@Si$_{50}$H$_{44}$ cluster with the periodic DFT method. They proposed that the displacements may be due to the Peierls or Jahn-Teller effect.
Recently one of the authors has reported the displacements of the Na atoms in two adjacent Si$_{28}$ cages hydrogenated to terminate the dangling bonds of the Si atoms on the surface of the clusters [@bib:Takenaka2006]. Each Na atom was displaced by $0.63$ Å away from the center of its cage to form a dimer between the endohedral Na atoms. The displacements were attributed to the formation of a covalent bond between the endohedral Na atoms. They also found electron charge transferred from the endohedral atoms to the silicon atoms.
So the following questions arise: What is the role of the binding between the endohedral atoms in the cohesion of the clathrates? Why do the host-guest combinations not crystallize into simple ionic structures? In the following we use a first-principles analysis to address these questions by investigating the guest-guest and host-guest interactions in the clathrate I.
Fig. \[fig:schematic\] shows a schematic drawing of the polyhedral structure of the clathrate I.
![\[fig:schematic\] Polyhedron structure of the clathrate I Si$_{46}$ obtained by extending the simple cubic unit cell. The two horizontal white bamboos of polyhedra are one-dimensional bamboo-like connections of the tetrakaidecahedron (Si$_{24}$) cages in the \[100\] direction. The connections, arranged in three perpendicular directions with a spacing equal to the lattice constant $a$, form the black voids of the pentagonal dodecahedron. This structure is the clathrate I Si$_{46}$, free of endohedral atoms, consisting of the tetrakaidecahedra only.](figure1.eps)
The structure is special in that it consists of the bamboo-like Si$_{24}$ cages only; the cages (white) are arranged in bamboos, with a spacing equal to the lattice constant $a$, in the one-dimensional horizontal direction, sharing hexagonal rings as the bamboo joints between adjacent Si$_{24}$ cages. Weaving the bamboos in three dimensions with common pentagonal surfaces forms the voids shown as black polyhedral regions in fig. \[fig:schematic\]. Each void is a pentagonal dodecahedron, separated in space, located at a bcc position with a different orientation. All the previous papers have identified the existence of the Si$_{20}$ cages in the clathrate I. However we presume the structure to consist of only the Si$_{24}$ cages; fig. \[fig:schematic\] shows that the Si$_{20}$ cages are merely accidental voids in the bamboos weaved in the three dimensions. The voids correspond to the $\alpha $ cages in zeolites, although the voids in the clathrate I are far smaller than the ones in the zeolites. The accidental voids are predicted to have a minor role in the cohesion of the clathrate I. Although this view of the clathrate I structure has been neglected so far, the experimental preferential occupancies of the endohedral atoms in the Si$_{24}$ cages [@bib:Cros2737; @bib:Yamanaka103; @bib:Ramanchandran626] and the experimental anisotropic displacement parameters support this bamboo model.
So, presuming that the clathrate I consists of the bamboo structures in the three perpendicular directions, we analyze the bonding nature between the endohedral atoms in the clathrate. First, we calculate the relaxed geometries of the one-dimensional clusters with different numbers of the Si$_{24}$ cages using a real-space DFT method and show the binding nature between the guest atoms: the dimer formation due to the covalent bonding between adjacent endohedral Na atoms, and the charge transfer from the Na atoms to the cage atoms. Next, we evaluate the cohesion energy of the chained Na atoms using the periodic DFT method.
Computational details
=====================
We perform the real-space DFT calculations for the hydrogenated bamboo structures using the generalized gradient approximation of Perdew, Burke and Ernzerhof (GGA-PBE) [@bib:GGA-PBE]. We use a frozen-core 1$s^2$2$s^2$2$p^6$ approximation for the Na and silicon atoms, together with atomic orbitals comprising the valence 3$s^1$ orbital for the Na atom and the valence 3$s^2$3$p^2$ orbitals for the silicon atom, with double atomic functions for each orbital. No smearing of the occupations is applied in the final geometrical optimization. Since we regard the bamboo clusters as representing the essential aspects of the clathrate I, we add hydrogen atoms to the three-coordinated silicon atoms on the surface of the bamboo structures, to mimic with the clusters both the electronic density of states (DOS) and the bonding configurations in the clathrate I. The hydrogenation of the cluster surface reproduces almost the same electronic states as in the crystalline clathrates. The calculated displacements [@bib:Tournus2004; @bib:Takenaka2006] in the hydrogenated double Si$_{28}$ cages in the clathrate II coincided not only with the experimentally observed displacements of $0.9$ Å [@bib:Brunet2000] or $1$ Å [@bib:Tournus2004] but also with the calculated displacements of $0.456$ Å or $0.91$ Å [@bib:Libotte2003] in the crystalline clathrates II. The hydrogenation shifts the states of the dangling bonds on the surface of the bamboo structure to lower energies, as will be shown in fig. \[fig:dos\], and reproduces the same features as in the DOS of the clathrate Ba$_{8}$@Si$_{46}$ [@bib:Nakamura2005]. We use the ADF code [@bib:ADF1; @bib:ADF2], which uses a linear combination of Slater-type orbitals. To evaluate the cohesion energy of the chain of the two endohedral Na atoms in the clathrate I, we use the periodic DFT code PHASE [@bib:PHASE] with norm-conserving pseudopotentials for the Na and Si atoms. For the periodic DFT calculations, the Brillouin zone is sampled at the $\Gamma $ and $\mathbf{X}$ point set. Makov, Shah and Payne have shown that this is an efficient $\mathbf{k}$-point set for removing defect interactions in periodic cells [@bib:Makov1996]. The numbers of plane waves are kept at 13,805 at the $\Gamma $ point and 16,184 at $\mathbf{X}$ for any lattice constant, which corresponds to setting the cutoff energy to $20.0$ Ry ($272.11$ eV) at a lattice constant of 11.0 Å. We use the PBE [@bib:GGA-PBE] exchange and correlation functionals for the electron correlations in the periodic DFT calculations. We use spin-unrestricted calculations for both the real-space and the periodic calculations, with the interatomic forces converged to within $9.45\times 10^{-3}$ H/Å ($5.0 \times 10^{-3}$ H/bohr).
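As a quick numerical cross-check of the quoted plane-wave cutoff, the Rydberg-to-electron-volt conversion used above can be verified with a one-line Python snippet; the conversion factor is the standard value and is not a number taken from this paper.

```python
RY_IN_EV = 13.605693  # 1 Ry in eV (standard value, not from this paper)
print(20.0 * RY_IN_EV)  # 272.11 eV, matching the cutoff quoted in the text
```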
Results
=======
The distances between the endohedral Na atoms in the relaxed four-caged bamboo structure Si$_{78}$H$_{60}$ are shown in fig. \[fig:distance\](a) as an example of the relaxed structures with an even
![\[fig:distance\] The inter-Na distances of (a) four-caged Na$_4$@Si$_{78}$H$_{60}$ bamboo cluster and (b) three-caged Na$_3$@Si$_{60}$H$_{48}$ cluster, where the triangles are the inter-hexagonal distances in the bamboo structures. The lines are for visual guidance.](figure2a.eps "fig:") ![\[fig:distance\] The inter-Na distances of (a) four-caged Na$_4$@Si$_{78}$H$_{60}$ bamboo cluster and (b) three-caged Na$_3$@Si$_{60}$H$_{48}$ cluster, where the triangles are the inter-hexagonal distances in the bamboo structures. The lines are for visual guidance.](figure2b.eps "fig:")
![\[fig:distanceHfree\] The inter-Na distances of (a) four-caged Na$_4$@Si$_{78}$ bamboo cluster and (b) three-caged Na$_3$@Si$_{60}$ cluster. The line is for visual guidance.](figure3a.eps "fig:") ![\[fig:distanceHfree\] The inter-Na distances of (a) four-caged Na$_4$@Si$_{78}$ bamboo cluster and (b) three-caged Na$_3$@Si$_{60}$ cluster. The line is for visual guidance.](figure3b.eps "fig:")
number of cages. Although the distances between the hexagonal rings are almost constant, the inter-Na distances A and C are shorter than the inter-hexagonal distances: the inter-Na distances A (4.84 Å) and C (4.85 Å) at the ends of the bamboo structure are shorter than the distance B (5.38 Å). The short distances are induced by a bonding between the Na atoms. The sum of the shorter and the longer distances is $10.21 \sim 10.23$ Å, which almost equals the experimental lattice constant $10.19 \pm 0.02$ Å of clathrate I Na$_8$@Si$_{46}$ [@bib:Cros1971]. We show the distances in the three-caged Na$_3$@Si$_{60}$H$_{48}$ cluster in fig. \[fig:distance\](b) as an example of an odd number of cages. The inter-Na distances, which are smaller than those between the adjacent hexagonal rings, are almost the same for each endohedral atom; a balance of forces exists between the central Na atom and the two adjacent Na atoms. So the small inter-Na distances A and C in fig. \[fig:distance\](a) are induced by the dimer formation between the Na atoms. The formation may lead to a Peierls distortion in the bamboo clusters.
For a Peierls distortion in the one-dimensional case with a free boundary condition, the interatomic distances at the edge differ from those under a periodic boundary condition. Since two neighboring atoms at the edges form an edge state in the Peierls gap [@bib:Figge2002], and since they are located at the free boundary edge, their distances are longer than those of the inner interatomic bonds. The present bamboo structures have the free boundary condition; thus the Si-H bonds at the edges form longer Si-H bond distances. The Na atoms just inside these bonds in the four-caged structure form dimers with their adjacent inner Na atoms, as shown by A and C in fig. \[fig:distance\](a). The same situation occurs for the three-caged cluster in fig. \[fig:distance\](b): both the atoms forming the distance D and those forming the distance E try to form dimers, but the forces are balanced, so the length D is almost equal to the length E.
The distances between the endohedral Na atoms in the hydrogen-free four-caged bamboo structure Si$_{78}$ are shown in fig. \[fig:distanceHfree\](a). A single dimer exists at the center of the bamboo structure. Since both Na atom pairs at the edges have formed the edge state with the relaxed longer distance, the Na atoms just inside these bonds have formed the dimer. Fig. \[fig:distanceHfree\](b) shows the distances between the endohedral Na atoms in the hydrogen-free three-caged bamboo structure Si$_{60}$. The inter-Na distances are the same for each endohedral atom; a balance of forces exists between the central Na atom and the two adjacent Na atoms. Figures \[fig:distance\] and \[fig:distanceHfree\] thus indicate that a Peierls distortion exists between the endohedral Na atoms in these bamboo structures.
Fig. \[fig:dos\] shows the molecular DOS of the double-caged Na$_2$@Si$_{42}$H$_{36}$ cluster. The shapes of the earlier densities of states [@bib:Dong1999; @bib:Madsen2003] of the clathrate are similar to this density of states. The HOMO state is at $-3.869$ eV and the LUMO is at $-3.703$ eV, where HOMO is the highest occupied state and LUMO is the lowest unoccupied state. The HOMO-LUMO
![\[fig:dos\] Molecular density of states (DOS) of the double-caged Na$_2$@Si$_{42}$H$_{36}$ cluster. The 3$s$ state of the isolated Na atom splits into the bonding HOMO state and several unoccupied anti-bonding states.](figure4.eps)
gap is $0.166$ eV. The magnitude of the LDA gap has been $0.177$ eV with the electron correlation of Vosko, Wilk, and Nusair [@bib:Vosko1980]. No experimental band gap energy has been reported, since the HOMO is located just above the gap. The eigenvalue of $-2.754$ eV for the 3$s$ state of the isolated Na atom splits into a single occupied bonding state at $-3.869$ eV (18A1.g), which is the HOMO state, and several higher anti-bonding states, 16B3.u (LUMO, $-3.703$ eV), 28A1.g ($5.328$ eV) and 26B3.u ($10.02$ eV). The decrease of the eigenvalue from the 3$s$ level at $-2.754$ eV to the HOMO edge at $-3.869$ eV is due to the formation of the bonding state between the endohedral atoms; this is just like the bonding-state formation in the hydrogen molecule. Since the HOMO 18A1.g state is composed of a gerade function, the corresponding electron state is an even function with respect to the center of the molecule. There is a large forbidden region from the HOMO down to the state 9A1.u at $-6.593$ eV, indicating that the cluster would be an insulator with a HOMO-LUMO gap of $2.724$ eV if the double-caged cluster had no endohedral atom. For the four-caged bamboo structure, the HOMO-LUMO gap has been $0.255$ eV. This corresponds to the Peierls gap of this cluster.
To examine the bonding electron distribution between the endohedral Na atoms in the double caged cluster, we show the electron density profile in fig. \[fig:4terms\] given by $$\begin{aligned}
\Delta \rho _{\rm Na-Na} &=&
\left( \ {\rm Na} \ {\rm Na} \ \right)
+\left( \ \ \circ \ \ \circ \ \ \right) \nonumber \\
&-&\left( \ {\rm Na} \ \ \circ \ \ \right)
-\left( \ \ \circ \ \ {\rm Na} \ \right) \nonumber \\
&=&
\rho ({\rm Na}_2@{\rm Si}_{42}{\rm H}_{36})+\rho ({\rm Si}_{42}{\rm H}_{36}) \nonumber \\
&-& \rho ({\rm Na}@{\rm Si}_{42}{\rm H}_{36})-\rho ({\rm Na}@{\rm Si}_{42}{\rm H}_{36}),
\label{eq:4terms} \end{aligned}$$ where open circles represent the vacancies of the endohedral Na atoms. The coordinates of the last three terms are fixed at those of the first term to obtain the difference of the electron charge densities. This expression gives the interaction electron density between the Na atoms, since the net number of atoms is cancelled. We have used spin-polarized calculations for all the terms; the non-spin states have been the lowest for the first two terms and the spin states with $\mu _B=1$ have been the lowest for the last two terms. We have evaluated the sum of the up-spin density and the down-spin density for each structure, substituted them into the above equation, and show
![\[fig:4terms\] (Colour print) The spin unrestricted difference electron charge density profiles $\Delta \rho _{\rm Na-Na}$ given by eq. (\[eq:4terms\]), where the densities are plotted on a logarithmic scale, $10^{-5} \times 10^{5N/10}$ $e$/Å$^3$, $N=0$-$10$. The blue lines are higher densities than the purple ones. The two plus marks correspond to the positions of the endohedral Na atoms. The blank regions correspond to the densities to be negative or less than $10^{-5}$ $e$/Å$^3$. The density (a) is shown on the plane that intersects the two endohedral Na atoms and the midpoint between two Si atoms on the hexagonal ring shared by the adjacent two Si$_{24}$ cages. The density (b) on the hexagonal ring between the two adjacent Na atoms in the Si$_{24}$ cages.](figure5a "fig:")\
![\[fig:4terms\] (Colour print) The spin unrestricted difference electron charge density profiles $\Delta \rho _{\rm Na-Na}$ given by eq. (\[eq:4terms\]), where the densities are plotted on a logarithmic scale, $10^{-5} \times 10^{5N/10}$ $e$/Å$^3$, $N=0$-$10$. The blue lines are higher densities than the purple ones. The two plus marks correspond to the positions of the endohedral Na atoms. The blank regions correspond to the densities to be negative or less than $10^{-5}$ $e$/Å$^3$. The density (a) is shown on the plane that intersects the two endohedral Na atoms and the midpoint between two Si atoms on the hexagonal ring shared by the adjacent two Si$_{24}$ cages. The density (b) on the hexagonal ring between the two adjacent Na atoms in the Si$_{24}$ cages.](figure5b "fig:")
the density distribution in fig. \[fig:4terms\](a). This figure shows a clear covalent bonding density along the Na-Na bond, which results from the dimer formation. The density is due to the bonding state between the 3$s^1$ valence electrons of the two Na atoms; this is just like the covalent bond formation between two hydrogen atoms. We show in fig. \[fig:4terms\](b) the difference density on the hexagonal ring located at the bisector plane between the two plus marks in (a). There is a finite covalent charge density on this plane. Neither the total electron density nor the partial charge density due to the HOMO state in fig. \[fig:dos\] has shown this type of covalent bond charge density between the Na atoms. The densities of the bonding states have appeared between the dimers in the clusters with an even number of cages.
To see the spatial distribution of the electron transfers around the endohedral atoms, we show in fig. \[fig:2terms\] the difference electron density profile $$\begin{aligned}
\label{eq:2terms}
\Delta \rho = \rho _{\rm opt} - \sum \rho _{\rm atom},\end{aligned}$$ where $\rho _{\rm opt}$ is the density of the geometrically optimized cluster and $\rho _{\rm atom}$ is the overlapped
![\[fig:2terms\](Colour print) The difference charge density between the converged self-consistent electron and the overlapped isolated atom density. The densities are plotted on the same plane as in fig. \[fig:4terms\](a) with a logarithmic scale, $10^{-5} \times 10^{5N/10}$ $e$/Å$^3$, $N=0$-$10$. The green contours are higher than the purple ones. The blank regions are lower area than $10^{-5}$ $e$/Å$^3$ including negative densities.](figure6.eps)
density of the isolated constituent atoms. The blank zones correspond to regions with densities lower than $10^{-5}$ $e$/Å$^3$ or with negative densities. The contour lines therefore correspond to regions of increased charge compared with the overlapped isolated-atom densities. The electrons around the endohedral Na atoms are depleted toward the cage silicon atoms, except at the nuclear positions of the Na atoms.
We calculate the electron transfer from the endohedral Na atoms to the frame atoms. There are several methods to calculate the transfer. Among them, Mulliken charges have been found to depend on the number of atomic orbitals used in the linear-combination basis functions [@bib:Guerra2003]. Voronoi charges, named the Voronoi deformation charge (VDC), have been found to give reasonable values for the transfer [@bib:Guerra2003]. The 3$s^1$ charge transferred from each endohedral Na atom to the frame silicon atoms has been 0.320$e$ for the double-caged bamboo structure. The ionic states also appear in the triple-caged bamboo structure: the electron transfers from the Na atoms to the frame atoms have been 0.343$e$ (middle Na atom) and 0.297$e$ (edge Na atoms), showing that the remaining 3$s^1$ electron of each endohedral Na atom has formed the covalent bonding states between the endohedral Na atoms, as shown in fig. \[fig:4terms\].
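The Voronoi deformation charge analysis quoted above can be illustrated schematically. The following Python sketch is not the ADF implementation; it merely shows the idea under simplifying assumptions (a uniform real-space grid, toy Gaussian model densities, and two illustrative atomic positions): each grid point is assigned to its nearest nucleus, and the deformation density (self-consistent minus overlapped atomic densities) is integrated over each Voronoi cell.

```python
import numpy as np

def voronoi_deformation_charges(positions, rho_scf, rho_promol, grid, dV):
    """Integrate the deformation density over the Voronoi cell of each atom.

    positions : (n_atoms, 3) nuclear coordinates
    rho_scf, rho_promol : densities sampled on the grid points, shape (n_grid,)
    grid : (n_grid, 3) Cartesian grid points; dV : volume element of the grid
    Returns the VDC of each atom (positive value = electron loss).
    """
    # Nearest-nucleus (Voronoi) assignment of every grid point
    dists = np.linalg.norm(grid[:, None, :] - positions[None, :, :], axis=2)
    owner = np.argmin(dists, axis=1)
    delta_rho = rho_scf - rho_promol
    charges = np.empty(len(positions))
    for a in range(len(positions)):
        # VDC_A = -integral over the Voronoi cell of A of [rho_scf - rho_promol]
        charges[a] = -np.sum(delta_rho[owner == a]) * dV
    return charges

# Toy two-site example with Gaussian model densities (purely illustrative)
pos = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 4.8]])   # two "Na" sites, angstrom
ax = np.linspace(-4.0, 8.8, 64)
X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
grid = np.stack([X.ravel(), Y.ravel(), Z.ravel()], axis=1)
dV = (ax[1] - ax[0]) ** 3

def gauss(center, width, n_electrons):
    r2 = np.sum((grid - center) ** 2, axis=1)
    return n_electrons * np.exp(-r2 / width**2) / (np.pi**1.5 * width**3)

rho_promol = gauss(pos[0], 1.2, 1.0) + gauss(pos[1], 1.2, 1.0)
# Pretend the self-consistent density has moved 0.3 e from site 0 to site 1
rho_scf = gauss(pos[0], 1.2, 0.7) + gauss(pos[1], 1.2, 1.3)
print(voronoi_deformation_charges(pos, rho_scf, rho_promol, grid, dV))
# -> approximately [+0.3, -0.3]
```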
Here we evaluate the cohesion energy of the Na chain in the clathrate I. For this purpose we evaluate the energy using the periodic DFT method with the same type equation as eq. (\[eq:4terms\]); $$\begin{aligned}
\label{eq:eb4terms}
E^c &=&
\begin{picture}(50,50)(0,22)
\put(0, 0){\framebox(50,50){\ }}
\put(0, 11){\makebox(25,25){Na}}
\put(25,11){\makebox(25,25){Na}}
\end{picture}
+
\begin{picture}(50,50)(0,22)
\put(0, 0){\framebox(50,50){\ }}
\end{picture} \nonumber \\
&-&
\begin{picture}(50,50)(0,22)
\put(0, 0){\framebox(50,50){\ }}
\put(0, 11){\makebox(25,25){Na}}
\end{picture}
-
\begin{picture}(50,50)(0,22)
\put(0, 0){\framebox(50,50){\ }}
\put(25,11){\makebox(25,25){Na}}
\end{picture} \nonumber \\
& & \ \nonumber \\
& & \ \nonumber \\
&=& E_T({\rm Na}_2@{\rm Si}_{46})
+E_T({\rm Si}_{46}) \nonumber \\
&-&E_T({\rm Na}@{\rm Si}_{46})
-E_T({\rm Na}@{\rm Si}_{46}),\end{aligned}$$ where the $E_T$’s are the total energies of each crystal. The net number of each kind of atom is also cancelled in this equation. The last two terms correspond to the two Na atoms being located at infinitely separated positions in the clathrate. Therefore this equation enables us to evaluate the cohesive energy of the Na chain in the clathrate I. The equation has been derived from the difference of the formation energies of each phase by Sawada [*et al.*]{} [@bib:Sawada1140], who proposed it to evaluate the binding energies between a substitutional solute atom and an interstitial solute atom in bcc iron. They needed to calculate each energy in supercells as large as possible. For our calculation the use of the unit cell is sufficient, since we need to calculate the cohesion energy of the Na chain in the clathrate. We calculate four kinds of equations of state, one for each clathrate in eq. (\[eq:eb4terms\]). Here we have assumed the energies of the last two terms to be equivalent owing to their symmetry. The equation used is
\label{eq:eb3terms}
E^c =E_T({\rm Na}_2@{\rm Si}_{46})
+E_T({\rm Si}_{46}) % \\
-2E_T({\rm Na}@{\rm Si}_{46}).\end{aligned}$$ The energy of the first term has been the lowest for a spin-polarized state with $\mu _B=0.188$, and those of the other terms for the non-spin states. The equilibrium lattice constant of this clathrate has been $10.1998$ Å, the shorter inter-Na distance has been $5.0915$ Å, and the longer one $5.1083$ Å, a difference of $0.0168$ Å in the \[100\] direction. The difference between these two distances is smaller than that in the hydrogenated cluster in fig. \[fig:distance\](a), because of the infinite chain connection of the Na atoms in the clathrate I. The cohesive energy $E^c$ of the chain has been $0.104$ eV, which is finite and attractive, so the chain is more stable than two infinitely separated Na atoms in the clathrate I.
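The bookkeeping behind eq. (\[eq:eb3terms\]) is summarized in the short Python sketch below. The total energies are hypothetical placeholders, not the actual PHASE results; they are chosen only so that the magnitude of $E^c$ reproduces the $0.104$ eV quoted above, with the sign convention of the equation giving a negative value for a bound chain.

```python
def chain_cohesion_energy(e_na2_si46, e_si46, e_na_si46):
    """Cohesion energy of the Na chain, eq. (eq:eb3terms):
    E^c = E_T(Na2@Si46) + E_T(Si46) - 2 E_T(Na@Si46).
    The net number of Na and Si atoms cancels, so any common
    reference energies drop out of the difference."""
    return e_na2_si46 + e_si46 - 2.0 * e_na_si46

# Hypothetical total energies in eV (placeholders, not the PHASE values)
E_na2 = -26013.412   # Na2@Si46, two Na in nearest Si24 cages, relaxed
E_host = -25204.738  # empty Si46 host
E_na1 = -25609.023   # Na@Si46, a single Na per cell

print(f"E^c = {chain_cohesion_energy(E_na2, E_host, E_na1):+.3f} eV")  # -0.104 eV
```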
To evaluate the energy gain of the distortion in the crystalline state, we calculate the total energy of the clathrate in which the two Na atoms are located at the centers of gravity of the nearest Si$_{24}$ cages of the first term in eq. (\[eq:eb3terms\]). This energy has been higher by 0.00186 eV, with a shorter inter-Na distance of $5.0978$ Å, than that of the fully relaxed clathrate. The shorter inter-Na distance in the fully relaxed clathrate is shorter by $0.0062$ Å than the distance between the centers of gravity. This difference is significant within the accuracy of the DFT calculations and also indicates the existence of an attractive interaction between the shorter Na atom pairs.
Discussion
==========
The endohedral atoms have interacted with the cage atoms through the ionic bond and with the nearest endohedral atoms through the covalent bond.
We have assumed that the clathrate I consists of the bamboo structures in the three perpendicular directions. Here, we examine the validity of this assumption. We have calculated the total energy of the clathrate Na$_2$@Si$_{46}$ in which one of the two Na atoms is located in a Si$_{20}$ cage and the other in a Si$_{24}$ cage. We have already calculated the energy of the clathrate Na$_2$@Si$_{46}$ in which the two Na atoms are located in the nearest Si$_{24}$ cages; that energy has been given as the first term in eq. (\[eq:eb3terms\]). The latter clathrate has been more stable by 0.189 eV than the former; the configuration with the two Na atoms bound in the chain is more stable than that with the two Na atoms in the Si$_{20}$ and Si$_{24}$ cages. This is further evidence of the validity of our presumption for the structure of the clathrate I.
The covalent bond charge between the endohedral Na atoms is evident in fig. \[fig:4terms\]. The validity of our bamboo structure model for the clathrate I is supported by the experimental evidence of the preferential occupation of the Si$_{24}$ cages by Ba atoms reported by Yamanaka [*et al.*]{} [@bib:Yamanaka103]. They reported that the Ba atoms occupy 0.985 of the six Si$_{24}$ cages and only 0.189 of the two Si$_{20}$ cages. The high occupancy is evidence of the existence of the covalent bond between the Ba atoms in the Si$_{24}$ cages. No explanation has previously been given for the origin of these occupancies.
The present study has predicted the difference of the inter-Na distances to be only $0.0168$ Å in the \[100\] direction. No experimental report exists on the guest displacement in the clathrate I except for the guest displacement parameters [@bib:Paschen2001; @bib:Christensen2003]. This is because the displacement is too small to be measured.
The anisotropy of the atomic displacement parameters of the endohedral atoms in clathrate I has been reported by Chakoumakos *et al.* [@bib:Chakoumakos80; @bib:Chakoumakos127] and Nolas *et al.* [@bib:Nolas3845], who found much smaller amplitudes in the \[100\] directions than in the perpendicular directions. The present study explains the anisotropy as due to the constraint on the displacements of the Na atoms in the \[100\] directions induced by the covalent bond: the bond constrains the displacements between the nearest Na atoms in these directions. No explanation had been given previously for the origin of the anisotropies.
The covalent bond between the endohedral Na atoms prevents the atoms from crystallizing into an ordered ionic structure like NaCl or CsCl and leads instead to the caged clathrate structures. The bond forms beads of Na atoms in the clathrate I, or a three-dimensional network of Na atoms with $T_{d}$ symmetry in the clathrate II. This is because the electro-negativity of the group 14 host atoms is smaller than that of the halogen atoms, which do crystallize into ionic crystals. The smaller electro-negativity differences between the host and guest atoms allow the guest Na atoms to form both the covalent bond between the guest atoms and the ionic bond through the charge transfer to the cages. Thus the clathrates represent a compromise electronic state between the ionic crystals and the covalent crystals.
Conclusions
===========
Presuming that the clathrate I consists of a sheaf of one-dimensional connections of Na@Si$_{24}$ cages interleaved in the three perpendicular directions, we have investigated the binding nature of the endohedral Na atoms with both the real-space and the periodic DFT methods. Each Na atom has lost 30% of the 3$s^1$ charge to the frame. A finite covalent bonding charge due to the Peierls distortion has been found between the endohedral Na atoms in the caged clusters. The cohesion energy has been $0.104$ eV for the chain in the \[100\] directions of the clathrate I. The presumption has been proved to be valid; the clathrate encapsulating the two Na atoms in the \[100\] direction has been more stable by $0.189$ eV than the clathrate encapsulating the two atoms in the Si$_{20}$ and Si$_{24}$ cages. The difference between the Na-Na distances has been $0.0168$ Å; this small magnitude of the displacement is consistent with the absence of experimental reports on the guest displacements in the clathrate I. The beads of the endohedral Na atoms in these directions are due to the covalent bond between the endohedral atoms, accompanied by the electron charge transfer from the endohedral atoms to the cages. The covalent bond has explained both the preferential occupancies of the endohedral atoms and the low anisotropic displacement parameters in the \[100\] directions in the Si$_{24}$ cages of the clathrate I. The beads are just a precipitated state in the sense of regular solution theory. The smaller electro-negativity of the group 14 host atoms compared with the halogen atoms prevents the endohedral Na atoms from crystallizing into ionic crystals and allows them to form the covalent bonds with beads of the endohedral atoms.
Computations were performed in part using SCore systems at the Information Science Center in Meiji University and Altix 3700 BX2 at YITP in Kyoto University.
---
abstract: 'We use light-cone QCD sum rules to calculate the pion-photon transition form factor, taking into account radiative corrections up to the next-to-next-to-leading order of perturbation theory. We compare the obtained predictions with all available experimental data from the CELLO, CLEO, and the BaBar Collaborations. We point out that the BaBar data are incompatible with the convolution scheme of QCD, on which our predictions are based, and can possibly be explained only with a violation of the factorization theorem. We pull together recent theoretical results and comment on their significance.'
address:
- |
Bogoliubov Laboratory of Theoretical Physics, JINR, 141980 Dubna, Russia\
mikhs@theor.jinr.ru
- |
Institut für Theoretische Physik II, Ruhr-Universität Bochum, D-44780 Bochum, Germany\
stefanis@tp2.ruhr-uni-bochum.de
author:
- 'S. V. Mikhailov[^1]'
- 'N. G. Stefanis[^2]'
title: ' Pion transition form factor at the two-loop level vis-à-vis experimental data'
---
Introduction {#sec:intro}
============
For many years now the production of a pion by the fusion of two photons has attracted the attention of theorists and experimentalists. Theoretically, the process $\gamma^*(q_1)\gamma^*(q_2)\to \pi^0(p)$ can be treated within the convolution scheme of QCD[@ER80; @LB80] by virtue of the factorization theorem, which allows one to treat the photon-parton interactions within perturbative QCD, while all binding effects are separated out and absorbed into the pion distribution amplitude (DA). This latter ingredient has a nonperturbative origin and can, therefore, not be computed within perturbative QCD. One has to apply some nonperturbative approach to derive it or reconstruct it from the data. A widespread framework to calculate static and dynamical nonperturbative quantities of hadrons is provided by QCD sum rules with local[@CZ84] or nonlocal condensates.[@MR89] This latter method was employed by us in collaboration with A. P. Bakulev (BMS)[@BMS01] to derive a pion DA that gives good agreement with various sets of data pertaining to several pion observables, e.g., the electromagnetic form factor,[@BPSS04] the pion-photon transition form factor,[@Ste08] diffractive di-jet production,[@BMS04kg] etc.
While the process with two off-shell photons is theoretically the most preferable, experimentally, another kinematic situation is more accessible, notably, when one of the photons becomes real, as probed by the CELLO[@CELLO91] and the CLEO Collaborations[@CLEO98]. Such a process demands more sophisticated techniques in order to take properly into account the hadronic content of the real photon. Indeed, first Khodjamirian,[@Kho99] then Schmedding and Yakovlev,[@SchmYa99] used light-cone sum rules (LCSR)s[@BFil90] to analyze the CLEO data, a method also applied by BMS up to the next-to-leading-order (NLO) level of QCD perturbation theory.[@BMS02]
In particular, the high-precision CLEO data[@CLEO98] on $F^{\gamma\gamma^{*}\pi}$ make it possible to test the pion DAs quantitatively[@KR96; @SchmYa99; @BMS02; @Ste08]. It was found that the best agreement with the CLEO data is provided by pion DAs which have suppressed endpoints $x=0,1$, like those belonging to the “bunch” determined by BMS in[@BMS01] with the help of QCD sum rules with nonlocal condensates. Note that the endpoint suppression is a [*sui generis*]{} feature of the nonlocality of the quark condensate and is controlled by the vacuum quark virtuality $\lambda_q^2\approx 0.4$ GeV${}^2$.[@BMS01; @BMS02] All these approaches attempt to reverse engineer the pion DA from its (first few) moments. In the BMS approach[@BMS01] the first ten moments have been calculated, from which the corresponding Gegenbauer coefficients $a_n$ with $n=0,2,\ldots ,10$ were determined. It turns out that all coefficients with $n > 4$ are negligible, so that the proposed model DA has only two coefficients: $a_2$ and $a_4$.
Recently, we[@MS09] extended this type of calculation to the NNLO of QCD perturbation theory by taking into account those radiative corrections at this order which are proportional to $\beta_0$. More specifically, we used the hard-scattering amplitude of this order, computed before in Ref. , in order to determine the spectral density within the LCSR approach mentioned above. In addition, we refined the phenomenological part of the sum rule by using a Breit-Wigner ansatz to model the meson resonances. Below, we report on the main results of this analysis and further discuss what conclusions can be drawn by comparing the obtained predictions with all the available experimental data. We focus attention on the new BaBar data,[@BaBar09] which turn out to be incompatible with our predictions, indicating a violation of collinear factorization in QCD. We perform a detailed comparison of these data with the theoretical expectations and some proposed scenarios to explain them.
Pion-photon transition form factor $\mathbf{F^{\gamma^{*}\gamma^{*}\pi}}$ in QCD {#sec:col-fac}
================================================================================
The transition form factor $F^{\gamma^{*}\gamma^{*}\pi}$ describes the process $\gamma^*(q_1)\gamma^*(q_2)\to \pi^0(p)$ and is given by the following matrix element ($-q_{1}^2\equiv Q^2>0, -q_2^2\equiv q^2\geq 0$) $$\begin{aligned}
\int d^{4}x e^{-iq_{1}\cdot z}
\langle
\pi^0 (p)\mid T\{j_\mu(z) j_\nu(0)\}\mid 0
\rangle
=
i\epsilon_{\mu\nu\alpha\beta}
q_{1}^{\alpha} q_{2}^{\beta}
\cdot F^{\gamma^{*}\gamma^{*}\pi}(Q^2,q^2)\, .
\label{eq:matrix-element}\end{aligned}$$ Provided the photon momenta are sufficiently large $Q^2, q^2 \gg m_\rho^2$ (where the hadron scale is set by the $\rho$-meson mass $m_\rho$), the pion binding effects can be absorbed into a universal pion distribution amplitude of twist-two. Then, one obtains the form factor in the form of a convolution by virtue of the collinear factorization:[@ER80; @LB80] $$\begin{aligned}
F^{\gamma^{*}\gamma^{*}\pi}(Q^2,q^2)
=
T(Q^2,q^2,\mu^2_{\rm F};x)
\otimes
\varphi^{(2)}_{\pi}(x;\mu^2_{\rm F})
+ \ O\left( Q^{-4} \right) \, .
\label{eq:convolution}\end{aligned}$$ Here the pion DA $\varphi^{(2)}_{\pi}$ represents a parametrization of the pion matrix element at the (low) factorization scale $\mu^2_{\rm F}$, whereas the amplitude $T$, describing the hard parton subprocesses, can be calculated in QCD perturbation theory: $T=T_0+a_s\, T_1+a_s^2\, T_2\ldots$, where $a_s = \alpha_s/(4\pi)$, and with $O\left( Q^{-4} \right)$ denoting the twist-four contribution. In leading order (LO) of the strong coupling and taking into account the twist-four term explicitly, one has[@Kho99] $$\begin{aligned}
F^{\gamma^{*}\gamma^{*}\pi}(Q^2,q^2)
\! = N_f\left[
\int_{0}^{1} \! dx
\frac{\varphi^{(2)}_{\pi}(x;\mu^2_{\rm F})}
{Q^{2}x + q^{2}\bar{x}}~
-\delta^2(\mu^2_{\rm F})\int_{0}^{1} \! dx
\frac{\varphi^{(4)}_{\pi}(x;\mu^2_{\rm F})}
{(Q^{2}x + q^{2}\bar{x})^2}
\right]
&&
\label{Eq:T_0}\end{aligned}$$ with $N_f=\frac{\sqrt{2}}{3}f_{\pi}$ and $\bar{x}\equiv 1-x$. The pion DA of twist-two, $\varphi^{(2)}_{\pi}$, is defined by $$\begin{aligned}
\langle
0|\bar{q}(z)\gamma_{\mu}\gamma_{5}\mathcal{C}(z,0) q(0)|\pi(P)
\rangle
\Big|_{z^2=0}~=
~iP_{\mu} f_\pi\int dx
e^{ix(z\cdot p)}\varphi^{(2)}_\pi(x,\mu^2_{\rm F}) \, ,&&
\label{eq:pion-DA}\end{aligned}$$ where $
\mathcal{C}(z,0)
=
\mathcal{P}\exp \left(ig\int^z_0 A_\mu(\tau) d\tau^\mu \right)
$ is a path-ordered exponential to ensure gauge invariance. The second term in Eq. (\[Eq:T\_0\]) represents the twist-four contribution, which is becoming important for small and intermediate values of $Q^2$. The pion DA of twist-four, $\varphi^{(4)}_{\pi}$, is an effective one[@BFil90] in the sense that it is composed from different pion DAs of twist-four. In our analysis it is taken in its asymptotic form. The parameter $\delta^2$ is determined from the matrix element $
\langle
\pi(P)|g_s\bar{d}\tilde{G}_{\alpha \mu}\gamma^{\alpha}u|0
\rangle
=
i\delta^2 f_\pi p_\mu
$, and is estimated[@BMS02] to be $\delta^2(1$ GeV$^2)=0.19\pm 0.02$ GeV$^2$. Estimates for the twist-four term, based on the renormalon approach,[@BGG04] have been considered in the last entry of Ref..
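As a minimal numerical illustration of Eq. (\[Eq:T\_0\]), the sketch below evaluates the leading-twist term by quadrature for the asymptotic DA $\varphi^{\rm asy}(x)=6x\bar{x}$ and a quasi-real second photon, and confirms that $Q^2F$ reproduces the well-known limit $\sqrt{2}f_\pi$ in this approximation. The twist-four term is deliberately dropped, and $f_\pi=0.1304$ GeV is the standard value rather than a number quoted in this paper.

```python
import numpy as np
from scipy.integrate import quad

F_PI = 0.1304                      # pion decay constant in GeV (standard value)
NF = np.sqrt(2.0) / 3.0 * F_PI     # prefactor N_f of Eq. (Eq:T_0)

def phi_asy(x):
    """Asymptotic twist-two pion DA."""
    return 6.0 * x * (1.0 - x)

def ff_lo_twist2(Q2, q2, phi=phi_asy):
    """LO, leading-twist part of Eq. (Eq:T_0); the twist-four term is omitted."""
    val, _ = quad(lambda x: phi(x) / (Q2 * x + q2 * (1.0 - x)), 0.0, 1.0)
    return NF * val

for Q2 in (2.0, 4.0, 10.0, 40.0):
    # q2 = 1e-4 GeV^2 mimics the quasi-real photon limit q^2 -> 0
    print(Q2, Q2 * ff_lo_twist2(Q2, q2=1e-4))
# Each line gives Q^2 F ~ sqrt(2) f_pi ~ 0.184 GeV, independent of Q^2.
```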
On the other hand, also the evolution of $\varphi^{(2)}_\pi(x,\mu^2_{\rm F})$ with $\mu^2_{\rm F}$ is controlled by a perturbatively calculable evolution kernel $V$, following the Efremov-Radyushkin-Brodsky-Lepage (ERBL)[@ER80; @LB80] equation $$\begin{aligned}
\mu^2 \frac{d}{d\mu^2}\varphi(x;\mu^2)
=
\left(a_s\, V^{(0)}(x,y)+a_s^2\, V^{(1)}(x,y)+\ldots \right)
\otimes
\varphi(y;\mu^2)
&&\label{eq:ERBL} \\
V^{(0)}
\otimes
\psi_n
=
2C_\text{F}~v(n) \cdot \psi_n;~~~~~~ \! \psi_{n}(x)
=
6x\bar{x}~C^{(3/2)}_{n}(x-\bar{x});
&& \\
v(n)
=
\frac{1}{(n+1)(n+2)}-\frac{1}{2}+2\left(\psi(2)-\psi(n+2)\right);~~~
\psi(z)
=
\frac{d}{dz}\ln(\Gamma(z)) \, .&&\end{aligned}$$ Here, $\{\psi_{n}(x)\}$ are the Gegenbauer harmonics, which constitute the LO eigenfunctions of the ERBL equation, $v(n)$ being the corresponding eigenvalues. Then, one has $$\begin{aligned}
\varphi^{(2)}_{\pi}(x;\mu^2)
=
\psi_0(x) + \sum_{n=2,4,\ldots} a_{n}(\mu^2)~\psi_{n}(x)\, ,\end{aligned}$$ where the coefficients $\{a_n \}$ evolve (in LO) with $\mu^2$ and have specific values for each pion DA model (a compilation of the coefficients of various proposed models can be found in Refs. ).
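To make the LO evolution of the Gegenbauer coefficients explicit, the following Python sketch combines the eigenvalues $v(n)$ quoted above with one-loop running of the coupling. The values of $\Lambda_{\rm QCD}$, the number of flavors, and the initial coefficients $(a_2,a_4)$ at $\mu_0^2=1$ GeV$^2$ are illustrative assumptions, not results of this work.

```python
import numpy as np
from scipy.special import digamma

CF = 4.0 / 3.0
NFLAV = 4
BETA0 = 11.0 - 2.0 * NFLAV / 3.0     # one-loop beta-function coefficient
LAMBDA2 = 0.35 ** 2                  # Lambda_QCD^2 in GeV^2 (assumed)

def v(n):
    """ERBL LO eigenvalues v(n) as quoted in the text; v(0) = 0."""
    return 1.0 / ((n + 1) * (n + 2)) - 0.5 + 2.0 * (digamma(2.0) - digamma(n + 2.0))

def a_s(mu2):
    """One-loop a_s = alpha_s / (4 pi)."""
    return 1.0 / (BETA0 * np.log(mu2 / LAMBDA2))

def evolve(a_n0, n, mu2_0, mu2):
    """a_n(mu^2) = a_n(mu_0^2) [a_s(mu^2)/a_s(mu_0^2)]^(-2 C_F v(n)/beta_0),
    which follows from the LO ERBL equation and one-loop running of a_s."""
    return a_n0 * (a_s(mu2) / a_s(mu2_0)) ** (-2.0 * CF * v(n) / BETA0)

a2_0, a4_0 = 0.20, -0.14             # illustrative values at mu_0^2 = 1 GeV^2
for mu2 in (1.0, 2.4 ** 2, 40.0):
    print(mu2, evolve(a2_0, 2, 1.0, mu2), evolve(a4_0, 4, 1.0, mu2))
# Both coefficients shrink logarithmically, so every DA drifts toward psi_0.
```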
The radiative corrections to the hard amplitudes in NLO, encapsulated in $T_1$, have been computed in Ref. . More recently, the $\beta$–part of the NNLO amplitude $T_2$, i.e., $\beta_0 \cdot T_{2,\beta}$, was also calculated.[@MMP02] It is instructive to discuss the structure of this result at the scale $\mu^{2}_{\rm F}=\mu^{2}_{\rm R}$, especially in view of further considerations in Sec. \[sec:LCSR\]: $$\begin{aligned}
\beta_0 T_{2,\beta}
=
\beta_0T_0
\otimes
\left[
C_{\rm F} {\cal T}^{(2)}_{\beta}
- C_{\rm F} {\rm L}(y)
\cdot
{\cal T}^{(1)}
+ {\rm L}(y)\cdot \left(V^{(1)}_{\beta}\right)_{+}
-\frac{1}{2}{\rm L}^{2}(y)
\cdot
V^{(0)}
\right]\, ,~
\label{eq:T2beta}\end{aligned}$$ where ${\rm L}(y)=\ln\left[(Q^2y +q^2\bar{y})/\mu^{2}_{\rm F} \right]$. The first term, $C_{\rm F} {\cal T}^{(2)}_{\beta}$, is the $\beta_0$-part of the NNLO coefficient function and represents, from the computational point of view, the most cumbersome element of the calculation in Ref. . The next term originates from the NLO coefficient function ${\cal T}^{(1)}$ using one-loop evolution of $a_s$. The third term appears as the $\beta_0$-part of the two-loop ERBL evolution (see Eq. (\[eq:ERBL\])), while the last term, which is proportional to $V^{(0)}$, stems from the combined effect of the ERBL-evolution and the one-loop evolution of $a_s$.
Light Cone Sum Rules for the process $\mathbf{\gamma^*(Q^2)\gamma(q^2\simeq 0) \to \pi^0}$ {#sec:LCSR}
==========================================================================================
The transition form factor, when one of the photons becomes quasi real ($q^2 \to 0$), has been measured by different Collaborations.[@CELLO91; @CLEO98; @BaBar09] However, this kinematics requires the modification of the standard factorization formula Eq. (\[eq:convolution\]) in order to take into account the long-distance interaction, i.e., the hadronic content of the on-shell photon. A viable way to reach this goal is to employ the method of LCSRs, which are based on a dispersion relation for $F^{\gamma^{*}\gamma^{*}\pi}$ in the variable $q^2$, viz., $$F^{\gamma^{*}\gamma^{*}\pi}\left(Q^2,q^2\right)
=
\int_{0}^{\infty} ds
\frac{\rho\left(Q^2,s\right)}{s+q^2} \, .
\label{eq:dis-rel}$$ The key element in this equation is the spectral density $
\rho(Q^2,s)
=
\frac{\mathbf{Im}}{\pi}
\left[F^{\gamma^*\gamma^*\pi}(Q^2,-s)
\right]
$ for which we make the ansatz[@Kho99] $
\rho
=
\rho^{\rm ph}(Q^2,s) \theta(s_0-s)
+ \rho^{\rm PT}(Q^2,s) \theta(s-s_0)
$, where the “physical” (ph) spectral density $\rho^{\rm ph}$ serves to accommodate the hadronic content of the photon (below an effective threshold $s_0$) by means of the transition form factors $F^{\gamma^*V \pi}$ of vector mesons, notably, $\rho$ or $\omega$, $$\rho^{\rm ph}(Q^2,s)
=
\sqrt{2}f_\rho F^{\gamma^*V \pi}(Q^2)
\cdot
\delta(s-m^2_{V}) \, .
\label{eq:resonance}$$ On the other hand, $\rho^{\rm PT}$ embodies the partonic part of the LCSR and derives from Eq. (\[eq:convolution\]) via the relation $
\rho^{\rm PT}(Q^2,s)
=
\frac{\mathbf{Im}}{\pi}
\left[F^{\gamma^*\gamma^*\pi}(Q^2,-s)
\right]
$. The pion-photon transition form factor $F^{\gamma^{*}\gamma\pi}(Q^2,0)$ can be expressed in terms of $\rho^{\rm PT}$ having recourse to quark-hadron duality in the vector channel: $$\begin{aligned}
F^{\gamma\gamma^*\pi}_\text{LCSR}(Q^2)
&=&
\frac{1}{\pi}\int_{s_0}^{\infty} \frac{\textbf{Im}\left(F^{\gamma^*\gamma^*\pi}(Q^2,-s)\right)}{s} ds
\nonumber \\
&&
+ \frac{1}{\pi}\int_{0}^{s_0} \frac{\textbf{Im}\left(F^{\gamma^*\gamma^*\pi}(Q^2,-s)\right)} {m_\rho^2}
e^{(m_\rho^2-s)/M^2}ds \ ,
\label{eq:LCSR}\end{aligned}$$ with $s_0 \simeq 1.5$ GeV$^2$ and $M^2$ denoting the Borel parameter in the interval ($0.5-0.9$) GeV$^2$. The LO spectral density $\rho^{(0)}$ has been determined in Ref. using for $F^{\gamma^*\gamma^*\pi}$ Eq.(\[Eq:T\_0\]).
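A rough numerical sketch may help to visualize the structure of Eq. (\[eq:LCSR\]). The Python code below keeps only the LO, leading-twist spectral density, $\rho^{(0)}(Q^2,s)=N_f\,\varphi\!\left(Q^2/(Q^2+s)\right)/(Q^2+s)$ for a symmetric DA (the standard LO result, which is not written out explicitly in this text), uses the asymptotic DA, and takes $s_0=1.5$ GeV$^2$ and $M^2=0.7$ GeV$^2$; the values of $f_\pi$ and $m_\rho$ are the usual ones and are assumptions here. The NLO/NNLO radiative corrections and the twist-four term of the actual analysis are ignored.

```python
import numpy as np
from scipy.integrate import quad

F_PI, M_RHO2 = 0.1304, 0.7755 ** 2   # GeV and GeV^2 (standard values, assumed)
NF = np.sqrt(2.0) / 3.0 * F_PI
S0, M2 = 1.5, 0.7                    # duality threshold and Borel parameter, GeV^2

def phi_asy(x):
    return 6.0 * x * (1.0 - x)

def rho_lo(Q2, s):
    """LO leading-twist spectral density (1/pi) Im F(Q^2, -s)."""
    x = Q2 / (Q2 + s)
    return NF * phi_asy(x) / (Q2 + s)

def ff_lcsr_lo(Q2):
    """Eq. (eq:LCSR) evaluated with the LO spectral density only."""
    pert, _ = quad(lambda s: rho_lo(Q2, s) / s, S0, np.inf)
    hadr, _ = quad(lambda s: rho_lo(Q2, s) / M_RHO2 * np.exp((M_RHO2 - s) / M2),
                   0.0, S0)
    return pert + hadr

for Q2 in (2.0, 4.0, 10.0, 20.0, 40.0):
    print(Q2, Q2 * ff_lcsr_lo(Q2))
# Q^2 F rises toward and then levels off near sqrt(2) f_pi ~ 0.184 GeV,
# i.e. it scales at large Q^2, unlike the growth seen in the BaBar data.
```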
Partial results for the NLO spectral density $\rho^{(1)}$ in the leading-twist approximation have been given in Ref. , while the general solution $
\rho_n^{(1)}(Q^2,s)
=
\frac{\mathbf{Im}}{\pi}
\left[\left(T_{1}\otimes \psi_n\right)(Q^2,-s)
\right]
$ was obtained in Ref. to read $$\rho_n^{(1)}(Q^2,s)
=
\frac{\rho^{(1)}_n\left(x;\mu^2_{\rm F}\right)}
{(Q^2+s)}\Bigg|_{x=\frac{Q^2}{Q^2+s}}$$ with $$\begin{aligned}
\rho^{(1)}_n\left(x;\mu^2_{\rm F}\right)
= &&
C_{\rm F} \left[
-3\left[1-v^{a}(n)\right]+\frac{\pi^2}{3}
-\ln^2\left(\frac{\bar{x}}{x}\right) + 2v(n)
\ln\left(\frac{\bar{x}}{x} \frac{Q^2}{\mu^2_{\rm F}}
\right)
\right] \psi_n(x)
\nonumber \\
&& \!\!\!
- C_{\rm F}\, 2\!\!\!
\sum^n_{l=0,2,\ldots}
\left[G_{nl}+v(n)\cdot b_{n l}\right] \psi_l(x) \, ,~
\label{eq:spec-den-NLO}\end{aligned}$$ where $v^a(n)= 1/(n+1)(n+2)-1/2$ and $G_{nl},~b_{n l}$ are calculable triangular matrices (see Ref. for details). Note that the spectral density $\rho^{(1)}_n$ allows one to obtain $F^{\gamma\gamma^*\pi}_\text{LCSR}(Q^2)$ for any number of the Gegenbauer harmonics in the expansion of $\varphi_\pi$.
Employing this approach, predictions at the NLO for $Q^2F^{\gamma\gamma^*\pi}_\text{LCSR}(Q^2)$ were derived,[@BMS02] using a variety of pion DAs: the asymptotic (Asy) one, the CZ model,[@CZ84] and the BMS “bunch”[@BMS01] (see left panel of Fig. \[F:DAmodels\]). It was found that the radiative corrections are important, being negative and contributing up to –17% at low and moderate $Q^2$ values. Let us recall the main results of this CLEO-data analysis, referring to Refs. for a full-fledged discussion and more details. The CLEO data[@CLEO98] were processed in terms of $\sigma$ error ellipses and the results are displayed in the right panel of Fig. \[F:DAmodels\] around the best-fit point. In this plot, the BMS “bunch” of pion DAs[@BMS01] is shown as a slanted green rectangle, while the vertical dashed and solid lines denote the estimates for $a_2$ (related to the second moment of $\varphi_\pi$) of two recent lattice simulations in Refs., respectively. Several other models, explained in Refs. , are also shown. The main upshot is that wide pion DAs, like the CZ model, are excluded at the 4$\sigma$ level, whereas the asymptotic DA, and others close to it, are also excluded at the level of at least 3$\sigma$. As one sees from this figure, only endpoint-suppressed pion DAs of the BMS type are within the $1\sigma$ error ellipse of the CLEO data[@CLEO98] and simultaneously in agreement with the mentioned lattice constraints.
![**Left**: Predictions for $Q^2 F^{\gamma\gamma^*\pi}_\text{LCSR}(Q^2)$ using the CZ model (upper dashed red line), the BMS “bunch” (shaded green strip), and the Asy DA (low dashed black line) in comparison with the CELLO (diamonds) and the CLEO (triangles) data. **Right**: CLEO-data constraints on $F^{\gamma\gamma^*\pi}_\text{LCSR}(Q^2)$ in the ($a_2$, $a_4$) plane at the scale $\mu^2=(2.4$GeV$)^2$ in terms of error regions around the BMS best-fit point ,[[@BMS02]]{} using the following designations: $1\sigma$ (thick solid green line); $2\sigma$ (solid blue line); $3\sigma$ (dashed-dotted red line). Two recent lattice simulations[[@Lat06; @Lat07]]{} are denoted, respectively, by vertical dashed and solid lines together with predictions of QCD sum rules with nonlocal condensates (slanted green rectangle), — BMS model,[[@BMS01]]{} — asymptotic DA, — [CZ]{} DA.[[@CZ84]]{} []{data-label="F:DAmodels"}](fig-ff-b4-cello.eps "fig:"){width="47.00000%"} ![**Left**: Predictions for $Q^2 F^{\gamma\gamma^*\pi}_\text{LCSR}(Q^2)$ using the CZ model (upper dashed red line), the BMS “bunch” (shaded green strip), and the Asy DA (low dashed black line) in comparison with the CELLO (diamonds) and the CLEO (triangles) data. **Right**: CLEO-data constraints on $F^{\gamma\gamma^*\pi}_\text{LCSR}(Q^2)$ in the ($a_2$, $a_4$) plane at the scale $\mu^2=(2.4$GeV$)^2$ in terms of error regions around the BMS best-fit point ,[[@BMS02]]{} using the following designations: $1\sigma$ (thick solid green line); $2\sigma$ (solid blue line); $3\sigma$ (dashed-dotted red line). Two recent lattice simulations[[@Lat06; @Lat07]]{} are denoted, respectively, by vertical dashed and solid lines together with predictions of QCD sum rules with nonlocal condensates (slanted green rectangle), — BMS model,[[@BMS01]]{} — asymptotic DA, — [CZ]{} DA.[[@CZ84]]{} []{data-label="F:DAmodels"}](fig5a.eps "fig:"){width="48.00000%"}
The inclusion of the main, i.e., $\beta_0$-proportional NNLO contribution, in $F^{\gamma\gamma^*\pi}_\text{LCSR}$ proceeds via the dispersion integral in (\[eq:LCSR\]).[@MS09] The technical problem is how to obtain the contributions to $\rho^{(2,\beta)}$ from terms with various powers of the logarithms ${\rm L}(y)$ in Eq. (\[eq:T2beta\]). The outcome of this calculation turns out to be negative, like the NLO contribution, and about –7% (taken together with the effect of a more realistic Breit-Wigner (BW) ansatz for the meson resonances in Eq. (\[eq:resonance\])) at small $Q^2\sim 2$ GeV$^2$. The size of this suppression decreases rather fast to –2.5% with increasing $Q^2 \geq 6$ GeV$^2$.
[![image](fig6.eps){width="48.00000%"} ![image](fig7.eps){width="48.00000%"}]{}
The net result is a slight suppression of the prediction for the scaled form factor (see Fig. \[fig:NNLOvsCLEO\]).
Confronting NNLO LCSR results with the BaBar data {#sec:NNLO-BaBar}
=================================================
In the preceding section we have shown in detail that the CLEO data are strictly incompatible with wide pion DAs and demand that the endpoints $x=0,1$ be more strongly suppressed than in the asymptotic DA. Surprisingly, the new data of the BaBar Collaboration[@BaBar09] on the pion-photon transition form factor are in contradiction with this behavior. More specifically, these data, which extend from intermediate up to high momenta in the range $4 < Q^2 < 40$ GeV$^2$, show a significant growth with $Q^2$ for values above $\sim 10$ GeV$^2$. Indeed, the corresponding data points lie above the asymptotic QCD prediction $\sqrt{2}f_{\pi}$ and continue to grow with $Q^2$ up to the highest measured momentum. This behavior of the BaBar data is clearly in conflict with collinear factorization. This in turn means that, as we argued in Ref. , the inclusion of the NNLO radiative corrections cannot reconcile the BaBar data with perturbative QCD. This is true for any pion DA that vanishes at the endpoints $x=0,1$ (see Fig. \[fig:NNLO-BaBar\]). From Table \[table:1\] it becomes clear that a wide pion DA like the CZ model, though it lies above the asymptotic prediction, shows exactly the same scaling behavior above $\sim 10$ GeV$^2$ as all the discussed pion DAs, which is not even surprising because it also vanishes at the endpoints $x=0,1$. The conclusion is that — contrary to the statements of the BaBar Collaboration[@BaBar09] — also the CZ DA cannot reproduce *all* BaBar data.[@MS09; @MS09HSQCD09; @Kho09] Indeed, from Fig. \[fig:NNLO-BaBar\] one sees that in the region of the CLEO data, the CZ model fails also with respect to the BaBar data points at the $4\sigma$ level, whereas above $\simeq 20$ GeV${}^2$, and up to the highest measured value of $Q^2$, $40$ GeV${}^2$, it fails again, because instead of growing with $Q^2$ it scales.
![ Predictions for $Q^2 F^{\gamma\gamma^*\pi}_\text{LCSR}(Q^2)$ calculated with the following pion DAs: Asy — lower solid line, BMS “bunch” — shaded green strip, and the CZ model — upper solid red line. The BaBar data[@BaBar09] are shown as diamonds with error bars. The CELLO[@CELLO91] and the CLEO[@CLEO98] data are also shown with the designations used in Fig. \[F:DAmodels\]. The displayed theoretical results include the NNLO$_{\beta}$ radiative corrections and the BW model for the meson resonances. The horizontal dashed line marks the asymptotic QCD prediction $\sqrt{2}f_\pi$.[]{data-label="fig:NNLO-BaBar"}](figBMSBaBar.eps){width="65.00000%"}
In summary,\
(i) The combined effect of the negative ${\rm NNLO}_\beta$, i.e., suppressing, radiative corrections and the enhancing effect of using a Breit-Wigner model for the vector-meson resonances finally amounts to a moderate overall suppression of $Q^2 F^{\gamma\gamma^*\pi}_\text{LCSR}(Q^2)$ in the range of momentum transfer 10-40 GeV$^2$,[@MS09; @MS09HSQCD09] probed in the BaBar experiment.\
(ii) The growth with $Q^2$ of the scaled form factor, measured by the BaBar Collaboration, cannot be attributed to the hadronic content of the real photon because this is a twist-four contribution that is rapidly decreasing with increasing $Q^2$.\
(iii) It is impossible to get enhancement of the form factor within the QCD collinear factorization using pion DA models which have a convergent projection onto the Gegenbauer harmonics and, hence, vanish at the endpoints $x=0, 1$ (cf. the $\chi^2$ values in Table \[table:1\]). It seems (see next section) that exactly the violation of this feature may provide an explanation of the BaBar data.
BaBar data — heuristic explanations {#sec:BaBar-scenarios}
===================================
Despite the claims by the BaBar Collaboration[@BaBar09] that their data are in agreement with QCD, such an explanation is out of reach at present. Hence, one is forced to look for alternative explanations. There have been several proposals to explain the anomalous behavior of the BaBar data, among others, e.g., Refs.. We restrict attention to one sort of such proposals, which assumes that the pion DA may be “practically flat”, hence violating the collinear factorization and entailing a (logarithmic) growth of $Q^2 F^{\gamma^*\gamma\pi}(Q^2)$ with $Q^2$. One has[@Rad09] ($\sigma^2 =0.53$ GeV$^2$) $$Q^{2}F^{\gamma^{*}\gamma\pi}
=
\frac{\sqrt{2}f_{\pi}}{3}
\int_{0}^{1} \frac{1}{x}
{\displaystyle}\left[1 - {\rm e}^{ -\frac{x Q^2}{\bar{x}2 \sigma}}\right]dx \, .
\label{eq:flat-Rad}$$ Another option[@Pol09] gives instead ($m\approx 0.65$ GeV, cf. Eq. (\[Eq:T\_0\])) $$Q^{2}F^{\gamma^{*}\gamma\pi}
=
\frac{\sqrt{2}f_{\pi}}{3}
\int_{0}^{1}
\frac{\varphi_{\pi}(x,Q)}{x + \frac{m^2}{Q^2}}dx\,
\label{eq:flat-Pol}$$ with $\varphi_{\pi}(x,\mu_{0})=N+(1-N)6x\bar{x}$, where $N\approx 1.3$ and $\mu_0=0.6-0.8$ GeV. Equation (\[eq:flat-Rad\]) can be compared with the available experimental data[@CELLO91; @CLEO98; @BaBar09] by describing the latter with the phenomenological fit ($\Lambda\approx 0.9$ GeV, $b\approx-1.4$) $$Q^{2}F^{\gamma^{*}\gamma\pi}
= \!
\frac{Q^2}{2 \sqrt{2} f_{\pi} \pi^2}
\!\left[\frac{\Lambda^2}{\Lambda^2+Q^2}+
b \left(\!\!\frac{\Lambda^2}{\Lambda^2+Q^2} \!\!\right)^2 \right]
\label{eq:dipole}$$
by means of a $\chi^2_{ndf}$ criterion, given in Table \[table:dipole-flat\]. One observes — diagonal from the BaBar (CELLO&CLEO) entry to the CELLO&CLEO (BaBar) one — that it is not possible to fit simultaneously the CELLO/CLEO data and the BaBar data with the same accuracy using these parameterizations.
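For orientation, the two parameterizations can be evaluated side by side with a few lines of Python. The sketch below implements Eq. (\[eq:flat-Rad\]), with the parameter $\sigma$ of that equation taken as $0.53$ GeV$^2$ so that the exponent $xQ^2/(2\sigma\bar{x})$ is dimensionless, together with the phenomenological fit of Eq. (\[eq:dipole\]); the remaining parameter values are those quoted in the text, while $f_\pi=0.1304$ GeV is the standard value and is an assumption here.

```python
import numpy as np
from scipy.integrate import quad

F_PI = 0.1304                  # GeV (standard value, assumed)
SIGMA = 0.53                   # GeV^2; parameter of Eq. (eq:flat-Rad)
LAMBDA2, B = 0.9 ** 2, -1.4    # GeV^2 and dimensionless b, Eq. (eq:dipole)

def q2f_flat(Q2):
    """Flat-DA prediction, Eq. (eq:flat-Rad), with exponent x*Q^2/(2*SIGMA*(1-x))."""
    integrand = lambda x: -np.expm1(-x * Q2 / (2.0 * SIGMA * (1.0 - x))) / x
    val, _ = quad(integrand, 1e-12, 1.0 - 1e-12)
    return np.sqrt(2.0) * F_PI / 3.0 * val

def q2f_dipole(Q2):
    """Phenomenological fit, Eq. (eq:dipole)."""
    r = LAMBDA2 / (LAMBDA2 + Q2)
    return Q2 / (2.0 * np.sqrt(2.0) * F_PI * np.pi ** 2) * (r + B * r ** 2)

for Q2 in (4.0, 10.0, 20.0, 40.0):
    print(Q2, q2f_flat(Q2), q2f_dipole(Q2))
# The flat-DA curve grows (logarithmically) with Q^2, whereas the
# dipole-type fit saturates at large Q^2.
```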
This interpretation is given further impetus, when we consider the BaBar data as being two “experiments” BaBar1 and BaBar2 — see Fig. \[fig:split-BaBar\]. One sees from Table 3, in terms of $\chi^2_{ndf}$, that the flat-DA scenario cannot describe [*both*]{} BaBar “experiments” simultaneously with the same accuracy.
![image](fig-babar1-2_corr.eps){width="65mm"} \[fig:split-BaBar\]

            BaBar1   BaBar2
  --------- -------- --------
  BaBar1    $3.3$    $0.33$
  BaBar2    $3.5$    $0.26$

  \[table:split-BaBar\]
Conclusions {#sec:concl}
===========
We have studied in detail the pion-photon transition form factor using light-cone sum rules and including QCD radiative corrections up to the two loop level. We also took into account twist-four contributions. It has been our goal to derive predictions for $Q^2F^{\gamma\gamma^{*}\pi}$ using several pion distribution amplitudes that can be compared with the available experimental data. Our results have been deduced within the convolution scheme of QCD for distribution amplitudes that vanish at the endpoints $x=0,1$. They turn out to be unable to match the data of the BaBar Collaboration for momenta beyond $10$ GeV${}^2$, which grow with increasing $Q^2$. We analyzed this behavior and argued that proposed scenarios, which make use of flat pion distribution amplitudes, cannot match the high-$Q^2$ BaBar data and those of the CLEO and the CELLO Collaborations simultaneously, because the latter demand distribution amplitudes that vanish at the endpoints.
Acknowledgments {#acknowledgments .unnumbered}
===============
This report is dedicated to the 75th birthday of Anatoly Efremov. We are grateful to A. P. Bakulev for a fruitful collaboration. This work was partially supported by the Heisenberg–Landau Program (Grant 2009) and the Russian Foundation for Fundamental Research (Grants 07-02-91557 and 09-02-01149).
[10]{}
A. V. Efremov and A. V. Radyushkin, [*Phys. Lett. B*]{} [**94**]{}, 245 (1980); [*Theor. Math. Phys.*]{} [**42**]{}, 97 (1980). G. P. Lepage and S. J. Brodsky, [*Phys. Rev. D*]{} [**22**]{}, 2157 (1980). V. L. Chernyak and A. R. Zhitnitsky, [*Phys. Rept.*]{} [**112**]{}, 173 (1984). S. V. Mikhailov and A. V. Radyushkin, [*JETP Lett.* ]{} **43**, 712 (1986); [*Sov. J. Nucl. Phys.*]{} [**49**]{}, 494 (1989). A. P. Bakulev and S. V. Mikhailov, [*Phys. Lett. B*]{} [**436**]{}, 351 (1998). A. P. Bakulev, S. V. Mikhailov, and N. G. Stefanis, [*Phys. Lett. B*]{} [**508**]{}, 279 (2001); [*Phys. Lett. B*]{} [**590**]{}, 309 (2004) Erratum. " A. P. Bakulev, K. Passek-Kumerički, W. Schroers, and N. G. Stefanis, [*Phys. Rev. D*]{} [**70**]{}, 033014 (2004); [*Phys. Rev. D*]{} [**70**]{}, 079906 (2004) Erratum. N. G. Stefanis, [*Nucl. Phys. Proc. Suppl.*]{} [**181-182**]{}, 199 (2008). A. P. Bakulev, S. V. Mikhailov, and N. G. Stefanis, [*Annalen Phys.*]{} **13**, 629 (2004). H. J. Behrend [*et al.*]{}, [*Z. Phys. C*]{} [**49**]{}, 401 (1991). J. Gronberg [*et al.*]{}, [*Phys. Rev. D*]{} [**57**]{}, 33 (1998). A. Khodjamirian, [*Eur. Phys. J. C*]{} [**6**]{}, 477 (1999). A. Schmedding and O. Yakovlev, [*Phys. Rev. D*]{} [**62**]{}, 116002 (2000). V. M. Braun and I. E. Filyanov, [*Z. Phys. C*]{} [**48**]{}, 239 (1990). A. P. Bakulev, S. V. Mikhailov, and N. G. Stefanis, [*Phys. Rev. D*]{} [**67**]{}, 074012 (2003); [*Phys. Lett. B*]{} [**578**]{}, 91 (2004); [*Phys. Rev. D*]{} [**73**]{}, 056002 (2006). P. Kroll and M. Raulfs, [*Phys. Lett. B*]{} [**387**]{}, 848 (1996). S. V. Mikhailov and N. G. Stefanis, [*Nucl. Phys. B*]{}, [**821**]{}, 291 (2009). B. Melić, D. M[ü]{}ller, and K. Passek-Kumerički, [*Phys. Rev. D*]{} [**68**]{}, 014013 (2003). B. Aubert [*et al.*]{}, [*Phys. Rev. D*]{} [**80**]{}, 052002 (2009). V. M. Braun, E. Gardi, and S. Gottwald, [*Nucl. Phys. B*]{} [**685**]{}, 171 (2004). F. Del Aguila and M. K. Chase, [*Nucl. Phys. B*]{} [**193**]{}, 517 (1981). E. Braaten, [*Phys. Rev. D*]{} [**28**]{}, 524 (1983). E. P. Kadantseva, S. V. Mikhailov, and A. V. Radyushkin, [*Sov. J. Nucl. Phys.*]{} [**44**]{}, 326 (1986). V. M. Braun [*et al.*]{}, [*Phys. Rev. D*]{} [**74**]{}, 074501 (2006). M. A. Donnellan [*et al.*]{}, [*PoS*]{} [**LAT2007**]{}, 369 (2007). S. V. Mikhailov and N. G. Stefanis, arXiv:0909.5128 \[hep-ph\]. A. Khodjamirian, arXiv:0909.2154 \[hep-ph\]. A. V. Radyushkin, arXiv:0906.0323 \[hep-ph\]. M. V. Polyakov, [*JETP Lett.*]{} [**90**]{}, 228 (2009). A. E. Dorokhov, arXiv:0905.4577 \[hep-ph\]. H. n. Li and S. Mishima, arXiv:0907.0166 \[hep-ph\]. P. Kotko and M. Praszalowicz, arXiv:0907.4044 \[hep-ph\]. W. Broniowski and E. R. Arriola, arXiv:0910.0869 \[hep-ph\].
[^1]: Talk presented at Workshop “Recent Advances in Perturbative QCD and Hadronic Physics”, 20–25 July 2009, ECT\*, Trento (Italy), in Honor of Prof. Anatoly Efremov’s 75th Birthday Celebration.
---
bibliography:
- 'cite.bib'
---
$^*$ALBERT Inc., Japan\
$^{\dagger}$Graduate School of Science and Engineering, Yamagata University, Japan
### abstract: {#abstract .unnumbered}
Generalization is one of the most important issues in machine learning problems. In this study, we consider generalization in restricted Boltzmann machines (RBMs). We propose an RBM with multivalued hidden variables, which is a simple extension of conventional RBMs. We demonstrate that the proposed model is better than the conventional model in terms of generalization via numerical experiments on contrastive divergence learning with artificial data and on a classification problem with MNIST.
Introduction {#sec:introduction}
============
Generalization is one of the most important goals in statistical machine learning problems [@Bishop2006]. In various standard machine learning techniques, given a particular data set, we fit our probabilistic learning model to the empirical distribution (or the data distribution) of the data set. When our learning model is sufficiently flexible, it can fit the empirical distribution exactly via an appropriate learning method. A learning model that is too close to the empirical distribution frequently gives poor results for new data points. This situation is known as *over-fitting*. Over-fitting impedes generalization; therefore, techniques that can suppress over-fitting are needed to achieve good generalization. Regularizations, such as $L_1$ and $L_2$ regularizations or their combination (the elastic net) [@ElasticNet2005], are popular techniques used for this purpose.
Here, we focus on a restricted Boltzmann machine (RBM) [@RBM1986; @CD2002]. RBMs have a wide range of applications such as collaborative filtering [@RBMcFilter2007], classification [@DRBM2008], and deep learning [@Hinton2006; @DBM2009; @DBM2012]. The suppression of over-fitting is also important in RBMs. An RBM is a probabilistic neural network defined on a bipartite undirected graph comprising two different layers: a visible layer and a hidden layer. The visible layer, which consists of visible (random) variables, directly corresponds to the data points, while the hidden layer, which consists of hidden (random) variables, does not. The hidden layer creates complex correlations among the visible variables. The sample space of the visible variables is determined by the range of data elements, whereas the sample space of the hidden variables can be set freely. Typically, the hidden variables are given binary values ($\{0,1\}$ or $\{-1,+1\}$).
In this study, we propose an RBM with multivalued hidden variables. The proposed RBM is a very simple extension of the conventional RBM with binary hidden variables (referred to as the binary-RBM in this paper). However, we demonstrate that the proposed RBM is better than the binary-RBM in terms of suppressing over-fitting. The remainder of this paper is organized as follows. We define the proposed RBM in Sec. \[sec:RBM\] and explain its maximum likelihood estimation in Sec. \[sec:log-likelihood-function\]. In Sec. \[sec:RBM\_experiment\], we demonstrate the validity of the proposed RBM using numerical experiments for contrastive divergence (CD) learning [@CD2002] with artificial data. We give insight into the effect of our extension (i.e., the effect of multivalued hidden variables) using a toy example in Sec. \[sec:ToyExample\]. In Sec. \[sec:PatternRecognitionApplication\], we apply the proposed RBM to a classification problem and show that it is also effective in this type of problem. Finally, the conclusion is given in Sec. \[sec:conclusion\].
Restricted Boltzmann machine with multivalued hidden variables {#sec:RBM}
==============================================================
Let us consider a bipartite graph consisting of two different layers: the visible layer and hidden layer, as shown in Fig. \[fig:RBM\].
![Bipartite graph consisting of two layers: the visible layer and the hidden layer. $V$ and $H$ are the sets of indices of the nodes in the visible layer and hidden layer, respectively.[]{data-label="fig:RBM"}](RBM.eps){height="1.5cm"}
Binary (or bipolar) visible variables, $\bm{v} := \{v_i \in \{-1,+1\} \mid i \in V\}$, are assigned to the corresponding nodes in the visible layer. The corresponding hidden variables, $\bm{h} := \{h_j \in { \mathcal{X} }(s) \mid j \in H\}$, are assigned to the nodes in the hidden layer, where ${ \mathcal{X} }(s)$ is the sample space defined by $$\begin{aligned}
{ \mathcal{X} }(s):= \big\{ (2k - s)/s \mid k= 0,1,\ldots,s \big\} \quad s \in \mathbb{N},
\label{eqn:SampleSpace_of_hidden}\end{aligned}$$ where $\mathbb{N}$ is the set of all natural numbers. For example, ${ \mathcal{X} }(1) = \{-1,+1\}$, ${ \mathcal{X} }(2) = \{-1,0,+1\}$, and ${ \mathcal{X} }(3) = \{-1,-1/3, +1/3,+1\}$. Namely, ${ \mathcal{X} }(s)$ is the set of $s+1$ equally spaced values that partition the interval $[-1,+1]$ into $s$ parts. We define that in the limit of $s \to \infty$, ${ \mathcal{X} }(s)$ becomes a continuous space $[-1, +1]$, i.e., ${ \mathcal{X} }(\infty) = [-1,+1]$. On the bipartite graph, we define the energy function for $s \in \mathbb{N}$ as $$\begin{aligned}
E_s(\bm{v},\bm{h} ; \theta):=-\sum_{i \in V}b_i v_i-\sum_{j \in H}c_j h_j
-\sum_{i \in V}\sum_{j \in H}w_{i,j}v_i h_j,
\label{eqn:EnergyFunction}\end{aligned}$$ where $\{b_i\}$, $\{c_j\}$, and $\{w_{i,j}\}$ are the learning parameters of the energy function, and they are collectively denoted by $\theta$. Specifically, $\{b_i\}$ and $\{c_j\}$ are the biases for the visible and hidden variables, respectively, and $\{w_{i,j}\}$ are the couplings between the visible and hidden variables. Our RBM is defined in the form of a Boltzmann distribution in terms of the energy function given in Eq. (\[eqn:EnergyFunction\]): $$\begin{aligned}
P_s(\bm{v}, \bm{h} \mid \theta):=\frac{\omega(s)}{Z_s(\theta)} \exp\big(-E_s(\bm{v},\bm{h} ; \theta)\big),
\label{eqn:RBM}\end{aligned}$$ where $$\begin{aligned}
Z_s(\theta):=\sum_{\bm{v} \in \{-1,+1\}^{|V|}}\sum_{\bm{h} \in { \mathcal{X} }(s)^{|H|}} \omega(s)\exp\big(-E_s(\bm{v},\bm{h} ; \theta)\big)
\label{eqn:PartitionFunction_RBM}\end{aligned}$$ is the partition function. The multiple summations in Eq. (\[eqn:PartitionFunction\_RBM\]) mean $$\begin{aligned}
\sum_{\bm{v} \in \{-1,+1\}^{|V|}} &= \sum_{v_1 \in \{-1,+1\}}\sum_{v_2 \in \{-1,+1\}}\cdots \sum_{v_{|V|} \in \{-1,+1\}}, \\
\sum_{\bm{h} \in { \mathcal{X} }(s)^{|H|}} &= \sum_{h_1 \in { \mathcal{X} }(s)}\sum_{h_2 \in { \mathcal{X} }(s)}\cdots \sum_{h_{|H|} \in { \mathcal{X} }(s)}.\end{aligned}$$ The factor $\omega(s):= \{2/(s + 1)\}^{|H|}$ appearing in Eqs. (\[eqn:RBM\]) and (\[eqn:PartitionFunction\_RBM\]) is a constant unrelated to $\bm{v}$ and $\bm{h}$. Although it cancels in the fraction in Eq. (\[eqn:RBM\]), we leave it for the sake of the subsequent analysis. The factor lets the summation over $h_j$ be a Riemann sum and prevents the divergence of the partition function as $s \to \infty$. It is noteworthy that when $s = 1$, Eq. (\[eqn:RBM\]) is equivalent to the binary-RBM.
The marginal distribution of RBM is expressed as $$\begin{aligned}
P_s(\bm{v}\mid \theta)&=\sum_{\bm{h} \in { \mathcal{X} }(s)^{|H|}}P_s(\bm{v}, \bm{h} \mid \theta){\nonumber \\}&=\frac{1}{Z_s(\theta)}\exp\Big(\sum_{i \in V}b_i v_i + \sum_{j \in H}\ln \phi_s\big(\lambda_j(\bm{v},\theta)\big)\Big),
\label{eqn:MargialDistribution}\end{aligned}$$ where $\lambda_j(\bm{v},\theta):= c_j + \sum_{i \in V}w_{i,j}v_i$ and $\phi_s(x):= \sum_{h \in { \mathcal{X} }(s)}2(s+1)^{-1} e^{x h}$. It is noteworthy that the factor $2(s + 1)^{-1}$ in the definition of $\phi_s(x)$ comes from $\omega(s)$. Using the geometric series formula, we obtain $$\begin{aligned}
\phi_s(x) =\frac{2\sinh \{(s+1)x/s\}}{(s + 1)\sinh (x/s)}
\label{eqn:phi_s(x)}\end{aligned}$$ for $1 \leq s < \infty$. When $s \to \infty$, we obtain $$\begin{aligned}
\phi_{\infty}(x) =\int_{-1}^{+1} e^{x h} dh = \frac{2 \sinh x}{x}.
\label{eqn:phi_inf(x)}\end{aligned}$$ The factor $2(s + 1)^{-1}$ ensures that $\lim_{s\to \infty}\phi_s(x) = \phi_{\infty}(x)$. The conditional distributions are $$\begin{aligned}
P_s(\bm{v} \mid \bm{h}, \theta)&=\prod_{i \in V}P_s(v_i \mid \bm{h}, \theta),\quad
P_s(v_i \mid \bm{h}, \theta) \propto \exp\big(\xi_i(\bm{h},\theta) v_i\big),
\label{eqn:CondDistribution_V|H}\\
P_s(\bm{h} \mid \bm{v}, \theta)&=\prod_{j \in H}P_s(h_j \mid \bm{v}, \theta),\quad
P_s(h_j \mid \bm{v}, \theta) \propto \exp\big(\lambda_j(\bm{v},\theta) h_j\big),
\label{eqn:CondDistribution_H|V}\end{aligned}$$ where $\xi_i(\bm{h},\theta):= b_i + \sum_{j \in H}w_{i,j}h_j$. We can easily sample $\bm{v}$ from a given $\bm{h}$ using Eq. (\[eqn:CondDistribution\_V|H\]) and sample $\bm{h}$ from a given $\bm{v}$ using Eq. (\[eqn:CondDistribution\_H|V\]). Alternately repeating these two kinds of conditional samplings yields a (blocked) Gibbs sampling on the RBM. It is noteworthy that when $s \to \infty$, the conditional sampling of $\bm{h}$ using Eq. (\[eqn:CondDistribution\_H|V\]) can be implemented using the inverse transform sampling. The cumulative distribution function of $P_{\infty}(h_j \mid \bm{v}, \theta)$ is $$\begin{aligned}
F(x):=\int_{-1}^x \frac{\exp\big(\lambda_j(\bm{v},\theta) h_j\big)}{\phi_{\infty}\big(\lambda_j(\bm{v},\theta)\big)} dh_j=\frac{\exp\big(\lambda_j(\bm{v},\theta) x\big)- \exp\big(-\lambda_j(\bm{v},\theta) \big)}{2 \sinh \lambda_j(\bm{v},\theta)},\end{aligned}$$ and therefore, its inverse function is $$\begin{aligned}
F^{-1}(u):=\frac{1}{\lambda_j(\bm{v},\theta)} \ln \big\{\exp\big(-\lambda_j(\bm{v},\theta) \big) + 2 u \sinh \lambda_j(\bm{v},\theta)\big\}.
\end{aligned}$$ $F^{-1}(u)$ is the sampled value of $h_j$ from $P_{\infty}(h_j \mid \bm{v}, \theta)$, where $u$ is a sample point from the uniform distribution over $[0,1]$.
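For illustration, the following is a minimal NumPy sketch (the function name and the check are ours, not part of the original formulation) of the inverse transform sampling described above; the empirical mean of the samples can be compared with the exact conditional mean $\coth\lambda_j - 1/\lambda_j$.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_h_continuous(lam):
    """Inverse transform sampling of h_j from P_infinity(h_j | v) on [-1, +1]."""
    if abs(lam) < 1e-12:
        return 2.0 * rng.random() - 1.0       # lambda_j -> 0: the conditional becomes uniform
    u = rng.random()                          # u ~ U[0, 1]
    return np.log(np.exp(-lam) + 2.0 * u * np.sinh(lam)) / lam

# sanity check: the sample mean should approach coth(lam) - 1/lam
lam = 1.5
samples = np.array([sample_h_continuous(lam) for _ in range(100000)])
print(samples.mean(), 1.0 / np.tanh(lam) - 1.0 / lam)
```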
Log-likelihood function and its gradients {#sec:log-likelihood-function}
-----------------------------------------
Given $N$ training data points for the visible layer, $D_V:=\{ { \mathbf{v} }^{(\mu)} \in \{-1,+1\}^{|V|} \mid \mu = 1,2,\ldots,N\}$, the learning of RBM is done by maximizing the log-likelihood function (or the negative cross-entropy loss function), defined by $$\begin{aligned}
l_s(\theta):=\frac{1}{N}\sum_{\mu = 1}^N \ln P_s({ \mathbf{v} }^{(\mu)} \mid \theta),
\label{eqn:LogLikelihood}\end{aligned}$$ with respect to $\theta$ (namely, the maximum likelihood estimation). The distribution in the logarithmic function in Eq. (\[eqn:LogLikelihood\]) is the marginal distribution obtained in Eq. (\[eqn:MargialDistribution\]). The log-likelihood function is regarded as the negative training error. Usually, the log-likelihood function is maximized using a gradient ascent method. The gradients of the log-likelihood function with respect to the learning parameters are as follows. $$\begin{aligned}
\frac{\partial l_s(\theta)}{\partial b_i}&=\frac{1}{N}\sum_{\mu = 1}^N { \mathrm{v} }_i^{(\mu)} -{\langle v_i \rangle}_s,
\label{eqn:grad_RBM_b}\\
\frac{\partial l_s(\theta)}{\partial c_j}&=\frac{1}{N}\sum_{\mu = 1}^N \psi_s\big(\lambda_j({ \mathbf{v} }^{(\mu)},\theta)\big) -{\langle h_j \rangle}_s,
\label{eqn:grad_RBM_c}\\
\frac{\partial l_s(\theta)}{\partial w_{i,j}}&=\frac{1}{N}\sum_{\mu = 1}^N { \mathrm{v} }_i^{(\mu)} \psi_s\big(\lambda_j({ \mathbf{v} }^{(\mu)},\theta)\big) -{\langle v_ih_j \rangle}_s,
\label{eqn:grad_RBM_w}\end{aligned}$$ where ${\langle \cdots \rangle}_s$ is the expectation of RBM, i.e., $$\begin{aligned}
{\langle A(\bm{v},\bm{h}) \rangle}_s:=\sum_{\bm{v} \in \{-1,+1\}^{|V|}} \sum_{\bm{h} \in { \mathcal{X} }(s)^{|H|}} A(\bm{v},\bm{h}) P_s(\bm{v}, \bm{h} \mid \theta),\end{aligned}$$ and $$\begin{aligned}
\psi_s(x):=\frac{\partial}{\partial x}\ln \phi_s(x)=
\begin{dcases}
\frac{s + 1}{s \tanh \{(s + 1)x/s\}} - \frac{1}{s \tanh(x/s)} & 1 \leq s < \infty \\
\frac{1}{\tanh x} - \frac{1}{x} & s \to \infty
\end{dcases}
.
\label{eqn:psi_s(x)}\end{aligned}$$ The log-likelihood function can be maximized by a gradient ascent method with the gradients expressed in Eqs. (\[eqn:grad\_RBM\_b\])–(\[eqn:grad\_RBM\_w\]). However, the evaluation of the expectations, ${\langle \cdots \rangle}_s$, included in the above gradients is computationally hard. The computation time of the evaluation grows exponentially as the number of variables increases. Therefore, in practice, an approximate approach is used, for example, CD [@CD2002], pseudo-likelihood [@RBM-PLE2010], composite likelihood [@RBM-CLE2012], Kullback-Leibler importance estimation procedure (KLIEP) [@RBM-KLIEP2011], and Thouless-Anderson-Palmer (TAP) approximation [@RBM-TAP2015]. In particular, the CD method is the most popular method. In the CD method, the intractable expectations in Eqs. (\[eqn:grad\_RBM\_b\])–(\[eqn:grad\_RBM\_w\]) are approximated by the sample averages of the sampled points in which each sampled point is generated from the (one-time) Gibbs sampling using Eqs. (\[eqn:CondDistribution\_V|H\]) and (\[eqn:CondDistribution\_H|V\]), starting from each data point ${ \mathbf{v} }^{(\mu)}$.
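As a concrete illustration of CD learning for the proposed RBM with finite $s$, the following sketch performs a single CD-1 parameter update. The array shapes, function names, and the use of the conditional mean $\psi_s(\lambda_j)$ in place of sampled hidden values are our own choices, not prescriptions of the text; the $s = \infty$ case would use the inverse transform sampling shown earlier and $\psi_\infty(x) = \coth x - 1/x$.

```python
import numpy as np

rng = np.random.default_rng(0)

def psi(x, s):
    """psi_s(x) = d/dx ln phi_s(x) for finite s."""
    x = np.where(np.abs(x) < 1e-12, 1e-12, x)             # psi_s(x) -> 0 as x -> 0
    return (s + 1) / (s * np.tanh((s + 1) * x / s)) - 1.0 / (s * np.tanh(x / s))

def sample_h(lam, s):
    """Draw h_j from P_s(h_j | v), proportional to exp(lambda_j h_j), with h_j in X(s)."""
    h_vals = (2.0 * np.arange(s + 1) - s) / s              # the sample space X(s)
    logits = lam[..., None] * h_vals
    logits -= logits.max(axis=-1, keepdims=True)
    p = np.exp(logits)
    p /= p.sum(axis=-1, keepdims=True)
    u = rng.random(lam.shape)[..., None]
    return h_vals[(u > p.cumsum(axis=-1)).sum(axis=-1)]

def cd1_update(V, b, c, W, s, eta=0.01):
    """One CD-1 step for the gradients above; V is (N, |V|) with entries in {-1, +1}."""
    N = V.shape[0]
    H_data = psi(c + V @ W, s)                             # positive phase: psi_s(lambda_j(v, theta))
    H_smp = sample_h(c + V @ W, s)                         # one Gibbs step: v -> h -> v'
    xi = b + H_smp @ W.T                                   # xi_i(h, theta)
    V_model = np.where(rng.random(V.shape) < 1.0 / (1.0 + np.exp(-2.0 * xi)), 1.0, -1.0)
    H_model = psi(c + V_model @ W, s)                      # negative phase evaluated at v'
    b_new = b + eta * (V.mean(axis=0) - V_model.mean(axis=0))
    c_new = c + eta * (H_data.mean(axis=0) - H_model.mean(axis=0))
    W_new = W + eta * (V.T @ H_data - V_model.T @ H_model) / N
    return b_new, c_new, W_new
```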
Numerical experiment using artificial data {#sec:RBM_experiment}
------------------------------------------
In the numerical experiments in this section, we used two RBMs: the generative RBM (gRBM), $P_1^{{ \mathrm{gen} }}$, and the learning RBM (tRBM), $P_s^{{ \mathrm{train} }}$. We obtained $N = 200$ artificial training data points, $D_V$, from the gRBM using Gibbs sampling and subsequently, we trained the tRBM using the data points. The sizes of the visible layers of both RBMs were the same, namely, $|V| = 8$. The sizes of the hidden layers of the gRBM and tRBM were set to $|H| = 4$ and $|H| = 4 + R$, respectively. The sample space of the hidden variables in the gRBM was ${ \mathcal{X} }(1) = \{-1,+1\}$, implying that the gRBM is the binary-RBM. The parameters of gRBM were randomly drawn: $b_i,c_j \sim G(0,0.1^2)$ and $$\begin{aligned}
w_{i,j}\sim U [-\sqrt{6/(|V|+|H|)},\sqrt{6/(|V|+|H|)}]
\label{eqn:XavierInitialization}\end{aligned}$$ (Xavier’s initialization [@Xavier2010]), where $G(\mu,\sigma^2)$ is the Gaussian distribution and $U[{ \mathrm{min} },{ \mathrm{max} }]$ is the uniform distribution.
We trained the tRBM using the CD method. In the training, the parameters of tRBM were initialized by $b_i = c_j = 0$ and Eq. (\[eqn:XavierInitialization\]). In the gradient ascent method, we used the full batch learning with the Adam method [@Adam2015]. The quality of learning was measured using the Kullback-Leibler divergence (KLD) between the gRBM and tRBM: $$\begin{aligned}
{ \mathrm{KLD} }:=\frac{1}{|V|}\sum_{\bm{v} \in \{-1,+1\}^{|V|}}P_1^{{ \mathrm{gen} }}(\bm{v}) \ln \frac{P_1^{{ \mathrm{gen} }}(\bm{v})}{P_s^{{ \mathrm{train} }}(\bm{v})}.
\label{eqn:KLD}\end{aligned}$$ The KLD is regarded as the (pseudo) distance between the gRBM and tRBM. Thus, it is a type of generalization error. We can evaluate the KLD (the generalization error) and log-likelihood function in Eq. (\[eqn:LogLikelihood\]) (the negative training error) because the sizes of the RBMs are not large.
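Because $|V| = 8$ here, both quantities can be computed exactly by enumerating all $2^{|V|}$ visible states. A minimal sketch of Eq. (\[eqn:KLD\]) for finite $s$ (function and variable names ours; an $s = \infty$ tRBM would use $\ln\phi_\infty(x) = \ln(2\sinh x/x)$ instead of the finite sum):

```python
import numpy as np
from itertools import product

def log_phi(x, s):
    """ln phi_s(x), evaluated as ln[(2/(s+1)) sum_{h in X(s)} exp(x h)] for stability."""
    h_vals = (2.0 * np.arange(s + 1) - s) / s
    a = np.multiply.outer(x, h_vals)
    m = a.max(axis=-1, keepdims=True)
    return np.log(np.exp(a - m).sum(axis=-1)) + m[..., 0] + np.log(2.0 / (s + 1))

def log_marginal(V_all, b, c, W, s):
    """Normalized ln P_s(v | theta) for every row of V_all (all 2^{|V|} visible states)."""
    logp = V_all @ b + log_phi(c + V_all @ W, s).sum(axis=-1)
    m = logp.max()
    return logp - (np.log(np.exp(logp - m).sum()) + m)

def kld(theta_gen, theta_trn, s_trn):
    """(1/|V|) KL(P_1^gen || P_s^train), by exhaustive enumeration."""
    (bg, cg, Wg), (bt, ct, Wt) = theta_gen, theta_trn
    nv = len(bg)
    V_all = np.array(list(product([-1.0, 1.0], repeat=nv)))
    lp_g = log_marginal(V_all, bg, cg, Wg, 1)              # the gRBM has s = 1
    lp_t = log_marginal(V_all, bt, ct, Wt, s_trn)
    return float(np.sum(np.exp(lp_g) * (lp_g - lp_t))) / nv
```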
![KLDs against the number of parameter updates (epochs) when (a) $R = 0$ and (b) $R=5$. We used the tRBM with $s = 1,2,4,\infty$. These plots show the average over 300 experiments.[]{data-label="fig:KLD_CD_N200"}](KLD_CD_N200.eps){height="4cm"}
![Log-likelihoods (divided by $|V|$) against the number of parameter updates (epochs) when (a) $R = 0$ and (b) $R=5$. We used the tRBM with $s = 1,2,4,\infty$. These plots show the average over 300 experiments.[]{data-label="fig:LL_CD_N200"}](LL_CD_N200.eps){height="4cm"}
Figures \[fig:KLD\_CD\_N200\] (a) and (b) show the KLDs against the number of parameter updates (i.e., the number of gradient ascent updates). We observe that all KLDs increase as learning proceeds owing to the effect of over-fitting. In Fig. \[fig:KLD\_CD\_N200\] (a), because the gRBM and tRBM have the same structure (in other words, there is no model error), the effect of over-fitting is not severe. In contrast, in Fig. \[fig:KLD\_CD\_N200\] (b), because the tRBM is more flexible than the gRBM, the effect of over-fitting tends to become severe. In fact, in Fig. \[fig:KLD\_CD\_N200\] (b), the KLDs increase more rapidly as learning proceeds. The KLD for higher $s$ evidently increases more slowly. Figures \[fig:LL\_CD\_N200\] (a) and (b) show the log-likelihood functions divided by $|V|$ against the number of parameter updates. We observe that the log-likelihood function with lower $s$ grows more rapidly. In other words, the training error in the tRBM with lower $s$ decreases more rapidly. These results indicate that the multivalued hidden variables suppress over-fitting. In these experiments, the tRBM with $s = \infty$ is the best in terms of generalization.
Effect of multivalued hidden variables {#sec:ToyExample}
--------------------------------------
In the numerical experiments described in the previous section, we demonstrated that the multivalued hidden variables suppress over-fitting. In this section, we provide an insight into the effect of multivalued hidden variables using a toy example. Although the consideration presented below is for a simple RBM, which is significantly different from practical RBMs, it is expected to provide an insight into the effect of multivalued hidden variables.
First, let us consider a simple RBM with two visible variables: $$\begin{aligned}
P_s(v_1,v_2, \bm{h} \mid w) = \frac{\omega(s)}{Z_s(w)}\exp\Big(w \sum_{i=1}^2\sum_{j \in H} v_ih_j\Big).
\label{eqn:Toy_RBM}\end{aligned}$$ The marginal distribution of Eq. (\[eqn:Toy\_RBM\]) is $$\begin{aligned}
P_s(v_1,v_2 \mid w)=\frac{\exp\big\{ |H| \ln \phi_s \big(w(v_1 + v_2)\big)\big\}}
{ 2\exp\big\{ |H| \ln \phi_s(2w)\big\} + 2 \exp\big( |H| \ln 2\big)},
\label{eqn:Toy_marginal_RBM}\end{aligned}$$ where we used $\lim_{x \to 0}\phi_s(x) = 2$ and $\phi_s(x) = \phi_s(-x)$. Because $v_1, v_2 \in \{-1,+1\}$, Eq. (\[eqn:Toy\_marginal\_RBM\]) can be expanded as $$\begin{aligned}
P_s(v_1,v_2 \mid w) =\frac{1+ m_{s,1}(w) v_1 + m_{s,2}(w) v_2 + \alpha_s(w)v_1v_2}{4},
\label{eqn:Toy_marginal_RBM_expand}\end{aligned}$$ where $$\begin{aligned}
m_{s,i}(w) &:= \sum_{v_1,v_2 \in \{-1,+1\}} v_i P_s(v_1,v_2 \mid w) = 0, \\
\alpha_s(w)&:= \sum_{v_1,v_2 \in \{-1,+1\}} v_1v_2 P_s(v_1,v_2 \mid w)\\
&\>=\frac{\exp\big\{ |H| \ln \phi_s(2w)\big\} - \exp\big( |H| \ln 2\big)}
{\exp\big\{ |H| \ln \phi_s(2w)\big\} + \exp\big( |H| \ln 2\big)}\geq 0.\end{aligned}$$ Next, we consider the empirical distribution of $D_V$: $$\begin{aligned}
Q_{D_V}(v_1,v_2):= \frac{1}{N}\sum_{\mu=1}^N \prod_{i = 1}^2\delta\big(v_i, { \mathrm{v} }_i^{(\mu)}\big),\end{aligned}$$ where $\delta(x,y)$ is the Kronecker delta function. Similar to Eq. (\[eqn:Toy\_marginal\_RBM\_expand\]), the empirical distribution is expanded as $$\begin{aligned}
Q_{D_V}(v_1,v_2) =\frac{1+ d_1 v_1 + d_2 v_2 + \beta v_1v_2}{4},
\label{eqn:empiricalDist_expand}\end{aligned}$$ where $d_i := \sum_{\mu=1}^N { \mathrm{v} }_i^{(\mu)} / N$ and $\beta := \sum_{\mu=1}^N { \mathrm{v} }_1^{(\mu)}{ \mathrm{v} }_2^{(\mu)} / N$. For simplicity, in the following discussion, we assume that $d_1 = d_2 = 0$ and $\beta \geq 0$. Under this assumption, using the expanded forms in Eqs. (\[eqn:Toy\_marginal\_RBM\_expand\]) and (\[eqn:empiricalDist\_expand\]), the log-likelihood function of the simple RBM is expressed by $$\begin{aligned}
l_s(w) &= \sum_{v_1,v_2 \in \{-1,+1\}}Q_{D_V}(v_1,v_2)\ln P_s(v_1,v_2 \mid w) {\nonumber \\}&=\sum_{v_1,v_2 \in \{-1,+1\}}\frac{1 + \beta v_1 v_2}{4}\ln \frac{1 + \alpha_s(w)v_1v_2}{4}.
\label{eqn:LogLikelihood_ToyRBM}\end{aligned}$$
Ultimately, the aim of the maximum likelihood estimation is to find the value of $w$ that realizes $P_s(v_1,v_2 \mid w) = Q_{D_V}(v_1,v_2)$ or in other words, to find a value of $w_s^*$ that satisfies $\alpha_s(w_s^*) = \beta$. The log-likelihood function in Eq. (\[eqn:LogLikelihood\_ToyRBM\]) is globally maximized at $w = w_s^*$ and the RBM with $w_s^*$ over-fits the data distribution. It can be shown that the function $\alpha_s(w)$ has the following three properties: (i) it is symmetric with respect to $w$, (ii) it monotonically increases with an increase in $w \geq 0$, and (iii) it monotonically decreases with an increase in $s$ when $w \not=0$. The function $\alpha_s(w)$ with $|H| = 2$ is shown in Fig. \[fig:result\_toyRBM\_H2\] (a) as an example. These three properties lead to the inequality $|w_s^*| < |w_{s+1}^*|$ for a given $\beta > 0$, which implies that the global maximum point of the log-likelihood function in Eq. (\[eqn:LogLikelihood\_ToyRBM\]) moves away from the origin, $w = 0$, as $s$ increases (see Fig. \[fig:result\_toyRBM\_H2\] (b)).
![(a) Plot of $\alpha_s(w)$ versus $w$ for various $s$ when $|H| = 2$. For $\beta = 0.6$, the values of $|w_s^*|$ for $s = 1,2,4$, and $\infty$ are approximately 0.6585, 0.7834, 0.8941, and 1.0887, respectively. (b) Plot of the log-likelihood function versus $w$ for various $s$ when $|H| = 2$ and $\beta = 0.6$. The shape of the peak around the global maximum point becomes sharper as $s$ decreases.[]{data-label="fig:result_toyRBM_H2"}](result_toyRBM_H2.eps){height="4cm"}
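A short numerical sketch (names ours) that solves $\alpha_s(w_s^*) = \beta$ by bisection, exploiting properties (i) and (ii); for $|H| = 2$ and $\beta = 0.6$ it should reproduce the values of $|w_s^*|$ quoted in the caption of Fig. \[fig:result\_toyRBM\_H2\].

```python
import numpy as np

def phi(x, s):
    """phi_s(x); the s -> infinity case is 2 sinh(x)/x, and phi_s(0) = 2."""
    if x == 0:
        return 2.0
    if np.isinf(s):
        return 2.0 * np.sinh(x) / x
    return 2.0 * np.sinh((s + 1) * x / s) / ((s + 1) * np.sinh(x / s))

def alpha(w, s, nh=2):
    """alpha_s(w) for |H| = nh."""
    a, b = nh * np.log(phi(2.0 * w, s)), nh * np.log(2.0)
    return (np.exp(a) - np.exp(b)) / (np.exp(a) + np.exp(b))

def solve_w(beta, s, nh=2, lo=0.0, hi=10.0, tol=1e-10):
    """Bisection for the positive root of alpha_s(w) = beta (alpha is monotone for w >= 0)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if alpha(mid, s, nh) < beta else (lo, mid)
    return 0.5 * (lo + hi)

for s in [1, 2, 4, np.inf]:
    print(s, solve_w(0.6, s))    # approximately 0.6585, 0.7834, 0.8941, 1.0887 (cf. the caption)
```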
Usually, the initial value of $w$ is set to a value around the origin. As shown in Fig. \[fig:result\_toyRBM\_H2\] (b), the global maximum point moves closer to the origin and the peak becomes sharper (in other words, the global maximum point becomes a stronger attractor) as $s$ decreases. This implies that, with a gradient ascent type of algorithm, the RBM with a lower $s$ can reach the global maximum point more rapidly and causes over-fitting during an early stage of the learning. In contrast, the RBM with a higher $s$ converges to the global maximum point more slowly, which prevents over-fitting during an early stage of the learning [^1]. In fact, in the numerical results of the previous section, the increases in the generalization error (the KLD) and in the negative training error (the log-likelihood function) become faster as $s$ decreases (cf. Figs. \[fig:KLD\_CD\_N200\] and \[fig:LL\_CD\_N200\]).
From the above analysis, we found that the global maximum point moves away from the origin and becomes a weaker attractor as $s$ increases. This could lead to some expectations, for example: (i) in a more practical RBM, its log-likelihood function usually has several local maximum points, and thus, the RBM with a higher $s$ is more easily trapped by one of the local maximum points before converging with the global maximum point (namely, the over-fitting point) and (ii) some regularization methods, such as early stopping or $L_2$ regularization, are more effective in the RBM with a higher $s$.
Application to classification problem {#sec:PatternRecognitionApplication}
=====================================
Let us consider a classification (or pattern recognition) problem in which an $n$-dimensional input vector $\bm{x} = (x_1, x_2,\ldots, x_n)^{{ \mathrm{T} }} \in \mathbb{R}^n$ is classified into $K$ different classes, $C_1, C_2, \ldots, C_K$. It is convenient to use a 1-of-$K$ representation (or a 1-of-$K$ coding) to identify each class [@Bishop2006]. In the 1-of-$K$ representation, each class corresponds to the $K$-dimensional vector $\bm{t} = (t_1, t_2,\ldots, t_K)^{{ \mathrm{T} }}$, where $t_k \in \{0,1\}$ and $\sum_{k = 1}^K t_k = 1$, i.e., $\bm{t}$ is a vector in which the value of only one element is one and the remaining elements are zero. When $t_k = 1$, $\bm{t}$ indicates class $C_k$. For simplicity of the notation, we denote the 1-of-$K$ vector, whose $k$th element is one, by $\bm{1}_k$. In the following section, we consider the application of the proposed RBM to the classification problem.
Discriminative restricted Boltzmann machine {#sec:DRBM}
-------------------------------------------
A discriminative restricted Boltzmann machine (DRBM), a conditional distribution of the output 1-of-$K$ vector $\bm{t}$ given a continuous input vector $\bm{x}$, was proposed to solve the classification problem [@DRBM2008; @DRBM2012]. The conventional DRBM is obtained from the binary-RBM by the following simple process. The visible variables in the RBM are divided into two layers, the input and output layers. The $K$ visible variables assigned to the output layer, $\bm{t}$, are redefined as the 1-of-$K$ vector with $\bm{1}_k$ as its realization (i.e., $\bm{t} \in \{\bm{1}_k \mid k = 1,2,\ldots, K\}$) and the $n$ visible variables assigned to the input layer, $\bm{x}$, are redefined as the continuous input vector (see Fig. \[fig:DRBM\]). Subsequently, we make a conditional distribution conditioned with the variables in the input layer: $P(\bm{t}, \bm{h} \mid \bm{x})$. Finally, by marginalizing the hidden variables out, we obtain the DRBM: $P(\bm{t}\mid \bm{x}) = \sum_{\bm{h}}P(\bm{t}, \bm{h} \mid \bm{x})$.
![The discriminative restricted Boltzmann machine is obtained as an extension of the RBM. Because the output layer corresponds to the 1-of-$K$ vector, it takes only $K$ different states. For distinction, the couplings between the input and hidden layers are represented by $\bm{w}^{(1)}$ and those between the hidden and output layers are represented by $\bm{w}^{(2)}$.[]{data-label="fig:DRBM"}](DRBM.eps){height="2.5cm"}
By using the proposed RBM instead of the binary-RBM, we obtain an extension to the conventional DRBM, i.e., we obtain a DRBM with multivalued hidden variables. The proposed DRBM for $s \in \mathbb{N}$ is obtained by $$\begin{aligned}
P_s(\bm{t} \mid \bm{x},\theta)&:=\frac{1}{{ \mathcal{Z} }_s(\bm{x},\theta)}
\exp\Big(\sum_{k=1}^K b_kt_k + \sum_{j \in H} \ln \phi_s\big(\zeta_j(\bm{t},\bm{x}, \theta) \big)\Big).
\label{eqn:DRBM}\end{aligned}$$ where $\zeta_j(\bm{t},\bm{x}, \theta):= c_j + \sum_{k=1}^K w_{j,k}^{(2)}t_k + \sum_{i=1}^n w_{i,j}^{(1)}x_i$ and $$\begin{aligned}
{ \mathcal{Z} }_s(\bm{x},\theta):=\sum_{k=1}^K
\exp\Big(b_k + \sum_{j \in H} \ln \phi_s\big(\zeta_j(\bm{1}_k,\bm{x}, \theta) \big)\Big).
\label{eqn:PartitionFunction_DRBM}\end{aligned}$$ The function $\phi_s(x)$ appearing in Eqs. (\[eqn:DRBM\]) and (\[eqn:PartitionFunction\_DRBM\]) is already defined in Eq. (\[eqn:phi\_s(x)\]). It is noteworthy that when $s = 1$, Eq. (\[eqn:DRBM\]) is equivalent to the conventional DRBM proposed in Ref. [@DRBM2008]. Eq. (\[eqn:DRBM\]) is regarded as the class probability, indicating that $P_s(\bm{t} = \bm{1}_k \mid \bm{x},\theta)$ is the probability of the input $\bm{x}$ belonging to class $C_k$. The input $\bm{x}$ should be assigned to the class that gives the maximum class probability.
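A minimal sketch (array layout and names ours) of the class probabilities in Eq. (\[eqn:DRBM\]) for finite $s$; the predicted class is the argmax of the returned vector, and for $s = \infty$ one would replace `log_phi` with $\ln(2\sinh x/x)$.

```python
import numpy as np

def log_phi(x, s):
    """ln phi_s(x), computed from the finite sum over X(s) for numerical stability."""
    h_vals = (2.0 * np.arange(s + 1) - s) / s
    a = np.multiply.outer(x, h_vals)
    m = a.max(axis=-1, keepdims=True)
    return np.log(np.exp(a - m).sum(axis=-1)) + m[..., 0] + np.log(2.0 / (s + 1))

def class_probabilities(x, b, c, W1, W2, s):
    """P_s(t = 1_k | x, theta) for k = 1..K.

    x: (n,) input, b: (K,) output biases, c: (|H|,) hidden biases,
    W1: (n, |H|) input-hidden couplings, W2: (|H|, K) hidden-output couplings.
    """
    zeta = c[None, :] + W2.T + (x @ W1)[None, :]     # zeta_j(1_k, x, theta), shape (K, |H|)
    log_num = b + log_phi(zeta, s).sum(axis=-1)      # logarithm of the numerator of Eq. (DRBM)
    log_num -= log_num.max()                         # stabilized softmax over the K classes
    p = np.exp(log_num)
    return p / p.sum()

# predicted class: np.argmax(class_probabilities(x, b, c, W1, W2, s))
```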
Given $N$ supervised training data points, $D:=\{({ \mathbf{t} }^{(\mu)} ,{ \mathbf{x} }^{(\mu)}) \mid \mu = 1,2,\ldots, N\}$, the log-likelihood function of the proposed DRBM in Eq. (\[eqn:DRBM\]) is defined as $$\begin{aligned}
l_s^{\dagger}(\theta):=\frac{1}{N}\sum_{\mu=1}^N \ln P_s({ \mathbf{t} }^{(\mu)} \mid { \mathbf{x} }^{(\mu)},\theta).
\label{eqn:LogLikelihood_DRBM}\end{aligned}$$ The gradients of the log-likelihood function with respect to the parameters are obtained as follows. $$\begin{aligned}
\frac{\partial l_s^{\dagger}(\theta)}{\partial b_k}&=\frac{1}{N}\sum_{\mu = 1}^N \big[ { \mathrm{t} }_k^{(\mu)} - P_s(\bm{1}_k \mid { \mathbf{x} }^{(\mu)},\theta)\big],
\label{eqn:grad_DRBM_b}\\
\frac{\partial l_s^{\dagger}(\theta)}{\partial c_j}&=\frac{1}{N}\sum_{\mu = 1}^N \big[\psi_s\big(\zeta_j({ \mathbf{t} }^{(\mu)},{ \mathbf{x} }^{(\mu)}, \theta)\big)
-{\langle \psi_s\big(\zeta_j(\bm{t},{ \mathbf{x} }^{(\mu)}, \theta)\big) \rangle}_{\bm{t}}^{(\mu, s)} \big],
\label{eqn:grad_DRBM_c}\\
\frac{\partial l_s^{\dagger}(\theta)}{\partial w_{i,j}^{(1)}}&=\frac{1}{N}\sum_{\mu = 1}^N { \mathrm{x} }_i^{(\mu)} \big[\psi_s\big(\zeta_j({ \mathbf{t} }^{(\mu)},{ \mathbf{x} }^{(\mu)}, \theta)\big)
-{\langle \psi_s\big(\zeta_j(\bm{t},{ \mathbf{x} }^{(\mu)}, \theta)\big) \rangle}_{\bm{t}}^{(\mu, s)}\big],
\label{eqn:grad_DRBM_w1} \\
\frac{\partial l_s^{\dagger}(\theta)}{\partial w_{j,k}^{(2)}}&=\frac{1}{N}\sum_{\mu = 1}^N \psi_s\big(\zeta_j(\bm{1}_k,{ \mathbf{x} }^{(\mu)}, \theta)\big)
\big[ { \mathrm{t} }_k^{(\mu)} - P_s(\bm{1}_k \mid { \mathbf{x} }^{(\mu)},\theta)\big],
\label{eqn:grad_DRBM_w2}\end{aligned}$$ where ${\langle \cdots \rangle}_{\bm{t}}^{(\mu, s)}$ denotes the expectation defined by $$\begin{aligned}
{\langle A(\bm{t}) \rangle}_{\bm{t}}^{(\mu, s)}:=\sum_{k=1}^K A(\bm{1}_k) P_s(\bm{1}_k \mid { \mathbf{x} }^{(\mu)},\theta).\end{aligned}$$ The function $\psi_s(x)$ appearing in the above gradients is already defined in Eq. (\[eqn:psi\_s(x)\]). It is noteworthy that the gradients expressed in Eqs. (\[eqn:grad\_DRBM\_b\])–(\[eqn:grad\_DRBM\_w2\]) are computed without an approximation, unlike those in the RBM, owing to the special structure of DRBM. In the training, we maximize $l_s^{\dagger}(\theta)$ with respect to $\theta$ using a gradient ascent method with Eqs. (\[eqn:grad\_DRBM\_b\])–(\[eqn:grad\_DRBM\_w2\]).
Numerical experiment using MNIST data set {#sec:DRBM_experiment}
-----------------------------------------
![Misclassification errors against the number of parameter updates (epochs): (a) training error and (b) test error. Here, one epoch consists of one full update cycle over the training data set, implying that one epoch involves $N /B = 10$ updates by the SGA in this case. We used the DRBM with $s = 1,\infty$. These plots show the average over 120 experiments.[]{data-label="fig:DRBM_MNIST"}](DRBM_MNIST.eps){height="4cm"}
In this section, we show the results of the numerical experiment using MNIST. MNIST is a data set of 10 different handwritten digits, $0, 1, \ldots,$ and $9$, and is composed of $60000$ training data points and $10000$ test data points. Each data point includes the input data, a $28 \times 28$ digit (8-bit) image, and the corresponding target digit label. Therefore, for the data set, we set $n = 784$ and $K = 10$. All input images were normalized by dividing by $255$ during preprocessing.
We trained the proposed DRBM with $|H|=200$ using $N = 1000$ training data points in MNIST and tested it using 10000 test data points. In the training, we used the stochastic gradient ascent (SGA), for which the mini-batch size was $B = 100$, with the AdaMax optimizer [@Adam2015]. All coupling parameters were initialized by the Xavier method [@Xavier2010] and all bias parameters were initialized to zero. Figure \[fig:DRBM\_MNIST\] shows the plots of the misclassification rates for (a) training data set and (b) test data set versus the number of parameter updates. All input images in the test data set were corrupted by Gaussian noise with $\sigma = 120$ before the normalization [^2]. We observe that the DRBM with $s = \infty$ is better in terms of generalization because it shows a higher training error and a lower test error. This indicates that the multivalued hidden variables are also effective in the DRBM.
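For concreteness, the corruption of the test images described in the footnote can be sketched as follows (function name ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def corrupt_and_normalize(images, sigma=120.0):
    """Add Gaussian noise to 8-bit images, clip to [0, 255], then divide by 255."""
    noisy = images + rng.normal(0.0, sigma, size=images.shape)
    return np.clip(noisy, 0.0, 255.0) / 255.0
```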
Conclusion {#sec:conclusion}
==========
In this paper, we proposed an RBM with multivalued hidden variables, which is a simple extension of the conventional binary-RBM, and showed that the proposed RBM is better than the binary-RBM in terms of the generalization property via numerical experiments conducted on CD learning with artificial data (in Sec. \[sec:RBM\_experiment\]) and on a classification problem with MNIST (in Sec. \[sec:DRBM\_experiment\]).
It is important to understand the reason why the multivalued hidden variables are effective in terms of over-fitting. We provided a basic insight into it by analyzing a simple example in Sec. \[sec:ToyExample\]. However, practical RBMs are much more complex than the simple example used in this study. Therefore, we need to perform further analysis to clarify this reason. We think that a mean-field analysis [@NishimoriBook] can be used to perform the further analysis. Moreover, a criterion for over-fitting was provided in Ref. [@Coolen2017]. The relationship between this criterion and our multivalued hidden variables is also interesting. These issues will be addressed in our future studies.
### acknowledgment {#acknowledgment .unnumbered}
This work was partially supported by JSPS KAKENHI (Grant Numbers 15K00330, 15H03699, 18K11459, and 18H03303), JST CREST (Grant Number JPMJCR1402), and the COI Program from the JST (Grant Number JPMJCE1312).
[^1]: Because the value of the log-likelihood function at the global maximum point for a higher $s$ is the same as that for a lower $s$, the RBM with a higher $s$ also causes over-fitting at that point.
[^2]: Each corrupted input image $\hat{{ \mathbf{x} }}$ was created from the corresponding original image ${ \mathbf{x} }$ by $\hat{{ \mathrm{x} }}_i = { \mathrm{x} }_i + \epsilon_i$, where $\epsilon_i$ is the additive white Gaussian noise drawn from $G(0,120^2)$. If $\hat{{ \mathrm{x} }}_i > 255$, we set $\hat{{ \mathrm{x} }}_i = 255$ and if $\hat{{ \mathrm{x} }}_i < 0$, we set $\hat{{ \mathrm{x} }}_i = 0$.
---
abstract: 'We combine a pair of independent Weyl fermions to compose a Dirac fermion on the four-dimensional Euclidean lattice. The obtained Dirac operator is antihermitian and does not reproduce anomaly under the usual chiral transformation. To simulate the correct chiral anomaly, we modify the chiral transformation. We also show that chiral gauge theories can be constructed nonperturbatively with exact gauge invariance. The formulation is based on a doubler-free lattice derivative, which is a simple matrix defined as a discrete Fourier transform of momentum with antiperiodic boundary conditions. Long-range fermion hopping interactions are truncated using the Lanczos factor.'
address: 'RIKEN BNL Research Center, Brookhaven National Laboratory, Upton, NY 11973, USA'
author:
- Takanori Sugihara
title: Vector and chiral gauge theories on the lattice
---
\def\slashchar#1{\setbox0=\hbox{$#1$}\dimen0=\wd0
\setbox1=\hbox{$/$}\dimen1=\wd1
\ifdim\dimen0>\dimen1
\rlap{\hbox to \dimen0{\hfil$/$\hfil}}#1
\else
\rlap{\hbox to \dimen1{\hfil$#1$\hfil}}/
\fi}
Introduction
============
In the continuum Euclidean path-integral, chiral anomaly comes from the Jacobian of the fermion measure [@fujikawa1]. The point is that ${\rm Tr}\gamma_5$ of the transformed fermion measure gives a nonzero contribution because of the infinite number of degrees of freedom. On the other hand, lattice field theory is a framework to simulate the continuum theory with a finite number of lattice sites. Chiral anomaly cannot be reproduced on the finite lattice in the same way as in the continuum theory. We should clearly distinguish finite and infinite lattice formulations and concentrate on reproducing chiral anomaly based on the finite lattice for practical numerical studies.
The species doubling problem of the lattice fermion [@ks; @slac; @wilson; @kaplan; @Shamir:1993zy; @neuberger; @nn] is closely related with chiral anomaly [@karsten; @Karsten:1980wd; @Seiler:1981jf; @Ginsparg]. According to the Nielsen-Ninomiya theorem, a single Weyl fermion cannot exist on the lattice [@nn]. When formulating lattice Dirac fermion, one has almost no choice but to break chiral symmetry explicitly using Wilson terms [@wilson]. Otherwise, one has to give up one of the other assumptions of the theorem as seen in the literature.
Lüscher’s implementation of lattice chiral symmetry based on the Ginsparg-Wilson relation [@Ginsparg] was a breakthrough [@luscher]. The most significant feature to be stressed is that chiral anomaly can be devised even with a finite lattice. In the Lüscher’s formulation, new chiral transformation is introduced and chiral anomaly is obtained from Jacobian of the fermion measure in a different way from the continuum theory. The lattice index theorem holds for arbitrary lattice spacing [@Hasenfratz; @luscher] and chiral anomaly agrees with the continuum result in the continuum limit [@kikukawa; @fujikawa2; @suzuki; @adams]. The lesson to be learned is that one can modify the axial current in order to reproduce the correct chiral anomaly on the finite lattice.
In the electroweak theory, left- and right-handed fermions couple to gauge fields in different ways. This type of formulation is called chiral gauge theory. For consistent construction of chiral gauge theories, gauge symmetry needs to be maintained at the quantum level (see Ref. [@bertlmann], for example). Introduction of Wilson terms has trouble because the mixing of left- and right-handed fermions complicates discussion of gauge anomaly cancelation [@gaugeanomaly; @Luscher:2000hn]. SLAC fermion partially simplifies the problem because it does not use Wilson terms [@slac; @Melnikov:2000cc]. However, it has not been successful due to breakdown of locality and Lorentz invariance associated with axial currents in the continuum limit [@karsten]. There have been discussions that deny these defects and the non-conservation of the axial current [@Rabin:1981nm; @Ninomiya:hd]. The defects originate in the definition of axial currents with a derivative different from the SLAC derivative. Careful and consistent treatment is necessary when discussing species doubler and chiral anomaly with SLAC fermion.
In this paper, we give a method to save the SLAC derivative. On a finite lattice, a derivative is defined as a discrete Fourier transform of momentum in a similar way to the conventional SLAC derivative. To remove doubler modes, antiperiodic boundary conditions are chosen for the derivative in real space. The doubler modes at the momentum boundary are lifted and the dispersion relation of the continuum theory is reproduced discretely. Although long-range hopping interactions appear, the pathology of SLAC fermion is avoided and the correct continuum limit is guaranteed. However, long-range interactions are not useful in practical numerical calculations because a sparser Dirac operator is better for numerical efficiency. By using the Lanczos factor technique demonstrated in Ref. [@sugihara], we effectively truncate long-range hopping interactions and improve the fermion propagator. Since the proposed lattice derivative does not use Wilson terms, left- and right-handed fermions are completely independent. As a result, the Dirac operator constructed with the derivative has exact chiral symmetry. This means that the Dirac fermion does not reproduce chiral anomaly because the fermion measure of the path integral is trivially invariant under the chiral transformation on the finite lattice. To simulate the correct chiral anomaly, we modify the chiral transformation using Neuberger’s solution to the Ginsparg-Wilson relation [@neuberger]. As a result, we obtain an anomalous Ward identity for the modified chiral transformation. The zero mode of the axial current divergence is generally non-zero and gives the index theorem. The absence of chiral anomaly under the usual chiral transformation implies the existence of a single anomaly-free Weyl fermion. Chiral gauge theories can be constructed using the single Weyl fermion as a building block.
This work is a tricky extension of SLAC fermion. In the conventional SLAC fermion, periodic boundary conditions and an infinite lattice are assumed, where the doubler remains as a singularity at the momentum boundary. On the other hand, our formulation is based on antiperiodic boundary conditions and a finite lattice and therefore free from species doubler. In addition, the axial current is defined with the modified chiral transformation and evaluated nonperturbatively. The Nielsen-Ninomiya theorem does not apply to our implementation of chiral anomaly because the modified chiral transformation is used to define a symmetry.
This paper is organized as follows. In Sec. \[lattice\_derivative\], the lattice derivative is defined based on the finite lattice formulation. Locality of the derivative is discussed. Long-range hopping interactions are truncated using the Lanczos factor. In Sec. \[chiral\_anomaly\], a modified chiral transformation is introduced to reproduce the correct chiral anomaly. A way of constructing gauge-invariant chiral gauge theories is given in a nonperturbative way. Sec. \[sumamry\_and\_discussions\] is devoted to summary and discussions.
Lattice derivative {#lattice_derivative}
==================
We define lattice derivative as a discrete Fourier transform of momentum $$\nabla_n = \frac{1}{N} \sum_{l=-N/2+1}^{N/2} ip_l e^{i2\pi \tilde{l}n/N}
= -\frac{2}{N} \sum_{l=1}^{N/2} p_l
\sin \left(\frac{2\pi\tilde{l}n}{N}\right),
\label{nabla}$$ where $n$ represents lattice sites and takes integer values between $-N/2+1$ and $N/2$. The lattice size $N$ is a finite even number and $p_l$ corresponds to momentum. $$p_l\equiv \frac{2\pi \tilde{l}}{N},\quad
\tilde{l}\equiv l-\frac{1}{2}.$$ Antiperiodic boundary conditions have been chosen in real space, $\nabla_{n+N}=-\nabla_n$. [^1] In Eq. (\[nabla\]), the summation can be carried out easily. $$\nabla_n
= \frac{\pi}{N^2}
\left[
(N+1)
\frac{\cos\left(\displaystyle\frac{Ns}{2}\right)}
{\sin\left(\displaystyle\frac{s}{2}\right)}
-\frac{\sin\left(\displaystyle\frac{(N+1)s}{2}\right)}
{\sin^2\left(\displaystyle\frac{s}{2}\right)}
\right],$$ where $s=2\pi n/N$. The lattice derivative (\[nabla\]) is doubler-free and reproduces discretely the dispersion relation of the continuum theory for arbitrary lattice spacing.
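The following sketch (names ours) evaluates Eq. (\[nabla\]) numerically, checks it against the closed form above, and verifies that the discrete Fourier transform can be inverted to recover $p_l = 2\pi\tilde{l}/N$ exactly, i.e., that the derivative has no doubler modes.

```python
import numpy as np

N = 50
n = np.arange(-N // 2 + 1, N // 2 + 1)                  # lattice sites
l = np.arange(-N // 2 + 1, N // 2 + 1)
p = 2.0 * np.pi * (l - 0.5) / N                         # p_l with tilde l = l - 1/2

# nabla_n from the sine form of Eq. (nabla)
lp = np.arange(1, N // 2 + 1)
plp = 2.0 * np.pi * (lp - 0.5) / N
nabla = -(2.0 / N) * np.sin(np.outer(n, plp)) @ plp

# check against the closed form (used here for n != 0; nabla_0 = 0)
s = 2.0 * np.pi * n[n != 0] / N
closed = (np.pi / N**2) * ((N + 1) * np.cos(N * s / 2) / np.sin(s / 2)
                           - np.sin((N + 1) * s / 2) / np.sin(s / 2) ** 2)
print(np.max(np.abs(nabla[n != 0] - closed)))           # machine precision

# sum_n nabla_n exp(-i 2 pi (l - 1/2) n / N) = i p_l, so its imaginary part recovers p_l
p_rec = (np.exp(-2j * np.pi * np.outer(l - 0.5, n) / N) @ nabla.astype(complex)).imag
print(np.max(np.abs(p_rec - p)))                        # machine precision
```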
In the large $N$ limit, Eq. (\[nabla\]) becomes an integral $$\frac{1}{a}\nabla_n = \frac{a}{2\pi} \frac{\partial}{\partial x}
\int_{-\pi/a}^{\pi/a} dk \; e^{ikx},
\label{nabla2}$$ where $a$ is a lattice spacing and $x=na$ is a space coordinate. In the continuum limit $a\to 0$, we obtain the first order derivative of the continuum theory. $$\lim_{a\to 0}\frac{1}{a}\nabla_n
= a \frac{\partial}{\partial x} \delta(x).
\label{nabla3}$$ The lattice derivative (\[nabla\]) is local in the continuum limit.
Figure \[fig1\] plots the lattice derivative $\nabla(s)\equiv\nabla_n$ with $s=2\pi n/N$ in a range $|s|\le 2\pi$ for two lattice sizes $N=50$ and $1000$. The absolute value of $\nabla(s)$ is large around $|s|=0$ and $2\pi$ and small around $|s|=\pi$. The points $|s|=0$ and $2\pi$ are equivalent because of antiperiodicity. Therefore, $\nabla(s)$ around $|s|=2\pi$ does not mean severe nonlocality. The points $|s|=\pi$ give the most long-range hopping. As expected from Eq. (\[nabla3\]), locality of the derivative $\nabla(s)$ is quite good when $N=1000$. On the other hand, decay of the derivative $\nabla(s)$ is slow when the lattice size is small.
We are interested in constructing a theory with better locality for a practical purpose. In order for Eq. (\[nabla\]) to be a useful lattice derivative, long-range hopping interactions need to be truncated. However, truncation of hopping interactions may cause errors. We need to find a systematic way to reproduce spectra effectively with only short-range hopping interactions. For simplicity, let us consider a classical action for a free massless fermion in one-dimensional space. $$S=a\sum_{m,n=-N/2+1}^{N/2}
\bar{\psi}_m \frac{1}{a}\nabla_{m-n} \psi_n.$$ Fourier transforms of the one-component fermions are $$\begin{aligned}
\displaystyle
\psi_n &=& \frac{1}{\sqrt{N}} \sum_{l=-N/2+1}^{N/2}
e^{i2\pi\tilde{l}n/N} \zeta_l,
\\
\displaystyle
\bar{\psi}_n &=& \frac{1}{\sqrt{N}} \sum_{l=-N/2+1}^{N/2}
e^{-i2\pi\tilde{l}n/N} \bar{\zeta}_l,\end{aligned}$$ where antiperiodicity is assumed $$\psi_{n+N}=-\psi_n,\quad
\bar{\psi}_{n+N}=-\bar{\psi}_n.$$ Then we have $$S=\sum_{l,l'} \bar{\zeta}_l ip_{l,l'} \zeta_{l'},$$ where $$\begin{aligned}
p_{l,l'} &=& -\frac{i}{N}\sum_{m,n=-N/2+1}^{N/2}
\nabla_{m-n} e^{-i2\pi\tilde{l}m/N+i2\pi\tilde{l}'n/N}
\nonumber
\\
&=& p_l \delta_{l,l'}.
\label{pll1}\end{aligned}$$ with the inverse transform $p_l$ of Eq. (\[nabla\]) $$p_l = -2\sum_{n=1}^{N/2-1} \nabla_n
\sin\left(\frac{2\pi\tilde{l}n}{N}\right)+(-1)^l\nabla_{N/2}.
\label{pl0}$$ To obtain this, antiperiodicity of $\nabla_n$, $\psi_n$, and $\bar{\psi}_n$ has been used. Antiperiodicity in real space gives rise to periodicity in momentum space, $p_{l+N}=p_l$, $\zeta_{l+N}=\zeta_l$, and $\bar{\zeta}_{l+N}=\bar{\zeta}_l$. Some explanations would be necessary for the derivation of Eq. (\[pl0\]). Consider the matrix contained in Eq. (\[pll1\]) $$C_{m,n}\equiv
\nabla_{m-n} e^{-i2\pi\tilde{l}m/N+i2\pi\tilde{l}'n/N},$$ which has periodicity, $C_{m+N,n}=C_{m,n+N}=C_{m,n}$. In Fig. \[fig2\], the matrix $C_{m,n}$ is shown schematically (see the upper diagram). Some examples for the indices $(m,n)$ are given. The matrix elements on the dotted lines are not contained in Eq. (\[pll1\]). In the upper diagram, the vertices $(N/2,-N/2)$ and $(-N/2,N/2)$ correspond to the points with $|s|=2\pi$ in Fig. \[fig2\]. Using the periodicity of the matrix, the triangles 1 and 2 can be moved to form the parallelogram (see the lower diagram). As a result, Eq. (\[pll1\]) can be evaluated by calculating the summation for each $|m-n|$, which draws a line segment parallel to the oblique sides of the parallelogram. The last term of Eq. (\[pl0\]) is a contribution of the term with $|m-n|=N/2$, which corresponds to the left oblique side of the parallelogram.
In Eq. (\[pl0\]), we truncate long-range terms with a parameter $N_{\rm c}\le N/2$, which represents the largest distance of fermion hopping. $$p_l = \sum_{n=1}^{N_{\rm c}} \nabla_n
\Bigg[ (\delta_{n,\frac{N}{2}}-1)2
\sin\left(\frac{2\pi\tilde{l}n}{N}\right)
+\delta_{n,\frac{N}{2}}(-1)^l\Bigg].
\label{pl1}$$ The truncation can be implemented to Eq. (\[pll1\]). $$p_{l,l'} = -\frac{i}{N} \sum'_{m,n}
\nabla_{m-n} e^{-i2\pi\tilde{l}m/N+i2\pi\tilde{l}'n/N}.
\label{pll2}$$ The indices $m$ and $n$ run from $-N/2+1$ to $N/2$. The prime symbol means that the summation is restricted to pairs satisfying $|m-n|\le N_{\rm c}$, $m-n+N\le N_{\rm c}$, or $m-n-N\ge -N_{\rm c}$. Fermion hopping has been restricted to a finite range.
Figure \[fig3\] plots $p_l$ of Eq. (\[pl1\]) as a function of $2\pi\tilde{l}/N$ for $N_{\rm c}=5$, $10$, and $25$ with a lattice size $N=50$. $N_{\rm c}=25$ gives the exact result with no truncation, which satisfies the dispersion relation of the continuum theory. If one does not mind the inclusion of long-range hopping, a doubler-free formulation of a single Weyl fermion is possible while maintaining the correct dispersion relation. When $N_{\rm c}$ is small, there are also no doubler modes because of the antiperiodic boundary conditions. Although some modes around the momentum boundary deviate from the correct dispersion, those are not so harmful because there is no genuine degeneracy with low-lying modes. However, $p_l$ oscillates around the exact result. The oscillation becomes larger as $N_{\rm c}$ decreases.
The small oscillation around the correct dispersion comes from the truncated terms having large $n$’s in Eq. (\[pl1\]). As shown in Ref. [@sugihara], such oscillation can be removed by introducing the Lanczos factor, which is used in Fourier analysis to cancel the Gibbs phenomenon [@aw]. We modify Eq. (\[pl1\]) as follows: $$p_l = \sum_{n=1}^{N_{\rm c}} F_n \nabla_n
\Bigg[ (\delta_{n,\frac{N}{2}}-1)2
\sin\left(\frac{2\pi\tilde{l}n}{N}\right)
+\delta_{n,\frac{N}{2}}(-1)^l\Bigg].
\label{pl2}$$ where $$F_n \equiv \frac{N_{\rm c} +1}{\pi n}
\sin\left(\frac{\pi n}{N_{\rm c} +1}\right)$$ is the Lanczos factor. As a result, Eq. (\[pll2\]) becomes $$p_{l,l'} = -\frac{i}{N} \sum'_{m,n} F_{|m-n|}
\nabla_{m-n} e^{-i2\pi\tilde{l}m/N+i2\pi\tilde{l}'n/N}.
\label{pll3}$$ The final form of the action is given as $$S=a\sum'_{m,n}
\bar{\psi}_m \frac{1}{a}F_{|m-n|}\nabla_{m-n} \psi_n.
\label{action1}$$
Figure \[fig4\] plots $p_l$ of Eq. (\[pl2\]) improved with the Lanczos factor as a function of $2\pi\tilde{l}/N$ for $N_{\rm c}=5$, $10$, and $25$ with $N=50$, which are compared with the exact result ($N_{\rm c}=25$) shown in Fig. \[fig3\]. As before, there is no doubler for any $N_{\rm c}$. In addition to this, the oscillation has been removed with the Lanczos factor. As $N_{\rm c}$ increases, the deviation around the momentum boundary becomes smaller. In this way, we can construct a doubler-free ultralocal derivative. If the Lanczos factor is introduced, an ultralocal formulation of a single Weyl fermion is possible while maintaining an almost correct dispersion relation.
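A short sketch (names ours) of the truncated dispersion in Eq. (\[pl2\]), with and without the Lanczos factor; it reproduces the qualitative behavior shown in Figs. \[fig3\] and \[fig4\].

```python
import numpy as np

def dispersion(N, Nc, lanczos=True):
    """p_l from Eq. (pl2): hopping truncated at distance Nc, optionally Lanczos-smoothed."""
    sites = np.arange(-N // 2 + 1, N // 2 + 1)
    l = np.arange(1, N // 2 + 1)
    p_exact = 2.0 * np.pi * (l - 0.5) / N
    nabla = -(2.0 / N) * np.sin(np.outer(sites, p_exact)) @ p_exact        # nabla_n
    nab = {m: nabla[i] for i, m in enumerate(sites)}
    p = np.zeros(N // 2)
    for i, ll in enumerate(l):
        for m in range(1, Nc + 1):
            F = (Nc + 1) / (np.pi * m) * np.sin(np.pi * m / (Nc + 1)) if lanczos else 1.0
            if m == N // 2:
                p[i] += F * nab[m] * (-1) ** ll
            else:
                p[i] += F * nab[m] * (-2.0) * np.sin(2.0 * np.pi * (ll - 0.5) * m / N)
    return p_exact, p

exact, truncated = dispersion(50, 10, lanczos=False)   # oscillates around the exact dispersion
_, smoothed = dispersion(50, 10, lanczos=True)         # oscillation removed by the Lanczos factor
```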
Chiral and gauge anomaly {#chiral_anomaly}
========================
On the finite four-dimensional Euclidean lattice, consider an effective action $\Gamma[U]$ $$e^{-\Gamma[U]}=\int {\cal D}\psi {\cal D}\bar{\psi} e^{-S[U]},
\label{ea}$$ where $$S=a^4\sum_{m,n} \bar{\psi}_m \slashchar{D}_{m,n} \psi_n,$$ is a classical action for a massless Dirac fermion coupled to gauge. $\slashchar{D}\equiv \gamma_\mu D_\mu$ is a Dirac operator and the Euclidean Dirac matrices satisfy $\gamma_\mu^\dagger=\gamma_\mu$ and $\{\gamma_\mu,\gamma_\nu\}=2 \delta_{\mu\nu}$. Chirality is defined with $\gamma_5\equiv \gamma_1\gamma_2\gamma_3\gamma_4$. The fermion variables $\bar{\psi}_m$ and $\psi_n$ are Grassmann valued. The indices $m$ and $n$ are four-component numbers to indicate lattice sites and each component runs from $-N/2+1$ to $N/2$ as before. The lattice covariant derivative $$(D_\mu)_{m,n} \equiv \frac{1}{a}
\nabla_{m_\mu-n_\mu} U_{m,n}(\mu)
\prod_{\nu=1 (\nu\ne\mu)}^4 \delta_{m_\nu,n_\nu}
\label{lcd}$$ is diagonal with respect to the space-time indices $m$ and $n$ except for the $\mu$-th ones. The Dirac operator $\slashchar{D}$ is antihermitian and satisfies $\{\slashchar{D},\gamma_5\}=0$. The classical action is invariant under the usual chiral transformation. The derivative $\nabla_n$ can be replaced with the truncated one in the same way as Eq. (\[action1\]), if ultralocal construction is preferred. The gauge variable $U_{m,n}(\mu)$ is a product of all link variables that compose a line segment between the two sites $m$ and $n$ parallel to the $\mu$-th direction. When connecting the two sites with link variables along the $\mu$-th direction, there are two ways because of periodicity of the action. The most natural choice is the shorter path. One of the two ways is chosen depending on the distance between two sites. When $|m_\mu-n_\mu|\le N/2$, $$U_{m,n}(\mu)=\left\{
\begin{array}{ll}
U_{m,m-\hat{\mu}}\dots U_{n+\hat{\mu},n} &(m_\mu>n_\mu)\\
U_{m,m+\hat{\mu}}\dots U_{n-\hat{\mu},n} &(m_\mu<n_\mu)\\
\end{array}\right.,$$ which corresponds to the hexagon sandwiched between the triangles 1 and 2 in Fig. \[fig2\]. $\hat{\mu}$ is a unit vector in the $\mu$-th direction. When $|m_\mu-n_\mu|>N/2$, $$U_{m,n}(\mu)=\left\{
\begin{array}{ll}
U_{m,m+\hat{\mu}}\dots U_{n+(N-1)\hat{\mu},n+N\hat{\mu}}
&(m_\mu>n_\mu)\\
U_{m,m-\hat{\mu}}\dots U_{n-(N+1)\hat{\mu},n-N\hat{\mu}}
&(m_\mu<n_\mu)\\
\end{array}\right.,$$ which corresponds to the triangles 1 and 2 in Fig. \[fig2\] and therefore intersects the boundary. The link variables $U_{n+\hat{\mu},n}$ are elements of a gauge group and satisfy $U_{n,n+\hat{\mu}}=U_{n+\hat{\mu},n}^\dagger$ and $U_{n+\hat{\mu}+N\hat{\mu},n+N\hat{\mu}}=U_{n+\hat{\mu},n}$. In the continuum limit, the lattice covariant derivative becomes $$\lim_{a\to 0} (D_\mu)_{m,n}=a^4\delta^4(x-y)D_\mu^{({\rm c})},
\label{cntlcd}$$ where $x=ma$ and $y=na$, and $D_\mu^{({\rm c})}=\partial_\mu-igA_\mu$ with a parameterization $U_{m+\hat{\mu},m}=e^{iag A_\mu(x)}$. Eq. (\[lcd\]) reduces to the covariant derivative of the continuum theory in the continuum limit.
With our Dirac operator, the usual chiral transformation does not reproduce chiral anomaly because the classical action and the fermion measure is invariant. To simulate the correct chiral anomaly on the finite lattice, we introduce the modified chiral transformation with $\theta_n\ll 1$ $$\begin{aligned}
\psi'_m &=&
\sum_n \left(1+i\theta_m\hat{\gamma}_5\right)_{m,n}\psi_n,
\label{nct1}
\\
\bar{\psi}'_m &=&
\sum_n \bar{\psi}_n\left(1+i\theta_m\hat{\gamma}_5\right)_{n,m},
\label{nct2}\end{aligned}$$ where $$(\hat{\gamma}_5)_{m,n} \equiv \gamma_5
\left(1-\frac{1}{2}a G \right)_{m,n}.$$ The operator $G$ is Neuberger’s solution [@neuberger] to the Ginsparg-Wilson relation $\gamma_5 G + G \gamma_5 = aG \gamma_5 G$ and has nothing to do with the Dirac operator (\[lcd\]). The modified chiral transformation is a symmetry of the effective action $\Gamma$ for arbitrary lattice spacing, independently of whether the transformation is local or global. When the transformation is global $\theta_n=\theta$, it is also a symmetry of the classical action $S$ in the continuum limit $a\to 0$ because the variation of the Lagrangian density induced by the transformation is proportional to the lattice spacing. $$\delta S = \frac{i}{2} \theta a^4 \sum_{m,n}
\bar{\psi}_m \gamma_5 a(\slashchar{D}G-G\slashchar{D})_{m,n}\psi_n.
\label{var}$$ Although the variation is classically zero in the continuum limit, the vacuum expectation value of the variation gives the index theorem for arbitrary lattice spacing.
The axial current divergence is defined as a variation of the classical action under the local chiral transformation with $\hat{\gamma}_5$. $$\begin{aligned}
(\partial_\mu \hat{J}^5_{\mu})_m=\sum_{n_1,n_2}\Big[
&&\bar{\psi}_{n_1}(\hat{\gamma}_5)_{n_1,m}\slashchar{D}_{m,n_2}\psi_{n_2}
\nonumber
\\
+&&\bar{\psi}_{n_2}
\slashchar{D}_{n_2,m}(\hat{\gamma}_5)_{m,n_1}\psi_{n_1}
\Big].\end{aligned}$$ The axial current is obtained by inverting the derivative $$\begin{aligned}
(\hat{J}^5_{\mu})_n&=&
\sum_{m}(\nabla^{-1})_{n_\mu,m_\mu}
\prod_{\nu=1 (\nu\ne\mu)}^4 \delta_{n_\nu,m_\nu}
\nonumber
\\
&& \times
\sum_{n_1,n_2}\Big[
\bar{\psi}_{n_1}(\hat{\gamma}_5)_{n_1,m}(D_\mu)_{m,n_2}\psi_{n_2}
\nonumber
\\
&& \hspace{1.0cm}+\bar{\psi}_{n_2}
(D_\mu)_{n_2,m}(\hat{\gamma}_5)_{m,n_1}\psi_{n_1}
\Big],\end{aligned}$$ which is gauge invariant. In the free theory, the currents are local in the continuum limit because the Leibniz rule holds in the limit. [^2] The fermion measure transforms as $${\cal D}\psi' {\cal D}\bar{\psi}' =
\exp\left(-2i\sum_n \theta_n {\cal A}_n \right)
{\cal D}\psi {\cal D}\bar{\psi}.$$ Chiral anomaly $${\cal A}_n\equiv {\rm tr} (\hat{\gamma}_5)_{n,n}
\label{anomaly}$$ is a gauge-invariant quantity. As shown in Ref. [@luscher], the index theorem holds for arbitrary lattice spacing. $$\sum_n {\cal A}_n={\rm Index}(G).$$ (See also Ref. [@Chiu:1998bh; @Fujikawa:1999ku].) Since the effective action $\Gamma$ is invariant under transformation of the integration variables, the modified axial current $\hat{J}_\mu^5$ does not conserve. $$\langle (\partial_\mu \hat{J}^5_{\mu})_n \rangle
=-\frac{2}{a^4} {\cal A}_n.$$ The Ward identity for the modified chiral transformation holds also for the zero mode, which gives the index theorem and corresponds to the variation under the global transformation (\[var\]). [^3] In the continuum limit, the chiral anomaly agrees with the continuum result [@kikukawa; @fujikawa2; @suzuki; @adams] $$\langle \partial_\mu \hat{J}_\mu^5 \rangle = \frac{1}{16\pi^2}
\epsilon_{\mu\nu\rho\sigma}{\rm tr}(F_{\mu\nu}F_{\rho\sigma}),$$ where $\epsilon_{1234}=+1$ and $F_{\mu\nu}\equiv [D_\mu^{({\rm c})},D_\nu^{({\rm c})}]$. The effect of chiral anomaly can be implemented to physical quantities via the modified anomalous axial current.
Construction of an anomaly-free chiral gauge theory is easy if our lattice derivative is used. In our formulation, Weyl fermions are defined with the ordinary $\gamma_5$ (not $\hat{\gamma}_5$). As a result, the fermion measure of the path integral does not depend on gauge variables. The Weyl fermions are free from gauge anomaly beforehand. Consider a classical action for a Dirac fermion $$S=a^4\sum_{m,n} \bar{\psi}_m \slashchar{\hat{D}}_{m,n} \psi_n,
\label{action2}$$ where only the right-handed fermion is coupled to gauge. The left-handed fermion is redundant and does not contribute to the effective action. Once the absence of gauge anomaly is confirmed, the left-handed fermion can be integrated out. The definition of the Dirac operator $\slashchar{\hat{D}}$ is the same as Eq. (\[lcd\]) except that gauge is defined as a product of link variables $$U_{n+\hat{\mu},n}=e^{iagA_{\mu,n} P_+},$$ where $A_{\mu,n}\equiv A_{\mu,n}^a T_a$ and $P_\pm\equiv (1\pm \gamma_5)/2$. $\slashchar{\hat{D}}$ is no longer antihermitian. Under local gauge transformation, the classical action (\[action2\]) is invariant. On the finite lattice, infinitesimal local gauge transformation $$\psi_n' = (1+i\theta_n^aT_a P_+)\psi_n,\quad
\bar{\psi}_n' = \bar{\psi}_n(1-i\theta_n^aT_a P_-),$$ does not change the fermion measure $${\cal D}\psi' {\cal D}\bar{\psi}'=
\exp\left[-i\sum_n \theta_n^a {\rm tr}(T_a\gamma_5) \right]
{\cal D}\psi {\cal D}\bar{\psi}
={\cal D}\psi {\cal D}\bar{\psi}$$ and hence the effective action. A single Weyl fermion can exist on the lattice without violating gauge symmetry. [^4]
Summary and discussions {#sumamry_and_discussions}
=======================
We have constructed a doubler-free covariant derivative and an anomalous Ward identity for the modified chiral symmetry on the lattice. The index theorem holds for arbitrary lattice spacing and the dependence of chiral anomaly on gauge fields agrees with the continuum result in the continuum limit. The zero mode of the anomalous Ward identity gives the index theorem. In this formulation, a single Weyl fermion can exist on the lattice maintaining gauge symmetry. Chiral gauge theories can be constructed nonperturbatively using the single anomaly-free Weyl fermion as a building block.
The proposed lattice derivative is a simple matrix that gives the correct dispersion. We have shown that introducing the Lanczos factor enables us to construct lattice derivatives with good locality. In this case, the correct dispersion is reproduced almost exactly, with deviations only at high momentum. How large the truncation parameter $N_{\rm c}$ must be taken depends on the fermion mass.
The correct chiral anomaly has been obtained without violating locality and Lorentz invariance in the continuum limit, in spite of the presence of long-range hopping interactions. This is a consequence of a nonperturbative formulation based on the modified chiral transformation (or, equivalently, the modified axial current). To reproduce the chiral anomaly, Wilson terms have to be introduced somewhere; we have introduced them only in the modified chiral transformation so as to maintain complete independence of the left- and right-handed fermions in the classical action.
When calculating physical quantities that depend on the chiral anomaly, such as the $\eta'$ meson mass, $\gamma_5$ needs to be replaced with $\hat{\gamma}_5$ in the relevant vertex operators so as to include the effect of the anomaly. How precisely the effect of the chiral anomaly is implemented at finite lattice spacing depends on the construction of the operator $G$. In lattice QCD, the modification of the axial current does not affect physical quantities that are independent of the chiral anomaly, because the axial current does not couple to the gauge field directly.
Acknowledgments {#acknowledgments .unnumbered}
===============
This research was supported in part by RIKEN.
[0]{} K. Fujikawa, Phys. Rev. Lett. [**42**]{}, 1195 (1979); Phys. Rev. D [**21**]{}, 2848 (1980); Phys. Rev. D [**22**]{}, 1499 (1980). J. Kogut and L. Susskind, Phys. Rev. D [**11**]{}, 395 (1975); S. D. Drell, M. Weinstein, and S. Yankielowicz, Phys. Rev. D [**14**]{}, 487 (1976). K. G. Wilson, in [*New Phenomena in Subnuclear Physics*]{}, Erice, 1975, edited by A. Zichichi (Plenum, New York, 1977). D. B. Kaplan, Phys. Lett. B [**288**]{}, 342 (1992); M. F. Golterman, K. Jansen and D. B. Kaplan, Phys. Lett. B [**301**]{}, 219 (1993). Y. Shamir, Nucl. Phys. B [**406**]{}, 90 (1993); V. Furman and Y. Shamir, Nucl. Phys. B [**439**]{}, 54 (1995). H. Neuberger, Phys. Lett. B [**417**]{}, 141 (1998); [*ibid*]{} [**427**]{}, 353 (1998). H. B. Nielsen and M. Ninomiya, Phys. Lett. B [**105**]{}, 219 (1981); Nucl. Phys. B 185, 20 (1981)\[E: B 195, 541 (1982)\]; B 193, 173 (1981). L. H. Karsten and J. Smit, Nucl. Phys. B [**144**]{}, 536 (1978); Phys. Lett. B [**85**]{}, 100 (1979). L. H. Karsten and J. Smit, Nucl. Phys. B [**183**]{}, 103 (1981). E. Seiler and I. O. Stamatescu, Phys. Rev. D [**25**]{}, 2177 (1982) \[Erratum-ibid. D [**26**]{}, 534 (1982)\]. P. H. Ginsparg and K. G. Wilson, Phys. Rev. D [**25**]{}, 2649 (1982). M. Lüscher, Phys. Lett. B [**428**]{}, 342 (1998). P. Hasenfratz, V. Laliena and F. Niedermayer, Phys. Lett. B [**427**]{}, 125 (1998) M. Lüscher, Nucl. Phys. B [**549**]{}, 295 (1999); [**568**]{}, 162 (2000). Y. Kikukawa and A. Yamada, Phys. Lett. B [**448**]{}, 265 (1999). K. Fujikawa, Nucl. Phys. B [**546**]{}, 480 (1999). H. Suzuki, Prog. Theor. Phys. [**102**]{}, 141 (1999). D. H. Adams, Annals Phys. [**296**]{}, 131 (2002). R. A. Bertlmann, [*Anomalies in Quantum Field Theory*]{} (Oxford University Press, Oxford, 1996). M. Lüscher, arXiv:hep-th/0102028. K. Melnikov and M. Weinstein, Phys. Rev. D [**62**]{}, 094504 (2000) J. M. Rabin, Phys. Rev. D [**24**]{}, 3218 (1981). M. Ninomiya and C. I. Tan, Phys. Rev. Lett. [**53**]{}, 1611 (1984). T. Sugihara, Phys. Rev. D [**68**]{}, 034502 (2003). G. B. Arfken, [*Mathematical Methods for Physicists*]{} (Academic Press, New York, 1970). T. W. Chiu, Phys. Rev. D [**58**]{}, 074511 (1998) K. Fujikawa, Phys. Rev. D [**60**]{}, 074505 (1999) E. Witten, Phys. Lett. B [**117**]{}, 324 (1982).
[^1]: If periodic boundary conditions are chosen, there appear degenerate zero modes on the momentum boundary. Such unphysical doubler modes must be removed because they cause errors especially when fermion mass is small.
[^2]: In Ref. [@karsten], for the conventional SLAC derivative, the axial currents are defined by introducing another derivative, independent of the SLAC derivative, which is the cause of the breaking of locality and Lorentz invariance.
[^3]: The explicit breaking term (\[var\]) is necessary for the existence of the non-vanishing zero mode of the axial current divergence.
[^4]: The number of Weyl fermions needs to be even when Witten’s global anomaly is present [@Witten:fp].
|
---
abstract: 'By cross-correlating templates constructed from the 2 Micron All Sky Survey (2MASS) Extended Source (XSC) catalogue with WMAP’s first year data, we search for the thermal Sunyaev-Zel’dovich signature induced by hot gas in the local Universe. Assuming that galaxies trace the distribution of hot gas, we select regions on the sky with the largest projected density of galaxies. Under conservative assumptions on the amplitude of foreground residuals, we find a temperature decrement of -35 $\pm$ 7 $\mu$K ($\sim 5\sigma$ detection level, the highest reported so far) in the $\sim$ 26 square degrees of the sky containing the largest number of galaxies per solid angle. We show that most of the reported signal is caused by known galaxy clusters which, when convolved with the average beam of the WMAP W band channel, subtend a typical angular size of 20–30 arcmins. Finally, after removing from our analyses all pixels associated with known optical and X-ray galaxy clusters, we still find a tSZ decrement of -96 $\pm$ 37 $\mu$K in pixels subtending about $\sim$ 0.8 square degrees on the sky. Most of this signal is coming from five different cluster candidates in the Zone of Avoidance (ZoA), present in the Clusters In the ZoA (CIZA) catalogue. We found no evidence that structures less bound than clusters contribute to the tSZ signal present in the WMAP data.'
author:
- 'C. Hernández–Monteagudo'
- 'R. Genova–Santos'
- 'F. Atrio–Barandela'
title: 'The Effect of Hot Gas in WMAP’s First Year Data'
---
Introduction
============
The study of the Cosmic Microwave Background (CMB) has become a powerful cosmological tool with applications in various astrophysical scenarios. Recently, the WMAP team has determined the main cosmological parameters from the CMB temperature field with unprecedented accuracy [@wmap_parm]. This temperature field is composed of primordial anisotropies, generated at the Last Scattering Surface, and secondary fluctuations, which arise as the CMB photons travel to the observer. Among these secondary anisotropies, we shall concentrate on the so-called thermal Sunyaev-Zel’dovich effect (hereafter tSZ, @tSZ), associated with the distortion of the black-body spectrum of the CMB photons due to Compton scattering on fast-moving thermal electrons. This spectral distortion is independent of redshift and proportional to the integrated electron pressure along the line of sight. It has been detected in the direction of several galaxy clusters [@carlstrom02]. This makes the tSZ effect a useful tool for the detection of ionized hot gas.
Recently, @Fuk04 have argued that about 90% of all baryons are in the form of intergalactic plasma, not dense and hot enough to be detectable as X-ray sources except in clusters and groups of galaxies. At low redshifts, 30% of all baryons could be in the form of Ly-$\alpha$ absorbers [@Penton04], but a large fraction of them has not yet been accounted for observationally. Most models of structure formation predict baryons to be located in filaments and sheets, associated with galaxy overdensities [@VSpringel]. @Xgas1 show evidence of filamentary X-ray emission at the core of the Shapley Supercluster, whereas @Zappa02 report a detection of diffuse X-ray emission by warm gas ($T\sim 10^6$K) associated with an overdense galaxy region. The aim of this [*letter*]{} is to use the tSZ effect to detect directly the diffuse warm baryon component in the local Universe. The analyses of the first year WMAP data have indicated that a small contribution due to the tSZ induced by clusters is present in the data (@wmap_foreg, @scjal). @pablo and @afshordi claimed a tSZ detection at small angular scales but did not clarify the nature of the astrophysical sources associated with it. @Myers analysed the angular extension of the tSZ signal in the W band of WMAP by examining the CMB map in the direction of the largest galaxy groups and clusters found in the catalogues of @aco (hereafter ACO), @apm (hereafter APM) and the 2 Micron All Sky Survey Extended Source Catalogue [@Jarrett]. Their analyses showed evidence for diffuse tSZ emission up to an angular scale of $\sim 1 \degr$, which implied a baryon fraction larger than the WMAP estimate.
If baryons are distributed like the dark matter on scales comparable to the virial radius of galaxies [@Fuk04], the projected density of galaxies on the sky will correlate with the tSZ signal, [*independently*]{} of whether galaxies form clusters, groups or filaments. In this [*letter*]{}, we shall carry out a pixel-to-pixel comparison of WMAP W band data with templates constructed from the 2MASS galaxy catalogue, to test the distribution of hot gas in the local Universe. In Sec.2 we describe the pixel to pixel comparison method employed to estimate the tSZ signal and the data used in the analysis. In Sec.3 we present and discuss our results, and we conclude in Sec.4.
Method and Data Sets.
=====================
The brightness temperature measured by a CMB experiment is the sum of different components: cosmological ${\mbox{\bf {T}}_{cmb}}$, tSZ, instrumental noise ${\mbox{\bf {N}}}$ and foreground residuals ${\mbox{\bf {F}}}$. If the tSZ signal is well traced by a known spatial template, denoted here as ${\mbox{\bf {M}}}$ and built for example from a galaxy catalogue, the total anisotropy at a fixed position on the sky can be modelled as ${\mbox{\bf {T}}}= {\mbox{\bf {T}}_{cmb}}+ {\tilde \alpha}\cdot{\mbox{\bf {M}}}/ \langle {\mbox{\bf {M}}}\rangle + {\mbox{\bf {N}}}+ {\mbox{\bf {F}}}$, where ${\tilde \alpha}$ measures the amplitude of the template induced signal and $\langle {\mbox{\bf {M}}}\rangle$ denotes the spatial average of the template. If all other components have zero mean and well known correlation functions, then it is possible to use a pixel to pixel comparison to estimate ${\tilde \alpha}$ (see @scjal for details). If ${\cal C}$ denotes the correlation matrix of the CMB and noise components (foregrounds residuals will be discussed later) then the estimate of ${\tilde \alpha}$ and its statistical error are $$\alpha = \frac{ {\mbox{\bf {T}}}{\cal C}^{-1} {\mbox{\bf {M}}}^T} {{\mbox{\bf {M}}}{\cal C}^{-1} {\mbox{\bf {M}}}^T },
\;\;\;\; \sigma_\alpha = \sqrt{\frac{1}{{\mbox{\bf {M}}}{\cal C}^{-1}{\mbox{\bf {M}}}^T}}.
\label{eq:alpha1}$$ Since our galaxy template ${\mbox{\bf {M}}}$ will be positive by construction, this equation demands that the CMB and noise fields have zero mean. We set the average of all pixels outside the Kp0 mask to zero. We checked that our results were insensitive to this requirement by carrying out a similar analysis using the more conservative Kp2 mask. In all cases our results changed by less than a few percent. Notice that our method requires the inversion of the correlation matrix, a computationally expensive procedure. To speed up the process we will carry out the analysis in pixel subsets as described below. We checked with Monte Carlo simulations that $\sigma_\alpha$ is an unbiased estimator of the error.
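As an illustration of Eq. (\[eq:alpha1\]), a minimal sketch of the estimator for the simplest case in which ${\cal C}$ is approximated as diagonal (uncorrelated CMB-plus-noise variance per pixel); this simplification and the array names are ours, not the actual pipeline:

```python
import numpy as np

def alpha_estimate(T, M, var):
    """Pixel-to-pixel template fit of Eq. (1) for a diagonal covariance.

    T   : CMB map values in the selected pixels (zero mean already imposed)
    M   : template values in the same pixels (projected 2MASS galaxy counts)
    var : per-pixel variance (CMB + noise), standing in for the matrix C
    """
    CinvM = M / var                                  # C^{-1} M for a diagonal C
    alpha = np.dot(T, CinvM) / np.dot(M, CinvM)
    sigma_alpha = 1.0 / np.sqrt(np.dot(M, CinvM))
    return alpha, sigma_alpha
```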
We centered our analyses on the WMAP W band, which has the highest angular resolution. In this band, the instrumental noise shows almost no spatial correlation and its position-dependent amplitude is determined by the number of observations[^1] [@wmap_noise]. The tSZ template was built from the 2 Micron All Sky Survey (2MASS) Extended Source Catalogue [@Jarrett]. This catalogue contains about 1.5 million galaxies, extending up to redshift $z \sim 0.1$ (400 Mpc), detected in the near-infrared ($J$, $H$ and $K_s$ bands). In order to build a tSZ template (${\mbox{\bf {M}}}$), all 2MASS galaxies were projected onto the sphere using the HEALPix[^2] pixelization [@healpix] with the same resolution as the CMB data. The amplitude in every pixel was made proportional to the number of galaxies. The template was then convolved with the window function of the noise-weighted average beam of the four Difference Assemblies (DA’s) corresponding to the WMAP W band (clean map), and multiplied by the Kp0 mask, like the CMB data. Since our method requires the CMB and noise components to have zero mean, we subtracted the map average outside the Kp0 mask.
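Schematically, the template construction can be written with the healpy library as follows; the resolution, the Gaussian beam width and the catalogue/mask arrays (`gal_l`, `gal_b`, `kp0_mask`) are placeholders, and the actual analysis used the exact noise-weighted W-band window function rather than a Gaussian:

```python
import numpy as np
import healpy as hp

nside = 512                                   # assumed map resolution
npix = hp.nside2npix(nside)

# project the 2MASS galaxies onto a HEALPix map: one count per galaxy
theta = np.radians(90.0 - gal_b)              # galactic latitude -> colatitude
phi = np.radians(gal_l)
counts = np.bincount(hp.ang2pix(nside, theta, phi), minlength=npix).astype(float)

# smooth with an approximate W-band beam and apply the Kp0 mask
template = hp.smoothing(counts, fwhm=np.radians(13.0 / 60.0))   # ~13 arcmin, assumed
template *= kp0_mask                                            # 1 outside, 0 inside the mask
template[kp0_mask > 0] -= template[kp0_mask > 0].mean()         # zero mean outside the mask
```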
Pixels were sorted into sets of size $N_{pix}$, denoted as ${\mbox{\bf {M}}}^{\beta}$, where the superscript index $\beta$ indicates galaxy density, in such a way that low $\beta$ corresponds to higher projected galaxy density. For patches of $N_{pix}=$2048, the average projected galaxy density for $\beta=$1 was $\sim$ 420 galaxies per square degree, whereas for $\beta=$500 the density dropped to $\sim$ 44 galaxies per square degree, which roughly coincides with the average projected density outside the Kp0 mask. $N_{pix}$ ranged from 64 to 2048. The pixel-to-pixel comparison was then performed on each of these subsets: we compared [*all*]{} pixels in ${\mbox{\bf {M}}}^{\beta}$ to their counterparts in the CMB map. Our working hypothesis is that galaxies are fair tracers of the gas density and that within each set the electron temperature is similar, that is, in those pixels galaxies trace the electron pressure. Within each subset ${\mbox{\bf {M}}}^{\beta}$, the galaxy density remained roughly constant, so our method returned $\alpha$ as a weighted mean of the measured temperature in those pixels. If no tSZ signal is present, then $\alpha$ will scatter around zero, as a consequence of CMB and noise being random fields of zero mean.
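The density-ordered patches ${\mbox{\bf {M}}}^{\beta}$ can be sketched as follows, continuing the arrays of the previous sketch (`T_w` and `var_w` would stand for the W-band map and its per-pixel variance):

```python
# rank unmasked pixels by projected galaxy density, densest first
good = np.where(kp0_mask > 0)[0]
unmasked = good[np.argsort(template[good])[::-1]]

Npix_patch = 2048
patches = [unmasked[i:i + Npix_patch]
           for i in range(0, unmasked.size, Npix_patch)]      # patches[0] <-> beta = 1

# alpha for the densest patch, with the diagonal-covariance estimator above
alpha_1, sigma_1 = alpha_estimate(T_w[patches[0]], template[patches[0]], var_w[patches[0]])
```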
Results and Discussion.
=======================
Fig. (\[fig:allpxls\]) summarizes our main results: Fig. (\[fig:allpxls\]a) shows the estimated $\alpha$’s for the sets having the highest projected density of galaxies. The abscissa gives the set index ($\beta$). Crosses, filled circles, triangles and diamonds correspond to $N_{pix}=$ 256, 512, 1024 and 2048, respectively. Symbols are slightly shifted for clarity; error bars denote 1$\sigma$ confidence levels. At WMAP instrumental frequencies, the tSZ effect causes temperature decrements, i.e., template and CMB data should anticorrelate giving negative $\alpha$’s, as found. Sets 1 and 2 of size $N_{pix}=256$ correspond to the first set of $N_{pix}=512$ and similarly for all other sizes and sets. Consistently, $\alpha$ of a larger set is always bracketed by the $\alpha$’s measured from its subsets. The largest signal comes from the densest 256 pixels but the highest statistical level of significance is achieved for $N_{pix}=2048$ ($\alpha = -35\;\mu$K at the $4.9\sigma$ detection level), since the error bars shrink due to the higher number of pixels contributing tSZ signal. In Fig.(\[fig:allpxls\]b) the same data sets are plotted versus the average projected galaxy density, i.e., the average number of galaxies per square degree within each subset, as seen by the W band beam. The symbol coding is identical: the first sets of $N_{pix}=$ 256, 512 show a projected galaxy density as high as 420–380 galaxies per square degree, whereas the 6th set for $N_{pix}=$ 2048 contains around 140 galaxies per square degree.
In order to evaluate the significance of the previous results, we repeat the analysis for pixels of intermediate and low projected density of galaxies. In Fig. (\[fig:allpxls\]c) diamonds correspond to sets of size $N_{pix}=2048$ with $\beta \in [1,30]$, whereas filled circles correspond to indices $\beta \in [501, 530]$. The shaded area limits the 1$\sigma$ error bar for diamonds, that is about a factor of 1.2 bigger than for filled circles. While the latter scatter around zero with the expected dispersion, diamonds are clearly biased towards negative values: besides the first patch ($\alpha = -35
\pm 7\;\mu$K), there are [*seven*]{} other sets above the 2$\sigma$ level, one of them at $3\sigma$.
We also tested the consistency of our results with respect to frequency. In Fig. (\[fig:allpxls\]d) we estimate $\alpha$ by cross-correlating the densest pixel set ($N_{pix}=2048$, $\beta=1$) with all WMAP bands: K (23 GHz), Ka (33 GHz), Q (41 GHz), V (61 GHz) and W (93 GHz). Diamonds give the $\alpha$’s obtained from [*raw*]{} maps, whereas triangles refer to analyses performed on [*foreground cleaned*]{} maps available at the LAMBDA site. Note that the WMAP team only provided clean maps for the three highest frequency channels. In all maps, the signal is compatible with being due to tSZ. A more quantitative comparison is not straightforward since the maps have different angular resolution and galactic contamination. When compared in pairs, all $\alpha$’s are within 1$\sigma$ of each other and they are all within 2$\sigma$ of the expected frequency dependence of the tSZ effect, whose best fit for the cleaned Q, V and W band maps is plotted as a solid line.
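For reference, the expected non-relativistic tSZ frequency dependence, $\Delta T \propto x\coth(x/2)-4$ with $x=h\nu/k_B T_{CMB}$ in thermodynamic temperature units, can be evaluated at the band centres; a quick sketch that ignores the finite WMAP bandpasses:

```python
import numpy as np

T_CMB = 2.725                    # K
H_OVER_K = 4.799e-11             # h / k_B in K s

def tsz_factor(nu_ghz):
    """Non-relativistic tSZ spectral factor x*coth(x/2) - 4 (thermodynamic units)."""
    x = H_OVER_K * nu_ghz * 1e9 / T_CMB
    return x / np.tanh(0.5 * x) - 4.0

for band, nu in [("K", 23.0), ("Ka", 33.0), ("Q", 41.0), ("V", 61.0), ("W", 93.0)]:
    print(band, tsz_factor(nu))   # negative at all WMAP frequencies: a decrement
```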
We measure the extent of the tSZ regions by rotating the template around the z-axis (perpendicular to the galactic plane) and cross-correlating the densest 2048 pixels. In Fig. (\[fig:rottest\]) squares give $\alpha$ versus the average angular displacement of the pixel set. Error bars are again 1$\sigma$. The solid line represents the gaussian approximation of the W-band beam. The size of the tSZ sources is typically 20–30 arcmins, slightly bigger than the beam but remarkably smaller than the values obtained by @Myers from rich ACO clusters. Actually, when one studies the angular distribution of the 2048 densest pixels, one finds that they are associated in groups of typically 3–4 members, and that the groups are uniformly distributed on the sky.
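The rotation test amounts to displacing the selected pixels in galactic longitude before repeating the fit; a sketch using the arrays defined above (the grid of displacements is an arbitrary choice):

```python
def alpha_after_rotation(dphi_deg, pix):
    """Re-fit alpha after rotating the pixel positions by dphi about the z-axis."""
    theta, phi = hp.pix2ang(nside, pix)
    shifted = hp.ang2pix(nside, theta, (phi + np.radians(dphi_deg)) % (2.0 * np.pi))
    return alpha_estimate(T_w[shifted], template[pix], var_w[shifted])

for dphi in np.arange(0.0, 2.01, 0.25):       # degrees
    print(dphi, alpha_after_rotation(dphi, patches[0]))
```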
By comparing, in Fig. (\[fig:allpxls\]d), results from the raw and [*cleaned*]{} maps built from the W band, one can conclude that foregrounds will have little impact on our results. We made a more detailed study using Monte Carlo simulations: we performed 100 simulations of the CMB and W-band noise components and added and removed a foreground residual template. As a conservative model for foreground residuals left after cleaning the W-band, we took the sum of the dust, free-free and synchrotron emission maps for the W band released by the WMAP team. In each simulation we (i) added and (ii) subtracted [*the whole foreground template*]{}. In Fig. (\[fig:rottest\]) we plot the average $\alpha$’s for these simulations, after adding (dotted line) and subtracting (dashed line) the residuals for several angular displacements. The shaded bands display the 1$\sigma$ dispersion areas. The errors practically equal the statistical estimates of eq. (\[eq:alpha1\]). To summarize, foregrounds do not significantly affect our estimated amplitude of the tSZ contribution.
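A sketch of such a Monte Carlo (here `cl_theory`, `noise_sigma` and `fg_template` stand for the best-fit CMB power spectrum, the per-pixel W-band noise level and the summed foreground maps; beam convolution is omitted for brevity):

```python
alphas = {+1: [], -1: []}
for _ in range(100):
    cmb = hp.synfast(cl_theory, nside)                    # Gaussian CMB realisation
    noise = noise_sigma * np.random.randn(npix)           # white noise, W band
    for sign in (+1, -1):                                 # add / subtract residuals
        sim = cmb + noise + sign * fg_template
        a, _err = alpha_estimate(sim[patches[0]], template[patches[0]],
                                 var_w[patches[0]])
        alphas[sign].append(a)

for sign in (+1, -1):
    print(sign, np.mean(alphas[sign]), np.std(alphas[sign]))
```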
Finally, we removed from our galaxy template all those pixels that were associated with known galaxy clusters. We used the ACO and APM catalogues of optically selected clusters and the XBC [@Ebeling00], de Grandi [@deGrandi], NORAS [@Noras], ROSAT-PSPC [@pspc] and Vogues [@Vogues] X-ray cluster catalogues. We excised from the analyses all pixels lying within a virial radius of the cluster center (taken to be ten times the core radius). For clusters without measured core radius but with known redshift, we assumed a virial radius of 1.7 Mpc, and removed all pixels within that distance. For the rest (the majority), we conservatively removed all pixels within a circle of 30 arcmin from the cluster centre. Out of the 2048 pixels for $\beta=1$, 1681 were eliminated. For the patches with $\beta = 2-30$, i.e., the next $\sim$60,000 densest pixels in Fig.(\[fig:allpxls\]c), a large fraction were also associated with known clusters and were eliminated by the excision. In Fig.(\[fig:diffuse\]) we show the cross-correlation of these remaining pixels outside known clusters with the clean W band map. Note that here the patch sorting has been regenerated using the surviving pixels. For the densest sets of $N_{pix}=$ 64 (filled circles), 128 (triangles) and 256 (diamonds), we still find evidence of tSZ, but at a much lower level of significance. For the densest 64 pixels, subtending $\sim 0.8$ square degrees on the sky, we obtain $\alpha=-96\pm 37\;\mu$K, at the $\sim 2.6\sigma$ significance level. The signal gets diluted rapidly as more pixels are included in the analysis: $\alpha = -50 \pm 27\;
\mu$K and $\alpha = -30 \pm 18\; \mu$K for $N_{pix}=$ 128, 256, respectively. Out of the 64 densest pixels, 54 pixels are in the ZoA, and 45 of them coincide with five different cluster candidates in the CIZA [@ciza] catalogue (Ebeling, private communication). The remaining group of pixels is not associated with any known galaxy cluster. In Fig. (\[fig:mappxls\]) we plot the location of those pixels in the sky. The shaded area corresponds to the Kp0 mask, which also removes many point sources outside the galactic plane (dark grey dots). The 64 pixels are plotted as big white circles for convenience.
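The excision step can be sketched with healpy’s `query_disc` (here `cluster_l`, `cluster_b` and `excise_radius_deg` are placeholder arrays built from the ACO/APM/X-ray catalogues, and `unmasked` is the density-ordered pixel list from the earlier sketch):

```python
excised = np.zeros(npix, dtype=bool)
for l, b, rad in zip(cluster_l, cluster_b, excise_radius_deg):
    vec = hp.ang2vec(np.radians(90.0 - b), np.radians(l))
    excised[hp.query_disc(nside, vec, np.radians(rad))] = True

survivors = unmasked[~excised[unmasked]]                 # densest pixels outside known clusters
patches_out = [survivors[i:i + 64] for i in range(0, survivors.size, 64)]
alpha_out, sigma_out = alpha_estimate(T_w[patches_out[0]], template[patches_out[0]],
                                      var_w[patches_out[0]])
```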
Conclusions
===========
Under the assumption that galaxies trace hot gas, we have used the 2MASS galaxy catalogue to search for tSZ signal present in WMAP data. In $\sim 26$ square degrees on the sky we have found a contribution of average amplitude -35 $\pm$ 7 $\mu$K, spectrally compatible with tSZ, and mostly generated by ACO clusters of galaxies. Our study, based on a pixel-to-pixel comparison, reaches the highest sensitivity level reported so far. Compared with methods based on power spectrum or correlation function analysis, our method gives a larger level of significance since we restrict the analysis to the regions of the sky where the tSZ contribution is expected to be the largest.
We have found that the typical angular extension of this signal is somewhere between 20–30 arcmins. Furthermore, once all known clusters of galaxies are excised from the analysis, we are left with $\sim 0.8$ square degrees with an average amplitude of $\alpha = -96 \pm 37\; \mu$K: those pixels fall mostly in the ZoA and after performing our analyses we found that 45 of them are associated to five different galaxy clusters in the CIZA catalogue. We have found no conclusive evidence that, in the volume probed by 2MASS, structures less bound than clusters contribute to the tSZ signal present in the WMAP data.
We thank R.Rebolo and J.A.Rubiño–Martín for enlightening discussions. We also thank H.Ebeling for comments on the CIZA catalogue and an anonymous referee for useful criticism. C.H.M. acknowledges the financial support from the European Community through the Human Potential Programme under contract HPRN-CT-2002-00124 (CMBNET) and useful discussions with V.Müller, R.Croft and A.Banday. F.A.B. acknowledges financial support from the Spanish Ministerio de Educación y Ciencia (projects BFM2000-1322 and AYA2000-2465-E) and from the Junta de Castilla y León (project SA002/03). Some of the results in this paper have been derived using the HEALPix package [@healpix]. We acknowledge the use of the Legacy Archive for Microwave Background Data Analysis (LAMBDA, http://lambda.gsfc.nasa.gov). Support for LAMBDA is provided by the NASA Office of Space Science. This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation.
Abell, G.O, Corwin, H.G.Jr. & Olowin, R.P. 1989 ApJSS 70, 1
Afshordi, N., Loh, Y., & Strauss, M. A. 2004, PhRvD, 69, 083524
Banday, A. J., Górski, K. M., Bennett, C. L., Hinshaw, G., Kogut, A., & Smoot, G. F. 1996, ApJ 468, L85
Bennett, C.L. et al. 2003 ApJS, 148, 97
Bennett, C. L., Hinshaw, G., Banday, A., Kogut, A., Wright, E. L., Loewenstein, K., & Cheng, E. S. 1993, ApJ 414, L77
Böhringer, H. et al. 2000 ApJS 129, 435
Carlstrom, J. E., Holder, G. E., Reese, E. D. (2002) ARA&A, 40, 643
Dalton, G. B., Maddox, S. J., Sutherland, W. J., & Efstathiou, G. 1997, MNRAS 289, 263
de Grandi, S. et al. 1999 ApJ 514, 148
Ebeling, H., Edge, A. C., Allen, S. W., Crawford, C. S., Fabian, A. C., & Huchra, J. P. 2000, MNRAS 318, 333
Ebeling, H., Mullis, C. R., & Tully, R. B. 2002, , 580, 774
Fosalba, P., Gaztañaga, E., & Castander, F. J. 2003, ApJL, 597, L89
Fukugita, M. & Peebles, P. J. E. (2004). Astro-ph/0406095
Górski, K.M., Hivon, E. & Wandelt, B.D., 1999, Proceedings of the MPA/ESO Cosmology Conference “Evolution of the Large Scale Structure”, eds. A.J.Banday, R.S.Sheth, and L.Da Costa, PrintPartners Ipskamp, NL, pp.37-42, (also astro-ph/9812350)
Hernández-Monteagudo, C. & Rubiño-Martín, J. A. 2004, MNRAS, 347, 403
Jarosik, N., et al. 2003, ApJS, 148, 29
Jarrett, T. H., Chester, T., Cutri, R., Schneider, S. E., & Huchra, J. P.2003, AJ, 125, 525
Kull, A. & Böhringer, H. 1999, A&A, 341, 23
Myers, A. D., Shanks, T., Outram, P. J., Frith, W. J., & Wolfendale, A. W. 2004, MNRAS, 347, L67
Penton, S. V., Stocke, J. T. & Shull, J. M. (2004) ApJS, 152, 29
Spergel, D.N. et.al., 2003, ApJS, 148, 175
Springel, V., White, M. & Hernquist, L. 2001, ApJ, 549, 681
Sunyaev, R. A. & Zeldovich, I. B. 1980, ARA&A, 18, 537
Vikhlinin, A., McNamara, B. R., Forman, W., Jones, C., Quintana, H., & Hornstrup, A. 1998, ApJ 502, 558
Vogues, W. et al. 1999, A&A 349, 389
Zappacosta et al. 2002 A& A, 394, 7
[^1]: The WMAP data was downloaded from [*http://lambda.gsfc.nasa.gov*]{}.
[^2]: *http://www.eso.org/science/healpix/*
|
---
abstract: 'We study the scalar wave equation on the open exterior region of an extreme Reissner-Nordström black hole and prove that, given compactly supported data on a Cauchy surface orthogonal to the timelike Killing vector field, the solution, together with its $(t,s,\theta,\phi)$ derivatives of arbitrary order, $s$ a tortoise radial coordinate, is bounded by a constant that depends only on the initial data. Our technique does not allow us to study transverse derivatives at the horizon, which is outside the coordinate patch that we use. However, using previous results that show that second and higher transverse derivatives at the horizon of a generic solution grow unbounded along horizon generators, we show that any such divergence, if present, would be milder for solutions with compact initial data.'
author:
- |
Sergio Dain$^{1,2}$ and Gustavo Dotti$^{1}$\
\
$^1$Facultad de Matemática, Astronomía y Física, FaMAF,\
Universidad Nacional de Córdoba,\
Instituto de Física Enrique Gaviola, IFEG, CONICET,\
Ciudad Universitaria, (5000) Córdoba, Argentina.\
$^{2}$Max Planck Institute for Gravitational Physics,\
(Albert Einstein Institute), Am Mühlenberg 1,\
D-14476 Potsdam Germany.
title: 'The wave equation on the extreme Reissner-Nordström black hole'
---
Introduction {#sec:introduction}
============
Extreme black holes lie on the boundary between black holes and naked singularities. Since black holes are believed to be astrophysically relevant, whereas naked singularities are considered unphysical, the issue of stability of extreme black holes is key in understanding the process of gravitational collapse. Black hole stability is a longstanding open problem in General Relativity. The pioneering works of Regge, Wheeler [@Regge:1957td], Zerilli [@Zerilli:1970se] [@Zerilli:1974ai] and Moncrief [@Moncrief:1975sb] determined the modal linear stability of electro-gravitational perturbations in the domain of outer communication of the spherically symmetric electro-vacuum black holes, by ruling out exponential growth in time. Since then, a lot of effort has been made to establish more accurate bounds on linear fields. In particular, analyzing the scalar wave equation on the black hole background provides useful insight into the more complex problem of linear gravitational perturbations. Kay and Wald ([@wald:1056] [@wald:218] [@Kay:1987ax]) obtained uniform boundedness for solutions of the wave equation on the exterior of the Schwarzschild black hole. In recent years this result has been extended to the non-extreme Kerr black hole (see the review articles [@Dafermos:2008en], [@Dafermos:2010hd] and references therein). The purpose of this work is to place pointwise bounds on scalar waves on the exterior region of an extreme Reissner-Nordström black hole. The interest in Reissner-Nordström black holes lies in the fact that they share the complexity of the global structure of the more relevant Kerr black holes and, due to spherical symmetry, are far more tractable than the rotating holes. The modal stability of the outer region of Reissner-Nordström black holes under linear perturbations of the metric and electromagnetic fields was established in [@Zerilli:1970se] [@Zerilli:1974ai] [@Moncrief:1975sb], both for the extreme and sub-extreme cases. The modal [*instability*]{} of the Reissner-Nordström naked singularity, and also of the black hole inner static region, was proved only recently in [@Dotti:2010uc].
The wave equation on extreme Reissner-Nordström black holes has recently been studied by Aretakis in a series of relevant articles [@Aretakis:2011ha; @Aretakis:2011hc; @Aretakis:2010gd], where it was found that second and higher order transverse derivatives at the horizon grow without bound along the horizon generators (see also [@Lucietti:2012sf] where similar results were found for the Teukolsky equation on an extreme Kerr black hole). One of the motivations of our article is understanding the meaning of these instabilities. More specifically, we wonder if they arise in the evolution of fields from data of compact support on a $t=$ constant Cauchy surface (“compact data", for short), a subclass of the data considered in [@Aretakis:2010gd; @Aretakis:2011hc; @Aretakis:2011ha] for which, as we show below, the proof of instability fails. Although we do not prove that the horizon instability is absent for compact data, we do show that, if present, it is milder. On the other hand, we get a remarkably simple proof of pointwise boundedness of fields of compact data and their partial derivatives of any order in $(t,s,\theta,\phi)$ coordinates on the open black hole exterior region. These results are stated under Theorem \[t:1\] in Section \[sec:main-result\], where their relevance to the problem of spherical gravitational collapse is discussed. Theorem \[t:1\] is proved in Section \[sec:behav-massl-scal\].
Main results {#sec:main-result}
============
Consider the exterior region ${\mathcal{D}}$ of the extreme Reissner-Nordström black hole. This region is described in isotropic coordinates $(t,\rho,\theta,\phi)$ by the metric $$\label{metric1}
g = -N^{-2} dt^2 +N^{2} \left(d \rho^2+ \rho^2 (d\theta ^2 + \sin ^2 \theta d
\phi^2) \right), \;\; N=1+\frac{m}{\rho},$$ where the positive constant $m$ represents the total mass of the spacetime, which equals the absolute value of the total electric charge. The electromagnetic field $$\label{emf}
{\cal F} =\pm \frac{m}{(m+\rho)^2} \; dt \wedge d\rho,$$ together with this metric, solves the Einstein-Maxwell equations. Note that the isotropic coordinate $\rho$ differs from the standard radial coordinate $$r=\rho+m,$$ which gives the area $A= 4 \pi r^2$ of the spheres obtained by acting on a point with the $SO(3)$ isometry subgroup. Instead of $\rho$, it is often more convenient to use the “tortoise” radial variable $-\infty < s < \infty$ defined by $$\label{defs}
\frac{ds}{d{\rho}}=N({\rho})^2.$$ We choose the integration constant such that $$\label{s}
s = {\rho}-\frac{m^2}{{\rho}}+ 2m\log\left(\frac{{\rho}}{m}\right).$$ Isotropic coordinates cover only the open exterior region ${\mathcal{D}}$ of the black hole. Figure \[f:1\] exhibits the well known conformal diagram of an extreme Reissner-Nordström black hole (see [@Hawking73], [@Carter73] and references therein), region ${\mathcal{D}}$ appears shaded. The unshaded region is the black hole interior, proved to be linearly unstable in [@Dotti:2010uc]. $S$ is a generic $t=$ constant surface, it is a Cauchy surface for ${\mathcal{D}}$ and a complete Riemannian manifold with topology ${\mathbb{S}^2}\times \mathbb{R}$. Its induced metric approaches that of the cylinder as $\rho \to 0^+$, limit in which the area of the isometry spheres tend to $A= 4 \pi m^2$. We denote by $i_+$ and $i_-$ the future and past timelike infinity of ${\mathcal{D}}$ respectively. The asymptotically flat spacelike infinity is denoted by $i_0$, and the asymptotically cylindrical end of $t=$ constant surfaces is denoted $i_c$. Note that the surface $S$, being orthogonal to the Killing vector ${\partial}/ {\partial}t$, is asymptotically null at $i_c$.\
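The explicit form (\[s\]) is a single quadrature of (\[defs\]); it can be checked symbolically (a sketch with sympy; the result differs from (\[s\]) only by the integration constant $-2m\log m$ fixed above):

```python
import sympy as sp

rho, m = sp.symbols('rho m', positive=True)
N = 1 + m / rho

s = sp.integrate(N**2, rho)                    # rho - m**2/rho + 2*m*log(rho)
print(sp.simplify(s))
print(sp.simplify(sp.diff(s, rho) - N**2))     # 0, i.e. ds/drho = N^2
```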
![Conformal diagram for the extreme Reissner-Nordström black hole[]{data-label="f:1"}](extreme-rn)
In this work we study the scalar wave equation on ${\mathcal{D}}$ $$\label{laplacian}
\Box_g \Phi=0,$$ with initial data on $S$ $$\label{id}
\phi=\Phi|_S, \quad \chi = \dot \Phi|_S,$$ where the dot denotes derivative with respect to $t$. The existence and uniqueness of the solution of the Cauchy problem (\[laplacian\])-(\[id\]) on a curved background is well established (see, for example, [@Hawking73], also [@Friedlander]). We prove the following
\[t:1\] Let $\Phi$ be a solution of the wave equation (\[laplacian\]) on the open exterior region ${\mathcal{D}}$ of an extreme Reissner-Nordström black hole, which has smooth initial data (\[id\]) of compact support on the Cauchy surface $S$. Then, there exists a constant $C$, which depends only on the initial data (\[id\]), such that, in ${\mathcal{D}}$, $$\label{boundphi}
|\Phi|\leq \frac{C}{\rho+m}.$$ All higher partial derivatives with respect to the coordinates $(t,s, \theta,
\phi)$ are similarly bounded in ${\mathcal{D}}$. Namely, for any $ \alpha_1 \alpha_2 ...$ there exists a constant $C_{\alpha_1 \alpha_2 ....}$ that depends on the initial data, such that $$\label{T1bound}
|\partial_{\alpha_1} \partial_{\alpha_2} ... \Phi |\leq \frac{C_{\alpha_1 \alpha_2...}}{\rho+m}.$$
Theorem \[t:1\] establishes that the spacetime ${\mathcal{D}}$ is stable with respect to this class of initial data. A key simplification in the study of extreme black holes, compared to non-extreme ones, is the existence of complete exterior Cauchy surfaces such as $S$. This has been extensively used in the present work because it allows us to avoid the delicate issues related to the behaviour of fields near the horizon. We should stress that theorem \[t:1\] applies to the evolution of data with compact support on $S$. By finite speed propagation, it is clear that the restriction of $\Phi$ to any $t=$ constant slice will have compact support, and, by smoothness, it will be bounded. The non-trivial statement in Theorem \[t:1\] is that there exists a $t$-independent bound for $\Phi$.
The scalar wave equation on the exterior of an extreme Reissner-Nordström black hole has recently been treated by Aretakis in [@Aretakis:2010gd], [@Aretakis:2011hc] and [@Aretakis:2011ha] by evolving data from a non-Cauchy surface such as $S'$ in Figure \[f:1\]. This surface intersects the horizon at one end, goes to spacelike infinity at the other end, and is suitable for analyzing the stability of the exterior region of a spherical collapse of an extreme charged body (see Figure \[f:2\]). Among the many estimates found in these articles, two of them relate closely to our result. The first one is the pointwise boundedness of the solution in the domain of dependence of $S'$, in terms of initial data on $S'$, given in Theorem 4 in [@Aretakis:2011hc]. A weaker bound than (\[boundphi\]), $|\Phi|<C$, follows from this theorem and the fact that, for the kind of initial data we consider, $\Phi$ has compact support $K$ in the region between $S$ and $S'$. There are, however, two important motivations to study compact data fields. The first one is that the proof of Theorem \[t:1\] is considerably simpler than the boundedness proof in [@Aretakis:2011ha], since it uses only arguments involving the canonical conserved energy of the wave equation, this being possible due to the existence of the above-mentioned complete Cauchy surface. The cost of this simplification is that the results in [@Aretakis:2011hc] apply to a wider class of data on $S'$ than those coming from the evolution of fields with compact support on $S$, as we discuss below.
The second motivation is determining if the blow up result for second and higher transverse derivatives at the horizon along the horizon generators, reported in Theorem 6 in [@Aretakis:2011hc], and generalized to scalar wave equations on spacetimes with degenerate Killing horizons in [@Aretakis:2012ei], holds for compact data. To state this result we need to switch to a coordinate system that covers the horizon: $$\{ v=t +s (\rho), r=\rho+m, \theta,\phi \}.$$ In these coordinates the metric (\[metric1\]) has the form $$g = -(1-m/r)^2 \; dv^2 + 2 dv dr + r^2 \; (d \theta^2 + \sin^2 \theta \; d \varphi^2),$$ and the entire diagram in Figure \[f:1\] is covered. The horizon is located at $r=m$, and region ${\cal D}$ corresponds to $r>m$. The Killing vector field ${\partial}/{\partial}v$ becomes null at $r=m$, its integral lines are the horizon generators, and $v$ is an affine parameter. Note that ${\partial}/{\partial}r|_{r=m}$ is a null vector, orthogonal to the $SO(3)$ orbits, and has unit inner product with ${\partial}/{\partial}v$. We call $ {\partial}/{\partial}r |_{r=m}$ a [*transverse derivative at the horizon.*]{} Theorem 6 in [@Aretakis:2011hc] states that second and higher order transverse derivatives of a solution of the scalar field equation diverge as $v
\to \infty$ along horizon generators. Although this result holds for generic data in the class analyzed therein, the proof does not apply to those fields which evolve from data with compact support on $S$, as we discuss in detail in Section \[sec:transv-deriv-at\]. For fields evolving from compact data, we could not rule out these divergences, although we showed that, if present, they would be milder than those reported by Aretakis.
In what follows we analyze the behaviour of fields with initial data with compact support on $S$ of Figure \[f:1\]. Consider the spherically symmetric collapse of charged matter. It is possible to arrange the matter model in such a way that the exterior region is a portion of an extreme Reissner-Nordström black hole. One way of constructing such spacetime is by using a thin shell of matter, this has been extensively studied in [@boulware73]. Other models are those of charged dust collapse (see [@0264-9381-7-6-008] and references therein). For the present discussion it is enough to know that such a construction is feasible with some matter model. We show schematically the conformal diagram of a charged collapse spacetime in Figure \[f:2\], where we have avoided drawing the singularity, which may have a complicated structure, irrelevant to our purposes.
![Conformal diagram for the collapse of charged dust.[]{data-label="f:2"}](extreme-rn-collapse){height="8cm"}
In a realistic collapse of matter there is always a surface like $S'$, for which the intersection ${\cal I}$ of its future domain of dependence with the domain of outer communications agrees with that of $S'$ in Figure \[f:1\]. Note that the results in [@Aretakis:2011ha; @Aretakis:2011hc] apply to ${\cal I}$ for data given on $S'$ which may be non-trivial at the horizon. In Figure \[f:2\], $S$ is a Cauchy surface of ${\mathbb{R}^3}$ topology that enters the matter region, and $\Sigma \subset S$ is the minimal subset of $S$ whose future contains $S'$. The results [@Aretakis:2011ha; @Aretakis:2011hc] imply that generic data of compact support within the vacuum region of $S$ will evolve into a bounded wave in the vacuum portion of this spacetime outside the horizon. This is so because the portion of this region lying between $S$ and $S'$ is compact and, as the data of this field on $S'$ is in the Aretakis class, the field is also bounded in the outer region in the future of $S'$.
Our proof of boundedness is simpler, but can only be applied to the spherical collapse model if we restrict to those fields with data of compact support in $\Sigma \subset S$ (instead of $S$), for which the evolution in the outer region is identical to that of a field in the Reissner-Nordström geometry. Physically, these fields are characterized by the fact that they reach the horizon after the surface of the collapsing star has crossed it. We prove below that the growth of these fields along horizon generators is milder than the growth of those fields which enter the matter region earlier. The fact that fields initially supported in $\Sigma$ are better behaved than those entering the matter before the horizon is formed is an aspect of the stability of the spherical collapse worth pointing out.
Behaviour of massless scalar fields {#sec:behav-massl-scal}
===================================
Recasting the wave operator {#recast}
---------------------------
Consider the wave equation (\[laplacian\]) on the exterior region ${\mathcal{D}}$ of the extreme Reissner-Nordström background, whose metric in isotropic coordinates is (\[metric1\]). Defining $$\label{defF}
\Phi = \frac{F}{({\rho}+m)},$$ we obtain $$\label{box2}
-\tfrac{{\rho}^2}{{\rho}+m} \; \Box_g \Phi = \ddot F +\; {\mathcal{A}}\; F,$$ where a dot means ${\partial}_t$, $$\label{A}
{\mathcal{A}}= -{\partial}_s^2 + \left( \frac{2m {\rho}^3}{({\rho}+m)^6} - \frac{{\rho}^2}{({\rho}+m)^4} \Delta \right)
=: -{\partial}_s^2 + \left( V_1 - V_2 \; \Delta \right),$$ where $s$ is the tortoise radial coordinate introduced in (\[s\]) and ${\Delta}$ is the standard Laplacian on the unit sphere. From (\[s\]) we deduce $$s \sim -\frac{m^2}{{\rho}} \; \text{ as } {\rho}\to 0^+ , \;\; s \sim {\rho}\; \text{
as } {\rho}\to \infty.$$ The potentials $V_1$ and $V_2$ are positive, bounded $$\label{Vbounds}
0 < V_1 < \frac{1}{32 \, m^2} , \quad 0 < V_2 < \frac{1}{16 \, m^2},$$ and have the following fall-off $$V_2 \sim s^{-2}, \quad V_1 \sim 2m |s|^{-3} \text{ as } s \to \pm \infty.$$ The symmetry in the asymptotic expressions above is not coincidental: both $V_1$ and $V_2$ are even functions of $s$. The origin of this symmetry is the conformal isometry $C$ of extreme Reissner-Nordström noticed in [@couch84], given by $$\label{ci}
C(t,{\rho},\theta,\phi)=(t,m^2/{\rho},\theta,\phi).$$ Under this map the pullbacks of the metric and electromagnetic field are $$\tilde g_{ab} = \left( \tfrac{m}{{\rho}} \right)^2 g_{ab}, \quad \tilde {\cal F}_{ab} = -
{\cal F}_{ab}.$$
Since the equation $\Box \Phi - (R/6) \Phi = 0$ is conformally invariant with conformal weight minus one, and the Ricci scalar of an electro-vacuum solution vanishes (thus $R = \tilde R =0$), it follows that, if $\Phi(t,{\rho},\theta,\phi)$ is a solution of $\Box \Phi =0$, then so is $$\label{fip}
'\Phi(t,{\rho},\theta,\phi) = (m/{\rho}) \Phi(t,m^2/{\rho},\theta,\phi).$$ For the above field $'F(t,{\rho},\theta,\phi) = ({\rho}+m) (m/{\rho})
\Phi(t,m^2/{\rho},\theta,\phi) = F(t,m^2/{\rho},\theta,\phi)$. The conformal isometry is easily expressed in the alternative radial coordinate $s$: under ${\rho}\to
m^2/{\rho}$, $s \to -s$. The fact that for any solution $F(t,s,\theta,\phi)$ of $\ddot F + \left( -{\partial}_s^2 + V_1 - V_2 \; \Delta \right) F =0$, the function $'F(t,s,\theta,\phi)= F(t,-s,\theta,\phi)$ is also a solution of this equation, implies that $V_i(s)=V_i(-s), i=1,2$.\
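Both the bounds (\[Vbounds\]) and the parity $V_i(s)=V_i(-s)$, i.e. the invariance of the potentials under the inversion $\rho\to m^2/\rho$, can be verified symbolically; a short sketch:

```python
import sympy as sp

rho, m = sp.symbols('rho m', positive=True)
V1 = 2*m*rho**3 / (rho + m)**6
V2 = rho**2 / (rho + m)**4

# invariance under the conformal inversion rho -> m^2/rho (i.e. s -> -s)
print(sp.simplify(V1.subs(rho, m**2/rho) - V1))    # 0
print(sp.simplify(V2.subs(rho, m**2/rho) - V2))    # 0

# the maxima sit at the fixed point rho = m, giving the bounds quoted above
print(sp.solve(sp.diff(V1, rho), rho))             # rho = m (besides rho = 0)
print(sp.simplify(V1.subs(rho, m)), sp.simplify(V2.subs(rho, m)))   # 1/(32 m^2), 1/(16 m^2)
```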
Note the consistency of the bound (\[boundphi\]) with the conformal symmetry: if $\Phi$ has compact support on a $t-$slice, then so does $'\Phi$ given in (\[fip\]), however the bound on $'\Phi$ gives no additional information, as it follows from the bound on $\Phi$: $$| '\Phi(t,\rho,\theta,\phi)| = \left| \frac{m}{\rho} \Phi(t,m
2/\rho,\theta,\phi) \right| \leq \frac{m}{\rho} \; \frac{C}{m+(m^2/\rho)} =
\frac{C}{m+\rho}.$$
Finally, although we will not make use of this fact, we note that the operator $$\label{boxi}
{\partial}_s^2 + V(s) {\Delta},$$ is the Laplacian on the cylinder ${\cal C} = \mathbb{S}^2 \times {\mathbb R}$ with respect to the metric $$h=V^2 ds^2 + V \;(d\theta ^2 + \sin ^2 \theta d \varphi^2),$$ which reduces to the standard metric on ${\cal C}$ if we set $V(s)=1$. The principal part of the operator ${\mathcal{A}}$ in (\[A\]) is a particular case of (\[boxi\]).
Estimates for functions defined on the cylinder {#efc}
-----------------------------------------------
A $t-$slice $S$ of the extreme Reissner-Nordström spacetime is a cylinder with the non standard metric induced from (\[metric1\]). In this Section we establish pointwise bounds on functions on $S$ from $L^2$ norms defined using the standard metric on ${\mathbb{S}^2}\times {\mathbb R}$, i.e., we use the hermitian product $$\label{hp}
\langle f , g \rangle = \int f^* \; g \; dx \; \sin(\theta) \, d\theta \, d \varphi,$$ ($x = s/m$, where $s$ is defined in equation (\[s\])) and the associated norm $$||f || = \sqrt{ \langle f , f \rangle }.$$ The following result from [@Dimock:1987hi] is used in [@Kay:1987ax]. Since we will make use of it, we give a detailed proof using elementary methods.
\[l:boundfcylinder\] Let $f$ be a complex function on the cylinder with finite norm, then $$\label{sb}
|f(s,\theta,\phi)| \leq M \; \left( ||f|| + m^2 \, || \partial_s^2 \, f || + ||
\triangle \, f || \right),$$ where $M$ is the constant defined in (\[M\]).
A function $f$ of finite norm can be expanded, using spherical harmonics in ${\mathbb{S}^2}$ and Fourier transform in ${\mathbb R}$, as $$\label{rep}
f(x,\theta,\phi) = \frac{1}{\sqrt{2\pi}} \int dk \sum_{\ell \, m} \hat f_{\ell
\, m}(k)\; e^{ikx}\; Y_{\ell \, m}(\theta,\phi),$$ where $$\label{rep2}
\hat f_{\ell \, m}(k) := \frac{1}{\sqrt{2\pi}} \int dx \; \sin(\theta) \, d\theta \, d\varphi \; f(x,\theta,\phi)\;
e^{-ikx}\; Y^*_{\ell \, m}(\theta,\phi).$$ From equation (\[rep\]) we deduce $$\begin{aligned}
\nonumber
|f(x,\theta,\phi)| & \leq \frac{1}{\sqrt{2\pi}} \int dk \sum_{\ell \, m} |\hat f_{\ell
\, m}(k)|\; |Y_{\ell \, m}(\theta,\phi)|\\
&=\frac{1}{\sqrt{2\pi}} \int dk \sum_{\ell \, m}
\left[|\hat f_{\ell \, m}(k)|(1+k^2+\ell(\ell+1)) \right] \;
\left[\frac{|Y_{\ell \, m}(\theta,\phi)|}{(1+k^2+\ell(\ell+1)) } \right] \label{eq:1b}\end{aligned}$$ where in the last line we have just multiplied and divided by $(1+k^2+\ell(\ell+1))$. Using the Cauchy-Schwarz inequality for series in (\[eq:1b\]), and then the Cauchy-Schwarz inequality for integrals, yields $$\begin{aligned}
|f(x,\theta,\phi)| &\leq \frac{1}{\sqrt{2\pi}} \int dk \sqrt{\sum_{\ell \, m}
|\hat f_{\ell \, m}(k)|^2(1+k^2+\ell(\ell+1))^2 } \;
\sqrt{\sum_{\ell \, m} \frac{|Y_{\ell \,
m}(\theta,\phi)|^2}{(1+k^2+\ell(\ell+1))^2 } } \nonumber \\
&\leq \;\sqrt{ \sum_{\ell \, m} \int \frac{| Y_{\ell \, m} (\theta,\phi) |^2}{
(1+k^2+\ell (\ell+1))^2} \, \frac{dk}{2\pi}}
\; \sqrt{ \sum_{\ell' \, m'} \int |\hat f_{\ell' \, m'}(k')|^2 \, (1+k'^2+\ell'
(\ell'+1))^2 \; dk' } \label{bound1}.\end{aligned}$$
The second factor in (\[bound1\]) can be bounded using $$\begin{aligned}
\label{rep3}
\langle \partial_x^2 f , \partial_x^2 f \rangle &= \int \; dk \; \sum_{\ell \,
m} \; |k^2 \;\hat f_{\ell \, m}(k)|^2,\\
\langle \triangle f , \triangle f \rangle &= \int \; dk \; \sum_{\ell \, m} \;
|\ell (\ell+1) \;\hat f_{\ell \, m}(k)|^2,
\label{rep4}\end{aligned}$$ together with $(a+b+c)^2 \leq 3 (a^2+b^2+c^2)$ and $\sqrt{a^2+b^2+c^2} \leq |a|+|b|+|c|$, $$\begin{gathered}
\sqrt{ \sum_{\ell' \, m'} \int |\hat f_{\ell' \, m'}(k')|^2 \, (1+k'^2+\ell'
(\ell'+1))^2 \;dk' } \\
\leq \sqrt{ \sum_{\ell' \, m'} \int 3|\hat f_{\ell' \, m'}(k')|^2 \,
(1+k'^4+\ell'^2 (\ell'+1)^2)\; dk' }
=
\sqrt{3} \sqrt{ ||f||^2 + || \partial_x^2 \, f ||^2 + || \triangle \, f ||^2 } \\
\leq \sqrt{3} \left( ||f|| + || \partial_x^2 \, f || + || \triangle \, f ||
\right)\end{gathered}$$ The identity $\sum_m | Y_{\ell \, m} (\theta,\phi) |^2 = (2 \ell +1 )/(4 \pi)$ in the first factor of (\[bound1\]) then gives, after restoring units ($s=mx$), $$\label{sb1}
|f(s,\theta,\phi)| \leq M \; \left( ||f|| + m^2 \, || \partial_s^2 \, f || + ||
\triangle \, f || \right),$$ where $$\begin{aligned}
M^2 &= \frac{3}{8 \pi^2} \; \int \sum_{\ell} \frac{2 \ell + 1}{ (1+k^2+\ell
(\ell+1))^2} \, dk \\
&= -\frac{3}{8 \pi^2} \; \sum_{\ell} \frac{\partial}{\partial \ell} \int
(1+k^2+\ell (\ell+1))^{-1} \, dk \\
&= \frac{3}{16 \; \pi} \; \sum_{\ell} \frac{2\ell+1}{(1+ \ell(\ell+1))^{3/2}} \label{M}\end{aligned}$$
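The sum in (\[M\]) converges rapidly, so the constant $M$ can be evaluated directly; a short numerical sketch (the truncation at $\ell=10^5$ is arbitrary):

```python
import numpy as np

ell = np.arange(0, 100000, dtype=float)
series = np.sum((2.0*ell + 1.0) / (1.0 + ell*(ell + 1.0))**1.5)
M = np.sqrt(3.0 * series / (16.0 * np.pi))
print(M)    # the dimensionless constant entering the pointwise bound (\[sb1\])
```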
Applying Lemma \[l:boundfcylinder\] to $F_t(s,\theta,\phi):=F(t,s,\theta,\phi)$ we get a bound for $|F|$ [*on the t-slice*]{}. However, if the terms $||F_t||, ||\Delta F_t||$ and $|| \partial_s^2 \, F_t ||$ on the right hand side of (\[sb\]) were further bounded by [*conserved*]{} ($t$-independent) slice integrals, then we would get a $t$-independent bound for $|F_t|$, i.e., a bound for $|F|$. This is our motivation to study conserved energies in the following Section.
Conserved energies {#sec:conserved-energies}
------------------
In Section \[recast\] we have shown that the problem (\[laplacian\])-(\[id\]) of propagation of scalar waves on the exterior extreme Reissner-Nordström spacetime ${\cal M}$ can be reformulated as the equation $$\label{eqF}
{\cal O} F := \ddot F +\; {\mathcal{A}}\; F=0,$$ with initial data $(f,g)$ of compact support on the $t=0$ slice $S=\mathbb{S}^2 \times \mathbb{R}$, given by $$\label{idf}
f=F(t=0)=\phi/({\rho}+m), \quad g=\dot F(t=0)=\chi/({\rho}+m).$$ A solution of equation (\[eqF\]) has a conserved (i.e., $t-$independent) energy $$\label{ceF}
{\mathcal{E}}[F]= \int_{S_t} ( \dot F^2 + F{\mathcal{A}}(F) ) \; {\, dv},$$ where the integral is performed on a $t-$slice using the standard volume element $dv= \sin(\theta) \; d\theta\; d \varphi \; ds$. This energy is useful because it provides a $t-$independent bound for $|| {\partial}_s F ||$.\
Taking derivatives with respect to $t$ of equation (\[eqF\]) shows that $\dot F, \ddot F,...$ all satisfy the same equation, giving extra conserved quantities, in particular $$\label{em1a}
{\mathcal{E}}[\dot F]= \int_{{S_{t}}} (\ddot F^2 + \dot F{\mathcal{A}}(\dot F) ) \; dv,$$ which, using (\[eqF\]) to substitute for $\ddot F$ reduces to $$\label{ep1}
{\mathcal{E}}[\dot F]= \int_{{S_{t}}} [({\mathcal{A}}F)^2 + \dot F{\mathcal{A}}(\dot F)]\; {\, dv}.$$ The above conserved energy is useful because it provides a $t-$independent bound for $|| {\mathcal{A}}F ||$, and thus for $|| \partial_s^2 F ||$. We would like to get similar bounds for $|| (\Delta F) ||$ and $||F||$, and use them in (\[sb\]). Following [@Dafermos:2008en], equation (\[ceF\]) suggests that, to obtain a bound for the integral of $F^2$, we should consider the energy of a “time integral” $\tilde F$ of the solution $F$ in (\[eqF\]), i.e. a solution of the system $$\label{tF}
\dot{\tilde F}=F \; \text{ and } \; \ddot {\tilde F} +\; {\mathcal{A}}\; \tilde F=0.$$ Assume there exists such a solution, its conserved energy would be $$\begin{aligned}
{\mathcal{E}}[\tilde F] &= \int_{{S_{t}}} \left[(\dot {\tilde F})^2 + \tilde F{\mathcal{A}}( \tilde
F) \right] {\, dv},\\
&= \int_{{S_{t}}} \left[ F^2 + \tilde F{\mathcal{A}}( \tilde F) \right] {\, dv}, \label{em1}\end{aligned}$$ and would bound $||F||$. Using $[\Delta, {\cal O}]=0$ in (\[tF\]) we could prove that ${\cal O} \Delta \tilde F=0$, the conserved energy of this solution being $$\begin{aligned}
{\mathcal{E}}[\Delta \tilde F] &= \int_{{S_{t}}} \left[ (\Delta \dot{\tilde F}) ^2 + \Delta
\tilde F{\mathcal{A}}(
\Delta \tilde F) \right] {\, dv},\\
&= \int_{{S_{t}}} \left[ (\Delta F) ^2 + \Delta \tilde F{\mathcal{A}}( \Delta \tilde F)
\right] {\, dv}, \label{edelF}\end{aligned}$$ which bounds $||\Delta F||$.\
In order to proceed with this idea, we have to prove that a solution of (\[tF\]) exists for $F$ satisfying (\[eqF\])-(\[idf\]). This is done in the following Section.
Integrating in time {#sec:integrating-time}
-------------------
In this section we will prove the existence of the “time integral” solution $\tilde F$ of the system (\[tF\]) for a solution $F$ of (\[eqF\])-(\[idf\]). The existence of $\tilde F$ implies the conservation of the energies (\[em1\]) and (\[edelF\]) for $F$. The proof requires a notion of the inverse ${\mathcal{A}}^{-1}$. This is introduced by taking advantage of the fact that ${\cal A}$ is positive definite (since $V_1$ and $V_2$ are positive), which allows us to define an inner product on the linear space $\hat {\cal H}$ of functions of compact support on $S$, $$(p,q)=\int_{S} [ \partial_s p \partial_s q + V_2 (\partial_\theta
p \partial_\theta q + (\sin\theta)^{-2} \partial_\phi
p \partial_\phi q ) +V_1 pq ] {\, dv},$$ which is formally obtained by integrating by parts $\int_{S} p {\mathcal{A}}q \; dv$. We define a Hilbert space ${\mathcal{H}}$ as the completion of $\hat {\cal H}$ under this norm. This is the space where ${\mathcal{A}}^{-1}$ is defined, as shown in the following.
\[l:inverseA\] Let $q$ be smooth with compact support $S_q \subset S$. Then, there exists a unique solution $p \in {\mathcal{H}}$ of the equation $$\label{invA}
{\mathcal{A}}(p)=q.$$ Moreover, $p$ is smooth.
The Lax-Milgram theorem (see for example [@Gilbarg]) asserts that if $B(\cdot, \cdot)$ is a bilinear form on ${\mathcal{H}}$ for which there exist $\alpha, \beta>0$ such that $$\label{lm1}
|B(u,v)| \leq \alpha \sqrt{(u,u)} \; \sqrt{(v,v)}, \;\; (u,u) \leq \beta \; B(u,u),$$ hold for all $u,v \in {\mathcal{H}}$ then, for any bounded linear operator $L: {\mathcal{H}}\to
{\mathbb R}$, the equation on $p$ $$\label{lm2}
B(z,p)=L(z) \;\; \text{ for every } \; z \in {\mathcal{H}},$$ has a unique solution.\
Conditions (\[lm1\]) are trivially satisfied for $\alpha=\beta=1$ if $B(p,q)=(p,q)$ (Schwarz’s inequality).
The operator $L(z):=\int_{S} q z \; dv$ can be shown to be bounded by using the fact that the restriction of $V_1$ to $S_q$ has a minimum $V_1^{(q)} >0$ and thus, applying Schwarz’s inequality to $L^2[{\cal S}_q,dv]$ gives $$\begin{gathered}
| L(z) | = \left| \int_{{ S}_q } q z \; dv \right| \leq \sqrt{ \int_{{S}_q} q^2 \; dv} \; \sqrt{ \int_{{S}_q} z^2 \; dv} \\ <
\sqrt{ \int_{{S}_q} q^2 \;dv} \; \sqrt{ \int_{{S}_q} \left( \frac{V_1}{V_1^{(q)}} \right) z^2 \; dv}
< \sqrt{ \int_{{S}_q} \left( \frac{q^2}{V_1^{(q)}} \right) \; dv}\;
\sqrt{(z,z)} \end{gathered}$$ It follows from the Lax-Milgram theorem that there exists a $p \in {\mathcal{H}}$ such that $$\int_{S} z {\mathcal{A}}p \; dv = \int_{S} z q \; dv, \; \; \text{ for every } \; z \in {\mathcal{H}},$$ and hence $p$ is a weak solution of the elliptic differential equation ${\mathcal{A}}p =
q$. Smoothness of $p$ follows from interior elliptic regularity arguments (see [@Gilbarg]) and thus $p$ satisfies ${\mathcal{A}}p=q$.
An alternative proof using expansions in spherical harmonics $Y_{\ell
m}(\theta,\phi)$ sheds light on the behaviour of $p$ at infinity and near the horizon. Introducing $P=p/N$, equation (\[invA\]) reads $$\label{ode}
{\mathcal{A}}(p)={\mathcal{A}}(NP) = N^{-3} \left( {\partial}^2_{\rho} P + \rho^{-2} \triangle P \right) = q.$$ If we expand $P = \sum_{\ell m} P_{\ell m}(\rho) Y_{\ell m}(\theta,\phi)$ (and similarly expand $q$) the above equation reduces to an ODE for each mode that can be solved explicitly: $$\label{solode}
(2\ell +1) \, P_{\ell m}(\rho) = \rho^{\ell+1} \int^{\rho}_{ C_{\ell m}} x^{-\ell} N^3(x) q_{\ell m}(x) \, dx
- \rho^{-\ell} \int^{\rho}_{ D_{\ell m}} x^{\ell+1} N^3(x) q_{\ell m}(x) \, dx.$$ For every harmonic mode, $ C_{\ell m}$ and $D_{\ell m}$ are constants of integration of the generic solution above, however if we use the fact that the $q_{\ell m}$ have compact support, we conclude that the choice $ C_{\ell m} =
\infty, D_{\ell m}=0$ is the only one that gives an appropriate asymptotic behavior near the horizon and spatial infinity, such that $p$ belongs to ${\mathcal{H}}$. This gives (after multiplication times $N$) the unique $p$ singled out by the Lax-Milgram theorem. Its projection $$\label{pl}
p_{\ell} := \sum_{m=-\ell}^{\ell} p_{\ell m}(\rho) Y_{\ell m}(\theta,\phi),$$ onto the $\ell$ subspace behaves as $$\label{beh}
p_{\ell} \sim \begin{cases} \rho^{\ell} & \text{ as }\rho \to 0^+, \\
\rho^{-\ell} & \text{ as }\rho \to \infty.
\end{cases}$$ Thus, generically, $p$ approaches a constant in both the $\rho \to \infty$ and the $\rho \to 0^+$ limits.
Lemma \[l:inverseA\] refers to the space variables on the cylinder $S$; however, the functions involved in the proof of existence of equations (\[tF\]) also depend on the time parameter $t$. The following remarks, which concern the $t$ dependence of the functions, are useful in the proofs. Let $q(t,s,\theta,\phi)$ be a smooth function on $\mathbb{R}\times S$ which has compact support on $S$ for every $t$. Note that this is the appropriate class of functions for our purposes, since such functions arise as solutions of the wave equation with smooth initial data of compact support. Let $p$ be the solution of $$\label{eq:1}
{\mathcal{A}}(p)=q.$$ From Lemma \[l:inverseA\] we deduce that $p$ is smooth on $S$. To obtain smoothness with respect to $t$ we take $t$ derivatives of equation (\[eq:1\]). The function $\dot p $, if it exists, should satisfy the equation $$\label{eq:2}
{\mathcal{A}}(\dot p )= \dot q.$$ However, by hypothesis $\dot q$ is smooth and has compact support on $S$, hence we can use Lemma \[l:inverseA\] to prove that $\dot p$ exists and is smooth on $S$. Taking an arbitrary number of $t$ derivatives, we conclude that $p(t,s,\theta,\phi)$ is smooth on $\mathbb{R}\times S$.
Partial derivatives with respect to $t$ clearly commute with ${\mathcal{A}}$; to prove that they also commute with ${\mathcal{A}}^{-1}$ for this class of functions, we write equations (\[eq:1\]) and (\[eq:2\]) as $$\begin{aligned}
\label{eq:3}
p &= {\mathcal{A}}^{-1} (q),\\
\dot p &= {\mathcal{A}}^{-1} (\dot q). \label{eq:3b}\end{aligned}$$ Taking a time derivative of (\[eq:3\]) and using equation (\[eq:3b\]) we obtain $$\label{eq:4}
\frac{\partial }{\partial t} {\mathcal{A}}^{-1} (q)={\mathcal{A}}^{-1} (\dot q).$$
We now have all the ingredients to prove that there is a solution to equations (\[tF\]). We emphasize that in the following proof we do not make use of the decay behaviour (\[beh\]); we only need the statement of Lemma \[l:inverseA\].
\[l:int-time\] For a given solution $F$ of equation (\[eqF\]) with initial data (\[idf\]) of compact support, there exists a solution $\tilde F$ of equations (\[tF\]), and the energies (\[em1\]) and (\[edelF\]) are finite and conserved.
Consider the function $$\label{ttF}
\tilde{\tilde F} = - {\mathcal{A}}^{-1} F.$$ The function $\tilde{\tilde F}$ exists and it is smooth by Lemma \[l:inverseA\], since $F$ and all its time derivatives have compact support in $S$ for all $t$. Note that $$\ddot{\tilde{\tilde F}} = -{\mathcal{A}}^{-1} \ddot F = {\mathcal{A}}^{-1} {\mathcal{A}}F = F = -{\mathcal{A}}\tilde{\tilde F}.$$ This equation shows that $\tilde{\tilde F}$ is a solution of the wave equation and a second time integral of $F$, i.e., $\ddot{\tilde{\tilde F}}=F$. This immediately implies that $$\tilde F = \dot{\tilde{\tilde F}},$$ is also a solution of the wave equation, and a first time integral of $F$, i.e., $\dot{\tilde F} =F$. This first time integral has finite energy, since $${\mathcal{E}}[\tilde F] = \int_{S} \left[ (\dot {\tilde F})^2 + \tilde F{\mathcal{A}}( \tilde F) \right] {\, dv}= \int_{S} \left[ F^2 + \tilde F{\mathcal{A}}( \tilde F) \right]{\, dv}\label{em1b},$$ and $${\mathcal{A}}\tilde F = \frac{\partial }{\partial t} {\mathcal{A}}\tilde{\tilde F} = -\dot F,$$ so both terms in the integrand in (\[em1b\]) have compact support. Note, however, that $$\label{eq:Etildetilde}
{\mathcal{E}}\left[\tilde{\tilde F}\right] = \int_{S} \left[ \left(\dot{\tilde{\tilde
F}}\right)^2 + \tilde{\tilde{F}}{\mathcal{A}}\tilde{\tilde{F}} \right] {\, dv}=\int_{S} \left[ ( {\tilde F})^2 - \tilde{\tilde F} F \right] {\, dv},$$ diverges as a consequence of the behaviour (\[beh\]) of $\tilde F = -{\mathcal{A}}^{-1}
\dot F$ in the first term in the integrand (equation (\[beh\]) applies to this case since $\dot F$ has compact support on $t-$slices.)
The conservation of $ {\mathcal{E}}[\tilde F] $ in (\[em1b\]) follows from $$\frac{d}{dt} {\mathcal{E}}[\tilde F] = \int_{S} \left[ 2 F \dot F + F{\mathcal{A}}\tilde F + \tilde F {\mathcal{A}}F \right]{\, dv}=
\int_{S} \left[ 2 F (\ddot{\tilde F} + {\mathcal{A}}\tilde F ) \right]{\, dv}= 0.$$ All the integrations by parts above are possible since one of the factors always has compact support. Note also that $$\label{eq:5}
\tilde F ={\mathcal{A}}^{-1} G,$$ where $G$ is the solution of the wave equation (\[eqF\]) with initial data of compact support $$\label{dataFt}
G(t=0) = -g, \quad \dot G (t=0) = {\mathcal{A}}(f).$$ The finiteness and conservation of ${\mathcal{E}}[\Delta
\tilde F]$ follow from similar arguments, using $[\Delta,{\mathcal{A}}]=0$.
It is also possible to prove the existence of $\tilde F$ directly from equation (\[eq:5\]) without constructing the second time integral $\tilde{\tilde
F}$. Namely, take the solution $G$ of the wave equation with data (\[dataFt\]). The function $G$ has compact support on $S$ for every $t$, hence there exists $\tilde F$ such that (\[eq:5\]) holds. We have chosen to construct $\tilde{\tilde F}$ first because this function could be useful in future applications. The existence of $\tilde F$ can also be proved by acting with ${\mathcal{A}}$ only on initial data (instead of on a function that depends on $t$, as in (\[eq:5\])). That is, consider the solution $\tilde F$ of the wave equation (\[eqF\]) with the following initial data (see equation (\[dataFt\])) $$\label{eq:6}
\tilde F(t=0)=-{\mathcal{A}}^{-1}(g), \quad \dot{\tilde F}(t=0)=f.$$ The initial data do not have compact support, but by finite speed of propagation the solution of the wave equation nevertheless exists and is smooth.
Proof of theorem \[t:1\]: bound on $\Phi$
-----------------------------------------
Equation (\[laplacian\]) subject to (\[id\]) is equivalent to (\[eqF\]) subject to (\[idf\]). Let $F_t(s,\theta,\phi)=F(t,s,\theta,\phi)$ be the restriction of $F$ to a $t$ slice. Since the slice is $S^2 \times {\mathbb R}$ and $F_t$ has compact support, the bound (\[sb\]) holds, and therefore $$|F_t(s,\theta,\phi)| \leq M \; \left( ||F_t|| + m^2\; || \partial_s^2 \, F_t || + || \Delta \, F_t || \right).$$ On the other hand, from (\[A\]), $$|| {\partial}_s^2 F_t || \leq || {\mathcal{A}}F_t || + V_1^{max} ||F_t|| + V_2^{max} || \Delta F_t||$$ where $V_{1,2}^{max}$ are the maxima of the positive potentials $V_{1,2}$, given in (\[Vbounds\]). Thus $$\begin{aligned}
\nonumber
|F_t(s,\theta,\phi)| & \leq M \; \left( m^2 \, || {\mathcal{A}}F_t ||+ (1+m^2 \, V_1^{max}) ||F_t|| +
(1+ m^2\, V_2^{max}) || \Delta F_t|| \right) \\
& \leq \tfrac{17 }{16} M \; \left( m^2 \, || {\mathcal{A}}F_t ||+ ||F_t|| + || \Delta F_t|| \right)\end{aligned}$$ However $$||F_t|| \leq \sqrt{{\mathcal{E}}[\tilde F]}, \;\;\; || {\mathcal{A}}F_t || \leq \sqrt{{\mathcal{E}}[\dot F]},\;\;\;
|| \Delta \, F_t || \leq \sqrt{{\mathcal{E}}[\Delta \tilde F]},$$ and the above energies are $t-$independent, thus $$\label{eq:bound}
|F(t,s,\theta,\phi)| \leq \tfrac{17 }{16} M\;
\left( m^2 \sqrt{{\mathcal{E}}[\dot F]} + \sqrt{{\mathcal{E}}[\Delta \tilde F]} + \sqrt{{\mathcal{E}}[\tilde F]} \right) =: C,$$ where $C$ is a constant, and (\[boundphi\]) follows.
It is important to note that if we attempt to replace $F$ by $\tilde F$ in the bound (that is, if we want to prove that $\tilde F$ is bounded), then the last term on the right hand side of (\[eq:bound\]) is $\sqrt{{\mathcal{E}}[\tilde{\tilde F}]}$, and we have seen that this energy is not bounded (see equation (\[eq:Etildetilde\])).
Proof of Theorem \[t:1\]: bounds on higher derivatives {#hd}
------------------------------------------------------
The metric (\[metric1\]) admits a four dimensional space of Killing vector fields, the span of $$\begin{aligned}
K_1 & = \cos (\phi) \; {\partial}_{\theta} - \cot(\theta) \; \sin(\phi) {\partial}_{\phi} \\
K_2 &= \sin (\phi) \; {\partial}_{\theta} + \cot(\theta) \; \cos(\phi) {\partial}_{\phi} \\ \label{killings}
K_3 &= {\partial}_{\phi} \\
K_4 &= {\partial}_t\end{aligned}$$ Since the wave operator $\Box$ commutes with Lie derivatives along Killing vector fields, given any solution $\Phi$ of equation (\[laplacian\]), $\pounds_{K_{i_1}} \cdots \pounds_{K_{i_j}}\Phi$ will also be a solution of this equation. Applying (\[boundphi\]) to this solution we obtain the bound $$\label{bpd1}
| \pounds_{K_{i_1}} \cdots \pounds_{K_{i_j}}\Phi | \leq \frac{C_{i_1 i_2...i_j}}{\rho+m},$$ where $C_{i_1 i_2...i_j}$ is a constant.\
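As a quick symbolic sanity check (added here for illustration; it is not part of the original argument), one can verify that the angular fields in (\[killings\]) are Killing fields of the round-sphere part of the metric (\[metric1\]), $d\theta^2+\sin^2\theta\,d\phi^2$, by computing the Lie derivative of the metric:

```python
import sympy as sp

theta, phi = sp.symbols('theta phi')
coords = (theta, phi)
g = sp.diag(1, sp.sin(theta)**2)  # angular part of the metric

def lie_derivative_of_metric(xi):
    # (L_xi g)_{ab} = xi^c d_c g_{ab} + g_{cb} d_a xi^c + g_{ac} d_b xi^c
    L = sp.zeros(2, 2)
    for a in range(2):
        for b in range(2):
            L[a, b] = sum(xi[c] * sp.diff(g[a, b], coords[c])
                          + g[c, b] * sp.diff(xi[c], coords[a])
                          + g[a, c] * sp.diff(xi[c], coords[b])
                          for c in range(2))
    return sp.simplify(L)

K1 = (sp.cos(phi), -sp.cot(theta) * sp.sin(phi))
K2 = (sp.sin(phi),  sp.cot(theta) * sp.cos(phi))
K3 = (sp.Integer(0), sp.Integer(1))
assert all(lie_derivative_of_metric(K) == sp.zeros(2, 2) for K in (K1, K2, K3))
```

($K_4={\partial}_t$ is a Killing field simply because the metric (\[metric1\]) is static.)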
At every point $p$ of the spacetime ${\cal M}$, the vectors (\[killings\]) span only a 3-dimensional subspace of the tangent space $T_p {\cal M}$, so equation (\[bpd1\]) fails to give a bound for derivatives of $\Phi$ along radial directions. To obtain a pointwise bound for ${\partial}_s F$, $F$ a solution of (\[eqF\]), we proceed as follows: equation (\[sb\]) applied to ${\partial}_s F$ gives $$\label{bFs}
|{\partial}_s F| \leq M \; \left( ||{\partial}_s F|| + m^2 \, || \partial_s^3 \, F || + || \triangle \, {\partial}_s F || \right).$$ The first term on the right hand side above has a $t-$independent bound given by the ${\mathcal{E}}[F]$, since $${\mathcal{E}}[F]\geq \int_{S} ( F {\mathcal{A}}F) \; dv \geq \int_{S} |\partial_s F|^2 dv.$$ The last term is similarly bounded by the energy ${\mathcal{E}}[\triangle F]$ (note that $[{\cal O},\triangle]=0$, then $\triangle F$ is a solution of the field equation if $F$ is a solution). To bound the second term, we use the fact that $[{\cal O},{\mathcal{A}}]=0$, and compute the energy of ${\mathcal{A}}F$: $${\mathcal{E}}[{\mathcal{A}}F] \geq \int_{S} ({\mathcal{A}}F){\mathcal{A}}({\mathcal{A}}F) dv \geq \int_{S} |\partial_s{\mathcal{A}}F|^2.$$ The last integrand is (see (\[A\])) $$\label{sder}
\partial_s{\mathcal{A}}F=-\partial^3_s F+F \partial_s V_1 +V_1\partial_s F - (\partial_s
V_2) \triangle F-V_2 \triangle \partial_s F$$ and it is easy to check that the functions $\partial_s V_i$ are bounded, say $|\partial_s V_i| \leq V_{i,s}^{max}$. Thus $$\begin{gathered}
\nonumber
||\partial^3_s F|| \leq || \partial_s{\mathcal{A}}F || + V_{1,s}^{max} ||F|| + V_{1}^{max} ||{\partial}_s F||
+ V_{2,s}^{max} ||\triangle F|| + V_{2}^{max} ||{\partial}_s \triangle F || \\
\leq \sqrt{{\mathcal{E}}[{\mathcal{A}}F]} + V_{1,s}^{max} \sqrt{{\mathcal{E}}[\tilde F]} + V_{1}^{max} \sqrt{{\mathcal{E}}[F]} +
V_{2,s}^{max} \sqrt{{\mathcal{E}}[\triangle \tilde F]} + V_{2}^{max} \sqrt{{\mathcal{E}}[ \triangle F ]}\end{gathered}$$ which is $t-$independent. We conclude that the right hand side of equation (\[bFs\]) can be bounded by a $t-$independent constant.
It is easier to find a pointwise bound for the second radial derivative, since using (\[A\]), $$|\partial^2_sF|\leq |{\mathcal{A}}F| + V_1^{max} |F| + V_2^{max} |\triangle F|.$$ and both ${\mathcal{A}}F$ and $\triangle F$ are solutions of (\[eqF\]), and therefore, pointwise bounded.
To bound $|{\partial}_s ^3 F|$ we may use (\[sder\]) $$|\partial^3_s F| \leq | \partial_s{\mathcal{A}}F | + V_{1,s}^{max} |F| + V_{1}^{max} |{\partial}_s F|
+ V_{2,s}^{max} |\triangle F| + V_{2}^{max} |{\partial}_s \triangle F |,$$ together with the fact that every function on the right hand side is either a solution of (\[eqF\]) with compact support on $t-$slices, or an $s-$derivative of such a solution, all of which are bounded. Higher $s-$derivatives can be bounded in this way by induction: take $(n-3)$ $s-$derivatives of equation (\[sder\]), this gives ${\partial}_s^n F$ in terms of lower $s-$derivatives of $F, {\mathcal{A}}F$ and $\triangle F$, all of which, being solutions of (\[eqF\]) with compact support on $t-$slices, are pointwise bounded by the inductive hypothesis. These terms come multiplied by higher $s-$derivatives of the $V_i$, but these can be easily shown to be bounded by noting that ${\partial}_s^k V_i$ is a polynomial in $z :=1/(r+m)$.\
### Transverse derivatives at the horizon {#sec:transv-deriv-at}
In [@Aretakis:2011hc], some transverse derivatives of $\Phi$ across the horizon were found to diverge along the horizon generators. In this Section we show that compact data fields, which belong to a subclass of the solutions studied in [@Aretakis:2011hc], are better behaved.
Since the coordinates $\{ t, \rho,\theta,\phi\}$ (or $\{ t, s,\theta,\phi\}$) cover only the exterior region of the black hole, we need to switch to advanced coordinates $\{ v=t+s,r=\rho+m,\theta,\phi \}$ in order to properly state this problem. Note that $$\begin{aligned}
\left. \frac{{\partial}}{{\partial}s}\right|_{\{ t,\theta,\phi \} } &= \left. \frac{{\partial}}{{\partial}v}\right|_{\{ r,\theta,\phi \} } + \left( \frac{r-m}{r} \right)^2
\left. \frac{{\partial}}{{\partial}r}\right|_{\{ v,\theta,\phi \} },\\
\left. \frac{{\partial}}{{\partial}t}\right|_{\{ s,\theta,\phi \} } &= \left. \frac{{\partial}}{{\partial}v}\right|_{\{ r,\theta,\phi \} },
\label{dv}\end{aligned}$$ become linearly dependent at the horizon and that the norm of $\left. \frac{{\partial}}{{\partial}s}\right|_{\{ t,\theta,\phi \} }$ vanishes when $r \to m^+$. Thus, although we have proved the pointwise boundedness of partial derivatives along the coordinates $\{t,s,\theta,\phi \}$, which are suitable and span the tangent space at any point outside the horizon, the study of the transverse derivatives $ \left. \frac{{\partial}}{{\partial}r}\right|_{\{ v,\theta,\phi \} }$ [*at the horizon*]{} requires a separate treatment.
In advanced coordinates, (\[metric1\]) reads $$\label{rnvr} ds^2 = -(1-m/r)^2 \; dv^2 + 2 dv dr + r^2 \; (d
\theta^2 + \sin^2 \theta \; d \varphi^2),$$ and the scalar wave equation is $$\label{boxvr}
\Box \Phi = \left( \frac{r-m}{r} \right)^2 {\partial}_r^2 \Phi + 2 \left(
\frac{r-m}{r^2} \right) {\partial}_r \Phi +2 {\partial}_r {\partial}_v \Phi +\frac{2}{r} {\partial}_v \Phi +
\frac{\triangle}{r^2} \Phi =0.$$
Theorem 1 in [@Aretakis:2011hc] states that for every $\ell$ there exists a set of constants $\beta_i$ such that the functions $H_{\ell} [\Phi]$, defined on the horizon $r=m$ as $$H_{\ell} [\Phi] := \left[{\partial}_r^{\ell +1} \Phi_{\ell}+ \sum_{i=0}^{\ell} \beta_i
{\partial}_r^{i} \Phi_{\ell} \right]_{r=m},$$ are constant along the horizon generators, i.e., they depend on $(\theta,\phi)$ but not on $v$. Here, $ \Phi_{\ell}$ is the projection of $\Phi$ onto the $2\ell+1$ dimensional $\ell$ harmonic space on ${\mathbb{S}^2}$ (as in equation (\[pl\])), which is itself a solution of the wave equation. This theorem implies that a generic solution $\Phi$ of the wave equation within the class studied in [@Aretakis:2011hc] does not admit a time integral solution $\tilde \Phi$ in the sense of (\[tF\]), as the existence of such a time integral would imply that $H_{\ell} [\Phi] = {\partial}_v H_{\ell} [\tilde \Phi] \equiv
0$, whereas the $H_{\ell} [\Phi] $ are generically non-trivial for these fields, as they evolve from data given on a surface crossing the horizon, and the data are generically non-trivial at the horizon. The reason why this result does not contradict Lemma \[l:int-time\] above lies in the fact that the solutions of compact data studied here are a subclass of the set studied in [@Aretakis:2011hc], and the $H_{\ell} [\Phi]$ trivially vanish for this subclass, as can be seen by taking the limit $v \to -\infty$ along horizon generators (where eventually $\Phi$ is trivial), and using the fact that $H_{\ell} [\Phi]$ does not depend on $v$.
Theorem 6 in [@Aretakis:2011hc] states that some transverse derivatives at the horizon blow up along generators, more precisely, $$\label{div}
\partial_r^{\ell+m+k} {\partial}_v^m \Phi_{\ell} \sim v^{k-1},
\text{ as } v \to \infty, \; k \geq 2, m\geq 0 ,$$ where the limit is taken along a horizon generator, i.e., $v \to \infty$ while keeping $r=m$, and $(\theta,\phi)$ fixed. The proof of this theorem, however, requires that the $H_{\ell} [\Phi]$ be non trivial, and so this result does not hold for compact data solutions.
The worst divergences of transverse derivatives along the horizon reported in [@Aretakis:2011hc] come from the $\ell=0$ piece of $\Phi$. Let us then assume, for simplicity, that $\Phi=\Phi_{\ell=0}$ is a spherically symmetric solution of (\[boxvr\]); then the last term in (\[boxvr\]) vanishes and, since the first two terms of (\[boxvr\]) also vanish at $r=m$, evaluating the equation at the horizon leaves $2{\partial}_r{\partial}_v\Phi+\tfrac{2}{m}{\partial}_v\Phi{\; \dot{=} \;}0$, i.e. $${\partial}_v ({\partial}_r \Phi + \tfrac{1}{m} \Phi) {\; \dot{=} \;}0,$$ where $ \dot{=}$ means “equal at the horizon”. Integrating this equation, we prove the constancy along the horizon generators of $$\label{h0}
H_{\ell=0} [\Phi]={\partial}_r \Phi + \tfrac{1}{m} \Phi,$$ which is one of the $H_{\ell}$ referred to above. Note that this conserved quantity implies the boundedness of ${\partial}_r \Phi$ at the horizon. If we take the $r$-derivative of equation (\[boxvr\]), evaluate this equation at the horizon, and then use the original equation (\[boxvr\]) (evaluated at the horizon) to eliminate the term ${\partial}_v {\partial}_r \Phi$ we obtain $$\begin{aligned}
{\partial}_v {\partial}_r^2 \Phi &= \tfrac{2}{m^2} {\partial}_v \Phi - \tfrac{1}{m^2} {\partial}_r \Phi,\\
&= \tfrac{2}{m^2} {\partial}_v \Phi - \tfrac{1}{m^2} H_{\ell=0}+\tfrac{1}{m^3}\Phi,\end{aligned}$$ where in the last equality we have used equation . Integrating this equation in $v$, and using (\[h0\]) gives $$\label{pr2h}
{\partial}_r^2 \Phi = {\partial}_r^2 \Phi |_{v_0} +
\frac{2}{m^2} \left( \Phi - \Phi|_{v_0} \right) - \frac{H_0}{m^2} (v-v_0)
+ \frac{1}{m^3} \int_{v_0}^v \Phi \; dv.$$ Since $|\Phi| \leq C v^{-3/5}$ for large $v$ along a horizon generator and some constant $C$ [@Aretakis:2011hc], it follows from (\[pr2h\]) that, if $H_0 \neq 0$, ${\partial}_r^2 \Phi \sim v$ along generators. However, for fields with compact data, $H_0=0$ in (\[pr2h\]) implies that $|{\partial}_r^2 \Phi| \leq C' v^{2/5}$ in this same limit, $C'$ a constant. This is a distinctive feature of compact data fields.\
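For completeness, here is the short estimate behind the $v^{2/5}$ bound (a check we add, valid once $v_0$ is large enough for the quoted decay $|\Phi| \leq C v^{-3/5}$ to hold): with $H_0=0$, the only potentially growing term in (\[pr2h\]) is the last one, and $$\left| \frac{1}{m^3} \int_{v_0}^v \Phi \; dv \right| \leq \frac{C}{m^3} \int_{v_0}^v x^{-3/5}\, dx = \frac{5C}{2m^3}\left( v^{2/5}-v_0^{2/5}\right),$$ while ${\partial}_r^2 \Phi |_{v_0}$ and $\tfrac{2}{m^2}\left( \Phi - \Phi|_{v_0} \right)$ remain bounded, so indeed $|{\partial}_r^2 \Phi| \leq C' v^{2/5}$ for large $v$.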
Now suppose that $\tilde \Phi$ belongs to the class studied by Aretakis. Then we could rewrite (\[pr2h\]) as $$\label{pr2ha}
{\partial}_r^2 \Phi = {\partial}_r^2 \Phi |_{v_0} +
\frac{2}{m^2} \left( \Phi - \Phi|_{v_0} \right)
+ \frac{1}{m^3} \left( \tilde \Phi - \tilde \Phi|_{v_0} \right),$$ and use boundedness of $| \tilde \Phi|$ to prove boundedness of ${\partial}_r^2 \Phi$ at the horizon. More generally, we could apply (\[div\]) to $\tilde \Phi$ and arrive at $$\label{div2}
\partial_r^{\ell+n+q} {\partial}_v^n \Phi_{\ell} \sim v^{q-3},
\text{ as } v \to \infty, \; (q \geq 4, n \geq 0) .$$
We do not have a proof that $\tilde \Phi$ could be extended to a field in the class of solutions in [@Aretakis:2011hc], as, in principle, $\tilde \Phi$ is only defined in the open set $r>m$. However, the facts that $\tilde \Phi
\sim r^{-1}$ near spacelike infinity (see (\[beh\])) and $E[\tilde \Phi]<
\infty$ suggest that such an extension exists. [^1] Note that it is unlikely that we could further extend these arguments to $\tilde{\tilde {\Phi}}$, since this field has divergent energy.
Acknowledgments {#acknowledgments .unnumbered}
===============
We would like to thank Lars Andersson, Pieter Blue, Mihalis Dafermos and Martin Reiris for illuminating discussions. We specially thank Stefanos Aretakis and Harvey Reall for pointing out errors in previous versions of this manuscript, and making comments that led to many improvements.
The authors are supported by CONICET (Argentina). This work was supported by grants PIP 112-200801-02479 and PIP 112-200801-00754 of CONICET (Argentina), Secyt 05/B384 and 30720110101131 from Universidad Nacional de Córdoba (Argentina), and a Partner Group grant of the Max Planck Institute for Gravitational Physics (Germany).
[10]{}
S. Aretakis. , 2010, 1006.0283.
S. Aretakis. . , 307:17–63, 2011, 1110.2007.
S. Aretakis. . , 12:1491–1538, 2011, 1110.2009.
S. Aretakis. . 2012, 1206.6598.
D. G. Boulware. Naked singularities, thin shells, and the reissner-nordström metric. , 8:2363–2368, Oct 1973.
B. Carter. Black hole equilibrium states. In [*Black holes/Les astres occlus (École d’Été Phys. Théor., Les Houches, 1972)*]{}, pages 57–214. Gordon and Breach, New York, 1973.
W. Couch and R. Torrence. Conformal invariance under spatial inversion of extreme [R]{}eissner-[N]{}ordström black holes. , 16:789–792, 1984. 10.1007/BF00762916.
M. Dafermos and I. Rodnianski. , 2008, 0811.0354.
M. Dafermos and I. Rodnianski. . 2010, 1010.5137.
J. Dimock and B. S. Kay. , 175:366, 1987.
G. Dotti and R. J. Gleiser. . , 27:185007, 2010, 1001.0152.
F. G. Friedlander. . Cambridge University, Cambridge, 1975.
D. Gilbarg and N. S. Trudinger. . Springer-Verlag, Berlin, 2001. Reprint of the 1998 edition.
S. W. Hawking and G. F. R. Ellis. . Cambridge University Press, Cambridge, 1973.
B. S. Kay and R. M. Wald. . , 4:893–898, 1987.
J. Lucietti and H. S. Reall. , 2012, 1208.1437.
V. Moncrief. . , D12:1526–1537, 1975.
A. Ori. The general solution for spherical charged dust. , 7(6):985, 1990.
T. Regge and J. A. Wheeler. . , 108:1063–1069, 1957.
R. M. Wald. Note on the stability of the schwarzschild metric. , 20(6):1056–1058, 1979.
R. M. Wald. Erratum: Note on the stability of the schwarzschild metric. , 21(1):218–218, 1980.
F. Zerilli. . , D9:860–868, 1974.
F. J. Zerilli. . , 24:737–738, 1970.
[^1]: We thank S. Aretakis and H. Reall for this observation.
|
---
abstract: 'In this paper, the effect of the intrinsic distribution of cosmological candles is investigated. We find that, in the case of a narrow distribution, the deviation of the observed modulus of sources from the expected central value can be estimated within a certain range. We thus introduce lower and upper limits of $\chi ^{2}$, $\chi _{\min }^{2}$ and $ \chi _{\max }^{2}$, to estimate cosmological parameters by applying the conventional minimizing $\chi ^{2}$ method. We apply this method to a gamma-ray burst (GRB) sample as well as to a combined sample including this GRB sample and an SN Ia sample. Our analysis shows that: a) if an intrinsic distribution of candles is assumed for the GRB sample, the effect of the distribution is obvious and should not be neglected; b) taking this effect into account leads to a poorer constraint on the cosmological parameter ranges. The analysis suggests that, when attempting to constrain the cosmological model with current GRB samples, the results tend to be worse than previously thought if the mentioned intrinsic distribution does exist.'
author:
- |
Y.-P. Qin$^{1,2}$[^1], B.-B. Zhang$^{1,3}$, Y.-M. Dong$^{1,3}$, F.-W. Zhang$^{2}$, H.-Z. Li$^{1,3}$, L.-W. Jia$^{1,3}$, L.-S. Mao$^{1,3}$, R.-J. Lu$^{1,2,3}$, T.-F. Yi$^{2}$, X.-H. Cui$^{1,3}$, and Z.-B. Zhang$^{1,3}$\
$^{1}$National Astronomical Observatories/Yunnan Observatory, Chinese Academy of Sciences, P. O. Box 110, Kunming,\
Yunnan, 650011, P. R. China\
$^{2}$Physics Department, Guangxi University, Nanning, Guangxi 530004, P. R. China\
$^{3}$The Graduate School of the Chinese Academy of Sciences
date: 'Accepted year mon day . Received year mon day; in original form 2005,June,13th'
title: Method of determining cosmological parameter ranges with samples of candles with an intrinsic distribution
---
\[firstpage\]
cosmological parameters — cosmology: observations — distance scale
Introduction
============
One of the greatest achievements obtained in the past few years in astrophysics is the determination of cosmological parameters with type Ia supernovae (SN Ia), which suggests an accelerating universe at large scales (Riess et al. 1998, Perlmutter et al. 1999, Tonry et al. 2003, Barris et al. 2004, Knop et al. 2003, Riess et al. 2004). The cosmic acceleration was also confirmed, independently of the SN Ia magnitude-redshift relation, by the observations of the cosmic microwave background anisotropies (WMAP: Bennett et al. 2003) and the large scale structure in the distribution of galaxies (SDSS: Tegmark et al. 2004a, 2004b). It is well known that all known types of matter with positive pressure generate attractive forces and decelerate the expansion of the universe. Given this, a dark energy component with negative pressure was generally suggested to be the invisible fuel that drives the current acceleration of the universe. There are a huge number of candidates for the dark energy component in the literature, such as a cosmological constant $\Lambda$ (Carroll et al. 1992), an evolving scalar field (referred to by some as quintessence: Ratra and Peebles 1988; Caldwell et al. 1998), the phantom energy, in which the sum of the pressure and energy density is negative (Caldwell 2002), the so-called “X-matter" (Turner and White 1997; Zhu 1998; Zhu, Fujimoto and Tatsumi 2001; Zhu, Fujimoto and He 2004b), the Chaplygin gas (Kamenshchik et al. 2001; Bento et al. 2002; Zhu 2004), the Cardassion model (Freese and Lewis 2002; Zhu and Fujimoto 2002, 2003, 2004; Zhu, Fujimoto and He 2004a), and the brane world model (Randall and Sundrum 1999a, 1999b; Deffayet, Dvali and Gabadadze 2002).
Samples of SN Ia sources available in the early analysis contain only sources with redshifts $z<1$. Although observations of the fluctuation in the cosmic microwave background (CMB) can constrain the cosmological model up to redshifts as high as $z\sim 1000$ (e.g., Spergel et al. 2003), a more direct measurement of the universe with objects located at very large distances is strongly desired. Fortunately, recent observations extended the SN Ia sample to sources with redshifts as large as $z=1.7$. The previous result was confirmed by these high redshift sources and the analysis revealed that before its acceleration the universe underwent a period of deceleration (Riess et al. 2004). The success of including high redshift SN Ia sources has inspired great efforts to search for cosmological rulers with much higher redshifts. Based on the $E_{p}-E_{\gamma }$ relation found recently in a class of gamma-ray bursts (GRBs) (Ghirlanda et al. 2004b), Dai et al. (2004) assumed that the GRB sources obeying this relation can be used to measure the universe. In their sample of 12 GRBs, two have redshifts $z>2$. Soon after their work, the same issue was investigated by many authors (see Ghirlanda et al. 2004a; Friedman and Bloom 2005; Firmani et al. 2005; Xu et al. 2005; Liang and Zhang 2005). It was found that current GRB data, which lack low redshift sources, could be used to marginalize some parameters in their reasonable ranges (see Xu et al. 2005 for a detailed explanation), or they could be employed to constrain the cosmological model with a new Bayesian method (Firmani et al. 2005). Although the size of the current GRB sample is small and low redshift sources are missing, the idea that some high redshift extragalactic sources other than SN Ia might be employed to determine the cosmological model is quite interesting and promising.
It would be natural that, for a kind of source which could serve as candles, one assumes a distribution of luminosity, which is reasonable due to fluctuation. As discussed in Kim et al. (2004), the uncertainty of a source must include both the systematic uncertainty and the magnitude dispersion. We argue that, if there exists a distribution of luminosity of the candles, the expected luminosity itself (or the corresponding deduced luminosity distance) could be different from source to source, which would be due to an intrinsic property rather than to the measurement uncertainty. This raises a topic of finding an appropriate method to estimate cosmological parameter ranges with candles with a certain distribution.
When employing candles such as SN Ia or GRBs to measure the universe, the confidence level associated with the fit of the theoretical curve to the luminosity distance data was described by a statistic $\chi ^{2}$ which is defined under the assumption that the measurement uncertainty is the only cause of the deviation of the data to the curve. The best fit will be obtained when one reaches the minimum value of $\chi ^{2}$. However, for candles with a certain distribution, the deviation of the observed luminosity from the expected curve must be caused by both the measurement uncertainty and the distribution itself. When taking into account the distribution of luminosity, the $\chi ^{2}$ statistic could not be defined if the distribution itself is unknown. The minimizing $\chi ^{2}$ method will not be applicable if the statistic itself cannot be defined.
In the following, we will study how to deal with this matter and investigate what one can expect from the analysis. A corresponding method will be proposed and will be illustrated with two samples.
The method
==========
In this section, we propose a method to deal with candles with a certain distribution when employing them to constrain the cosmological model. As mentioned above, the statistic $\chi ^{2}$ could not be defined for candles with a certain distribution if the distribution itself is unknown. Even if the distribution is known, the statistic is still undefinable since there is no way to know the real luminosity of each source. These difficulties lead to two problems. One is that the well-known minimizing $\chi ^{2}$ method could not be applicable without a definition of the statistic. The other is that the probability associated with the statistic $ \chi ^{2}$, if we define it when taking into account the deviation arising from the distribution, is not available (since the real luminosity of each source is unknown).
It is known that the convolution of two Gaussians is still a Gaussian, with a width given by the quadratic sum of the two widths of the original distributions. That is, $\sigma^2=\sigma_1^2+\sigma_2^2$, where $\sigma_1^2$ and $\sigma_2^2$ are the variances of the two Gaussian functions concerned and $\sigma^2$ is that of the resulting Gaussian.
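For completeness, this standard fact can be read off from the convolution integral (we add it here as a reminder; it is what justifies adding the two variances in quadrature below): $$\int_{-\infty}^{+\infty} \exp\left[-\frac{(x-y)^{2}}{2\sigma_{1}^{2}}\right] \exp\left[-\frac{y^{2}}{2\sigma_{2}^{2}}\right] dy \;\propto\; \exp\left[-\frac{x^{2}}{2(\sigma_{1}^{2}+\sigma_{2}^{2})}\right].$$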
Let us consider the deviation of an observed luminosity distance modulus, $%
\mu _{ob}$, of a source from the real value of the quantity, $\mu
_{th}$, which follows
$$\begin{aligned}
(\mu _{ob}\pm \sigma _{ob})-\mu _{th}(z;H_{0},\Omega _{m},\Omega
_{\Lambda }) \nonumber \\
=(\mu _{ob}\pm \sigma _{ob})-[\mu _{th,0}(z;H_{0},\Omega
_{m},\Omega _{\Lambda })+\Delta \mu _{th}],\end{aligned}$$
where $\sigma _{ob}$ is the measurement uncertainty of $\mu
_{ob}$, $\mu _{th,0}$ is the central value of $\mu _{th}$, which is the real value of the modulus expected in the case when there is no distribution of the candles, and $\Delta \mu _{th}$ represents the deviation of $\mu _{th}$ from $\mu _{th,0}$. Suppose that the distribution of candles is narrow enough so that the absolute value of the deviation of $\mu _{th}$ from $\mu _{th,0}$, $%
|\Delta \mu _{th}|$, is small. According to the error transform formula, the uncertainty of $\mu _{ob}$ relative to $\mu _{th,0}$ could be determined by $$\sigma _{ob,0}=\sqrt{\sigma _{ob}^{2}+(\Delta \mu _{th})^{2}}.$$Relative to the expected central moduli, the $\chi ^{2}$ statistic of a sample of the candles could be determined by$$\chi ^{2}=\underset{i}{\sum }\frac{[\mu _{ob,i}-\mu
_{th,0,i}(z;H_{0},\Omega _{m},\Omega _{\Lambda })]^{2}}{\sigma
_{ob,i}^{2}+(\Delta \mu _{th,i})^{2}}.$$\[Note that, in the case of SN Ia, $\sigma _{ob,i}^{2}$ should be replaced by $\sigma _{ob,i}^{2}+\sigma _{v}^{2}$, where $\sigma _{ob,i}$ is the uncertainty in the individual distance moduli deduced from the empirical relation between the light-curve shape and luminosity and $\sigma _{v}$ is the uncertainty associated with the dispersion in supernovae redshift (transformed to units of distance moduli) due to peculiar velocities (see Riess et al. 2004)\]
It seems that, with equation (3), one might be able to evaluate the $\chi ^{2} $ statistic. But because $\Delta \mu _{th,i}$ can in no way be known, this is unfortunately not the case. However, under the condition that the distribution of candles is narrow, we can estimate $\Delta \mu _{th,i}$ with the width of the distribution. Let $\widetilde{\sigma }_{dis}$ be the width of the distribution of $\mu _{th}/\mu _{th,0}$ (called the intrinsic distribution of the relative luminosity distance moduli). (Note that $\mu
_{th}/\mu _{th,0}$ should of course become unity when there is no deviation of $\mu _{th}$ from $\mu _{th,0}$). We assume $|\Delta
\mu _{th,i}|\simeq \widetilde{\sigma }_{dis}\mu _{th,0,i}$. Thus the $\chi ^{2}$ statistic could be estimated by$$\chi ^{2}\simeq \underset{i}{\sum }\frac{[\mu _{ob,i}-\mu
_{th,0,i}(z;H_{0},\Omega _{m},\Omega _{\Lambda })]^{2}}{\sigma _{ob,i}^{2}+%
\widetilde{\sigma }_{dis}^{2}\mu _{th,0,i}^{2}}.$$
As long as $\widetilde{\sigma }_{dis}$ is provided, the $\chi
^{2}$ statistic is then available according to (4). For any kind of candle, the quantity $\widetilde{\sigma }_{dis}$ could be estimated when the sample employed is large enough and the measurement uncertainty $\sigma _{ob}$ is small enough and when the cosmological model is fixed. Obviously, this could not be realized at present since the cosmological model itself is currently a target to be pursued and for interesting candles the measurement uncertainty is always quite large. But this cannot prevent one from estimating the limits of $\widetilde{\sigma }_{dis}$. As the deviation of $\mu _{ob}$ from $\mu _{th,0}$ is caused by both the distribution of $\mu _{th}$ and the measurement uncertainty of $\mu _{ob}$ itself, $\widetilde{\sigma }_{dis}$ must be smaller than $\widetilde{\sigma }_{dis,\max }$, where $\widetilde{%
\sigma }_{dis,\max }$ is the width of the distribution of $\mu _{ob}/\mu
_{th,0}$, which is determined by $\widetilde{\sigma }_{dis,\max }=\sqrt{%
\underset{i}{\sum }(\mu _{ob,i}/\mu _{th,0,i}-1)^{2}/(N-1)}$, with $N$ being the size of the sample. Let us overestimate the effect of the measurement uncertainty in the opposite way. Within the range of $[\mu _{ob,i}-\sigma
_{ob,i},\mu _{ob,i}+\sigma _{ob,i}]$ we take the value that is the closest one to $\mu _{th,0,i}$ as $\mu _{ob,i}^{\ast }$. Obviously, the distribution of $\mu _{ob}^{\ast }/\mu _{th,0}$ would be narrower than the distribution of $\mu _{th}/\mu _{th,0}$ since the deviation caused by the measurement uncertainty is over subtracted. We take the width of the distribution of $%
\mu _{ob}^{\ast }/\mu _{th,0}$ as $\widetilde{\sigma }_{dis,\min }$, which is calculated with $\widetilde{\sigma }_{dis,\min }=\sqrt{\underset{i}{\sum }%
(\mu _{ob,i}^{\ast }/\mu _{th,0,i}-1)^{2}/(N-1)}$. Clearly, $\widetilde{%
\sigma }_{dis}$ must be larger than $\widetilde{\sigma }_{dis,\min }$. With these two quantities we have$$\chi _{\min }^{2}\simeq \underset{i}{\sum }\frac{[\mu _{ob,i}-\mu
_{th,0,i}(z;H_{0},\Omega _{m},\Omega _{\Lambda })]^{2}}{\sigma _{ob,i}^{2}+%
\widetilde{\sigma }_{dis,\max }^{2}\mu _{th,0,i}^{2}}$$and$$\chi _{\max }^{2}\simeq \underset{i}{\sum }\frac{[\mu _{ob,i}-\mu
_{th,0,i}(z;H_{0},\Omega _{m},\Omega _{\Lambda })]^{2}}{\sigma _{ob,i}^{2}+%
\widetilde{\sigma }_{dis,\min }^{2}\mu _{th,0,i}^{2}}.$$Since $\widetilde{\sigma }_{dis,\min }<\widetilde{\sigma }_{dis}<\widetilde{%
\sigma }_{dis,\max }$, one gets $\chi _{\min }^{2}<\chi ^{2}<\chi
_{\max }^{2}$. With equations (5) and (6), one can calculate the corresponding probability associated with the $\chi ^{2}$ statistic and determine the conventional confidence contours. In this way, cosmological parameters would be constrained. With this estimating method, the first problem is largely eased and the second is solved.
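As a rough numerical illustration of the recipe above (a sketch we add; the array names and the helper itself are ours, not from the paper), the limits (5) and (6) can be evaluated as follows once the central moduli $\mu _{th,0,i}$ have been computed for a trial set of cosmological parameters:

```python
import numpy as np

def chi2_limits(mu_ob, sigma_ob, mu_th0):
    """Estimate chi^2_min and chi^2_max of equations (5)-(6).

    mu_ob, sigma_ob : observed distance moduli and their measurement errors
    mu_th0          : central theoretical moduli mu_th,0(z; H0, Omega_m, Omega_L)
    """
    N = len(mu_ob)
    # upper limit of the intrinsic width: full scatter of mu_ob/mu_th,0 around 1
    sigma_dis_max = np.sqrt(np.sum((mu_ob / mu_th0 - 1.0) ** 2) / (N - 1))
    # lower limit: inside [mu_ob - sigma_ob, mu_ob + sigma_ob] take the value
    # closest to mu_th,0 (this over-subtracts the measurement error)
    mu_ob_star = np.clip(mu_th0, mu_ob - sigma_ob, mu_ob + sigma_ob)
    sigma_dis_min = np.sqrt(np.sum((mu_ob_star / mu_th0 - 1.0) ** 2) / (N - 1))
    # equations (5) and (6)
    chi2_min = np.sum((mu_ob - mu_th0) ** 2
                      / (sigma_ob ** 2 + (sigma_dis_max * mu_th0) ** 2))
    chi2_max = np.sum((mu_ob - mu_th0) ** 2
                      / (sigma_ob ** 2 + (sigma_dis_min * mu_th0) ** 2))
    return chi2_min, chi2_max
```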
Application
===========
Let us consider a GRB sample. The sample was presented and studied in Xu et al. (2005) and Xu (2005) (the XDL GRB sample) which contains 17 GRBs. As suggested in Ghirlanda et al. (2004a), the scatter of the data points of their GRB sample around the correlation of $E_{p}-E_{\gamma }$ found recently (Ghirlanda et al. 2004b) is of a very small order.
To check if the data of the XDL GRB sample are consistent with no scatter beyond the measurement errors in terms of statistics, the simplest method is to calculate the mean of the deviation of the deduced luminosity distance moduli from the expected one of the sample and then compare it with the average of the measurement error. The mean of the deviation is defined as $\sigma_{dev}=\sqrt{%
\underset{i}{\sum }((\mu _{ob,i}-\mu _{ex,i})/\mu
_{ex,i})^{2}/(N-1)}$, where $\mu _{ex}$ is the expected value of $\mu$, while the average of the measurement error is calculated with $\sigma_{err}=\sqrt{%
\underset{i}{\sum }(\sigma_{ob,i}/\mu _{ex,i})^{2}/(N-1)}$. (Note that, as redshifts of these sources are not the same, we consider the relative values.) We get the following from the XDL sample: $\sigma_{dev}=0.0122$ and $\sigma_{err}=0.0116$, where we adopt $(\Omega _{m},\Omega _{\Lambda },h)=(0.29,0.71,0.65)$. It shows that the deviation is slightly larger than the measurement error. (Ignoring the slight difference between the two quantities, the result confirms what suggested in Ghirlanda et al. 2004a, 2004b.) Taking $\mu _{th,0}$ as $\mu _{ex}$ adopted here, one finds that $\sigma_{dev}$ is identical with $\widetilde{\sigma }_{dis,\max }$ defined in last section. Thus, for the XDL sample, $\widetilde{\sigma }_{dis}<0.0122$, suggesting that the distribution, if exists, would be quite narrow. Another approach involves a simulation analysis. We assume that there is no intrinsic distribution of the deduced luminosity distance moduli, and thus the deviation observed is due to the measurement uncertainty. Obviously, under this assumption the distribution of $\mu _{ob}/\mu _{ex}$ should peak at unity. According to the null hypothesis, the observed value of $\mu _{ex}$ for each source is obtained by chance from a parent population of $\mu _{ob}^{\prime
}$ whose distribution obeys a Gaussian with the measurement uncertainty serving as the width of the Gaussian. For each source one can create a $\mu _{ob}^{\prime }$ via simulation as long as the expected value $\mu _{ex}$ and the measurement uncertainty are known. In this way, from the 17 $\mu _{ex}$ and the corresponding measurement uncertainties, one can create a set of 17 $\mu
_{ob}^{\prime }$ data by a Monte-Carlo simulation and then obtain a set of 17 $\mu _{ob}^{\prime }/\mu _{ex}$ data. We perform the simulation 100 times and get 100 sets of 17 $\mu _{ob}^{\prime
}/\mu _{ex}$ data. Combining these 100 sets we get a large sample of size 1700. The deviation of the relative simulated luminosity distance moduli from the expected value (unity) is defined as $\sigma_{dev}^{\prime }=\sqrt{%
\underset{i}{\sum }(\mu _{ob,i}^{\prime }/\mu
_{ex,i}-1)^{2}/(N-1)}$. Note that $\sigma_{dev}^{\prime }$ could be written as $\sigma_{dev}^{\prime }=\sqrt{%
\underset{i}{\sum }((\mu _{ob,i}^{\prime }-\mu _{ex,i})/\mu
_{ex,i})^{2}/(N-1)}$, which could thus be directly compared with $\sigma_{dev}$, the deviation of the observed data defined above. From the XDL sample we get $\sigma_{dev}^{\prime }=0.0113$, which suggests that the deviation associated with observation, denoted by $\sigma_{dev}$, is also slightly larger than that expected from the measurement uncertainties. The two methods lead to almost the same result, suggesting that there might be an intrinsic distribution of the relative luminosity distance moduli of the XDL sample, although it would be quite narrow (as the difference between $\sigma_{dev}^{\prime }$ and $\sigma_{dev}$ and that between $\sigma_{err}$ and $\sigma_{dev}$ are small).
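A compact sketch of the two consistency checks just described (added by us for illustration; the variable names are ours) could look as follows:

```python
import numpy as np

def deviation_statistics(mu_ob, sigma_ob, mu_ex, n_sets=100, seed=0):
    """Compare the observed scatter with the measurement errors and with a
    Monte-Carlo null hypothesis (no intrinsic distribution of the moduli)."""
    rng = np.random.default_rng(seed)
    N = len(mu_ob)
    # observed relative deviation and average relative measurement error
    sigma_dev = np.sqrt(np.sum(((mu_ob - mu_ex) / mu_ex) ** 2) / (N - 1))
    sigma_err = np.sqrt(np.sum((sigma_ob / mu_ex) ** 2) / (N - 1))
    # null hypothesis: each mu_ob' is drawn from a Gaussian centred on mu_ex
    # with the measurement uncertainty as its width; combine n_sets such sets
    mu_sim = rng.normal(loc=mu_ex, scale=sigma_ob, size=(n_sets, N))
    sigma_dev_sim = np.sqrt(np.sum((mu_sim / mu_ex - 1.0) ** 2)
                            / (n_sets * N - 1))
    return sigma_dev, sigma_err, sigma_dev_sim
```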
To illustrate how to apply the method proposed above to deal with data with intrinsic distributions, we assume in the following that there is a distribution of the true value of the deduced relative luminosity distance moduli for the XDL sample, although the distribution, if it exists, might be very narrow (see the discussion above). For the sake of comparison, we perform the fit with three $\chi ^{2}$ statistics. One is the conventional $\chi
^{2}$ which could be determined by (3) when taking $\Delta \mu
_{th,i}=0$. The other two are $\chi _{\min }^{2}$ and $\chi _{\max
}^{2}$ which are determined by equations (5) and (6) respectively. Each $\chi ^{2}$ statistic is calculated with the XDL GRB sample in many tries. In each try, we adopt a set of parameters and based on these parameters we deduce both the observed and theoretical luminosity distance moduli. With these moduli and the measurement uncertainties, we are able to evaluate $\widetilde{\sigma
}_{dis,\min }$ and $\widetilde{\sigma }_{dis,\max }$ (see the previous section), and then the corresponding $\chi ^{2}$ statistic is well determined ($H_{0} = 65~{\rm km\,s^{-1}\,Mpc^{-1}}$ is adopted throughout this paper). For each $\chi ^{2}$, the best fit will be obtained when the smallest value is reached.
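For concreteness, here is a minimal version of such a parameter scan (a sketch we add; `distance_modulus` is a simple stand-in for the standard luminosity-distance modulus, `chi2_limits` is the helper sketched in the previous section, and for the GRB sample the observed moduli would in addition have to be re-deduced at every trial parameter set, which is omitted here):

```python
import numpy as np

C_KM_S = 299792.458  # speed of light in km/s

def distance_modulus(z, Om, OL, H0=65.0):
    """Luminosity-distance modulus of a (possibly curved) FLRW model.

    Uses a simple trapezoidal integration of 1/E(z); accurate enough
    for an illustration.
    """
    z = np.atleast_1d(np.asarray(z, dtype=float))
    Ok = 1.0 - Om - OL
    mu = np.empty_like(z)
    for i, zi in enumerate(z):
        zz = np.linspace(0.0, zi, 512)
        Ez = np.sqrt(Om * (1 + zz) ** 3 + Ok * (1 + zz) ** 2 + OL)
        chi = 0.5 * np.sum((1.0 / Ez[1:] + 1.0 / Ez[:-1]) * np.diff(zz))
        if Ok > 1e-8:
            chi = np.sinh(np.sqrt(Ok) * chi) / np.sqrt(Ok)
        elif Ok < -1e-8:
            chi = np.sin(np.sqrt(-Ok) * chi) / np.sqrt(-Ok)
        dL = (1 + zi) * (C_KM_S / H0) * chi  # luminosity distance in Mpc
        mu[i] = 5.0 * np.log10(dL) + 25.0
    return mu

def scan_parameters(z, mu_ob, sigma_ob, Om_grid, OL_grid, use_max=True):
    """Grid search for the (Omega_m, Omega_Lambda) minimizing chi^2_max
    (or chi^2_min); chi2_limits is the helper sketched earlier."""
    best_val, best_params = np.inf, None
    for Om in Om_grid:
        for OL in OL_grid:
            mu_th0 = distance_modulus(z, Om, OL)
            c_min, c_max = chi2_limits(mu_ob, sigma_ob, mu_th0)
            val = c_max if use_max else c_min
            if val < best_val:
                best_val, best_params = val, (Om, OL)
    return best_params, best_val
```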
Displayed in Fig. 1 are the Hubble diagram and the confidence contour plot of the XDL GRB sample. As concluded previously by other authors (see Ghirlanda et al. 2004a; Friedman and Bloom 2005; Xu et al. 2005), employing GRB samples alone currently cannot tightly constrain the cosmological model. Fig. 1 shows that the parameter ranges are indeed poorly constrained even if there is no intrinsic distribution of the relative luminosity distance moduli (see solid lines in Fig. 1b). Taking into account an intrinsic distribution of the moduli leads to much poorer results. This indicates that if there indeed exists an intrinsic distribution of the moduli, the effect arising from the distribution should not be ignored.
Shown in Table 1 are the best fit cosmological parameters for the three kinds of universe, obtained by applying the minimizing $\chi
^{2}$ method to the three $\chi ^{2}$ statistics, where the $1\sigma$ errors are estimated from the corresponding $1\sigma$ contours in Fig. 1b. As shown in Fig. 1b, the $1\sigma$ contours are not closed within the ranges of the plot. This leads to a poor constraint on the limits of the parameters. Some limits therefore cannot be determined; these are denoted by “?” in Table 1.
Sample Universe $(\Omega_{M},\Omega_{\Lambda},\chi_{0,\nu}^2)$ [^2] $(\Omega_{M},\Omega_{\Lambda},\chi_{max,\nu}^2)$ $(\Omega_{M},\Omega_{\Lambda},\chi_{min,\nu}^2)$
-------- ---------- ------------------------------------------------------------------ ---------------------------------------------------------------- -----------------------------------------------------------------
SN+GRB flat $(0.283^{+0.0314}_{-0.0288},0.717, 197.9)$ $(0.288^{+0.0134}_{-0.0201},0.712, 193.1)$ $(0.288^{+0.0138}_{-0.000375},0.712, 185.4)$
SN+GRB open $(0.368^{+0.127}_{-0.114},0.857^{+0.371}_{-0.170}, 197.0)$ $(0.281^{+0.0201}_{-0.0201},0.717^{+0.0198}_{-0.0353}, 193.2)$ $(0.281^{+0.00669}_{-0.0134},0.717^{+0.0100}_{-0.0201}, 185.4)$
SN+GRB closed   $(0.281^{+0.0334}_{-0.0469},0.717^{+0.0296}_{-0.0804}, 198.0)$     $(0.428^{+0.147}_{-0.161},0.942^{+0.226}_{-0.246}, 191.3)$      $(0.441^{+0.147}_{-0.0201},0.967^{+0.226}_{-0.0351}, 183.1)$
GRB flat $(0.188^{+0.200}_{-0.114},0.812, 19.69)$ $(0.188^{+1.176}_{-?},0.812, 15.14)$ $(0.188^{+1.539}_{-?},0.812, 7.43)$
GRB open $(0.187^{+0.221}_{-?},0.682^{+0.221}_{-?}, 19.67)$ $(0.187^{+?}_{-?},0.256^{+?}_{-?}, 14.96)$ $(0.154^{+?}_{-?},0.391^{+?}_{-?}, 7.40)$
GRB closed $(0.187^{+0.201}_{-0.114},0.817^{+0.401}_{-0.206}, 19.67)$ $(0.187^{+1.177}_{-?},0.817^{+0.551}_{-?}, 15.14)$ $(0.187^{+1.54}_{-?},0.817^{+0.59}_{-?}, 7.44)$
The fact that the parameter ranges are poorly constrained (even when the intrinsic distribution of the relative luminosity distance moduli is ignored) is probably due to the lack of low redshift sources, as it is already known that low redshift sources are important when employing a GRB sample to constrain the cosmological parameters (see Firmani et al. 2005). We thus follow what was done previously (see Ghirlanda et al. 2004a) and combine an SN Ia sample and the XDL sample to constrain the cosmological model. The SN Ia sample employed is that presented in Riess et al. (2004) (the so-called gold set of SN Ia), which contains 157 sources (many of them at low redshift). In the same way and for the same reason we apply the minimizing $\chi
^{2}$ method to the three $\chi ^{2}$ statistics to find the best fit cosmological parameters. Note that, unlike what is shown in the case of the GRB sample, the deduced luminosity distance moduli of the SN Ia sources do not depend on the adopted cosmological parameters.
It is known that, in estimating the deduced luminosity distance moduli of the SN Ia sources, deviations caused by different magnitudes of the peak luminosity of the sources have been checked. Indeed, we find that the distribution of the relative luminosity distance moduli of the SN Ia sample is very narrow (the figure is omitted). This suggests that, if it still exists (possibly caused by the small deviation from the adopted empirical relation between the light-curve shape and luminosity), the intrinsic distribution must be extremely narrow. Thus we ignore the intrinsic distribution of the relative luminosity distance moduli of the SN Ia sample, and consider only $\widetilde{\sigma
}_{dis,\min }$ and $\widetilde{\sigma }_{dis,\max }$ for the GRB sample when we calculate the corresponding $\chi _{\min }^{2}$ and $\chi _{\max }^{2}$ for the combined sample (including the XDL GRB sample and the gold SN Ia sample).
Shown in Fig. 2 are the Hubble diagram and the confidence contour plot of the combined sample. One finds that including the SN Ia sample significantly improves the constraint on the ranges of cosmological parameters. Once more, the result shows that taking into account the intrinsic distribution of the relative luminosity distance moduli leads to a poorer constraint. The effect is still obvious (although less so than when adopting the GRB sample alone) and therefore should not be neglected.
The resulting best fit cosmological parameters as well as their $1\sigma$ errors are listed in Table 1 as well.
Discussion and conclusions
==========================
The effect of the intrinsic distribution of cosmological candles is investigated in this paper. Due to fluctuation, it is natural that a property (say, the luminosity) of sources serving as a cosmological candle might form a distribution and scatter around a central value. If the distribution does exist, the statistic $\chi
^{2}$ cannot be defined since the distribution itself is unclear and the real value of the property for each source is unknown. However, when the distribution is narrow, the deviation of the observed modulus of each source from the central value can be estimated within a certain range. We accordingly define lower and upper limits of $\chi ^{2}$, $\chi _{\min }^{2}$ and $ \chi
_{\max }^{2}$, to estimate cosmological parameters via the conventional minimizing $\chi ^{2}$ method. The confidence contours of these two $\chi ^{2}$ statistics can then be plotted in the conventional way, and with these curves the ranges of the parameters could be determined as long as a confidence level is assigned.
With this method, a sample bearing a relatively small width of the intrinsic distribution of the deduced relative luminosity distance moduli would be applicable to constraining the cosmological parameters. To illustrate this method we employ a GRB sample alone and later combine this GRB sample with the gold SN Ia sample, assuming that this GRB sample (the XDL sample) has an intrinsic distribution of the deduced relative luminosity distance moduli while the SN Ia sample does not. The analysis suggests that: a) the effect of the intrinsic distribution of the relative luminosity distance moduli is obvious and therefore should not be neglected if the distribution itself does exist; b) taking into account this effect would lead to a poorer constraint on the ranges of cosmological parameters. This indicates that, when attempting to constrain the cosmological model with GRB samples, the results tend to be worse than previously thought if the mentioned intrinsic distribution exists, even though the distribution is very narrow.
As revealed recently by Wang et al. (2005), there is a clear evidence for a tight linear correlation between peak luminosities of SN Ia and their $B-V$ colors at $\sim $ 12 days after the $B$ maximum. They found that this empirical correlation allows one to reduce scatters in estimating their peak luminosities from $\sim $ 0.5 mag to the levels of 0.18 and 0.12 mag in the $%
V$ and $I$ bands, respectively. We wonder if taking into account this effect can reduce the measurement uncertainty of the luminosity distance of the SN Ia sources. If so, the ranges of the cosmological parameters might be better constrained (when compared with Fig. 2) (this will be investigated later).
As encountered in other cases, our method suffers from possible evolution of candles. Quite recently, Firmani et al. (2004) found evidence supporting an evolving luminosity function of long GRBs, where the luminosity scales as $%
(1+z)^{1.0\pm0.2}$. It is unclear if the corrected gamma-ray energy, from which the luminosity distance moduli of the adopted GRB sample are deduced, evolves with redshift. If so, the question of whether the GRB sample can still be used to constrain the cosmological model should be answered. This deserves a detailed investigation. (It can be done only when the size of the sample is large enough.)
Acknowledgments {#acknowledgments .unnumbered}
===============
We thank Profs. K. S. Cheng, C. Firmani, Z. G. Dai, Y.-Q. Lou, Y.-F. Huang, and Z.-H. Zhu for their helpful suggestions and comments. This work was supported by the Special Funds for Major State Basic Research Projects (973) and National Natural Science Foundation of China (No. 10273019).
[99]{} Bennett, C. L., Halpern, M., Hinshaw, G., Jarosik, N. et al. 2003, ApJS, 148, 1 Bento, M. C., Bertolami, O and Sen, A. A. 2002, PhRvD, 66, 043507 Caldwell, R. 2002, Phys.Lett.B, 545, 23 Caldwell, R., Dave, R., and Steinhardt, P. J. 1998, PhRvL, 80, 1582 Carroll, S., Press, W. H. and Turner, E. L. 1992, ARA&A ,30, 499 Dai, Z. G., Liang, E. W., and Xu, D 2004, ApJ, 612, L101 Deffayet, C., Dvali, G. and Gabadadze, G. 2002, PhRvD,, 65, 044023 Freese, K. and Lewis, M. 2002, Phys.Lett.B, 540, 1 Firmani, C., Avila-Reese, V., Ghisellini, G., Tutukov, A. V. 2004, ApJ, 611, 1033 Firmani, C., Ghisellini, G., Ghirlanda, G. and Avila-Reese,V. 2005, MNRAS, in press (astro-ph/0501395) Friedman, A. S., Bloom, J. S. 2005, ApJ, in press (astro-ph/0408413) Ghirlanda, G., Ghisellini, G., Lazzati, D., Firmani, C. 2004a, ApJ, 613, L13 Ghirlanda, G., Ghisellini, G., Lazzati, D. 2004b, ApJ, 616, 331 Kamenshchik, A., Moschella, U. and Pasquier, V. 2001, Phys.Lett.B, 511, 265 Kim, A. G., Linder, E. V., Miquel, R., Mostek, N. 2004, MNRAS, 347, 909 Knop, R. A.,Aldering, G.; Amanullah, R.; Astier, P. et al. 2003, ApJ, 598, 102 Liang, E., Zhang, B. 2005, ApJ, in press \[astro-ph/0504404\] Perlmutter, S., Aldering, G., Goldhaber, G., Knop, R. A., Nugent, P. et al. 1999, ApJ, 517, 565 Ratra, B. and Peebles, P. J. E. 1988, PhRvD,, 37, 3406 Randall, L. and Sundrum, R. 1999a, PhRvL, 83, 3370 Randall, L. and Sundrum, R. 1999b, PhRvL, 83, 4690, Riess, A. G., Filippenko, A. V., Challis, P., Clocchiatti, A., Diercks, A. et al. 1998, AJ, 116, 1009 Riess, A. G., Strolger, L.-G., Tonry, J., Casertano, S., Ferguson, H. C. et al. 2004, ApJ, 607, 665 Spergel, D. N., Verde, L., Peiris, H. V., Komatsu, E., Nolta, M. R. et al. 2003, ApJS, 148, 175 Tegmark, M., Strauss, M. A., Blanton, M. R., Abazajian, K. et al. 2004a, PhRvD, 69, 103501 Tegmark, M. B., Michael R., Strauss, M. A., Hoyle, F. et al.2004b, ApJ, 606, 702 Tonry, J. L., Schmidt, B. P., Barris, B., Candia, P. et al. 2003, ApJ, 594, 1 Turner, M. S. and White, M. 1997, PhRvD, 56, R4439 Wang, X., Wang, L., Zhou, X., Lou, Y.-Q., Li, Z. 2005, ApJ, inpress Xu, D, 2005, astro-ph/0504052 Xu, D, Dai, Z. G., Liang, E. W. 2005, ApJ, accepted (astro-ph/0501458) Zhu, Z. -H. 1998 A&A, 338, 777 Zhu, Z. -H. 2004 A&A, 423, 421 Zhu, Z. -H. and Fujimoto, M. -K. 2002, ApJ, 581, 1 Zhu, Z. -H. and Fujimoto, M. -K. 2003, ApJ, 585, 52 Zhu, Z. -H. and Fujimoto, M. -K. 2004, ApJ, 602, 12 Zhu, Z. -H., Fujimoto, M. -K. and He, X. -T. 2004a, ApJ, 603, 365. Zhu, Z. -H., Fujimoto, M. -K. and He, X. -T. 2004b A&A, 417, 833. Zhu, Z. -H., Fujimoto, M. -K. and Tatsumi, D. 2001, A&A, 372, 377
\[lastpage\]
[^1]: E-mail: ypqin@ynao.ac.cn
[^2]: $\chi_{0,\nu}^2$ is the reduced $\chi^2$ calculated with equation (4) when assigning $\widetilde{\sigma }_{dis}=0$.
|
---
author:
- 'T. Haines, B.C. Ngô'
title: Nearby cycles for local models of some Shimura varieties
---
Introduction
============
For certain classical groups $G$ and certain minuscule coweights $\mu$ of $G$, M. Rapoport and Th. Zink have constructed a projective scheme $M(G,\mu)$ over ${\Bbb Z}_p$ that is a local model for singularities at $p$ of some Shimura variety with level structure of Iwahori type at $p$. Locally for the étale topology, $M(G,\mu)$ is isomorphic to a natural ${\Bbb Z}_p$-model ${\mathcal M}(G,\mu)$ of the Shimura variety.
The semi-simple trace of the Frobenius endomorphism on the nearby cycles of ${\mathcal M}(G,\mu)$ plays an important role in the computation of the local factor at $p$ of the semi-simple Hasse-Weil zeta function of the Shimura variety, see [@Rapoport]. We can recover the semi-simple trace of Frobenius on the nearby cycles of ${\mathcal M}(G,\mu)$ from that of the local model $M(G,\mu)$, see loc.cit. Thus the problem of calculating the function $$x\in M(G,\mu)({\Bbb F}_q)\mapsto{\mathrm Tr}^{ss}({\mathrm Fr}_q,{\mathrm R}\Psi({{\bar{\Bbb Q}}_\ell})_x)$$ arises naturally. R. Kottwitz has conjectured an explicit formula for this function.
To state this conjecture, we note that the set of ${\Bbb F}_q$-points of $M(G,\mu)$ can be naturally embedded as a finite set of Iwahori-orbits in the affine flag variety of $G({\Bbb F}_q(\!(t)\!))$ $$M(G,\mu)({\Bbb F}_q)\subset G\bigl({\Bbb F}_q(\!(t)\!)\bigr)/I$$ where $I$ is the standard Iwahori subgroup of $G\bigl({\Bbb F}_q(\!(t)\!)\bigr)$.
[**Conjecture** ]{}([**Kottwitz**]{}) [*For all $x\in M(G,\mu)({\Bbb F}_q)$*]{}, $${\mathrm Tr}^{ss}({\mathrm Fr}_q,{\mathrm R}\Psi({{\bar{\Bbb Q}}_\ell})_x)=q^{\langle\rho,\mu\rangle }z_\mu(x).$$
Here $q^{\langle\rho,\mu\rangle }z_\mu(x)$ is the unique function in the center of the Iwahori-Hecke algebra of $I$-bi-invariant functions with compact support in $G\bigl({\Bbb F}_p(\!(t)\!)\bigr)$, characterized by $$q^{\langle\rho,\mu\rangle }z_\mu(x)*{\Bbb I}_K={\Bbb I}_{K\mu K}.$$ Here $K$ denotes the maximal compact subgroup $G({\Bbb F}_q[[t]])$ and ${\Bbb I}_{K\mu K}$ denotes the characteristic function of the double-coset corresponding to a coweight $\mu$.
Kottwitz’ conjecture was first proved for the local model of a special type of Shimura variety with Iwahori type reduction at $p$ attached to the group ${\mathrm GL}(d)$ and minuscule coweight $(1,0^{d-1})$ (the “Drinfeld case”) in [@Haines2]. The method of that paper was one of direct computation: Rapoport had computed the function ${\mathrm Tr}^{ss}({\mathrm Fr}_q,{\mathrm R}\Psi({\bar {\Bbb Q}_\ell})_x)$ for the Drinfeld case (see [@Rapoport]), and so the result followed from a comparison with an explicit formula for the Bernstein function $z_{(1,0^{d-1})}$. More generally, the explicit formula for $z_\mu$ in [@Haines2] is valid for any minuscule coweight $\mu$ of any quasisplit $p$-adic group. Making use of this formula, U. Görtz verified Kottwitz’ conjecture for a similar Iwahori-type Shimura variety attached to $G = {\mathrm GL}(4)$ and $\mu = (1,1,0,0)$, by computing the function ${\mathrm Tr}^{ss}({\mathrm Fr}_q,{\mathrm R}\Psi({\bar {\Bbb Q}_\ell})_x)$ for $x$ ranging over all 33 strata of the corresponding local model $M(G,\mu)$.
Shortly thereafter, A. Beilinson and D. Gaitsgory were motivated by Kottwitz’ conjecture to attempt to produce all elements in the center of the Iwahori-Hecke algebra geometrically, via a nearby cycle construction. For this they used Beilinson’s deformation of the affine Grassmanian: a space over a curve $X$ whose fiber over a fixed point $x \in X$ is the affine flag variety of the group $G$, and whose fiber over every other point of $X$ is the affine Grassmanian of $G$. In [@Gaitsgory] Gaitsgory proved a key commutativity result (similar to our Proposition 21) which is valid for any split group $G$ and any dominant coweight, in the function field setting. His result also implies that the semi-simple trace of Frobenius on nearby cycles (of a $K$-equivariant perverse sheaf on the affine Grassmanian) corresponds to a function in the center of the Iwahori-Hecke algebra of $G$.
The purpose of this article is to give a proof of Kottwitz’ conjecture for the cases $G = {\mathrm GL}(d)$ and $G = {\mathrm GSp}(2d)$. In fact we prove a stronger result (Theorem 11) which applies to arbitrary coweights, and which was also conjectured by Kottwitz (although only the case of minuscule coweights seems to be directly related to Shimura varieties).
[**Main Theorem**]{} [*Let $G$ be either ${\mathrm GL}(d)$ or ${\mathrm GSp}(2d)$. Then for any dominant coweight $\mu$ of $G$, we have*]{} $${\mathrm Tr}^{ss}({\mathrm Fr}_q,{\mathrm R}\Psi^{M}({\mathcal A}_{\mu,\eta})) = (-1)^{2\langle\rho,\mu\rangle}\sum_{\lambda \leq \mu}m_\mu(\lambda)z_{\lambda}.$$
Here $M$ is a member of an increasing family of schemes $M_{n_\pm}$ which contains the local models of Rapoport-Zink; the generic fiber of $M$ can be embedded in the affine Grassmanian of $G$, and ${\mathcal A}_{\mu,\eta}$ denotes the $K$-equivariant intersection complex corresponding to $\mu$. The special fiber of $M$ embeds in the affine flag variety of $G({\bar{ {\Bbb F}}}_q(\!(t)\!))$ so we can think of the semi-simple trace of Frobenius on nearby cycles as a function in the Iwahori-Hecke algebra of $G$.
While the strategy of proof is similar to that of Beilinson and Gaitsgory, in order to get a statement which is valid over all local non-Archimedean fields we use a somewhat different model, based on spaces of lattices, in the construction of the schemes $M_{n_\pm}$ (we have not determined the precise relation between our model and that of Beilinson-Gaitsgory). This is necessary to compensate for the lack of an adequate notion of affine Grassmannian over $p$-adic fields. The union of the schemes $M_{n_\pm}$ can be thought of as a $p$-adic analogue of Beilinson’s deformation of the affine Grassmannian.
We would like to thank G. Laumon and M. Rapoport for generous advice and encouragement. We would like to thank A. Genestier who has pointed out to us a mistake occurring in the first version of this paper. We thank Robert Kottwitz for explaining the argument of Beilinson and Gaitsgory to us and for helpful conversations about this material.
T. Haines acknowledges the hospitality and support of the Institut des Hautes Études Scientifiques in Bures-sur-Yvette in the spring of 1999, when this work was begun. He is partially supported by an NSF-NATO Postdoctoral fellowship, and an NSERC Research grant.
B.C. Ngô was visiting the Max Planck Institut fuer Mathematik during the preparation of this article.
Rapoport-Zink local models
==========================
Some definitions in the linear case
-----------------------------------
Let $F$ be a local non-Archimedean field. Let ${\mathcal O}$ denote the ring of integers of $F$ and let $k={\Bbb F}_q$ denote the residue field of ${\mathcal O}$. We choose a uniformizer $\varpi$ of ${\mathcal O}$. We denote by $\eta$ the generic point of $S={\mathrm Spec\,}({\mathcal O})$ and by $s$ its closed point.
For $G={\mathrm GL}(d)$ and for $\mu$ the minuscule coweight $$(\underbrace{1,\ldots,1}_r,\underbrace{0,\ldots,0}_{d-r})$$ with $1\leq r\leq d-1$, the local model $M_{\mu}$ represents the functor which associates to each ${\mathcal O}$-algebra $R$ the set of $L_\bullet=(L_0,\ldots,L_{d-1})$ where $L_0,\ldots,L_{d-1}$ are $R$-submodules of $R^d$ satisfying the following properties
- $L_0,\ldots,L_{d-1}$ are locally direct factors of corank $r$ in $R^d$,
- $\alpha'(L_0)\subset L_1,\,\alpha'(L_1)\subset L_2,\ldots,\,
\alpha'(L_{d-1})\subset L_0$ where $\alpha'$ is the matrix $$\alpha'=\pmatrix{0& 1& & \cr
& \ddots& \ddots & \cr
& & 0 & 1\cr
\varpi& & & 0}$$
The projective $S$-scheme $M_{\mu}$ is a local model for the singularities at $p$ of certain Shimura varieties for unitary groups with level structure of Iwahori type at $p$ (see [@Rapoport],[@Rapoport-Zink]).
Following a suggestion of G. Laumon, we introduce a new variable $t$ and rewrite the moduli problem of $M_{\mu}$ as follows. Let $M_{\mu}(R)$ be the set of $L_{\bullet}=(L_0,\ldots,L_{d-1})$ where $L_0,\ldots,L_{d-1}$ are $R[t]$-submodules of $R[t]^d/tR[t]^d$ satisfying the following properties
- as $R$-modules, $L_0,\ldots,L_{d-1}$ are locally direct factors of corank $r$ in $R[t]^d/tR[t]^d$,
- $\alpha(L_0)\subset L_1,\,\alpha(L_1)\subset L_2,\ldots,\,
\alpha(L_{d-1})\subset L_0$ where $\alpha$ is the matrix $$\alpha=\pmatrix{0& 1& & \cr
& \ddots& \ddots & \cr
& & 0 & 1\cr
t+\varpi& & & 0}$$
Obviously, these two descriptions are equivalent because $t$ acts as $0$ on the quotient $R[t]^d/tR[t]^d$. Nonetheless, the latter description indicates how to construct larger $S$-schemes $M_\mu$, where $\mu$ runs over a certain cofinal family of dominant (nonminuscule) coweights.
Let $n_-\leq 0 <n_+$ be two integers.
Let $M_{r,n_{\pm}}$ be the functor which associates to each ${\mathcal O}$-algebra $R$ the set of $L_{\bullet}=(L_0,\ldots,L_{d-1})$ where $L_0,\ldots,L_{d-1}$ are $R[t]$-submodules of $$t^{n_-}R[t]^d/t^{n_+}R[t]^d$$ satisfying the following properties
- as $R$-modules, $L_0,\ldots,L_{d-1}$ are locally direct factors with rank $n_{+}d-r$ in $t^{n_-}R[t]^d/t^{n_+}R[t]^d$,
- $\alpha(L_0)\subset L_1,\,\alpha(L_1)\subset L_2,\ldots,\,
\alpha(L_{d-1})\subset L_0$.
This functor is obviously represented by a closed sub-scheme in a product of Grassmannians. In particular, $M_{r,n_{\pm}}$ is projective over $S$.
In some cases, it is more convenient to adopt the following equivalent description of the functor $M_{r,n_\pm}$. Let us consider $\alpha$ as an element of the group $$\alpha\in{\mathrm GL}(d,{\mathcal O}[t,t^{-1},(t+\varpi)^{-1}]).$$ Let ${\mathcal V}_0,{\mathcal V}_1,\ldots,{\mathcal V}_d$ be the fixed ${\mathcal O}[t]$-submodules of ${\mathcal O}[t,t^{-1},(t+\varpi)^{-1}]^d$ defined by $${\mathcal V}_i=\alpha^{-i} {\mathcal O}[t]^d.$$ In particular, we have ${\mathcal V}_d=(t+\varpi)^{-1}{\mathcal V}_0$. Denote by ${\mathcal V}_{i,R}$ the tensor ${\mathcal V}_i\otimes_{\mathcal O} R$ for any ${\mathcal O}$-algebra $R$.
Let $M_{r,n_\pm}$ be the functor which associates to each ${\mathcal O}$-algebra $R$ the set of $${\mathcal L}_\bullet=({\mathcal L}_0\subset{\mathcal L}_1\subset\cdots\subset
{\mathcal L}_d=(t+\varpi)^{-1}{\mathcal L}_0)$$ where ${\mathcal L}_0,{\mathcal L}_1,\ldots$ are $R[t]$-submodules of $R[t,t^{-1},(t+\varpi)^{-1}]^d$ satisfying the following conditions
- for all $i=0,\ldots,d-1$, we have $t^{n_+}{\mathcal V}_{i,R}\subset{\mathcal L}_i\subset t^{n_-}{\mathcal V}_{i,R}$,
- as $R$-modules, ${\mathcal L}_i/t^{n_+}{\mathcal V}_{i,R}$ is locally a direct factor of $t^{n_-}{\mathcal V}_{i,R}/t^{n_+}{\mathcal V}_{i,R}$ with rank $n_+d-r$.
By using the isomorphism $$\alpha^i:t^{n_-}{\mathcal V}_{i,R}/t^{n_+}{\mathcal V}_{i,R}
\ident t^{n_-}R[t]^d/t^{n_+}R[t]^d$$ we can associate to each sequence $L_\bullet=(L_i)$ as in Definition 1 of $M_{r,n_\pm}$, the sequence ${\mathcal L}_\bullet=({\mathcal L}_i)$ as in Definition 2, in such a way that $$\alpha^i({\mathcal L}_i/t^{n_+}{\mathcal V}_{i,R})=L_i.$$ This correspondence is clearly bijective. Therefore, the two definitions of the functor $M_{r,n_\pm}$ are equivalent.
It will be more convenient to consider the disjoint union $M_{n_\pm}$ of projective schemes $M_{r,n_\pm}$ for all $r$ for which $M_{r,n_\pm}$ makes sense, namely $$M_{n_\pm}=\coprod_{dn_-\leq r \leq dn_+}M_{r,n_\pm},$$ instead of each connected component $M_{r,n_\pm}$ individually.
Group action
------------
Definition 2 permits us to define a natural group action on $M_{n_\pm}$. Every $R[t]$-module ${\mathcal L}_i$ as above satisfies the inclusions $$t^{n_+}R[t]^d\subset {\mathcal L}_i\subset t^{n_-}(t+\varpi)^{-1}R[t]^d.$$ Let ${\bar{\mathcal L}}_i$ denote its image in the quotient $${\bar{\mathcal V}}_{n_\pm,R}=t^{n_-}(t+\varpi)^{-1}R[t]^d/t^{n_+}R[t]^d.$$ Obviously, ${\mathcal L}_i$ is completely determined by ${\bar{\mathcal L}}_i$.
Let ${\bar{\mathcal V}}_i$ denote the image of ${\mathcal V}_i$ in ${\bar{\mathcal V}}_{n_\pm}$. We can view ${\bar{\mathcal V}}_{n_\pm}$ as the free $R$-module $R^{(n_+ -n_-+1)d}$ equipped with the endomorphism $t$ and with the filtration $${\bar{\mathcal V}}_\bullet = ({\bar{\mathcal V}}_0\subset{\bar{\mathcal V}}_1
\cdots\subset{\bar{\mathcal V}}_d=(t+\varpi)^{-1}{\bar{\mathcal V}}_0)$$ which is stabilized by $t$.
We now consider the functor ${J}_{n_\pm}$ which associates to each ${\mathcal O}$-algebra $R$ the group ${J}_{n_\pm}(R)$ of all $R[t]$-automorphisms of ${\bar{\mathcal V}}_{n_\pm}$ fixing the filtration ${\bar{\mathcal V}}_\bullet$. This functor is represented by a closed subgroup of ${\mathrm GL}((n_+-n_-+1)d)$ over $S$ that acts in the obvious way on $M_{n_\pm}$.
The group scheme $J_{n_\pm}$ is smooth over $S$.
Consider the functor ${\mathcal J}_{n_\pm}$ which associates to each ${\mathcal O}$-algebra $R$ the ring ${\mathcal J}_{n_\pm}(R)$ of all $R[t]$-endomorphisms of ${\bar{\mathcal V}}_{n_\pm}$ stabilizing the filtration ${\bar{\mathcal V}}_\bullet$. This functor is obviously represented by a closed sub-scheme of the $S$-scheme ${\frak{gl}}((n_+-n_-+1)d)$ of square matrices of size $(n_+ -n_-+1)d$.
The natural morphism of functors $J_{n_\pm}\rightarrow{\mathcal J}_{n_\pm}$ is an open immersion. Thus it suffices to prove that ${\mathcal J}_{n_\pm}$ is smooth over $S$.
Giving an element of ${\mathcal J}_{n_\pm}$ is equivalent to giving $d$ vectors $v_1,\ldots,v_d$ such that $v_i\in t^{n_-}{\bar{\mathcal V}}_i$. This implies that ${\mathcal J}_{n_\pm}$ is isomorphic to a trivial vector bundle over $S$ of rank $$\sum_{i=1}^d{\mathrm rk}_{\mathcal O}(t^{n_-}{\mathcal V}_i/t^{n_+}{\mathcal O}[t]^d)=
d^2(n_+ -n_- +1)-(d-1)d/2.$$ This finishes the proof of the lemma. $\square$
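For instance, for $d=2$, $n_-=0$ and $n_+=1$ the displayed rank equals $4\cdot 2-1=7$; indeed ${\mathrm rk}_{\mathcal O}({\mathcal V}_1/t\,{\mathcal O}[t]^2)=3$ and ${\mathrm rk}_{\mathcal O}({\mathcal V}_2/t\,{\mathcal O}[t]^2)=4$.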
Description of the generic fibre
--------------------------------
For this purpose, we use Definition 1 of $M_{n_\pm}$. Let $R$ be an $F$-algebra. The matrix $\alpha$ then is invertible as an element $$\alpha\in{\mathrm GL}(d,R[t]/t^{n_+-n_-}R[t]),$$ the group of automorphisms of $t^{n_-}R[t]^d/t^{n_+}R[t]^d$.
Let $(L_0,\ldots,L_{d-1})$ be an element of $M_{n_\pm}(R)$. As $R$-modules, the $L_i$ are locally direct factors of the same rank. For $i=1,\ldots,d-1$, the inclusion $\alpha(L_{i-1})\subset L_i$ implies the equality $\alpha(L_{i-1})=L_i$. In this case, the last inclusion $\alpha(L_{d-1})\subset L_0$ is automatically an equality, because the matrix $$\alpha^d={\mathrm diag\,}(t+\varpi,\ldots,t+\varpi)$$ satisfies the property: $\alpha^d(L_0)=L_0$. In other words, the whole sequence $(L_0,\ldots,L_{d-1})$ is completely determined by $L_0$.
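For instance, for $d=2$ the identity $\alpha^d={\mathrm diag\,}(t+\varpi,\ldots,t+\varpi)$ can be checked directly: $$\alpha^2=\pmatrix{0& 1\cr t+\varpi& 0}\pmatrix{0& 1\cr t+\varpi& 0}=\pmatrix{t+\varpi& 0\cr 0& t+\varpi},$$ and the general case follows by the same computation.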
Let us reformulate the above statement in a more precise way. Let ${\mathrm Grass}_{n_\pm}$ be the functor which associates to each ${\mathcal O}$-algebra $R$ the set of $R[t]$-submodules $L$ of $t^{n_-}R[t]^d/ t^{n_+}R[t]^d$ which, as $R$-modules, are locally direct factors of $t^{n_-}R[t]^d/ t^{n_+}R[t]^d$. Obviously, this functor is represented by a closed sub-scheme of a Grassmannian. In particular, it is proper over $S$.
Let $\pi: M_{n_\pm}\rightarrow {\mathrm Grass}_{n_\pm}$ be the morphism defined by $$\pi(L_0,\ldots,L_{d-1})=L_0.$$
The above discussion can be reformulated as follows.
The morphism $\pi:M_{n_\pm}\rightarrow{\mathrm Grass}_{n_\pm}$ is an isomorphism over the generic point $\eta$ of $S$. $\square$
Let $K_{n_\pm}$ be the functor which associates to each ${\mathcal O}$-algebra $R$ the group $$K_{n_\pm}={\mathrm GL}(d,R[t]/t^{n_+-n_-}R[t]).$$ Obviously, it is represented by a smooth group scheme over $S$ and acts naturally on ${\mathrm Grass}_{n_\pm}$. This action yields a decomposition into orbits that are smooth over $S$ $${\mathrm Grass}_{n_\pm}=\coprod_{\lambda\in\Lambda(n_\pm)} O_\lambda$$ where $\Lambda(n_\pm)$ is the finite set of sequences of integers $\lambda=(\lambda_1,\ldots,\lambda_d)$ satisfying the following condition $$n_+\geq\lambda_1\geq\cdots\geq\lambda_d\geq n_-.$$ This set $\Lambda(n_\pm)$ can be viewed as a finite subset of the cone of dominant coweights of $G={\mathrm GL}(d)$ and conversely, every dominant coweight of $G$ occurs in some $\Lambda(n_\pm)$. For all $\lambda\in \Lambda(n_\pm)$, we have $$O_\lambda(F)=K_F\,t^{\lambda} K_F/K_F.$$ Here $K_F={\mathrm GL}(d,F[[t]])$ is the standard maximal “compact” subgroup of $G_F={\mathrm GL}(d,F(\!(t)\!))$ and acts on ${\mathrm Grass}_{n_\pm}(F)$ through the quotient $K_{n_\pm}(F)$. The above equality holds if one replaces $F$ by any field which is also an ${\mathcal O}$-algebra, since $K_{n_\pm}$ is smooth; in particular it holds for the residue field $k$.
We derive from the above lemma the description $$M_{n_\pm}(F)=\coprod_{\lambda\in\Lambda(n_\pm)}K_F\, t^{\lambda} K_F/K_F.$$
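For instance, for $G={\mathrm GL}(2)$, $n_-=0$ and $n_+=1$, the set $\Lambda(n_\pm)$ consists of the three dominant coweights $(1,1)$, $(1,0)$ and $(0,0)$, and the above description reads $$M_{n_\pm}(F)=K_F\,t^{(1,1)}K_F/K_F\ \sqcup\ K_F\,t^{(1,0)}K_F/K_F\ \sqcup\ K_F/K_F.$$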
We will need to compare the action of $J_{n_\pm}$ on $M_{n_\pm}$ and the action of $K_{n_\pm}$ on ${\mathrm Grass}_{n_\pm}$. By definition, $J_{n_\pm}(R)$ is a subgroup of $$J_{n_\pm}(R)\subset{\mathrm GL}(d,R[t]/t^{n_+-n_-}(t+\varpi)R[t])$$ for any ${\mathcal O}$-algebra $R$. By using the natural homomorphism $${\mathrm GL}(d,R[t]/t^{n_+-n_-}(t+\varpi)R[t])\rightarrow
{\mathrm GL}(d,R[t]/t^{n_+-n_-}R[t])$$ we get a homomorphism $J_{n_\pm}(R)\rightarrow K_{n_\pm}(R)$. This gives rise to a homomorphism of group schemes $\rho:J_{n_\pm}\rightarrow K_{n_\pm}$, which is surjective over the generic point $\eta$ of $S$.
The proof of the following lemma is straightforward.
With respect to the homomorphism $\rho:J_{n_\pm}\rightarrow K_{n_\pm}$, and to the morphism $\pi:M_{n_\pm}\rightarrow {\mathrm Grass}_{n_\pm}$, the action of $J_{n_\pm}$ on $M_{n_\pm}$ and the action of $K_{n_\pm}$ on ${\mathrm Grass}_{n_\pm}$ are compatible. $\square$
Description of the special fibre
--------------------------------
For this purpose, we will use Definition 2 of $M_{n_\pm}$. The functor $M_{r, n_\pm}$ associates to each $k$-algebra $R$ the set of $${\mathcal L}_\bullet=({\mathcal L}_0\subset{\mathcal L}_1\subset\cdots\subset
{\mathcal L}_d=t^{-1}{\mathcal L}_0)$$ where ${\mathcal L}_0,{\mathcal L}_1,\ldots$ are $R[t]$-submodules of $R[t,t^{-1}]^d$ satisfying the following conditions
- for all $i=0,\ldots,d-1$, we have $t^{n_+}{\mathcal V}_{i,R}\subset{\mathcal L}_i\subset t^{n_-}{\mathcal V}_{i,R}$,
- as an $R$-module, each ${\mathcal L}_i/t^{n_+}{\mathcal V}_{i,R}$ is locally a direct factor of $t^{n_-}{\mathcal V}_{i,R}/t^{n_+}{\mathcal V}_{i,R}$ with rank $n_+d-r$.
Let $I_{k}$ denote the standard Iwahori subgroup of $G_k={\mathrm GL}\bigl(d,k(\!(t)\!)\bigr)$, that is, the subgroup of ${\mathrm GL}\bigl(d,k[[t]]\bigr)$ consisting of matrices whose reduction mod $t$ lies in the subgroup of upper triangular matrices in ${\mathrm GL}(d,k)$. The set of $k$-points of $M_{n_\pm}$ can be realized as a finite subset of the set of affine flags of ${\mathrm GL}(d)$ $$M_{n_\pm}(k)\subset G_k/I_k.$$ By definition, the $k$-points of $J_{n_\pm}$ are the matrices in ${\mathrm GL}(d,k[t]/t^{n_+-n_-+1}k[t])$ whose reduction mod $t$ is upper triangular. Thus, $J_{n_\pm}(k)$ is a quotient of $I_k$. Obviously, the action of $J_{n_\pm}(k)$ on $M_{n_\pm}(k)$ and the action of $I_k$ on $G_k/I_k$ are compatible. Therefore, for each $r$ such that $dn_- \leq r \leq dn_+$ there exists a finite subset ${\tilde W}(r, n_\pm)\subset {\tilde W}$ of the affine Weyl group ${\tilde W}$ such that $$M_{n_\pm}(k)=\coprod_{w\in {\tilde W}(n_\pm)} I_k wI_k/I_k,$$ where ${\tilde W}(n_\pm) = \coprod_{r} {\tilde W}(r, n_\pm)$. One can see easily that any element $w\in{\tilde W}$ occurs in the finite subset ${\tilde W}(n_\pm)$ for some $n_\pm$. But the exact determination of the finite sets ${\tilde W}(r,n_\pm)$ is a difficult combinatorial problem; for the case of minuscule coweights of ${\mathrm GL}(d)$ (i.e., $n_+ = 1$ and $n_- = 0$) these sets have been described by Kottwitz and Rapoport ([@Kora]).
Let us recall that $${\mathrm Grass}_{n_\pm}(k)=\coprod_{\lambda\in\Lambda(n_\pm)}
K_k\,t^{\lambda} K_k/K_k.$$ The proof of the next lemma is straightforward.
The map $\pi(k):M_{n_\pm}(k)\rightarrow{\mathrm Grass}_{n_\pm}(k)$ is the restriction of the natural map $G_k/I_k\rightarrow G_k/K_k$.
Symplectic case
---------------
For the symplectic case, we will give only the definitions of the symplectic analogues of the objects which were considered in the linear case. The statements of Lemmas 3,4,5 and 6 remain unchanged.
In this section, the group $G$ stands for ${\mathrm GSp}(2d)$ associated to the symplectic form $\langle\, ,\, \rangle $ represented by the matrix $$\pmatrix{0 & J\cr -J & 0}$$ where $J$ is the anti-diagonal matrix with entries equal to $1$. Let $\mu$ denote the minuscule coweight $$\mu=(\underbrace{1,\ldots,1}_{d}, \underbrace{0,\ldots,0}_{d}).$$
Following Rapoport and Zink ([@Rapoport-Zink]) the local model $M_\mu$ represents the functor which associates to each ${\mathcal O}$-algebra $R$ the set of sequences $L_\bullet=(L_0,\ldots,L_d)$ where $L_0,\ldots,L_d$ are $R$-submodules of $R^{2d}$ satisfying the following properties
- $L_0,\ldots,L_d$ are locally direct factors of $R^{2d}$ of rank $d$,
- $\alpha'(L_0)\subset L_1,\ldots,\alpha'(L_{d-1})\subset L_d$ where $\alpha'$ is the matrix of size $2d\times 2d$ $$\alpha'=\pmatrix{0& 1& & \cr
& \ddots& \ddots & \cr
& & 0 & 1\cr
\varpi& & & 0}$$
- $L_0$ and $L_d$ are isotropic with respect to $\langle\, ,\, \rangle $.
Just as in the linear case, let us introduce a new variable $t$ and give the symplectic analog of Definition 2. We consider the matrix of size $2d\times 2d$ $$\alpha=\pmatrix{0& 1& & \cr
& \ddots& \ddots & \cr
& & 0 & 1\cr
t+\varpi& & & 0}$$ viewed as an element of $$\alpha\in{\mathrm GL}(2d,{\mathcal O}[t,t^{-1},(t+\varpi)^{-1}]).$$ Denote by ${\mathcal V}_0,\ldots,{\mathcal V}_{2d-1}$ the fixed ${\mathcal O}[t]$-submodules of ${\mathcal O}[t,t^{-1},(t+\varpi)^{-1}]^{2d}$ defined by ${\mathcal V}_i=\alpha^{-i}{\mathcal O}[t]^{2d}$. For an ${\mathcal O}$-algebra $R$, let ${\mathcal V}_{i,R}$ denote ${\mathcal V}_i\otimes_{\mathcal O} R$.
Given integers $n$ and $n'$, for any $R[t]$-submodule ${\mathcal L}$ of $R[t,t^{-1},(t+\varpi)^{-1}]^{2d}$, the $R[t]$-module $${\mathcal L}^{\perp'} = \{x\in R[t,t^{-1},(t+\varpi)^{-1}]^{2d}\mid
\forall y\in {\mathcal L}, t^n(t+\varpi)^{n'}\langle x,y\rangle \in R[t]\}$$ is called the [*dual*]{} of ${\mathcal L}$ with respect to the form $\langle\,,\,\rangle ' = t^n(t+\varpi)^{n'}\langle\,,\,\rangle $. Thus ${\mathcal V}_0$ is autodual with respect to the form $\langle\,,\,\rangle $ and ${\mathcal V}_d$ is autodual with respect to the form $(t+\varpi)\langle\,,\,\rangle $.
Here is the symplectic analog of Definition 2 of the model $M_{n_\pm}$. For $n_-=0$ and $n_+=1$, $M_{n_\pm}$ will coincide with $M_\mu$, for $\mu = (1^d, 0^d)$:
For any $n_-\leq 0<n_+$, let $M_{n_\pm}$ be the functor which associates to each ${\mathcal O}$-algebra $R$ the set of sequences $${\mathcal L}_\bullet=({\mathcal L}_0\subset{\mathcal L}_1\subset\cdots\subset{\mathcal L}_d)$$ where ${\mathcal L}_0,\ldots,{\mathcal L}_d$ are $R[t]$-submodules of $R[t,t^{-1},(t+\varpi)^{-1}]^{2d}$ satisfying the following properties
- for all $i=0,\ldots,d$, we have $t^{n_+}{\mathcal V}_{i,R}\subset{\mathcal L}_i\subset t^{n_-}{\mathcal V}_{i,R}$,
- as $R$-modules, ${\mathcal L}_i/t^{n_+}{\mathcal V}_{i,R}$ is locally a direct factor of $t^{n_-}{\mathcal V}_{i,R}/t^{n_+}{\mathcal V}_{i,R}$ of rank $(n_+-n_-)d$,
- ${\mathcal L}_0$ is autodual with respect to the form $t^{-n_--n_+}\langle\, ,\, \rangle $, and ${\mathcal L}_d$ is autodual with respect to the form $t^{-n_--n_+}(t+\varpi)\langle\, ,\, \rangle $.
Let us now define the natural group action on $M_{n_\pm}$. The functor $J_{n_\pm}$ associates to each ${\mathcal O}$-algebra $R$ the group $J_{n_\pm}(R)$ of $R[t]$-linear automorphisms of $${\bar{\mathcal V}}_{n_\pm,R}=t^{n_-}(t+\varpi)^{-1}R[t]^{2d}/t^{n_+}R[t]^{2d}$$ which fix the filtration $${\bar{\mathcal V}}_{\bullet,R}=({\bar{\mathcal V}}_{0,R}\subset\cdots\subset{\bar{\mathcal V}}_{d,R})$$ (the image of ${\mathcal V}_{\bullet,R}$ in ${\bar {\mathcal V}}_{n_\pm,R}$) and which fix the symplectic form $t^{-n_--n_+}(t+\varpi)\langle\,,\,\rangle $, up to a unit in $R$. This functor is represented by an $S$-group scheme $J_{n_\pm}$ which acts on $M_{n_\pm}$. Lemma 3 remains true in the symplectic case : $J_{n_\pm}$ is a [*smooth*]{} group scheme over $S$. The proof is completely similar to the linear case.
Let us now describe the generic fibre of $M_{n_\pm}$. Let ${\mathrm Grass}_{n_\pm}$ be the functor which associates to each ${\mathcal O}$-algebra $R$ the set of $R[t]$-submodules $L$ of $t^{n_-}R[t]^{2d}/t^{n_+}R[t]^{2d}$ which, as $R$-modules, are locally direct factors of rank $(n_+-n_-)d$ and which are isotropic with respect to $t^{-n_--n_+}\langle\,,\,\rangle $. Then the morphism $\pi:M_{n_\pm}\rightarrow{\mathrm Grass}_{n_\pm}$ defined by $\pi(L_\bullet)=L_{0}$ is an isomorphism over the generic point $\eta$ of $S$. Let $K_{n_\pm}$ denote the functor which associates to each ${\mathcal O}$-algebra $R$ the group of $R[t]$-automorphisms of $t^{n_-}R[t]^{2d} / t^{n_+}R[t]^{2d}$ which fix the symplectic form $t^{-n_- -n_+}\langle\,,\,\rangle$ up to a unit in $R$. Then $K_{n_\pm}$ is represented by a smooth group scheme over $S$, and it acts in the obvious way on ${\mathrm Grass}_{n_\pm}$. Consequently, we have a stratification into orbits of the generic fibre $M_{n_\pm,\eta}$ $$M_{n_\pm,\eta}=\coprod_{\lambda\in \Lambda(n_\pm)}O_{\lambda,\eta}.$$ Here $\Lambda(n_\pm)$ is the set of sequences $\lambda=(\lambda_1,\ldots,\lambda_d)$ satisfying $$n_+\geq\lambda_1\geq\cdots\geq\lambda_d\geq {n_++n_-\over 2},$$ and can be viewed as a finite subset of the cone of dominant coweights of $G={\mathrm GSp}(2d)$. One can easily check that each dominant coweight of ${\mathrm GSp}(2d)$ occurs in some $\Lambda(n_\pm)$. For any $\lambda\in \Lambda(n_\pm)$, we also have $$O_{\lambda,\eta}(F)=K_F t^\lambda K_F/K_F$$ where $K_F=G(F[[t]])$ is the “maximal compact” subgroup of $G_F=G(F(\!(t)\!))$.
Next we turn to the special fiber of $M_{n_\pm}$. For this it is most convenient to give a slight reformulation of Definition 7 above. Let $R$ be any ${\mathcal O}$-algebra. It is easy to see that specifying a sequence ${\mathcal L}_\bullet = ({\mathcal L}_0 \subset \dots \subset {\mathcal L}_d)$ as in Definition 7 is the same as specifying a periodic “lattice chain” $$\dots \subset {\mathcal L}_{-1} \subset {\mathcal L}_0 \subset \dots \subset {\mathcal L}_{2d} = (t+\varpi)^{-1}{\mathcal L}_0 \subset \dots$$ consisting of $R[t]$-submodules of $R[t,t^{-1},(t+\varpi)^{-1}]^{2d}$ with the following properties:
- $t^{n_+}{\mathcal V}_{i, R} \subset {\mathcal L}_i \subset t^{n_-}{\mathcal V}_{i, R}$, where ${\mathcal V}_{i, R} = \alpha^{-i}{\mathcal V}_{0, R}$, for every $i \in {\Bbb Z}$,
- ${\mathcal L}_i / t^{n_+}{\mathcal V}_{i, R}$ is locally a direct factor of rank $(n_+ - n_-)d$, for every $i \in {\Bbb Z}$,
- ${\mathcal L}^{\perp}_i = t^{-n_- - n_+}{\mathcal L}_{-i}$, for every $i \in {\Bbb Z}$,
where $\perp$ is defined using the original symplectic form $\langle\,,\,\rangle$ on $R[t,t^{-1},(t+\varpi)^{-1}]^{2d}$. We denote by $I_k$ the standard Iwahori subgroup of ${\mathrm GSp}(2d,k[[t]])$, namely, the stabilizer in this group of the periodic lattice chain ${\mathcal V}_{\bullet, k[[t]]}$. There is a canonical surjection $I_k \rightarrow J_{n_\pm}(k)$ and so the Iwahori subgroup $I_k$ acts via its quotient $J_{n_\pm}(k)$ on the set $M_{n_\pm}(k)$. Moreover, the $I_k$-orbits in $M_{n_\pm}(k)$ are parametrized by a certain finite set ${\tilde W}(n_\pm)$ of the affine Weyl group ${\tilde W}({\mathrm GSp}(2d))$ $$M_{n_\pm}(k) = \coprod_{w \in {\tilde W}(n_\pm)} I_k \, w \, I_k / I_k.$$ The precise description of the sets ${\tilde W}(n_\pm)$ is a difficult combinatorial problem (see [@Kora] for the case $n_+=1$, $n_-=0$), but one can easily see that any $w \in {\tilde W}({\mathrm GSp}(2d))$ is contained in some ${\tilde W}(n_\pm)$.
The definitions of the group scheme action of $K_{n_\pm}$ on ${\mathrm Grass}_{n_\pm}$, of the homomorphism $\rho:J_{n_\pm}\rightarrow K_{n_\pm}$ and the compatibility properties (Lemmas 5,6) are obvious and will be left to the reader.
Semi-simple trace on nearby cycles
==================================
Semi-simple trace
-----------------
The notion of semi-simple trace was introduced by Rapoport in [@Rapoport] and its good properties were mentioned there. The purpose of this section is only to give a more systematic presentation, insisting on the important fact that the semi-simple trace furnishes a kind of sheaf-function dictionary à la Grothendieck. In writing this section, we have benefited from very helpful explanations from Laumon.
Let ${\bar F}$ be a separable closure of the local field $F$. Let $\Gamma$ be the Galois group ${\mathrm Gal}({\bar F}/F)$ of $F$ and let $\Gamma_0$ be the inertia subgroup of $\Gamma$ defined by the exact sequence $$1\rightarrow \Gamma_0\rightarrow\Gamma\rightarrow{\mathrm Gal}({\bar k}/k)\rightarrow 1.$$ For any prime $\ell\not= p$, there exists a canonical surjective homomorphism $$t_\ell:\Gamma_0\rightarrow {\Bbb Z}_\ell(1).$$
Let ${\mathcal R}$ denote the abelian category of continuous, finite dimensional $\ell$-adic representations of $\Gamma$. Let $(\rho,V)$ be an object of ${\mathcal R}$ $$\rho:\Gamma\rightarrow{\mathrm GL}(V).$$ According to Grothendieck, the restriction of $\rho$ to $\Gamma_0$ is [*quasi-unipotent*]{}, i.e. there exists a finite-index subgroup $\Gamma_1$ of $\Gamma_0$ which acts unipotently on $V$ (the residue field $k$ is supposed finite). There then exists a unique nilpotent morphism, the [*logarithm*]{} of $\rho$ $$N:V\rightarrow V(-1)$$ characterized by the following property: for all $g\in \Gamma_1$, we have $$\rho(g)=\exp(N t_\ell(g)).$$
Following Rapoport, an increasing filtration ${\mathcal F}$ of $V$ will be called [*admissible*]{} if it is stable under the action of $\Gamma$ and such that $\Gamma_0$ operates on the associated graded ${\mathrm gr}^{\mathcal F}_\bullet(V)$ through a finite quotient. Admissible filtrations always exist : we can take for instance the filtration defined by the kernels of the powers of $N$.
We define the semi-simple trace of Frobenius on $V$ as $${\mathrm Tr}^{ss}({\mathrm Fr}_q,V)=\sum_k {\mathrm Tr}({\mathrm Fr}_q,{\mathrm gr}^{\mathcal F}_k(V)^{\Gamma_0}) .$$
The semi-simple trace ${\mathrm Tr}^{ss}({\mathrm Fr}_q,V)$ does not depend on the choice of the admissible filtration ${\mathcal F}$.
Let us first consider the case where $\Gamma_0$ acts on $V$ through a finite quotient. Since the functor of invariants under a finite group acting on a ${{\bar{\Bbb Q}}_\ell}$-vector space is exact, the graded module associated to the filtration ${\mathcal F}'$ of $V^{\Gamma_0}$ induced by ${\mathcal F}$ is equal to ${\mathrm gr}^{\mathcal F}_\bullet(V)^{\Gamma_0}$ $${\mathrm gr}_k^{\mathcal F'}(V^{\Gamma_0})={\mathrm gr}^{\mathcal F}_k(V)^{\Gamma_0}.$$ Consequently $${\mathrm Tr}({\mathrm Fr}_q,V^{\Gamma_0})=\sum_k {\mathrm Tr}
({\mathrm Fr}_q,{\mathrm gr}^{\mathcal F}_k(V)^{\Gamma_0}) .$$
In the general case, any two admissible filtrations admit a common refinement which is again admissible. By using the above case, one sees that the semi-simple trace associated to each of the two given filtrations is equal to the semi-simple trace associated to their common refinement, and the lemma follows. $\square$
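To illustrate the definition of ${\mathrm Tr}^{ss}$, suppose that $V$ is two-dimensional and that $\Gamma_0$ itself acts unipotently with $N\neq 0$ (so that $\Gamma_1=\Gamma_0$ and $N^2=0$). The filtration $0\subset\ker N\subset V$ is admissible, $\Gamma_0$ acts trivially on the graded pieces $\ker N$ and $V/\ker N$, and therefore $${\mathrm Tr}^{ss}({\mathrm Fr}_q,V)={\mathrm Tr}({\mathrm Fr}_q,\ker N)+{\mathrm Tr}({\mathrm Fr}_q,V/\ker N),$$ whereas the naive expression ${\mathrm Tr}({\mathrm Fr}_q,V^{\Gamma_0})$ equals ${\mathrm Tr}({\mathrm Fr}_q,\ker N)$ only.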
The function defined by $$V\mapsto{\mathrm Tr}^{ss}({\mathrm Fr}_q,V)$$ on the set of isomorphism classes $V$ of ${\mathcal R}$, factors through the Grothendieck group of ${\mathcal R}$.
For any object $C$ of the derived category associated to ${\mathcal R}$, we put $${\mathrm Tr}^{ss}({\mathrm Fr}_q,C)=\sum_i (-1)^i {\mathrm Tr}^{ss}({\mathrm Fr}_q,{\mathrm H}^i(C)).$$ By the above corollary, for any distinguished triangle $$C\rightarrow C'\rightarrow C''\rightarrow C[1]$$ the equality $${\mathrm Tr}^{ss}({\mathrm Fr}_q,C)+{\mathrm Tr}^{ss}({\mathrm Fr}_q,C'')={\mathrm Tr}^{ss}({\mathrm Fr}_q,C')$$ holds.
Let $X$ be a $k$-scheme of finite type, $X_{\bar s}=X\otimes_k{\bar k}$. Let $D^b_c(X\times_k\eta)$ denote the derived category associated to the abelian category of constructible $\ell$-adic sheaves on $X_{\bar s}$ equipped with an action of $\Gamma$ compatible with the action of $\Gamma$ on $X_{\bar s}$ through ${\mathrm Gal}({\bar k}/k)$, see [@Deligne]. Let ${\mathcal C}$ be an object of $D^b_c(X\times_k\eta)$. For any $x\in X(k)$, the fibre ${\mathcal C}_x$ is an object of the derived category of ${\mathcal R}$. Thus we can define the semi-simple trace function $$\tau^{ss}_{\mathcal C}:X(k)\rightarrow{{\bar{\Bbb Q}}_\ell}$$ by $$\tau^{ss}_{\mathcal C}(x)={\mathrm Tr}^{ss}({\mathrm Fr}_q,{\mathcal C}_x).$$
This association ${\mathcal C}\mapsto \tau^{ss}_{\mathcal C}$ furnishes an analog of the usual sheaf-function dictionary of Grothendieck (see [@Grothendieck]):
Let $f:X\rightarrow Y$ be a morphism between $k$-schemes of finite type.
1. Let ${\mathcal C}$ be an object of $D^b_c(Y\times_k\eta)$. For all $x\in X(k)$, we have $$\tau^{ss}_{f^*{\mathcal C}}(x)=\tau^{ss}_{\mathcal C}(f(x))$$
2. Let ${\mathcal C}$ be an object of $D^b_c(X\times_k\eta)$. For all $y\in Y(k)$, we have $$\tau^{ss}_{{\mathrm R}f_!{\mathcal C}}(y)=\sum_{\scriptstyle x\in X(k)\atop
\scriptstyle f(x)=y} \tau^{ss}_{\mathcal C}(x).$$
The first statement is obvious because $f^*{\mathcal C}_x$ and ${\mathcal C}_{f(x)}$ are canonically isomorphic as objects of the derived category of ${\mathcal R}$.
It suffices to prove the second statement in the case $Y=s$. By Corollary 9 and “dévissage”, it suffices to consider the case where ${\mathcal C}$ is concentrated in only one degree, say in the degree zero. Denote $C={\mathcal H}^0({\mathcal C})$ and choose an admissible filtration of $C$ $$0=C_0\subset C_1\subset C_2\subset\cdots\subset C_n=C.$$
The associated spectral sequence $${\mathrm E}^{i,j-i}_1={\mathrm H}^j_c(X_{\bar s}, C_i/C_{i-1})
\Longrightarrow {\mathrm H}^j_c(X_{\bar s},C)$$ yields an abutment filtration on ${\mathrm H}^j_c(X_{\bar s},C)$ with associated graded ${\mathrm E}_\infty^{i,j-i}$. Since the inertia group acts on ${\mathrm E}^{i,j-i}_1$ through a finite quotient, the same property holds for ${\mathrm E}_\infty^{i,j-i}$ because ${\mathrm E}_\infty^{i,j-i}$ is a subquotient of ${\mathrm E}^{i,j-i}_1$. Consequently, the abutment filtration on ${\mathrm H}^j_c(X_{\bar s},C)$ is an admissible filtration and by definition, we have $${\mathrm Tr}^{ss}({\mathrm Fr}_q,Rf_!C)=\sum_{i,j}(-1)^j
{\mathrm Tr}({\mathrm Fr}_q,({\mathrm E}_\infty^{i,j-i})^{\Gamma_0}).$$
Now, the identity in the Grothendieck group $$\sum_{i,j}(-1)^j{\mathrm E}_1^{i,j-i}=
\sum_{i,j}(-1)^j{\mathrm E}_\infty^{i,j-i}$$ implies $$\sum_{i,j}(-1)^j({\mathrm E}_1^{i,j-i})^{\Gamma_0}
=\sum_{i,j}(-1)^j({\mathrm E}_\infty^{i,j-i})^{\Gamma_0}$$ because taking invariants under a finite group is an exact functor.
The same exactness implies $$({\mathrm E}_1^{i,j-i})^{\Gamma_0}={\mathrm H}^j_c(X_{\bar s}, C_{i}/C_{i-1})^{\Gamma_0}
={\mathrm H}^j_c(X_{\bar s}, (C_{i}/C_{i-1})^{\Gamma_0}).$$ By putting the above equalities together, we obtain $${\mathrm Tr}^{ss}({\mathrm Fr}_q,Rf_!C)=\sum_{i,j}(-1)^j
{\mathrm Tr}({\mathrm Fr}_q,{\mathrm H}^j_c(X_{\bar s}, (C_{i}/C_{i-1})^{\Gamma_0})).$$
By using now the Grothendieck-Lefschetz formula, we have $$\sum_{x\in X(k)}{\mathrm Tr}({\mathrm Fr}_q,(C_{i}/C_{i-1})^{\Gamma_0}_x)
=\sum_{j}(-1)^j
{\mathrm Tr}({\mathrm Fr}_q,{\mathrm H}^j_c(X_{\bar s}, (C_{i}/C_{i-1})^{\Gamma_0})).$$ Consequently, $${\mathrm Tr}^{ss}({\mathrm Fr}_q,Rf_!C)=\sum_{x\in X(k)}{\mathrm Tr}^{ss}({\mathrm Fr}_q,C_x).\ \ \square$$
Nearby cycles
-------------
Let ${\bar\eta}={\mathrm Spec\,}({\bar F})$ denote the geometric generic point of $S$, ${\bar S}$ be the normalization of $S$ in ${\bar\eta}$ and ${\bar s}$ be the closed point of ${\bar S}$. For an $S$-scheme $X$ of finite type, let us denote by ${\bar\jmath}^X:X_{\bar\eta}\rightarrow X_{\bar S}$ the morphism deduced by base change from ${\bar\jmath}:{\bar\eta}\rightarrow {\bar S}$ and denote by ${\bar\imath}^X:X_{\bar s}\rightarrow X_{\bar S}$ the morphism deduced from ${\bar\imath}:{\bar s}\rightarrow{\bar S}$.
The nearby cycle complex of an $\ell$-adic complex $C_\eta$ on $X_\eta$ is the complex of $\ell$-adic sheaves defined by $${\mathrm R}\Psi^X(C_\eta)=
{\bar\imath}^{X,*}{\mathrm R}{\bar\jmath}^X_{*} {\bar\jmath}^{X,*} C_\eta.$$ The complex ${\mathrm R}\Psi^X(C_\eta)$ is equipped with an action of $\Gamma$ compatible with the action of $\Gamma$ on $X_{\bar s}$ through the quotient ${\mathrm Gal}({\bar k}/k)$.
For $X$ a proper $S$-scheme, we have a canonical isomorphism $${\mathrm R}\Gamma(X_{\bar s},{\mathrm R}\Psi(C_\eta))
={\mathrm R}\Gamma(X_{\bar\eta},C_\eta)$$ compatible with the natural actions of $\Gamma$ on the two sides.
Let us suppose moreover the generic fibre $X_\eta$ is smooth. In order to compute the local factor of the Hasse-Weil zeta function, one should calculate the trace $$\sum_j (-1)^j{\mathrm Tr}({\mathrm Fr}_q,{\mathrm H}^j(X_{\bar\eta},{{\bar{\Bbb Q}}_\ell})^{\Gamma_0}).$$ Assuming that the graded pieces in the monodromy filtration of ${\mathrm H}^j(X_{\bar\eta},{{\bar{\Bbb Q}}_\ell})$ are pure (Deligne’s conjecture), Rapoport proved that the true local factor is completely determined by the semi-simple local factor, see [@Rapoport]. Now by the above discussion the semi-simple trace can be computed by the formula $$\sum_j (-1)^j{\mathrm Tr}^{ss}({\mathrm Fr}_q,{\mathrm H}^j(X_{\bar\eta},{{\bar{\Bbb Q}}_\ell}))
=\sum_{x\in X(k)}{\mathrm Tr}^{ss}({\mathrm Fr}_q,{\mathrm R}\Psi({{\bar{\Bbb Q}}_\ell})_x).$$
Statement of the main result
============================
Nearby cycles on local models
-----------------------------
We have seen in subsection 2.3 (resp. 2.5 for the symplectic case) that the generic fibre of $M_{n_\pm}$ admits a stratification with smooth strata $$M_{n_\pm,\eta}=\coprod_{\lambda\in\Lambda(n_\pm)} O_{\lambda,\eta}.$$ Denote by ${\bar O}_{\lambda,\eta}$ the Zariski closure of $O_{\lambda,\eta}$ in $M_{n_\pm,\eta}$; in general ${\bar O}_{\lambda, \eta}$ is no longer smooth. It is natural to consider ${\mathcal A}_{\lambda,\eta}={\mathrm IC}(O_{\lambda,\eta})$, its $\ell$-adic intersection complex.
We want to calculate the function $$\tau^{ss}_{{\mathrm R}\Psi^M({\mathcal A}_{\lambda,\eta})}(x)
={\mathrm Tr}^{ss}({\mathrm Fr}_q,{\mathrm R}\Psi^M({\mathcal A}_{\lambda,\eta})_x)$$ giving the semi-simple trace of the Frobenius endomorphism on the nearby cycle complex ${\mathrm R}\Psi^M({\mathcal A}_{\lambda,\eta})$ defined in the last section. We are denoting the scheme $M_{n_\pm}$ simply by $M$ here.
As $O_{\lambda,\eta}$ is an orbit of $J_{n_\pm,\eta}$, the intersection complex ${\mathcal A}_{\lambda,\eta}$ is naturally $J_{n_\pm,\eta}$-equivariant. Since $J_{n_\pm}$ is smooth over $S$ by Lemma 3, the nearby cycle complex ${\mathrm R}\Psi^M({\mathcal A}_{\lambda,\eta})$ is $J_{n_\pm,{\bar s}}$-equivariant. In particular, the function $$\tau^{ss}_{{\mathrm R}\Psi^M({\mathcal A}_{\lambda,\eta})}:M_{n_\pm}(k)\rightarrow{{\bar{\Bbb Q}}_\ell}$$ is $J_{n_\pm}(k)$-invariant.
Now following the group-theoretic description of the action of $J_{n_\pm}(k)$ on $M_{n_\pm}(k)$ in subsection 2.4 (resp. 2.5), we can consider the function $\tau^{ss}_{{\mathrm R}\Psi^M({\mathcal A}_{\lambda,\eta})}$ as a function on $G_k$ with compact support which is invariant on the left and on the right by the Iwahori subgroup $I_k$ $$\tau^{ss}_{{\mathrm R}\Psi^M({\mathcal A}_{\lambda,\eta})}\in{\mathcal H}(G_k/\!/I_k).$$
The following statement was conjectured by R. Kottwitz, and is the main result of this paper.
Let $G$ be either ${\mathrm GL}(d)$ or ${\mathrm GSp}(2d)$. Let $M = M_{n_\pm}$ be the scheme associated to the group $G$ and the pair of integers $n_\pm$, as above. Then we have the formula $$\tau^{ss}_{{\mathrm R}\Psi^M({\mathcal A}_{\lambda,\eta})}=(-1)^{2\langle\rho,\lambda\rangle }
\sum_{\lambda'\leq\lambda}m_\lambda(\lambda') z_{\lambda'}$$ where $z_{\lambda'}$ is the function of Bernstein associated to the dominant coweight $\lambda'$, which lies in the center $Z({\mathcal H}(G_k/\!/I_k))$ of ${\mathcal H}(G_k/\!/I_k)$.
Here, $\rho$ is half the sum of the positive roots for $G$, and hence $2\langle\rho,\lambda\rangle $ is the dimension of $O_{\lambda,\eta}$. The integer $m_\lambda(\lambda')$ is the multiplicity of the weight $\lambda'$ occurring in the representation of highest weight $\lambda$. The partial ordering $\lambda' \leq \lambda$ is defined to mean that $\lambda - \lambda'$ is a sum of positive coroots of $G$.
Comparing with the formula for minuscule $\mu$ given in Kottwitz’ conjecture (cf. Introduction), one notices the absence of the factor $q^{\langle\rho,\mu\rangle }$ and the appearance of the sign $(-1)^{2\langle\rho,\mu\rangle }$. This difference is explained by the normalization of the intersection complex ${\mathcal A}_{\mu,\eta}$. For minuscule coweights $\mu$, the orbit $O_\mu$ is closed. Consequently, the intersection complex ${\mathcal A}_{\mu,\eta}$ differs from the constant sheaf only by a normalization factor $${\mathcal A}_{\mu,\eta}={{\bar{\Bbb Q}}_\ell}[2\langle\rho,\mu\rangle ](\langle\rho,\mu\rangle ).$$
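For instance, in the Drinfeld case $G={\mathrm GL}(d)$, $\mu=(1,0^{d-1})$, the orbit $O_{\mu,\eta}$ has dimension $2\langle\rho,\mu\rangle =d-1$, the only dominant coweight $\lambda'\leq\mu$ is $\mu$ itself, and $m_\mu(\mu)=1$, so the theorem reads $\tau^{ss}_{{\mathrm R}\Psi^M({\mathcal A}_{\mu,\eta})}=(-1)^{d-1}z_\mu$. Using ${\mathcal A}_{\mu,\eta}={{\bar{\Bbb Q}}_\ell}[d-1]({d-1\over 2})$ and the standard convention that Frobenius acts on ${{\bar{\Bbb Q}}_\ell}(1)$ by $q^{-1}$ (a square root of $q$ being chosen when $d$ is even), this is equivalent to $${\mathrm Tr}^{ss}({\mathrm Fr}_q,{\mathrm R}\Psi({{\bar{\Bbb Q}}_\ell})_x)=q^{\langle\rho,\mu\rangle }z_\mu(x),$$ which is precisely the normalization appearing in Kottwitz’ conjecture recalled in the introduction.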
We refer to Lusztig’s article [@Lusztig] for the definition of Bernstein’s functions. In fact, what we need is rather the properties that characterize these functions. We will recall these properties in the next subsection.
A commutative triangle
----------------------
Denote by $K_k$ the standard maximal compact subgroup $G(k[[t]])$ of $G_k$, where $G$ is either ${\mathrm GL}(d)$ or ${\mathrm GSp}(2d)$. The ${{\bar{\Bbb Q}}_\ell}$-valued functions with compact support in $G_k$ invariant on the left and on the right by $K_k$ form a commutative algebra ${\mathcal H}(G_k/\!/K_k)$ with respect to the convolution product. Here the convolution is defined using the Haar measure on $G_k$ which gives $K_k$ measure 1. Denote by ${\Bbb I}_K$ the characteristic function of $K_k$. This element is the unit of the algebra ${\mathcal H}(G_k/\!/K_k)$. Similarly we define the convolution on ${\mathcal H}(G_k /\!/I_k)$ using the Haar measure on $G_k$ which gives $I_k$ measure 1.
We consider the following triangle $$\diagramme{ &{{\bar{\Bbb Q}}_\ell}[X_*]^{W} & \cr
\hfill{}^{\mathrm Bern.}\! \swarrow & &
\nwarrow\!{}^{\mathrm Sat.}\hfill \cr
Z({\mathcal H}(G_k/\!/I_k)) & \hfld{}{-*{\Bbb I}_K} & {\mathcal H}(G_k/\!/K_k)}$$ Here ${{\bar{\Bbb Q}}_\ell}[X_*]^{W}$ is the $W$-invariant sub-algebra of the group algebra over ${{\bar{\Bbb Q}}_\ell}$ of the group $X_*$ of cocharacters of the standard (diagonal) torus $T$ in $G$, and $W$ is the Weyl group associated to $T$. For the case $G = {\mathrm GL}(d)$, this algebra is isomorphic to the algebra of symmetric polynomials in $d$ variables and their inverses: ${\bar {\Bbb Q}}_{\ell}[X^{\pm}_1,\ldots,X^{\pm}_d]^{S_d}$.
The above maps $${\mathrm Sat}:{\mathcal H}(G_k/\!/K_k)\rightarrow{{\bar{\Bbb Q}}_\ell}[X_*]^{W}$$ and $${\mathrm Bern}:{{\bar{\Bbb Q}}_\ell}[X_*]^{W}\rightarrow Z({\mathcal H}(G_k/\!/I_k))$$ are the isomorphisms of algebras constructed by Satake, see [@Satake] and by Bernstein, see [@Lusztig]. It follows immediately from its definition that the Bernstein isomorphism sends the irreducible character $\chi_\lambda$ of highest weight $\lambda$ to $${\mathrm Bern}(\chi_\lambda)=\sum_{\lambda'\leq\lambda}m_\lambda(\lambda')
z_{\lambda'}.$$
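For example, for $G={\mathrm GL}(2)$ and $\lambda=(2,0)$, the irreducible representation of highest weight $\lambda$ has weights $(2,0)$, $(1,1)$ and $(0,2)$, each with multiplicity one; the dominant coweights $\lambda'\leq\lambda$ are $(2,0)$ and $(1,1)$, so that $${\mathrm Bern}(\chi_{(2,0)})=z_{(2,0)}+z_{(1,1)}.$$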
The horizontal map $$Z({\mathcal H}(G_k/\!/I_k))\rightarrow {\mathcal H}(G_k/\!/K_k)$$ is defined by $f\mapsto f*{\Bbb I}_K$ where $$f*{\Bbb I}_K(g)=\int_{G_k}f(gh^{-1}){\Bbb I}_K(h) \,d h.$$
The next statement seems to be known to the experts. It can be deduced easily, see [@Haines], from results of Lusztig [@Lusztig] and Kato [@Kato]. Another proof can be found in an article of Dat [@Dat].
The above triangle is commutative.
It follows that the horizontal map is an isomorphism, and that $(-1)^{2\langle \rho\, , \, \lambda \rangle} \sum_{\lambda' \leq \lambda}m_{\lambda}(\lambda')z_{\lambda'}$ is the unique element in $Z({\mathcal H}(G_k/\!/I_k))$ whose image in ${\mathcal H}(G_k/\!/K_k)$ has Satake transform $(-1)^{2\langle \rho\, , \, \lambda \rangle} \chi_{\lambda}$.
Thus, in order to prove Theorem 11, it now suffices to prove the following two statements.
The function $\tau^{ss}_{{\mathrm R}\Psi^M({\mathcal A}_{\lambda,\eta})}$ lies in the center $Z({\mathcal H}(G_k/\!/I_k))$ of the algebra ${\mathcal H}(G_k/\!/I_k)$.
The Satake transform of $\tau^{ss}_{{\mathrm R}\Psi^M({\mathcal A}_{\lambda,\eta})}*{\Bbb I}_K$ is equal to $(-1)^{2\langle\rho,\lambda\rangle }\chi_\lambda$, where $\chi_\lambda$ is the irreducible character of highest weight $\lambda$.
In fact we can reformulate Proposition 14 in such a way that it becomes independent of Proposition 13. We will prove Proposition 14 in the next section.
In order to prove Proposition 13, we have to adapt Lusztig’s construction of geometric convolution to our context. This will be done in section 7. The proof of Proposition 13 itself will be given in section 8.
Proof of Proposition 14
=======================
Averaging by $K$
----------------
The map $$Z({\mathcal H}(G_k/\!/I_k))\rightarrow {\mathcal H}(G_k/\!/K_k)$$ defined by $f\mapsto f*{\Bbb I}_K$ can be obviously extended to a map $$C_c(G_k/I_k)\rightarrow C_c(G_k/K_k)$$ where $C_c(G_k/I_k)$ (resp. $C_c(G_k/K_k)$) is the space of functions with compact support in $G_k$ invariant on the right by $I_k$ (resp. $K_k$). This map can be rewritten as follows $$f*{\Bbb I}_K(g)=\sum_{h\in K_k/I_k} f(gh).$$
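The sum here is finite: reduction modulo $t$ identifies $K_k/I_k$ with the set of ${\Bbb F}_q$-points of the flag variety of $G$; for $G={\mathrm GL}(d)$, for instance, this set has cardinality $\prod_{i=1}^{d}{q^i-1\over q-1}$.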
Therefore, this operation corresponds to summing along the fibres of the map $G_k/I_k\rightarrow G_k/K_k$. For the particular function $\tau^{ss}_{{\mathrm R}\Psi^M({\mathcal A}_{\lambda,\eta})}$, it amounts to summing along the fibres of the map $$\pi(k):M_{n_\pm}(k)\rightarrow {\mathrm Grass}_{n_\pm}(k),$$ (see Lemma 6).
By using now the sheaf-function dictionary for semi-simple trace, we get $$\tau^{ss}_{{\mathrm R}\Psi^M({\mathcal A}_{\lambda,\eta})}*{\Bbb I}_K
=\tau^{ss}_{{\mathrm R}\pi_{{\bar s},*}{\mathrm R}\Psi^M({\mathcal A}_{\lambda,\eta})}.$$ The nearby cycle functor commutes with direct image by a proper morphism, so that $${\mathrm R}\pi_{{\bar s},*}{\mathrm R}\Psi^M({\mathcal A}_{\lambda,\eta})
={\mathrm R}\Psi^{{\mathrm Grass}}{\mathrm R}\pi_{\eta,*}({\mathcal A}_{\lambda,\eta}).$$ By Lemma 4, $\pi_{\eta}$ is an isomorphism. Consequently, ${\mathrm R}\pi_{\eta,*}({\mathcal A}_{\lambda,\eta})={\mathcal A}_{\lambda,\eta}$.
According to the description of ${\mathrm Grass} = {\mathrm Grass}_{n_\pm}$ (see subsections 2.3 and 2.5), we can prove that ${\mathrm R}\Psi^{{\mathrm Grass}}{\mathcal A}_{\lambda,\eta}={\mathcal A}_{\lambda,{\bar s}}$ (note that the complex ${\mathcal A}_{\lambda,\eta}$ over ${\mathrm Grass}_{\eta}$ can be extended in a canonical fashion to a complex ${\mathcal A}_{\lambda}$ over the $S$-scheme ${\mathrm Grass}$, thus ${\mathcal A}_{\lambda, {\bar s}}$ makes sense). In particular, the inertia subgroup $\Gamma_0$ acts trivially on ${\mathrm R}\Psi^{{\mathrm Grass}}{\mathcal A}_{\lambda,\eta}$ and the semi-simple trace is just the ordinary trace. The proof of a more general statement will be given in the following appendix.
By putting together the above equalities, we obtain $${\mathrm R}\pi_{{\bar s},*}{\mathrm R}\Psi^M({\mathcal A}_{\lambda,\eta})={\mathcal A}_{\lambda,s}.$$
To conclude the proof of Proposition 14, we quote an important theorem of Lusztig and Kato, see [@Lusztig] and [@Kato]. We remark that Ginzburg and also Mirkovic and Vilonen have put this result in its natural framework : a Tannakian equivalence, see [@Ginzburg],[@Mirkovic-Vilonen].
\[Lusztig, Kato\] The Satake transform of the function $\tau^{ss}_{{\mathcal A}_{\lambda,s}}$ is equal to $${\mathrm Sat}(\tau^{ss}_{{\mathcal A}_{\lambda,s}})=(-1)^{2\langle\rho,\lambda\rangle }\chi_\lambda$$ where $\chi_\lambda$ is the irreducible character of highest weight $\lambda$.
Appendix
--------
The material in this appendix seems to be well known to experts. We thank G. Laumon, who kindly explained it to us.
Let us consider the following situation.
Let $X$ be a proper scheme over $S$ equipped with an action of a group scheme $J$ smooth over $S$. We suppose there is a stratification $$X=\coprod_{\alpha\in\Delta} X_\alpha$$ with each stratum $X_\alpha$ smooth over $S$. We assume that the group scheme $J$ acts transitively on all fibers of $X_\alpha$. Moreover, we suppose there exists, for each $\alpha$, a $J$-equivariant resolution of singularities ${\tilde X}_\alpha$ $$\pi_\alpha:{\tilde X}_\alpha\rightarrow{\bar X}_\alpha$$ of the closure ${\bar X}_\alpha$ of $X_\alpha$, such that this resolution ${\tilde X}_\alpha$, smooth over $S$, contains $X_\alpha$ as a Zariski open subscheme; the complement ${\tilde X}_\alpha-X_\alpha$ is also supposed to be a union of normal crossing divisors.
If $X$ is an invariant subscheme of the affine Grassmannian or of the affine flag variety, we can use the Demazure resolution.
Let $i_\alpha$ denote the inclusion map $X_\alpha\rightarrow X$ and let ${\mathcal F}_\alpha$ denote $i_{\alpha,!}{{\bar{\Bbb Q}}_\ell}$. A complex of sheaves ${\mathcal F}$ is said to be $\Delta$-constant if its cohomology sheaves are successive extensions of ${\mathcal F}_\alpha$ with $\alpha\in\Delta$. The intersection complex of ${\bar X}_\alpha$ is $\Delta$-constant.
For an $\ell$-adic complex ${\mathcal F}$ of sheaves on $X$, there exists a canonical morphism $${\mathcal F}_{\bar s}\rightarrow{\mathrm R}\Psi^X({\mathcal F}_\eta)$$ whose mapping cone is the vanishing cycle complex ${\mathrm R}\Phi^X({\mathcal F})$.
If ${\mathcal F}$ is a $\Delta$-constant bounded complex, then ${\mathrm R}\Phi^X({\mathcal F})=0$.
Clearly, it suffices to prove ${\mathrm R}\Phi^X({\mathcal F}_\alpha)=0$. Consider the equivariant resolution $\pi_\alpha:{\tilde X}_\alpha\rightarrow {\bar X}_\alpha$. We have a canonical isomorphism $${\mathrm R}\pi_{\alpha,*}{\mathrm R}\Phi^{{\tilde X}_\alpha}({\mathcal F}_\alpha)
\ident{\mathrm R}\Phi^{{\bar X}_\alpha}({\mathcal F}_\alpha).$$ It then suffices to prove ${\mathrm R}\Phi^{{\tilde X}_\alpha}({\mathcal F}_\alpha)=0$. This is known because ${\tilde X}_\alpha$ is smooth over $S$ and ${{\tilde X}_\alpha}-X_\alpha$ is a union of normal crossing divisors. $\square$
If ${\mathcal F}$ is $\Delta$-constant and bounded, the inertia group $\Gamma_0$ acts trivially on the nearby cycle ${\mathrm R}\Psi^X({\mathcal F}_\eta)$.
The morphism ${\mathcal F}_{\bar s}\rightarrow{\mathrm R}\Psi^X({\mathcal F}_\eta)$ is an isomorphism compatible with the actions of $\Gamma$. The inertia subgroup $\Gamma_0$ acts trivially on ${\mathcal F}_{\bar s}$, thus it acts trivially on ${\mathrm R}\Psi^X({\mathcal F}_\eta)$, too. $\square$
Invariant subschemes of $G/I$
=============================
We recall here the well known ind-scheme structure of $G_k/I_k$ where $G$ denotes the group ${\mathrm GL}(d,k(\!(t+\varpi)\!))$ or the group ${\mathrm GSp}(2d,k(\!(t+\varpi)\!))$ and where $I$ is its standard Iwahori subgroup. The variable $t+\varpi$ is used instead of $t$ in order to be compatible with the definitions of local models given in section 2.
Linear case
-----------
Let $N_{n_\pm}$ be the functor which associates to each ${\mathcal O}$-algebra $R$ the set of $${\mathcal L}_\bullet=({\mathcal L}_0\subset{\mathcal L}_1\subset\cdots\subset{\mathcal L}_d=(t+\varpi)^{-1}{\mathcal L}_0)$$ where ${\mathcal L}_0,{\mathcal L}_1,\ldots$ are $R[t]$-submodules of $R[t,t^{-1},(t+\varpi)^{-1}]^d$ such that for $i=0,1,\ldots,d-1$ $$(t+\varpi)^{n_+}{\mathcal V}_{i,R}\subset {\mathcal L}_i\subset (t+\varpi)^{n_-}{\mathcal V}_{i,R}$$ and ${\mathcal L}_i / (t+\varpi)^{n_+}{\mathcal V}_{i,R}$ is locally a direct factor, of fixed rank independent of $i$, of the free $R$-module $(t+\varpi)^{n_-}{\mathcal V}_{i,R} / (t+\varpi)^{n_+}{\mathcal V}_{i,R}$. Obviously, this functor is represented by a closed subscheme in a product of Grassmannians. In particular, $N_{n_\pm}$ is proper.
Let $I_{n_\pm}$ be the functor which associates to each ${\mathcal O}$-algebra $R$ the group of $R[t]$-linear automorphisms of $$(t+\varpi)^{n_--1}R[t]^d/(t+\varpi)^{n_+}R[t]^d$$ fixing the image in this quotient of the filtration $${\mathcal V}_{0,R}\subset{\mathcal V}_{1,R}\subset\cdots\subset {\mathcal V}_{d,R}=(t+\varpi)^{-1}{\mathcal V}_{0,R}.$$ This functor is represented by a smooth group scheme over $S$ which acts on $N_{n_\pm}$.
Symplectic case
---------------
Let $N_{n_\pm}$ be the functor which associates to each ${\mathcal O}$-algebra $R$ the set of sequences $${\mathcal L}_\bullet=({\mathcal L}_0\subset{\mathcal L}_1\subset\cdots\subset{\mathcal L}_d)$$ where ${\mathcal L}_0,{\mathcal L}_1,\ldots$ are $R[t]$-submodules of $R[t,t^{-1},(t+\varpi)^{-1}]^{2d}$ satisfying $$(t+\varpi)^{n_+}{\mathcal V}_{i,R}\subset {\mathcal L}_i\subset (t+\varpi)^{n_-}{\mathcal V}_{i,R}$$ and such that ${\mathcal L}_i / (t+\varpi)^{n_+}{\mathcal V}_{i,R}$ is locally a direct factor of $(t+\varpi)^{n_-}{\mathcal V}_{i,R} / (t+\varpi)^{n_+}{\mathcal V}_{i,R}$ of rank $(n_+ - n_-)d$ for all $i=0,1,\ldots,d$, and ${\mathcal L}_0$ (resp. ${\mathcal L}_d$) is autodual with respect to the symplectic form $(t+\varpi)^{-n_--n_+}\langle\,,\,\rangle $ (resp. $(t+\varpi)^{-n_--n_++1}\langle\,,\,\rangle $).
Let $I_{n_\pm}$ be the functor which associates to each ${\mathcal O}$-algebra $R$ the group of $R[t]$-linear automorphisms of $$(t+\varpi)^{n_--1}R[t]^{2d}/(t+\varpi)^{n_+}R[t]^{2d}$$ fixing the image in this quotient of the filtration $${\mathcal V}_{0,R}\subset{\mathcal V}_{1,R}\subset\cdots\subset {\mathcal V}_{2d,R}=(t+\varpi)^{-1}{\mathcal V}_{0,R},$$ and fixing the symplectic form $(t+\varpi)^{-n_--n_++1}\langle\,,\,\rangle $ up to a unit in $R$. This functor is represented by a smooth group scheme over $S$ which acts on $N_{n_\pm}$.
There is no vanishing cycle on $N$
----------------------------------
It is well known (see [@Mathieu] for instance) that $N = N_{n_\pm}$ admits a stratification by $I_{n_\pm}$-orbits $$N_{n_\pm}=\coprod_{w\in {\tilde W}'(n_\pm)} O_w$$ where ${\tilde W}'(n_\pm)$ is a finite subset of the affine Weyl group ${\tilde W}$ of ${\mathrm GL}(d)$ (resp. ${\mathrm GSp}(2d)$). For all $w\in {\tilde W}'(n_\pm)$, $O_w$ is smooth over $S$ and $I_{n_\pm}$ acts transitively on its geometric fibers. All this remains true if we replace $S$ by any other base scheme.
Let ${\bar O}_w$ denote the closure of $O_w$. Let ${\mathcal I}_{w,\eta}$ (resp. ${\mathcal I}_{w,s}$) denote the intersection complex of ${\bar O}_{w,\eta}$ (resp. ${\bar O}_{w,s}$). We have $${\mathrm R}\Psi^{N}({\mathcal I}_{w,\eta})={\mathcal I}_{w,{\bar s}}$$ (see Appendix 5.2 for a proof). In particular, the inertia subgroup $\Gamma_0$ acts trivially on ${\mathrm R}\Psi^{N}({\mathcal I}_{w,\eta})$.
Let ${\tilde W}$ be the affine Weyl group of ${\mathrm GL}(d)$, respectively ${\mathrm GSp}(2d)$. It can be easily checked that ${\tilde W}=\bigcup_{n_\pm}\ {\tilde W}'(n_\pm)$ for the linear case as well as for the symplectic case.
Convolution product of ${\mathcal A}_\lambda$ with ${\mathcal I}_w$
===================================================================
Convolution diagram
-------------------
In this section, we will adapt a construction due to Lusztig in order to define the convolution product of an equivariant perverse sheaf ${\mathcal A}_\lambda$ over $M_{n_\pm}$ with an equivariant perverse sheaf ${\mathcal I}_w$ over $N_{n'_\pm}$. See Lusztig’s article [@Lusztig1] for a quite general construction.
For any dominant coweight $\lambda$ and any $w\in{\tilde W}$, we can choose $n_\pm$ and $n'_\pm$ so that $\lambda\in\Lambda(n_\pm)$ and $w\in{\tilde W}'(n'_\pm)$. From now on, since $\lambda$ and $w$ as well as $n_\pm$ and $n'_\pm$ are fixed, we will often write $M$ for $M_{n_\pm}$ and $N$ for $N_{n'_\pm}$. This should not cause any confusion.
The aim of this subsection is to construct the convolution diagram à la Lusztig $$\diagramme{ & {\tilde M}\times {\tilde N} & & &\cr
\hfill{}^{p_1}\! \swarrow & &
\searrow\!{}^{p_2}\hfill & & \cr
M\times N & & M\,{\tilde\times}\,N
&\hfld{m}{} & P}$$ with the usual properties that will be made precise later.
Linear case
-----------
- The functor $M\,{\tilde\times}\,N$ associates to each ${\mathcal O}$-algebra $R$ the set of pairs $({\mathcal L}_\bullet,{\mathcal L}'_\bullet)$ $$\displaylines{
{\mathcal L}_\bullet=({\mathcal L}_0\subset{\mathcal L}_1\subset\cdots\subset{\mathcal L}_d=(t+\varpi)^{-1}{\mathcal L}_0)\cr
{\mathcal L}'_\bullet=({\mathcal L}'_0\subset{\mathcal L}'_1\subset\cdots\subset{\mathcal L}'_d=(t+\varpi)^{-1}{\mathcal L}'_0)
}$$ where ${\mathcal L}_i,{\mathcal L}'_i$ are $R[t]$-submodules of $R[t,t^{-1},(t+\varpi)^{-1}]^d$ satisfying the following conditions $$\displaylines{
t^{n_+}{\mathcal V}_{i,R}\subset{\mathcal L}_i\subset t^{n_-}{\mathcal V}_{i,R}\cr
(t+\varpi)^{n'_+}{\mathcal L}_{i}\subset{\mathcal L}'_i\subset (t+\varpi)^{n'_-}{\mathcal L}_{i}}$$ As usual, ${\mathcal L}_i/t^{n_+}{\mathcal V}_{i,R}$ is supposed to be locally a direct factor of $t^{n_-}{\mathcal V}_{i,R}/t^{n_+}{\mathcal V}_{i,R}$, and ${\mathcal L}'_i/(t+\varpi)^{n'_+}{\mathcal L}_{i}$ locally a direct factor of $(t+\varpi)^{n'_-}{\mathcal L}_{i}/(t+\varpi)^{n'_+}{\mathcal L}_{i}$ as $R$-modules. The ranks of the projective $R$-modules ${\mathcal L}_i/t^{n_+}{\mathcal V}_{i,R}$ and ${\mathcal L}'_i/(t+\varpi)^{n'_+}{\mathcal L}_{i}$ are each also supposed to be independent of $i$. It follows from the above conditions that $$t^{n_+}(t+\varpi)^{n'_+}{\mathcal V}_{i,R}\subset{\mathcal L}'_i\subset t^{n_-}(t+\varpi)^{n'_-}{\mathcal V}_{i,R}$$ and ${\mathcal L}'_i/t^{n_+}(t+\varpi)^{n'_+}{\mathcal V}_{i,R}$ is locally a direct factor of $t^{n_-}(t+\varpi)^{n'_-}{\mathcal V}_{i,R}/t^{n_+}(t+\varpi)^{n'_+}{\mathcal V}_{i,R}$ as an $R$-module. Thus defined, the functor $M \, {\tilde \times} \, N$ is represented by a projective scheme over $S$.
- The functor $P$ associates to each ${\mathcal O}$-algebra $R$ the set of chains ${\mathcal L}'_\bullet$ $${\mathcal L}'_\bullet=({\mathcal L}'_0\subset{\mathcal L}'_1\subset\cdots\subset{\mathcal L}'_d=(t+\varpi)^{-1}{\mathcal L}'_0)$$ where ${\mathcal L}'_i$ are $R[t]$-submodules of $R[t,t^{-1},(t+\varpi)^{-1}]^d$ satisfying $$t^{n_+}(t+\varpi)^{n'_+}{\mathcal V}_{i,R}\subset{\mathcal L}'_i\subset
t^{n_-}(t+\varpi)^{n'_-}{\mathcal V}_{i,R}$$ and the usual conditions “locally a direct factor as $R$-modules”. As above, ${\rm rk}_R({\mathcal L}'_i/t^{n_+}(t+\varpi)^{n'_+}{\mathcal V}_{i,R})$ is supposed to be independent of $i$. Obviously, this functor is represented by a projective scheme over $S$.
- The forgetful map $m({\mathcal L}_\bullet,{\mathcal L}'_\bullet)={\mathcal L}'_\bullet$ yields a morphism $$m:M\,{\tilde\times}\,N\rightarrow P.$$ This map is well defined: it suffices to note that $t^{n_-}(t+\varpi)^{n'_-}{\mathcal V}_{i,R} / {\mathcal L}'_i$ is locally free as an $R$-module, being an extension of $t^{n_-}{\mathcal V}_{i,R} / {\mathcal L}_i$ by $(t+\varpi)^{n'_-}{\mathcal L}_i / {\mathcal L}'_i$, each of which is locally free. Clearly, this morphism is a proper morphism because its source and its target are proper schemes over $S$.
Now before we can construct the schemes ${\tilde M}$, ${\tilde N}$, and the remaining morphisms in the convolution diagram, we need the following simple remark.
The functor which associates to each ${\cal O}$-algebra $R$ the set of matrices $g\in{\frak{gl}}_s(R)$ such that the image of $g:R^s\rightarrow R^s$ is locally a direct factor of rank $r$ of $R^s$ is representable by a locally closed subscheme of ${\frak{gl}}_s$.
For $1\leq i\leq s$, denote by ${\rm St}_i$ the closed subscheme of ${\frak{gl}}_s$ defined by the equations: all minors of order at least $i+1$ vanish. By using Nakayama’s lemma, one can see easily that the above functor is represented by the quasi-affine, locally closed subscheme ${\rm St}_r-{\rm St}_{r-1}$ of ${\frak{gl}}_s$. $\square$
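For instance, for $s=2$ and $r=1$ (with the convention that ${\rm St}_0$ is defined by the vanishing of all entries), ${\rm St}_1$ is the determinant hypersurface in ${\frak{gl}}_2$, and ${\rm St}_1-{\rm St}_0$ is the locally closed subscheme of nonzero $2\times 2$ matrices with vanishing determinant; over a field these are exactly the matrices of rank $1$.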
Now let ${\bar{\mathcal V}}_0\subset{\bar{\mathcal V}}_1\subset\cdots$ be the image of ${\mathcal V}_0\subset{\mathcal V}_1\subset\cdots$ in the quotient $${\bar{\mathcal V}}=t^{n_-}(t+\varpi)^{n'_--1}{\mathcal O}[t]^d/
t^{n_+}(t+\varpi)^{n'_+}{\mathcal O}[t]^d.$$ Let ${\bar{\mathcal L}}_0\subset{\bar{\mathcal L}}_1\subset\cdots$ be the images of ${\mathcal L}_0\subset{\mathcal L}_1\subset\cdots$ in the quotient ${\bar{\mathcal V}}_{R}={\bar{\mathcal V}}\otimes_{\mathcal O} R$. Because ${\mathcal L}_i$ is completely determined by ${\bar{\mathcal L}}_i$, we can write ${\bar{\mathcal L}}_\bullet\in M(R)$ for ${\mathcal L}_\bullet\in M(R)$ and so on.
- We consider the functor ${\tilde M}$ which associates to each ${\mathcal O}$-algebra $R$ the set of $R[t]$-endomorphisms $g\in{\mathrm End}({\bar{\mathcal V}}_{R})$ such that if ${\bar{\mathcal L}}_i=g\bigl(t^{n_-}{\bar{\mathcal V}}_i\bigr)$ then $$t^{n_+}{\bar{\mathcal V}}_{i,R}\subset{\bar{\mathcal L}}_i\subset
t^{n_-}{\bar{\mathcal V}}_{i,R}$$ and ${\bar{\mathcal L}}_i/t^{n_+}{\bar{\mathcal V}}_{i,R}$ is locally a direct factor of $t^{n_-}{\bar{\mathcal V}}_{i,R}/t^{n_+}{\bar{\mathcal V}}_{i,R}$, of the same rank, for all $i=0,\ldots,d-1$. Using Lemma 18 one sees that this functor is representable and comes naturally with a morphism $p:{\tilde M}\rightarrow M$.
- In a totally analogous way, we consider the functor ${\tilde N}$ which associates to each ${\mathcal O}$-algebra $R$ the set of $R[t]$-endomorphisms $g\in{\mathrm End}({\bar{\mathcal V}}_R)$ such that if ${\bar{\mathcal L}}_i=g\bigl((t+\varpi)^{n'_-}{\bar{\mathcal V}}_{i,R}\bigr)$ then $$(t+\varpi)^{n'_+}{\bar{\mathcal V}}_{i,R}\subset{\bar{\mathcal L}}_i\subset
(t+\varpi)^{n'_-}{\bar{\mathcal V}}_{i,R}$$ and ${\bar{\mathcal L}}_i/(t+\varpi)^{n'_+}{\bar{\mathcal V}}_{i,R}$ is locally a direct factor of $(t+\varpi)^{n'_-}{\bar{\mathcal V}}_{i,R}/(t+\varpi)^{n'_+}{\bar{\mathcal V}}_{i,R}$, of the same rank for all $i=0,\ldots,d-1$. As above, the representability follows from Lemma 18. This functor comes naturally with a morphism $p':{\tilde N}\rightarrow N$.
- Now we define the morphism $p_1:{\tilde M}\times{\tilde N}
\rightarrow {M}\times N$ by $p_1=p\times p'$.
- We define the morphism $p_2:{\tilde M}\times{\tilde N}
\rightarrow {M}\,{\tilde\times}\, N$ by $p_2(g,g')=({\mathcal L}_\bullet,{\mathcal L}'_\bullet)$ with $$({\mathcal L}_\bullet,{\mathcal L}'_\bullet)=(g(t^{n_-}{\mathcal V}_\bullet),
gg'(t^{n_-}(t+\varpi)^{n'_-}{\mathcal V}_\bullet)).$$
We have now completed the construction of the convolution diagram. We need to prove some standard facts related to this diagram.
The morphisms $p_1$ and $p_2$ are smooth and surjective. Their relative dimensions are equal.
The proof is very similar to that of Lemma 3. Let us note that the morphism $p:{\tilde M}\rightarrow M$ can be factored as $p=f\circ j$ where $j:{\tilde M}\rightarrow U$ is an open immersion and $f:U\rightarrow M$ is the vector bundle defined as follows. For any ${\mathcal O}$-algebra $R$ and any ${\mathcal L}_\bullet\in M(R)$, the fibre of $U$ over ${\mathcal L}_\bullet$ is the $R$-module $$U({\mathcal L}_\bullet)=\bigoplus_{i=0}^{d-1}
(t+\varpi)^{n'_-}{\mathcal L}_i/t^{n_+}(t+\varpi)^{n'_+}{\mathcal V}_{i,R}.$$ The morphisms $p',p_1$ and $p_2$ can be described in the same manner. The equality of relative dimensions of $p_1$ and $p_2$ follows from Lemma 23 (proved in section 8) and the fact that they are each smooth. $\square$
Just as in subsection 2.2, we can consider the group valued functor ${\tilde J}$ which associates to each ${\mathcal O}$-algebra $R$ the group of $R[t]$-linear automorphisms of ${\bar{\mathcal V}}_{R}$ which fix the filtration ${\bar{\mathcal V}}_0\subset{\bar{\mathcal V}}_1\subset\cdots\subset{\bar{\mathcal V}}_d$. Obviously, this functor is represented by an affine algebraic group scheme over $S$. The same argument as in the proof of Lemma 3 shows that ${\tilde J}$ is smooth over $S$. Moreover, there are canonical morphisms of $S$-group schemes ${\tilde J} \rightarrow J$ and ${\tilde J} \rightarrow I$, where $J = J_{n_\pm}$ (resp. $I = I_{n'_\pm}$) is the group scheme defined in subsection 2.2 (resp. 6.1).
- We consider the action $\alpha_1$ of ${\tilde J} \times {\tilde J}$ on ${\tilde M}\times{\tilde N}$ defined by $$\alpha_1(h,h';g,g')=(gh^{-1},g'{h'}^{-1}).$$ Clearly, this action leaves stable the fibres of $p_1:{\tilde M}\times{\tilde N}\rightarrow
{M}\times{N}$.
- We also consider the action $\alpha_2$ of ${\tilde J}\times {\tilde J}$ on the same ${\tilde M}\times{\tilde N}$ defined by $$\alpha_2(h,h';g,g')=(gh^{-1},hg'{h'}^{-1}).$$ Clearly, this action leaves stable the fibres of $p_2:{\tilde M}\times{\tilde N}\rightarrow
{M}\,{\tilde\times}\,{N}$.
The action $\alpha_1$, respectively $\alpha_2$, is transitive on all geometric fibres of $p_1$, respectively $p_2$. The geometric fibres of $p_1$, respectively $p_2$, are therefore connected.
Let $E$ be a (separably closed) field containing the fraction field $F$ of ${\mathcal O}$ or its residue field $k$. Let $g,g'$ be elements of ${\tilde M}(E)$ such that $${\mathcal L}_\bullet=p(g)=p(g')\in M(E).$$
For all $i=0,\ldots,d-1$, denote by ${\hat{\mathcal V}}_i$ and ${\hat{\mathcal L}}_i$ the localized modules $$\displaylines{
{\hat{\mathcal V}}_i={\mathcal V}_i\otimes_{{\mathcal O}[t]} E[t]_{(t(t+\varpi))}\cr
{\hat{\mathcal L}}_i={\mathcal L}_i\otimes_{E[t]} E[t]_{(t(t+\varpi))}}$$ where $E[t]_{(t(t+\varpi))}$ is the localization of $E[t]$ at the multiplicative set ${\mathcal S} = E[t] - \bigl((t) \cup (t+\varpi)\bigr)$, i.e., the ring ${\mathcal S}^{-1}E[t]$; this is a semi-local ring. Of course, we can consider the modules ${\hat{\mathcal V}}_i$ and ${\hat{\mathcal L}}_i$ as $E[t]_{(t(t+\varpi))}$-submodules of $E(t)^d$.
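As an aside (a remark added here for the reader's convenience, not part of the original argument): if $E$ contains $F$, then $\varpi\neq 0$ in $E$ and this semi-local ring has exactly two maximal ideals, generated by $t$ and by $t+\varpi$; if instead $E$ contains $k$, then $\varpi=0$ in $E$ and $$E[t]_{(t(t+\varpi))}=E[t]_{(t)},$$ the local ring of $E[t]$ at the ideal $(t)$.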
Clearly, we have an isomorphism $${\bar{\mathcal V}}_{E}=
t^{n_-}(t+\varpi)^{n'_--1}{\hat{\mathcal V}}_0/t^{n_+}(t+\varpi)^{n'_+}{\hat{\mathcal V}}_0$$ so that $E[t]$-endomorphisms of ${\bar{\mathcal V}}_{E}$ are the same as $E[t]_{(t(t+\varpi))}$-endomorphisms of ${\hat{\mathcal V}}_0$ taken modulo $t^{n_+-n_-}(t+\varpi)^{n'_+-n'_-+1}$.
By Nakayama’s lemma, $g$ and $g'$ can be lifted to $${\hat g}, {\hat g}'\in{\mathrm GL}(d,E(t))$$ such that $${\hat{\mathcal L}}_i={\hat g}t^{n_-}{\hat{\mathcal V}}_i\ ; \
{\hat{\mathcal L}}_i={\hat g}'t^{n_-}{\hat{\mathcal V}}_i.$$ Setting ${\hat h}={\hat g}^{-1}{\hat g}'$, this of course implies ${\hat h}{\hat{\mathcal V}}_i={\hat{\mathcal V}}_i$ for all $i=0,\ldots,d-1$.
Let $h$ be the reduction modulo $t^{n_+-n_-}(t+\varpi)^{n'_+-n'_-+1}$ of ${\hat h}$. It is clear that $g'=gh$ and $h$ lies in ${\tilde J}(E)$.
We have proved that ${\tilde J}$ acts transitively on the geometric fibres of ${\tilde M}\rightarrow M$. We can prove in a completely similar way that ${\tilde J}$ acts transitively on geometric fibres of ${\tilde N}\rightarrow N$. Consequently, the action $\alpha_1$ is transitive on geometric fibres of $p_1$.
The proof of the statement for $\alpha_2$ and $p_2$ is similar. $\square$
The symmetric construction yields the following diagram $$\diagramme{ & {\tilde N}\times {\tilde M} & & &\cr
\hfill{}^{p'_1}\! \swarrow & &
\searrow\!{}^{p'_2}\hfill & & \cr
N\times M & & N\,{\tilde\times}\,M
&\hfld{m'}{} & P}$$ enjoying the same structures and properties. More precisely, we define $N \, {\tilde \times} \, M$ as follows: for each ${\mathcal O}$-algebra $R$, let $(N \, {\tilde \times} \, M)(R)$ be the set of pairs $({\mathcal L}'_\bullet,{\mathcal L}_\bullet)$ $$\displaylines{
{\mathcal L}'_\bullet=({\mathcal L}'_0\subset{\mathcal L}'_1\subset\cdots\subset{\mathcal L}'_d=(t+\varpi)^{-1}{\mathcal L}'_0)\cr
{\mathcal L}_\bullet=({\mathcal L}_0\subset{\mathcal L}_1\subset\cdots\subset{\mathcal L}_d=(t+\varpi)^{-1}{\mathcal L}_0)
}$$ where ${\mathcal L}'_i,{\mathcal L}_i$ are $R[t]$-submodules of $R[t,t^{-1},(t+\varpi)^{-1}]^d$ satisfying the following conditions $$\displaylines{
(t+\varpi)^{n'_+}{\mathcal V}_{i,R}\subset{\mathcal L}'_i\subset (t+\varpi)^{n'_-}{\mathcal V}_{i,R}\cr
t^{n_+}{\mathcal L}'_i\subset{\mathcal L}_i\subset t^{n_-}{\mathcal L}'_i}$$ such that for each $i=0,\ldots,d-1$, the $R$-module ${\mathcal L}'_i / (t+\varpi)^{n'_+}{\mathcal V}_{i,R}$ is locally a direct factor of $(t+\varpi)^{n'_-}{\mathcal V}_{i,R} / (t+\varpi)^{n'_+}{\mathcal V}_{i,R}$, and the $R$-module ${\mathcal L}_i/t^{n_+}{\mathcal L}'_i$ is locally a direct factor of $t^{n_-}{\mathcal L}'_i/t^{n_+}{\mathcal L}'_i$. It is also supposed that ${\rm rk}_R({\mathcal L}'_i / (t+\varpi)^{n'_+}{\mathcal V}_{i,R})$ and ${\rm rk}_R({\mathcal L}_i /t^{n_+}{\mathcal L}'_i)$ are independent of $i$.
The morphisms $p'_1$, $p'_2$, and $m'$ are defined in the obvious way: $p'_1 = p' \times p$, $m'({\mathcal L}'_\bullet, {\mathcal L}_\bullet) = {\mathcal L}_\bullet$, and $p'_2(g',g) = \bigl(g'(t+\varpi)^{n'_-}{\mathcal V}_{\bullet,R} \, , \, g'g\,t^{n_-}(t+\varpi)^{n'_-}{\mathcal V}_{\bullet,R}\bigr)$.
Symplectic case
---------------
In this section we construct the symplectic analog of the convolution diagram just discussed. In particular we need to define the schemes $M \, {\tilde \times} \, N$, ${\tilde M}$, ${\tilde N}$, $P$, and the morphisms $p_1$, $p_2$, and $m$. Moreover we need to construct the smooth group scheme ${\tilde J}$ which acts on the whole convolution diagram. Once this is done, defining the symplectic analogues of the actions $\alpha_1$ and $\alpha_2$, proving the symplectic analogues of Lemmas 19 and 20, and defining the symmetric construction are all straightforward tasks and will be left to the reader.
- The functor $M\,{\tilde\times}\,N$ associates to each ${\mathcal O}$-algebra $R$ the set of pairs $({\mathcal L}_\bullet,{\mathcal L}'_\bullet)$ $$\displaylines{
{\mathcal L}_\bullet=({\mathcal L}_0\subset{\mathcal L}_1\subset\cdots\subset{\mathcal L}_d)\cr
{\mathcal L}'_\bullet=({\mathcal L}'_0\subset{\mathcal L}'_1\subset\cdots\subset{\mathcal L}'_d)}$$ where ${\mathcal L}_i,{\mathcal L}'_i$ are $R[t]$-submodules of $R[t,t^{-1},(t+\varpi)^{-1}]^{2d}$ satisfying the following conditions $$\displaylines{
t^{n_+}{\mathcal V}_{i,R}\subset{\mathcal L}_i\subset t^{n_-}{\mathcal V}_{i,R}\cr
(t+\varpi)^{n'_+}{\mathcal L}_{i}\subset{\mathcal L}'_i\subset (t+\varpi)^{n'_-}{\mathcal L}_{i}}$$ and satisfying the usual “locally direct factors as $R$-modules” conditions: ${\mathcal L}_i / t^{n_+}{\mathcal V}_{i,R}$ is locally a direct factor of $t^{n_-}{\mathcal V}_{i,R} / t^{n_+}{\mathcal V}_{i,R}$ of rank $(n_+ - n_-)d$ and ${\mathcal L}'_i / (t+\varpi)^{n'_+}{\mathcal L}_i$ is locally a direct factor of $(t+\varpi)^{n'_-}{\mathcal L}_i / (t+\varpi)^{n'_+}{\mathcal L}_i$ of rank $(n'_+ - n'_-)d$. Moreover, we suppose that ${\mathcal L}_0$, ${\mathcal L}_d$, ${\mathcal L}'_0$ and ${\mathcal L}'_d$ are autodual with respect to $t^{-n_--n_+}\langle\,,\,\rangle $, $t^{-n_--n_+}(t+\varpi)\langle\,,\,\rangle $, $t^{-n_--n_+}(t+\varpi)^{-n'_--n'_+}\langle\,,\,\rangle $ and $t^{-n_--n_+}(t+\varpi)^{-n'_--n'_++1}\langle\,,\,\rangle $ respectively.
- The functor $P$ associates to each ${\mathcal O}$-algebra $R$ the set of chains ${\mathcal L}'_\bullet$ $${\mathcal L}'_\bullet=({\mathcal L}'_0\subset{\mathcal L}'_1\subset\cdots\subset{\mathcal L}'_d)$$ where ${\mathcal L}'_i$ are $R[t]$-submodules of $R[t,t^{-1},(t+\varpi)^{-1}]^{2d}$ satisfying $$t^{n_+}(t+\varpi)^{n'_+}{\mathcal V}_{i,R}\subset{\mathcal L}'_i\subset
t^{n_-}(t+\varpi)^{n'_-}{\mathcal V}_{i,R},$$ such that the usual “locally a direct factor as $R$-modules of rank $(n_+ - n_- + n'_+ - n'_-)d$” condition holds, and such that ${\mathcal L}'_0$ and ${\mathcal L}'_d$ are autodual with respect to $t^{-n_--n_+}(t+\varpi)^{-n'_--n'_+}\langle\,,\,\rangle $ and $t^{-n_--n_+}(t+\varpi)^{-n'_--n'_++1}\langle\,,\,\rangle $ respectively.
- The forgetting map $m({\mathcal L}_\bullet,{\mathcal L}'_\bullet)={\mathcal L}'_\bullet$ yields a morphism $m:M\,{\tilde\times}\,N\rightarrow P$. Clearly, $m$ is a proper morphism between proper $S$-schemes.
- We consider the functor ${\tilde M}$ which associates to each ${\mathcal O}$-algebra $R$ the set of $R[t]$-endomorphisms $g$ of $${\bar{\mathcal V}}_{R}=t^{n_-}(t+\varpi)^{n'_--1}{\mathcal V}_{0,R}/
t^{n_+}(t+\varpi)^{n'_+}{\mathcal V}_{0,R}$$ satisfying $$\langle gx,gy\rangle = c_g t^{n_+-n_-}\langle x,y\rangle$$ for some $c_g \in R^{\times}$, and such that if ${\bar{\mathcal L}}_i=g\bigl(t^{n_-}{\bar {\mathcal V}}_i\bigr)$ for $i=0,\ldots,d$, then we have $$t^{n_+}{\bar{\mathcal V}}_{i,R}\subset{\bar{\mathcal L}}_i\subset
t^{n_-}{\bar{\mathcal V}}_{i,R},$$ and ${\bar {\mathcal L}}_i / t^{n_+}{\bar {\mathcal V}}_{i,R}$ is locally a direct factor of $t^{n_-}{\bar {\mathcal V}}_{i,R} / t^{n_+}{\bar {\mathcal V}}_{i,R}$ of rank $(n_+ - n_-)d$. If $g\in {\tilde M}(R)$ then one sees from the definitions that automatically ${\bar{\mathcal L}}_\bullet=g\,t^{n_-}{\bar{\mathcal V}}_{\bullet,R}\in M(R)$. The functor ${\tilde M}$ is representable and comes naturally with a morphism $p:{\tilde M}\rightarrow M$.
- Next consider the functor ${\tilde N}$ which associates to each ${\mathcal O}$-algebra $R$ the set of $R[t]$-endomorphisms $g$ of ${\bar {\mathcal V}}_R$ satisfying $$\langle gx,gy\rangle = c_g (t+\varpi)^{n'_+ - n'_-}\langle x,y \rangle$$ for some $c_g \in R^{\times}$ and such that if ${\bar {\mathcal L}}'_i = g(t+\varpi)^{n'_-}{\bar {\mathcal V}}_{i,R}$ for $i=0,\ldots,d$ then we have $$(t+\varpi)^{n'_+}{\bar {\mathcal V}}_{i,R} \subset {\bar {\mathcal L}}'_i \subset (t+\varpi)^{n'_-}{\bar {\mathcal V}}_{i,R},$$ and ${\bar {\mathcal L}}'_i / (t+\varpi)^{n'_+}{\bar {\mathcal V}}_{i,R}$ is locally a direct factor of $(t+\varpi)^{n'_-}{\bar {\mathcal V}}_{i,R} / (t+\varpi)^{n'_+}{\bar {\mathcal V}}_{i,R}$ of rank $(n'_+ - n'_-)d$. From the definitions one sees that ${\bar {\mathcal L}}'_\bullet \in N(R)$. The functor ${\tilde N}$ is representable and comes with a morphism $p': {\tilde N} \rightarrow N$.
- We define $p_1 = p \times p'$. We define $p_2 : {\tilde M} \times {\tilde N} \rightarrow M {\tilde \times} N$ exactly as in the linear case.
- We let ${\tilde J}$ denote the functor which associates to any ${\mathcal O}$-algebra $R$ the group of $R[t]$-linear automorphisms of ${\bar {\mathcal V}}_{R}$ which fix the form $t^{-n_- - n_+}(t+\varpi)^{-n'_- - n'_+ + 1}\langle \, , \, \rangle$ up to an element in $R^{\times}$. As in Lemma 3, the group scheme ${\tilde J}$ is smooth over $S$. There are canonical $S$-group scheme morphisms ${\tilde J} \rightarrow J$ and ${\tilde J} \rightarrow I$, where $J = J_{n_\pm}$ (resp. $I = I_{n'_\pm}$) was defined in subsection 2.5 (resp. 6.2).
Definition of the convolution product
-------------------------------------
Let us recall the standard definition of convolution product due to Lusztig [@Lusztig1] (see also [@Ginzburg] and [@Mirkovic-Vilonen]).
Let $E$ be a field containing the fraction field $F$ of ${\mathcal O}$ or its residue field $k$ and let $\epsilon={\mathrm Spec\,}(E)\rightarrow S$ be the corresponding morphism. For all $S$-schemes $X$, let $X_\epsilon$ denote the base change $X\times_S\epsilon$.
Let ${\mathcal A}$ be a perverse sheaf over $M_{\epsilon}$ that is $J_{\epsilon}$-equivariant. Let ${\mathcal I}$ be a perverse sheaf over $N_{\epsilon}$ that is $I_{\epsilon}$-equivariant. Both $I_{\epsilon}$ and $J_{\epsilon}$ are quotients of ${\tilde J}_{\epsilon}$, so we can say that ${\mathcal A}$ and ${\mathcal I}$ are ${\tilde J}_{\epsilon}$-equivariant.
Since $p_1$ is a smooth morphism, the pull-back $p_1^*({\mathcal A}\boxtimes_\epsilon{\mathcal I})$ is also perverse up to the shift by the relative dimension of $p_1$. A priori, this pull-back is only $\alpha_1$-equivariant. As ${\mathcal A}$ and ${\mathcal I}$ are ${\tilde J}_{\epsilon}$-equivariant, $p_1^*({\mathcal A}\boxtimes_\epsilon{\mathcal I})$ is also $\alpha_2$-equivariant. Since $p_2$ is smooth and the action $\alpha_2$ is transitive on its geometric fibres, there exists a perverse sheaf ${\mathcal A}\,{\tilde\boxtimes}_\epsilon{\mathcal I}$, unique up to unique isomorphism, such that $$p_1^*({\mathcal A}\boxtimes_\epsilon{\mathcal I})=p_2^*({\mathcal A}\,{\tilde\boxtimes}_\epsilon{\mathcal I})$$ by Theorem 4.2.5 of Beilinson-Bernstein-Deligne [@BBD]. We now put $${\mathcal A}*_\epsilon{\mathcal I}=
{\mathrm R} m_{*}({\mathcal A}\,{\tilde\boxtimes}_\epsilon{\mathcal I}).$$ By the symmetric construction, we can define the convolution product ${\mathcal I}*_\epsilon{\mathcal A}$.
Let $E$ now be the algebraic closure ${\bar k}$ of the residue field $k$. We suppose that the perverse sheaves ${\mathcal A}$ and ${\mathcal I}$ are equipped with an action of ${\mathrm Gal}({\bar F}/F)$ compatible with the action of ${\mathrm Gal}({\bar F}/F)$ on the geometric special fibre through ${\mathrm Gal}({\bar k}/k)$. In practice, the inertia subgroup $\Gamma_0$ acts trivially on ${\mathcal I}$ and nontrivially on ${\mathcal A}$. As the semi-simple trace provides a sheaf-function dictionary, we have: $$\displaylines{
\tau^{ss}_{\mathcal A}*\tau^{ss}_{\mathcal I}=\tau^{ss}_{{\mathcal A}\,*_{\bar s}\,{\mathcal I}}\cr
\tau^{ss}_{\mathcal I}*\tau^{ss}_{\mathcal A}=\tau^{ss}_{{\mathcal I}\,*_{\bar s}\,{\mathcal A}}}$$ where the convolution on the left-hand side is the ordinary convolution in the Hecke algebra ${\mathcal H}(G_k/\!/I_k)$.
Proof of Proposition 13
=======================
Cohomological part
------------------
According to the sheaf-function dictionary for semi-simple traces, it suffices to prove the following statement. Beilinson and Gaitsgory have proved a related result in the equal characteristic case, using a deformation of the affine Grassmannian of $G$, see [@Gaitsgory].
We have an isomorphism $${\mathrm R}\Psi^M({\mathcal A}_{\lambda,\eta})*_{\bar s}{\mathcal I}_{w,{\bar s}}\ident
{\mathcal I}_{w,{\bar s}}*_{\bar s}{\mathrm R}\Psi^M({\mathcal A}_{\lambda,\eta}).$$
The above statement makes sense because the functor ${\mathrm R}\Psi$ sends perverse sheaves to perverse sheaves, by a theorem of Gabber, see [@Illusie]. In particular, ${\mathrm R}\Psi^M({\mathcal A}_{\lambda,\eta})$ is a perverse sheaf.
Let us recall that $${\mathrm R}\Psi^N({\mathcal I}_{w,\eta})\ident{\mathcal I}_{w,s}$$ so that we have to prove $${\mathrm R}\Psi^M({\mathcal A}_{\lambda,\eta})*_{\bar s}{\mathrm R}\Psi^N({\mathcal I}_{w,\eta})\ident
{\mathrm R}\Psi^N({\mathcal I}_{w,\eta})*_{\bar s}{\mathrm R}\Psi^M({\mathcal A}_{\lambda,\eta}).$$
First, let us prove that the nearby cycles functor commutes with the convolution product.
We have the isomorphisms $$\displaylines{
{\mathrm R}\Psi^M({\mathcal A}_{\lambda,\eta})*_{\bar s}{\mathrm R}\Psi^N({\mathcal I}_{w,\eta})\ident
{\mathrm R}\Psi^P({\mathcal A}_{\lambda,\eta}*_\eta{\mathcal I}_{w,\eta})\cr
{\mathrm R}\Psi^N({\mathcal I}_{w,\eta})*_{\bar s}{\mathrm R}\Psi^M({\mathcal A}_{\lambda,\eta})\ident
{\mathrm R}\Psi^P({\mathcal I}_{w,\eta}*_\eta{\mathcal A}_{\lambda,\eta})
}$$
According to a theorem of Beilinson-Bernstein (see the theorem 4.7 in [@Illusie]) we have an isomorphism of perverse sheaves $${\mathrm R}\Psi^{M\times N}({\mathcal A}_{\lambda,\eta}\boxtimes_\eta{\mathcal I}_{w,\eta})
\ident{\mathrm R}\Psi^M({\mathcal A}_{\lambda,\eta})\boxtimes_{\bar s}{\mathrm R}\Psi^N({\mathcal I}_{w,\eta}).$$ This induces an isomorphism between the pull-backs $$p_1^*{\mathrm R}\Psi^{M\times N}({\mathcal A}_{\lambda,\eta}\boxtimes_\eta{\mathcal I}_{w,\eta})
\ident p_1^*({\mathrm R}\Psi^M({\mathcal A}_{\lambda,\eta})\boxtimes_{\bar s}
{\mathrm R}\Psi^N({\mathcal I}_{w,\eta}))$$ which are, up to the shift by the relative dimension of $p_1$, perverse as well. By definition, we have $$p_1^*({\mathrm R}\Psi^{M}({\mathcal A}_{\lambda,\eta})
\boxtimes_{\bar s}{\mathrm R}\Psi^N({\mathcal I}_{w,\eta}))
\ident p_2^*({\mathrm R}\Psi^{M}({\mathcal A}_{\lambda,\eta})
\,{\tilde\boxtimes}_{\bar s}\,{\mathrm R}\Psi^N({\mathcal I}_{w,\eta})).$$ As $p_1, p_2$ are smooth, $p_1^*$ and $p_2^*$ commute with nearby cycle, so applying ${\mathrm R}\Psi^{{\tilde M} \times {\tilde N}}$ to $$p^*_1({\mathcal A}_{\lambda,\eta} \boxtimes_{\eta} {\mathcal I}_{w,\eta}) \ident p^*_2({\mathcal A}_{\lambda,\eta} {\tilde \boxtimes}_{\eta} {\mathcal I}_{w,\eta})$$ gives an isomorphism $$p_1^*{\mathrm R}\Psi^{M\times N}({\mathcal A}_{\lambda,\eta}\boxtimes_\eta{\mathcal I}_{w,\eta})
\ident p_2^*{\mathrm R}\Psi^{M\,{\tilde\times}\,N}({\mathcal A}_{\lambda,\eta}\,
{\tilde\boxtimes}_\eta\,{\mathcal I}_{w,\eta}).$$ Since $p_2$ is smooth with connected geometric fibres, the uniqueness part of Theorem 4.2.5 of Beilinson-Bernstein-Deligne [@BBD] implies that we have an isomorphism $${\mathrm R}\Psi^{M\,{\tilde\times}\,N}({\mathcal A}_{\lambda,\eta}\,{\tilde\boxtimes}_\eta
\,{\mathcal I}_{w,\eta})\ident {\mathrm R}\Psi^M({\mathcal A}_{\lambda,\eta})\,
{\tilde\boxtimes}_{\bar s}\,{\mathrm R}\Psi^N({\mathcal I}_{w,\eta}).$$
By applying now the functor ${\mathrm R} m_{*}$, we have an isomorphism $${\mathrm R} m_{*}{\mathrm R}\Psi^{M\,{\tilde\times}\,N}({\mathcal A}_{\lambda,\eta}\,
{\tilde\boxtimes}_\eta\,{\mathcal I}_{w,\eta})
\ident{\mathrm R}\Psi^M({\mathcal A}_{\lambda,\eta})*_{\bar s}{\mathrm R}\Psi^N({\mathcal I}_{w,\eta}).$$ Since the functor ${\mathrm R}\Psi$ commutes with the direct image of a proper morphism, we have $${\mathrm R}\Psi^P({\mathcal A}_{\lambda,\eta}*_\eta{\mathcal I}_{w,\eta})\ident
{\mathrm R} m_{*}{\mathrm R}\Psi^{M\,{\tilde\times}\,N}({\mathcal A}_{\lambda,\eta}\,
{\tilde\boxtimes}_\eta\,{\mathcal I}_{w,\eta}).$$ By composing the above isomorphisms, we get $${\mathrm R}\Psi^M({\mathcal A}_{\lambda,\eta})*_{\bar s} {\mathrm R}\Psi^N({\mathcal I}_{w,\eta})\ident
{\mathrm R}\Psi^P({\mathcal A}_{\lambda,\eta}*_\eta{\mathcal I}_{w,\eta}).$$ By the same argument, we prove $${\mathrm R}\Psi^N({\mathcal I}_{w,\eta})*_{\bar s}{\mathrm R}\Psi^M({\mathcal A}_{\lambda,\eta})\ident
{\mathrm R}\Psi^P({\mathcal I}_{w,\eta}*_\eta{\mathcal A}_{\lambda,\eta}).$$ This finishes the proof of the lemma. $\square$
Now it clearly suffices to prove $${\mathcal A}_{\lambda,\eta}*_\eta{\mathcal I}_{w,\eta}\ident
{\mathcal I}_{w,\eta}*_\eta{\mathcal A}_{\lambda,\eta}$$ which is an easy consequence of the following lemma.
1. Over the generic point $\eta$, we have two commutative triangles $$\diagramme{
& M_{\eta}\,{\tilde\times}\,N_{\eta} & \cr
\hfill{}^{i}\!\swarrow & &\searrow\!^{m}\hfill \cr
M_{\eta}\,{\times}\,N_{\eta} &
{\smash{\mathop{\hbox to 15mm{\rightarrowfill}}
\limits^{\scriptstyle j}}}
&\,\ P_{\eta} \cr
\hfill_{i'}\!\nwarrow & &\nearrow\!_{ m'}\hfill \cr
& N_{\eta}\,{\tilde\times}\,M_{\eta} & \cr
}$$ where all arrows are isomorphisms.
2. Moreover, we have the following isomorphisms $$\displaylines{
i^*({\mathcal A}_{\lambda,\eta}\boxtimes{\mathcal I}_{w,\eta})\ident
{\mathcal A}_{\lambda,\eta}\,{\tilde\boxtimes}\,{\mathcal I}_{w,\eta}\cr
i'{}^*({\mathcal A}_{\lambda,\eta}\boxtimes {\mathcal I}_{w,\eta}) \ident
{\mathcal I}_{w,\eta}\,{\tilde \boxtimes}\,{\mathcal A}_{\lambda,\eta}\cr}$$.
Proof of Lemma 23
-----------------
Let us prove the above lemma in the linear case.
Over the generic point $\eta$, the image of $\varpi$ in $F$ is invertible, so the ideals $(t)$ and $(t+\varpi)$ of $F[t]$ are coprime; by the Chinese remainder theorem we therefore have the canonical decomposition of $${\bar{\mathcal V}}_{F}=t^{n_-}(t+\varpi)^{n'_--1}F[t]^d
/t^{n_+}(t+\varpi)^{n'_+}F[t]^d$$ into the direct sum ${\bar{\mathcal V}}_{F}={\bar{\mathcal V}}^{\,(t)}_{F}
\oplus {\bar{\mathcal V}}^{\,(t+\varpi)}_{F}$ where $$\displaylines{
{\bar {\mathcal V}}^{\,(t)}_{F}=t^{n_-}F[t]^d/t^{n_+}F[t]^d\cr
{\bar {\mathcal V}}^{\,(t+\varpi)}_{F}=
(t+\varpi)^{n'_- -1}F[t]^d/(t+\varpi)^{n'_+}F[t]^d.}$$ With respect to this decomposition, all the terms of the filtration $${\bar {\mathcal V}}_0\subset{\bar {\mathcal V}}_1\subset\cdots\subset
{\bar{\mathcal V}}_{d-1}$$ decompose as ${\bar{\mathcal V}}_i=
{\bar{\mathcal V}}^{\,(t)}_i\oplus{\bar{\mathcal V}}^{\,(t+\varpi)}_i$ for all $i=0,\ldots,d-1$. Here, we have $${\bar{\mathcal V}}^{\,(t)}_0=\cdots={\bar{\mathcal V}}^{\,(t)}_{d-1}
=F[t]^d/t^{n_+}F[t]^d.$$ Let $R$ be an $F$-algebra and let $({\mathcal L}_\bullet,{\mathcal L}'_\bullet)$ be an element of $(M\,{\tilde\times}\,N)(R)$. These chains of $R[t]$-modules satisfy $$\displaylines{
t^{n_+}{\mathcal V}_{i,R}\subset{\mathcal L}_i\subset t^{n_-}{\mathcal V}_{i,R}\cr
(t+\varpi)^{n'_+}{\mathcal L}_{i}\subset{\mathcal L}'_i\subset (t+\varpi)^{n'_-}{\mathcal L}_{i}}$$ As usual, let ${\bar{\mathcal L}}_i,{\bar{\mathcal L}}'_i$ denote the images of ${\mathcal L}_i,{\mathcal L}'_i$ in ${\bar {\mathcal V}}_{R}$. As $R[t]$-modules, they decompose as ${\bar{\mathcal L}}_i={\bar{\mathcal L}}^{\,(t)}_i\oplus{\bar{\mathcal L}}^{\,(t+\varpi)}_i$ and ${\bar{\mathcal L}}'_i={\bar{\mathcal L}}'{}^{\,(t)}_i\oplus{\bar{\mathcal L}}'{}^{\,(t+\varpi)}_i.$ The above inclusion conditions indeed imply $${\bar{\mathcal L}}^{\,(t)}_i={\bar{\mathcal L}}'{}^{\,(t)}_i\ ;\
{\bar{\mathcal L}}^{\,(t+\varpi)}_i={\bar{\mathcal V}}^{\,(t+\varpi)}_{i,R}.$$
Consequently, ${\mathcal L}_\bullet$ is completely determined by ${\mathcal L}'_\bullet$. In other words, the map $m({\bar{\mathcal L}}_\bullet,{\bar{\mathcal L}}'_\bullet)={\bar{\mathcal L}}'_\bullet$ is an isomorphism of functors over $\eta$. In the same way, the map $$i({\bar{\mathcal L}}_\bullet^{\,(t)}\oplus{\bar{\mathcal V}}^{\,(t+\varpi)}_{\bullet,R},
{\bar{\mathcal L}}_\bullet^{\,(t)}\oplus{\bar{\mathcal L}}'{}^{\,(t+\varpi)}_\bullet)=
({\bar{\mathcal L}}_\bullet^{\,(t)}\oplus{\bar{\mathcal V}}^{\,(t+\varpi)}_{\bullet,R},
{\bar{\mathcal V}}_{\bullet,R}^{\,(t)}\oplus{\bar{\mathcal L}}'{}^{\,(t+\varpi)}_\bullet)$$ yields an isomorphism $i:M_{\eta}\,{\tilde\times}\, N_{\eta}
\ident M_{\eta}{\times} N_{\eta}$. The composed isomorphism $j=m\circ i^{-1}$ is given by $$j({\bar{\mathcal L}}_\bullet^{\,(t)}\oplus{\bar{\mathcal V}}^{\,(t+\varpi)}_{\bullet,R},
{\bar{\mathcal V}}_{\bullet,R}^{\,(t)}\oplus{\bar{\mathcal L}}'{}^{\,(t+\varpi)}_\bullet)
={\bar{\mathcal L}}{}^{\,(t)}_\bullet\oplus
{\bar{\mathcal L}}'{}^{\,(t+\varpi)}_\bullet.$$ The analogous statement for the lower triangle in the diagram can be proved in the same way and the first part of the lemma is proved.
By the very definition of ${\mathcal A}_{\lambda,\eta}\,{\tilde\boxtimes}\,{\mathcal I}_{w,\eta}$, in order to prove the second part of the lemma, it suffices to construct an isomorphism $$p_1^*({\mathcal A}_{\lambda,\eta}\boxtimes{\mathcal I}_{w,\eta})\ident
p_2^*\,i^*({\mathcal A}_{\lambda,\eta}\boxtimes{\mathcal I}_{w,\eta}).$$ In fact, the triangle $$\diagramme{
& {\tilde M}_{\eta}\times {\tilde N}_{\eta} & \cr
\hfill{}^{p_1}\!\swarrow & &\searrow\!^{p_2}\hfill \cr
M_{\eta}\,{\times}\,N_{\eta} &
{\smash{\mathop{\hbox to 15mm{\leftarrowfill}}
\limits^{\scriptstyle i}}} &
M_{\eta}\,{\tilde\times}\,N_{\eta}}$$ does not commute. Nevertheless, this lack of commutativity can be corrected using equivariance properties. We consider the diagram $$\diagramme{
&{\tilde M}_{\eta}\times{\tilde N}_{\eta}&&\cr
\hfill{}^{q_1}\!\swarrow && \searrow\!^{q_2}\hfill &\cr
\hfill {\tilde J}_{\eta}\times M_{\eta}\,{\times}\,N_{\eta}\!\!\!\!\!\! &
{\smash{\mathop{\hbox to 10mm{\leftarrowfill}}
\limits^{\scriptstyle{\mathrm Id}\times i}}} &
\!\!\!\!\!\!{\tilde J}_{\eta}\times M_{\eta}\,{\tilde\times}\,N_{\eta}&\cr
{}^{{\mathrm pr}_1}\!\swarrow & &\!\!\!\!\!\!\swarrow\!{}_{{\mathrm pr}_2}\hfill
\searrow \!{}^{\alpha}&\cr
M_{\eta}\,{\times}\,N_{\eta}\ \ \
{\smash{\mathop{\hbox to 15mm{\leftarrowfill}}
\limits^{\scriptstyle i}}}\!\!\!\! &
\ M_{\eta}\,{\tilde\times}\,N_{\eta}& &
\!\!\!\!
\!\!\!\!\!\!M_{\eta}\,{\tilde\times}\,N_{\eta}\ \ \cr
}$$ defined as follows.
For any $F$-algebra $R$, an element $g\in {\tilde M}(R)$ is an $R[t]$-endomorphism of ${\bar {\mathcal V}}_{R}$ such that ${\bar{\mathcal L}}_\bullet=g(t^{n_-}{\bar{\mathcal V}}_{\bullet,R})\in M(R)$. As ${\bar {\mathcal V}}_{R}$ decomposes as ${\bar {\mathcal V}}_{R}={\bar {\mathcal V}}_{R}^{\,(t)}\oplus
{\bar {\mathcal V}}_{R}^{\,(t+\varpi)}$, its $R[t]$-endomorphism $g$ can be identified with a pair $g=(g^{\,(t)},g^{\,(t+\varpi)})$ where $g^{\,(t)}$, respectively $g^{\,(t+\varpi)}$, is an endomorphism of ${\bar {\mathcal V}}_{R}^{\,(t)}$, respectively of ${\bar {\mathcal V}}_{R}^{\,(t+\varpi)}$.
As we have seen above, for $\bar{\mathcal L}_\bullet\in M(R)$, we have ${\bar{\mathcal L}}_i= {\bar{\mathcal L}}_i^{\,(t)}\oplus {\bar{\mathcal L}}_i^{\,(t+\varpi)}$ with ${\bar{\mathcal L}}_i^{\,(t+\varpi)}={\bar {\mathcal V}}_{i,R}^{\,(t+\varpi)}$. Consequently, $g^{\,(t+\varpi)}$ is an automorphism of ${\bar {\mathcal V}}_{R}^{\,(t+\varpi)}$ fixing the filtration ${\bar{\mathcal V}}_{\bullet,R}^{\,(t+\varpi)}$. In a similar way, an element $g'\in {\tilde N}(R)$ can be identified with a pair $(g'{}^{\,(t)},g'{}^{\,(t+\varpi)})$ where $g'{}^{\,(t)}$ is an automorphism of ${\bar {\mathcal V}}_{R}^{\,(t)}$ fixing the filtration ${\bar {\mathcal V}}^{(t)}_{\bullet,R}$.
- The morphism $q_1$ is defined by $$q_1(g,g')=((g'{}^{\,(t)},g^{\,(t+\varpi)}),
g^{\,(t)}t^{n_-}{\bar{\mathcal V}}_{\bullet,R}^{\,(t)}\oplus{\bar{\mathcal V}}_{\bullet,R}^{\,(t+\varpi)},
{\bar{\mathcal V}}_{\bullet,R}^{\,(t)}\oplus
g'{}^{\,(t+\varpi)}(t+\varpi)^{n'_-}{\bar{\mathcal V}}_{\bullet,R}^{\,(t+\varpi)}).$$
- The morphism $q_2$ is defined by $$q_2(g,g')=((g'{}^{\,(t)},g^{\,(t+\varpi)}),
g^{\,(t)}t^{n_-}{\bar{\mathcal V}}_{\bullet,R}^{\,(t)}\oplus{\bar{\mathcal V}}_{\bullet,R}^{\,(t+\varpi)},
g^{\,(t)}t^{n_-}{\bar{\mathcal V}}_{\bullet,R}^{\,(t)}\oplus
g'{}^{\,(t+\varpi)}(t+\varpi)^{n'_-}{\bar{\mathcal V}}_{\bullet,R}^{\,(t+\varpi)}).$$
- The morphism $\alpha$ is defined by $$\displaylines{
\alpha((g'{}^{\,(t)},g^{\,(t+\varpi)}),
{\bar{\mathcal L}}_\bullet^{\,(t)}\oplus{\bar{\mathcal V}}_{\bullet,R}^{\,(t+\varpi)},
{\bar{\mathcal L}}_\bullet^{\,(t)}\oplus
{\bar{\mathcal L}'}_\bullet{}^{\,(t+\varpi)})\cr
=({\bar{\mathcal L}}_\bullet^{\,(t)}\oplus{\bar{\mathcal V}}_{\bullet,R}^{\,(t+\varpi)},
{\bar{\mathcal L}}_\bullet^{\,(t)}\oplus
g^{\,(t+\varpi)}{\bar{\mathcal L}'}_\bullet{}^{\,(t+\varpi)}).}$$
- ${\mathrm pr}_1$ and ${\mathrm pr}_2$ are the obvious projections.
We can easily check that this diagram commutes and that $${\mathrm pr}_1\circ q_1= p_1\ ;\ \alpha\circ q_2=p_2.$$ Now it is clear that $$p_1^*({\mathcal A}_{\lambda,\eta}\boxtimes{\mathcal I}_{w,\eta})
\ident q_2^*\,{\mathrm pr}_2^*\,i^*
({\mathcal A}_{\lambda,\eta}\boxtimes{\mathcal I}_{w,\eta}).$$ Moreover, by the equivariance properties of ${\mathcal A}_\lambda$ and ${\mathcal I}_w$, we have $${\mathrm pr}_2^*\,i^*
({\mathcal A}_{\lambda,\eta}\boxtimes{\mathcal I}_{w,\eta})
\ident \alpha^*i^*
({\mathcal A}_{\lambda,\eta}\boxtimes{\mathcal I}_{w,\eta}).$$ (Note that the group $I_\eta$ acts on $M_\eta {\tilde \times} N_\eta$ by acting on the second factor of $M_\eta \times N_\eta \cong M_\eta {\tilde \times} N_\eta$ and $\alpha$ gives the corresponding action of ${\tilde J}_\eta$ via the projection ${\tilde J}_\eta \rightarrow I_\eta$.) Putting these things together, we get the required isomorphism $$p_1^*({\mathcal A}_{\lambda,\eta}\boxtimes{\mathcal I}_{w,\eta})
\ident p_2^*\,i^*
({\mathcal A}_{\lambda,\eta}\boxtimes{\mathcal I}_{w,\eta}).$$ This finishes the proof of the lemma in the linear case.
In the symplectic case, let us mention that the $F$-vector space $$t^{n_-}(t+\varpi)^{n'_- -1}F[t]^{2d}/t^{n_+}(t+\varpi)^{n'_+}F[t]^{2d}$$ equipped with the symplectic form $t^{-n_- -n_+}(t+\varpi)^{-n'_- -n'_+ +1}\langle\,,\,\rangle $ splits into the direct sum of two vector spaces $$t^{n_-}F[t]^{2d}/t^{n_+}F[t]^{2d}\oplus
(t+\varpi)^{n'_- -1}F[t]^{2d}/(t+\varpi)^{n'_+}F[t]^{2d}$$ equipped with symplectic forms $t^{-n_--n_+}\langle\,,\,\rangle $ and $(t+\varpi)^{-n'_- -n'_+ +1}\langle\,,\,\rangle $ respectively. Further, note that $g \in {\tilde M}(R)$ decomposes as $g = (g^{(t)},g^{(t+\varpi)})$ where $g^{(t)} \in {\mathrm Aut}_{R[t]}(t^{n_-}R[t]^{2d} / t^{n_+}R[t]^{2d})$ is such that $\langle g^{(t)}x,g^{(t)}y \rangle = c_{g^{(t)}}t^{-n_- + n_+}\langle x,y \rangle$ (for some $c_{g^{(t)}} \in R^{\times}$), and $g^{(t+\varpi)} \in {\mathrm Aut}_{R[t]}((t+\varpi)^{n'_- -1}R[t]^{2d} / (t+\varpi)^{n'_+}R[t]^{2d})$ is such that $\langle g^{(t+\varpi)}x, g^{(t+\varpi)}y \rangle = c_{g^{(t+\varpi)}}\langle x,y \rangle$ (for some $c_{g^{(t+\varpi)}} \in R^{\times}$). A similar decomposition $g' = (g'{}^{\,(t)},g'{}^{\,(t+\varpi)})$ holds, and thus one sees $(g'{}^{\,(t)},g^{(t+\varpi)}) \in {\tilde J}(R)$. Thus the maps $q_1$ and $q_2$ as defined above make sense in the symplectic case as well. The rest of the argument goes through without change as in the linear case.
This finishes the proof of Lemma 23. We have therefore finished the proof of Proposition 21, and thus Proposition 13 and Theorem 11 as well. $\square$
[1]{}
A.A. Beilinson; J. Bernstein; P. Deligne. Faisceaux pervers. Analyse et topologie sur les espaces singuliers, I, Astérisque 100 (1982).
J.-F. Dat. 508 (1999), 61–83.
P. Deligne. SGA 7 II, LNM 340, Springer 1973.
D. Gaitsgory. [*Preprint*]{}, math.AG/9912074, 9 Dec 1999.
V. Ginzburg. (1996).
A. Grothendieck. [Formule de Lefschetz et rationalité des fonctions $L$]{}. Sém. Bourbaki no. 279.
T. Haines. (1998).
T. Haines. (1998).
L. Illusie. No. 223 (1994), 9–57.
S.-I. Kato. 66 (1982), no. 3, 461–468.
R. Kottwitz; M. Rapoport. (1998).
G. Lusztig. 101-102 (1983), 200-229.
G. Lusztig. 129 (1997), 85–98.
O. Mathieu. 159-160 (1988), 267 pp.
I. Mirkovic, K. Vilonen. (1997).
M. Rapoport. edited by L. Clozel and J. Milne. 11, p. 253-321, Acad. Press 1990.
M. Rapoport, Th. Zink. 144, Princeton Univ. Press 1996.
I. Satake. Theory of spherical functions on reductive algebraic groups over $p$-adic fields. 18: 1–69, 1963.
---
abstract: 'We give examples of open manifolds that carry infinitely many complete metrics of nonnegative sectional curvature such that they all have the same soul, and their isometry classes lie in different connected components of the moduli space. All previously known examples of this kind have souls of codimension one. In our examples the souls have codimensions three and two.'
address:
- |
Igor Belegradek\
School of Mathematics\
Georgia Institute of Technology\
Atlanta, GA, USA 30332
- |
David González-Álvaro\
ETSI de Caminos, Canales y Puertos\
Universidad Politécnica de Madrid\
28040 Spain
author:
- Igor Belegradek
- 'David González-Álvaro'
title: Diffeomorphic souls and disconnected moduli spaces of nonnegatively curved metrics
---
[^1] [^2]
Motivation and results
======================
There has been considerable recent interest in studying spaces of metrics with various curvature restrictions, such as nonnegative sectional curvature, to be denoted $K\ge 0$, see [@TW15] and references therein. For a manifold $V$ let $\mathcal{R}_{K\ge 0}(V)$ denote the space of complete Riemannian metrics on $V$ of $K\ge 0$ with the topology of smooth ($=C^\infty$) uniform convergence on compact sets, and $\mathcal{M}_{K\ge 0}(V)$ be the corresponding moduli space, the quotient space of $\mathcal{R}_{K\ge 0}(V)$ by the ${\operatorname{Diff}}(V)$-action via pullback.
The soul construction [@CG72] takes as input a complete metric of $K\ge 0$ on an open connected manifold $V$, and a basepoint of $V$, and produces a totally convex compact boundaryless submanifold $S$ of $V$, called the [*soul*]{}, such that $V$ is diffeomorphic to a tubular neighborhood of $S$. If we fix a metric and vary the basepoint, the resulting souls are ambiently isotopic [@Yi90] and isometric [@Sh79]. Consider the map [**soul**]{} that sends an isometry class of a complete metric of $K\ge 0$ on $V$ to the isometry class of its soul: $$\text{\bf soul}\co\mathcal{M}_{K\ge 0}(V)\rightarrow \coprod_{S\in \mathcal V}\mathcal{M}_{K\ge 0}(S)$$ where the co-domain is given the topology of disjoint union, and $\mathcal V$ is a set of pairwise non-diffeomorphic manifolds such that $S\in \mathcal V$ if and only if $S$ is diffeomorphic to a soul of a complete metric of $K\ge 0$ on $V$.
A tantalizing open problem is to decide if the map [**soul**]{} is continuous; the difficulty is that the soul is constructed via asymptotic geometry which is not captured by the compact-open topology on the space of metrics. The following is immediate from [@BFK17 Theorem 2.1].
\[thm: BFK cont\] If $V$ is indecomposable, then the map [**soul**]{} is continuous.
An open manifold is called [*indecomposable*]{} if it admits a complete metric of $K\ge 0$ such that the normal sphere bundle to a soul has no section. It follows from [@Yi90] that for indecomposable $V$ the soul is uniquely determined by the metric (and does not depend on the basepoint). Moreover, [@BFK17] implies that the souls of nearby metrics are ambiently isotopic by a small compactly supported isotopy. In particular, metrics with non-diffeomorphic souls in an indecomposable manifold lie in different connected components of $\mathcal{M}_{K\ge 0}(V)$.
There are many examples where the diffeomorphism (or even homeomorphism) type of the soul depends on the metric, see [@Be03; @KPT05; @BKS11; @Ot11; @BKS15; @GZ18], and if the ambient open manifold $V$ is indecomposable, this gives examples where $\mathcal{M}_{K\ge 0}(V)$ is not connected, or even has infinitely many connected components.
If $V$ has a complete metric with $K\ge 0$ with soul of codimension one, then [**soul**]{} is a homeomorphism, see [@BKS11]. Thus if for some soul $S$ the space $\mathcal{M}_{K\ge 0}(S)$ has infinitely many connected components, then so does $\mathcal{M}_{K\ge 0}(V)$; for example, this applies to $V=S\times{\mathbb{R}}$.
Examples of closed manifolds $S$ for which $\mathcal{M}_{K\ge 0}(S)$ has infinitely many connected components can be found in [@KPT05; @DKT18; @De17; @Go20a; @DG19; @De20]. These metrics on $S$ have $K\ge 0$ and $\mathrm{scal}>0$, and the connected components are distinguished by index-theoretic invariants that are constant along paths of $\mathrm{scal}>0$.
The papers mentioned in the previous paragraph only prove existence of infinitely many path-components. We take this opportunity to note that they actually get infinitely many connected components.
\[thm: path scal>0\] Let $M$ be a closed manifold. If two points in the same connected component of $\mathcal M_{K\ge 0}(M)$ have $\mathrm{scal}>0$, then they can be joined by a path of isometry classes of $\mathrm{scal}>0$.
In this paper we show that some of these $S$ as above can be realized as souls of codimensions $2$ or $3$ in indecomposable manifolds. The codimension $2$ case is a fairly straightforward consequence of results in [@WZ90; @KS93; @DKT18].
\[thm: witten\] For every positive integer $n$ there are infinitely many homotopy types that contain a manifold $M$ such that$\textup{(a)}$ $M$ is a simply-connected manifold that is the total space of a principal circle bundle over $\mathbb{S}^2\times \mathbb{C}P^{2n}$, and$\textup{(b)}$ if $V$ is the total space of a non-trivial complex line bundle over $M$, then $V$ has infinitely many complete metrics of $K\ge 0$ whose souls equal the zero section, and whose isometry classes lie in different connected components of $\mathcal{M}_{K\ge 0}(V)$.
The codimension $3$ case requires a bit more work. Recall that if $M$ is the total space of a linear ${\mathbb{S}}^3$-bundle over ${\mathbb{S}}^4$, then $M$ admits a metric of $K\ge 0$ [@GZ00], and moreover, if the bundle has nonzero Euler number, then $\mathcal{M}_{K\ge 0}(M)$ has infinitely many connected components [@De17; @Go20a]. We prove:
\[thm: cd 3\] Let $M$ be the total space of a linear ${\mathbb{S}}^3$-bundle $\xi$ over ${\mathbb{S}}^4$ with Pontryagin number $p_1(\xi)$ and nonzero Euler number $e(\xi)$. If $\frac{p_1(\xi)}{2e(\xi)}$ is not an odd integer, then $M$ is diffeomorphic to a codimension three submanifold $S$ of an indecomposable manifold $V$ that admits infinitely many complete metrics of $K\ge 0$ with soul $S$ whose isometry classes lie in different connected components of $\mathcal{M}_{K\ge 0}(V)$.
Milnor famously showed that some ${\mathbb{S}}^3$-bundles over ${\mathbb{S}}^4$ are exotic spheres [@Mi56]. In fact, $M$ is a homotopy sphere if and only if $e(\xi)=\pm 1$. Unfortunately, if $e(\xi)=\pm 1$, then $\frac{p_1(\xi)}{2}$ is an odd integer, so no $M$ in Theorem \[thm: cd 3\] is a homotopy sphere. On the other hand, for every integer $n$ with $n\ge 2$ there is $M$ as in the conclusion of Theorem \[thm: cd 3\] with $H^4(M)\cong{\mathbb{Z}}_n$, see Section \[sec: codim 3\]. To prove Theorem \[thm: cd 3\] we use results of Grove-Ziller [@GZ00] and some topological trickery to find an indecomposable $V$ with a codimension three soul, and then we observe that the metric on the soul can be moved by Cheeger deformation to metrics in [@De17; @Go20a] that represent infinitely many connected components.
Let us conclude by mentioning that other results on connected components of moduli spaces corresponding to various nonnegative or positive curvature conditions can be found in [@KS93; @BG95; @Wr11; @TW17; @Go20b].
Structure of the paper {#structure-of-the-paper .unnumbered}
----------------------
Theorems \[thm: BFK cont\] and \[thm: path scal>0\] are proved in Section \[sec: connect comp\]. Theorem \[thm: witten\] is established in Section \[sec: codim2\]. Theorem \[thm: cd 3\] is proved in Section \[sec: codim 3\], and the needed background is reviewed in Sections \[sec: cheeger\], \[sec: 3-sphere\], \[sec: bundles\].
Acknowledgements {#acknowledgements .unnumbered}
----------------
The authors are grateful to Luis Guijarro for hospitality during Belegradek’s visit to Madrid where this project was started.
Continuity of souls, connectedness and path-connectedness {#sec: connect comp}
=========================================================
Theorem 2.1 in [@BFK17] says that the map that sends a complete metric of $K\ge 0$ on $V$ to its soul, considered as a point in the space of smooth compact submanifolds of $V$ with smooth topology, is continuous. Two nearby submanifolds are ambiently isotopic by a small isotopy with compact support. Hence, the isometry classes of the induced metrics on these submanifolds are close in the moduli space. Thus we get a continuous map $$\mathcal{R}_{K\ge 0}(V)\rightarrow \coprod_{S\in \mathcal V}\mathcal{M}_{K\ge 0}(S)$$ that takes a metric to the isometry class of its soul, where the co-domain is given the disjoint union topology, i.e., a set in the co-domain is open if and only if its intersection with each $\mathcal{M}_{K\ge 0}(S)$ is open. Finally, by the definition of quotient topology the above continuous map descends to a continuous map defined on $\mathcal{M}_{K\ge 0}(V)$.
Let $X$ denote the space of isometry classes of Riemannian metrics on a closed manifold $M$ with smooth ($=C^\infty$) topology, and let $X_{\mathrm{scal}\ge 0}$, $X_{\mathrm{scal}>0}$ be the subspaces of $X$ of isometry classes of metrics of nonnegative and positive scalar curvature, respectively.
$X$ is metrizable.
This is well-known, but we cannot find a proof in the literature, and hence present it here for completeness. The smooth topology on the space of all Riemannian metrics on $M$ is induced by a metric whose isometry group contains ${\operatorname{Diff}}(M)$ [@Ebin-thesis Proposition 148], and every ${\operatorname{Diff}}(M)$-orbit is closed [@Ebin-thesis Proposition 142]. The corresponding pseudometric on the set of orbits induces the quotient topology, and the pseudodistance is simply the infimum of distances between the orbits [@Hi68 Theorem 4]. Since the orbits are closed, the quotient space is $T_1$, so that the pseudometric is actually a metric.
Also $X$ is locally path-connected (because this property is inherited by quotients, and $X$ is the quotient of the space of metrics, which is an open subset in the Fréchet space of $2$-tensors on $M$). In fact, every point of $X$ has a contractible neighborhood (as follows from the smooth version of Corollary 7.3 in [@Ebin-symp] which can be deduced from the discussion after the corollary) but we do not need it here.
\[thm: path scal >=0\] If $C$ is a connected subset of $X_{\mathrm{scal}\ge 0}$ that contains no Ricci-flat metrics, then any two points $y, z\in C$ can be joined by a path in $\{y, z\}\cup X_{\mathrm{scal}>0}$.
By continuous dependence of Ricci flow on initial metric, see e.g. [@BGI Theorem A], for every point $x\in X$ there is a neighborhood $U_x$ and a positive constant $\tau_x$ such that the Ricci flow that starts at any point of $U_x$ exists in $[0,\tau_x]$.
Being a metrizable space, $X$ is paracompact and Hausdorff, and hence has a locally finite open cover $\{R_{x_i}\}_{i\in I}$ such that $R_{x_i}\subset U_{x_i}$ for all $i$, and there is a continuous function $\tau\co X\to (0,\infty)$ with $\tau(x)\le \tau_{x_i}$ for all $x\in R_{x_i}$, see [@Mun-book Theorem 41.8]. Since $C$ contains no Ricci-flat metrics, for every $x\in C$ the Ricci flow of $x$ has $\mathrm{scal}>0$ for all times in $(0,\tau(x)]$, see [@Br10 Proposition 2.18]. By continuous dependence of the Ricci flow on the initial metric, the map $T\co X\to X$ that sends $x$ to the Ricci flow of $x$ at time $\tau (x)$ is continuous. Hence, if $C$ is a connected subset of $X_{\mathrm{scal}\ge 0}$ that contains $y, z$, then $T(C)$ is a connected subset of $X_{\mathrm{scal}>0}$.
Since $X_{\mathrm{scal}>0}$ is an open subset in the locally path-connected space $X$, every connected component of $X_{\mathrm{scal}>0}$ is path-connected. Hence the connected component of $X_{\mathrm{scal}>0}$ that contains $T(C)$ also contains a path from $T(y)$ to $T(z)$. Concatenating the path with Ricci flows from $y, z$ to $T(y), T(z)$, respectively, we get a path from $y$ to $z$ with desired properties.
No flat manifold admits a metric of $\mathrm{scal}>0$ [@GL83 Corollary A]. Hence $M$ admits no flat metric. Since Ricci-flat metrics of $K\ge 0$ are flat, $\mathcal M_{K\ge 0}(M)$ contains no Ricci-flat metrics. Applying Theorem \[thm: path scal >=0\] to the connected component of $\mathcal M_{K\ge 0}(M)$ that contains $y, z$ finishes the proof.
Codimension two {#sec: codim2}
===============
If $\mathbb{S}^{2t+1}\to \mathbb{C}P^t$ is the circle bundle obtained by restricting the diagonal circle action on ${\mathbb{C}}^{t+1}$, where $t$ is a positive integer, then its Euler class generates $H^2(\mathbb{C}P^t)$ as follows from the Gysin sequence and $2$-connectedness of $\mathbb{S}^{2t+1}$. Consider the product of two such circle bundles with $t=1$ and $t=2n$. Then the argument of [@WZ90 p.227] implies that any $M$ as in (a) is the quotient of the Riemannian product of two unit spheres ${\mathbb{S}}^3\times {\mathbb{S}}^{4n+1}$ by the free isometric circle action $e^{i\phi}(x,y)=(e^{il\phi} x,e^{-ik\phi}y)$ for some coprime integers $k$, $l$. This gives a Riemannian submersion metric on $M$ with $K\ge 0$ and $\mathrm{Ric}>0$.
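For completeness, here is why coprimality makes this action free (a standard check we add here; it is not spelled out in the original): if $e^{i\phi}$ fixes $(x,y)$ then $e^{il\phi}x=x$ and $e^{-ik\phi}y=y$, and since $x\neq 0\neq y$ this forces $e^{il\phi}=e^{-ik\phi}=1$; choosing $a,b\in{\mathbb{Z}}$ with $ak+bl=1$ we get $$e^{i\phi}=e^{i(ak+bl)\phi}=\bigl(e^{-ik\phi}\bigr)^{-a}\bigl(e^{il\phi}\bigr)^{b}=1.$$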
Sometimes it happens that the quotients corresponding to different pairs $(k, l)$ are diffeomorphic. In fact, $H^4(M)$ is a cyclic group of order $l^2$, so up to sign $l$ is determined by the homotopy type of $M$, but for a given $l$ the quotients fall into finitely many diffeomorphism types [@DKT18 Proposition 2.2]. Their diffeomorphism classification was studied in [@WZ90; @KS93] and finally in [@DKT18] where it was shown that for each $n$ there are infinitely many homotopy types that contain $M$ as in (a) and such that the Riemannian submersion metrics as above represent infinitely many connected components of $\mathcal{M}_{K\ge 0}(M)$.
Similarly, since ${\mathbb{S}}^3\times {\mathbb{S}}^{4n+1}$ is $2$-connected, any complex line bundle over $M$ is the quotient of ${\mathbb{S}}^3\times {\mathbb{S}}^{4n+1}\times{\mathbb{C}}$ by the circle action $e^{i\phi}(x,y, z)=(e^{il\phi} x,e^{-ik\phi}y, e^{im\phi}z)$, cf. [@BKS15 Lemma 12.3]. In particular, $V$ carries a complete Riemannian submersion metric of $K\ge 0$ with soul equal to the zero section, which is the quotient of ${\mathbb{S}}^3\times {\mathbb{S}}^{4n+1}\times\{0\}$ by the above circle action, and hence is diffeomorphic to $M$.
If we fix $l$ and the Euler class of the line bundle in $H^2(M)\cong{\mathbb{Z}}$, there are only finitely many possibilities for the diffeomorphism type of the pair $(V, \text{\,soul})$ for the above metrics. By varying $k$ appropriately, we then get a sequence of complete metrics of $K\ge 0$ on each $V$ as above such that the metrics on the soul represent infinitely many connected components of $\mathcal{M}_{K\ge 0}(M)$.
If the line bundle is non-trivial, then $V$ is indecomposable, and the map [**soul**]{} is continuous by Theorem \[thm: BFK cont\]. Thus $\mathcal{M}_{K\ge 0}(V)$ has infinitely many connected components.
Equivariant Cheeger deformation {#sec: cheeger}
===============================
The purpose of this section is to review Cheeger deformation, and note that it passes to quotients by free isometric actions.
Let $G$ be a compact Lie group with a bi-invariant metric $Q$ that acts isometrically on a Riemannian manifold $(M,q_0)$. Consider the diagonal $G$-action on $M\times G$ given by $a\cdot (p,g)=(ap, ag)$, $p\in M$, $a, g\in G$. Its orbit space is commonly denoted by $M\times_G G$. The map $\pi\co M\times G\to M$ given by $\pi(p,g)=g^{-1}p$ descends to a diffeomorphism $\phi\co M\times_G G\to M$.
For any positive scalar $t$ the $G$-action is isometric in the product metric $q_0+\frac{Q}{t}$, which induces a metric $q_t$ on $M$ that makes $\pi$ into a Riemannian submersion. Similarly, $\phi$ becomes an isometry between $q_t$, $t>0$, and the Riemannian submersion metric on $M\times_G G$ induced by $q_0+\frac{Q}{t}$.
The map $t\to q_t$ is continuous for $t\ge 0$; this is the [*Cheeger deformation*]{} of $q_0$, see e.g. [@AB15 p.140] or [@Zi09]. The key property is that if $q_0$ has $K\ge 0$, then so does $q_t$ for all $t$.
Fix a closed subgroup $H$ of $G$ such that the $H$-action on $M$ is free. For $t\ge 0$ let $\bm{q_t}$ be the metric on $M/H$ that makes the $H$-orbit map into a Riemannian submersion $\chi\co(M, q_t)\to (M/H, \bm{q_t})$. The map $t\to \bm{q_t}$ is continuous for $t\ge 0$.
The $H$-action on $M\times G$ given by $h\cdot (p,g)=(p, gh^{-1})$ commutes with the diagonal $G$-action, and hence descends to a free $H$-action on $M\times_G\hspace{1pt} G$. For this action the maps $\pi$ and $\phi$ are $H$-equivariant, and descend to a Riemannian submersion $M\times (H\backslash G)\to M/H$ and an isometry $M\times_G (H\backslash G)\to M/H$, respectively, where $t>0$ and $H\backslash G$ is given the Riemannian submersion metric induced by $\frac{Q}{t}$.
Thus in the following diagram all maps are Riemannian submersions for $t>0$ $$\xymatrix{
& M\times G\ar[dl]\ar[r]\ar[d]^\pi & M\times H\backslash G\ar[d]\ar[rd] &\\
M\times_G G\ar[r]^{\quad\phi} & M \ar[r]^{\chi} & M/H & M\times_G H\backslash G \ar[l]
}$$ and $\chi$ is also a Riemannian submersion for $t=0$. In this diagram $M$ and $M/H$ are the only spaces where the metric corresponding to $t=0$ is defined.
Some algebra and geometry of the $3$-sphere {#sec: 3-sphere}
===========================================
In this section we specialize the discussion of Section \[sec: cheeger\] to the case when $G={\mathbb{S}}^3\times{\mathbb{S}}^3$, where ${\mathbb{S}}^3$ is thought of as unit quaternions, and $H$ is the diagonal subgroup of $G$, i.e., $H=\{(g,g)\,:\, g\in {\mathbb{S}}^3\}$.
Consider the diffeomorphism $\psi\co{\mathbb{S}}^3\to H\backslash G$ given by $\psi(c)=(c,1)H$; thus $\psi^{-1}$ sends the coset $(a,b)H$ to $ab^{-1}$. With this identification the (left) $G$-action on $H\backslash G$ becomes $(a,b)\cdot c=acb^{-1}$, where $a, b, c\in{\mathbb{S}}^3$; indeed $$(a,b)(c,1)H=(ac,b)H=(acb^{-1},1)H.$$ Since $(-1,-1)$ acts trivially, the $G$-action on $H\backslash G$ descends to an $SO(4)$ action with isotropy subgroups isomorphic to $SO(3)$.
It follows that any $G$-invariant Riemannian metric on $H\backslash G$ is isometric to a round $3$-sphere (i.e., a metric sphere in ${\mathbb{R}}^4$). Indeed, $SO(3)$ acts transitively on every tangent $2$-sphere, so $G$ acts transitively on the unit tangent bundle, and hence the metric has constant Ricci curvature, which on the $3$-sphere makes the metric round.
The discussion in Section \[sec: cheeger\] immediately gives the following.
\[prop: cheeger\] Let $H$ be the diagonal subgroup of $G={\mathbb{S}}^3\times{\mathbb{S}}^3$. Given an isometric $G$-action on a Riemannian manifold $(M, q_0)$ of $K\ge 0$ that restricts to a free $H$-action, there is a path of Riemannian metrics $(M, q_t)$ of $K\ge 0$, defined for $t\ge 0$, such that $\bullet$ for every $t\ge 0$ the $G$-action is $q_t$-isometric, and the Riemannian submersion metric $(M/H, \bm{q_t})$ induced by $q_t$ has $K\ge 0$, and $t\to \bm{q_t}$ is a continuous path of metrics on $M/H$, $\bullet$ if $t>0$ and $H\backslash G$ is given the Riemannian submersion metric induced by a bi-invariant metric on $G$, then $H\backslash G$ is isometric to a round sphere, and $(M/H,\bm{q_t})$ is isometric to the Riemannian submersion metric on $(M, q_t)\times_G H\backslash G$.
Bundle theoretic facts {#sec: bundles}
======================
This section reviews several well-known bundle theoretic facts.
\[lem: class spaces\] Let $C\le G$ be an order two normal subgroup of a topological group $G$. If $P\to X$ is a non-trivial principal $G$-bundle over a finite cell complex with $H^1(X;{\mathbb{Z}}_2)=0$, then the associated principal $G/C$-bundle $P/C\to X$ is non-trivial.
The surjection $G\to G/C=H$ induces a fibration of classifying spaces $BC\to BG\to BH$ where $BC$ is a homotopy fiber of $BG\to BH$, see [@MO-fibration-classifying-spaces]. As explained in [@MT68 p.139], for any finite complex $X$ we get an exact sequence of pointed sets $$[X, BC]\to [X,BG]\to [X,BH]$$
with constant maps as basepoints. Since $C\cong{\mathbb{Z}}_2$, the space $BC$ is a $K({\mathbb{Z}}_2,1)$, so $[X, BC]=H^1(X;{\mathbb{Z}}_2)=0$ and the rightmost arrow is injective.
A [*$k$-plane bundle*]{} is a vector bundle with fiber ${\mathbb{R}}^k$.
\[lem: 3-plane bundles\] Let $X$ be a paracompact space with $H^1(X;{\mathbb{Z}}_2)=0=H^2(X)$. If a $3$-plane bundle over $X$ has a nowhere zero section, then it is trivial.
A nowhere zero section gives rise to a splitting of the bundle into the Whitney sum of a line subbundle and a $2$-plane subbundle, which are orientable since $H^1(X;{\mathbb{Z}}_2)=0$, and in fact trivial, because a line bundle is determined by its first Stiefel-Whitney class in $H^1(X;{\mathbb{Z}}_2)$, and an orientable $2$-plane bundle is determined by its Euler class in $H^2(X)$.
\[lem: euler pontr\] If $X$ is a finite cell complex with $H^1(X;{\mathbb{Z}}_2)=0=H^4(X;{\mathbb{Q}})$, then the number of isomorphism classes of $3$-plane bundles over $X$ is finite.
Since $H^1(X;{\mathbb{Z}}_2)=0$, any vector bundle over $X$ is orientable. There are only finitely many isomorphism classes of orientable $3$-plane bundles with a given first rational Pontryagin class [@Be01 Theorem A.0.1], which lies in $H^4(X;{\mathbb{Q}})=0$.
Codimension three {#sec: codim 3}
=================
This section ends with a proof of Theorem \[thm: cd 3\]. First, we recall some results and notations from [@GZ00].
Following [@GZ00 p.349] let $P_{k,l}$ denote the principal ${\mathbb{S}}^3\times {\mathbb{S}}^3$-bundle over ${\mathbb{S}}^4$ classified by the map $q\to (q^k, q^{-l})$ in $\pi_3({\mathbb{S}}^3\times {\mathbb{S}}^3)\cong{\mathbb{Z}}\times{\mathbb{Z}}$, where $q\in {\mathbb{S}}^3$.
Let $M_{k,l}$ be the associated bundle $P_{k,l}\times_{{\mathbb{S}}^3\times {\mathbb{S}}^3} {\mathbb{S}}^3$ where the action on ${\mathbb{S}}^3$ is as in Section \[sec: 3-sphere\], see [@GZ00 p.352]. Equivalently [@Po95 Proposition 8.27], the action is given by the universal covering ${\mathbb{S}}^3\times {\mathbb{S}}^3\to SO(4)$ where the $SO(4)$-action on ${\mathbb{S}}^3$ is standard. Hence, $M_{k,l}$ is a linear ${\mathbb{S}}^3$-bundle over ${\mathbb{S}}^4$.
The Euler number and the Pontryagin number of the ${\mathbb{S}}^3$-bundle $M_{k,l}\to {\mathbb{S}}^4$ are $\pm(k+l)$ and $\pm 2(k-l)$, see [@Kr10 p.159, 169]. The Gysin sequence shows that $H^4(M_{k,l})\cong{\mathbb{Z}}_{k+l}$ if $k+l\neq 0$, and $H^4(M_{k,l})\cong{\mathbb{Z}}$ if $k+l=0$.
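For the reader's convenience, here is the relevant portion of the Gysin sequence (a standard computation which we spell out; it is only sketched in the original): for the oriented ${\mathbb{S}}^3$-bundle $M_{k,l}\to{\mathbb{S}}^4$ with Euler class $e=\pm(k+l)$ one has $$\xymatrix{
{\mathbb{Z}}\cong H^0({\mathbb{S}}^4)\ar[r]^-{\cup\, e} & H^4({\mathbb{S}}^4)\cong{\mathbb{Z}}\ar[r] & H^4(M_{k,l})\ar[r] & H^1({\mathbb{S}}^4)=0,
}$$ so that $H^4(M_{k,l})$ is the cokernel of multiplication by $\pm(k+l)$.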
Somewhat confusingly, the notation $M_{m,n}$ is also used in the literature to denote the total space of another ${\mathbb{S}}^3$-bundle over ${\mathbb{S}}^4$ based on a different choice of generators in $\pi_3({\mathbb{S}}^3\times {\mathbb{S}}^3)$. This usage goes back to James and Whitehead, and more to the point, appears in works quoted below. Thus $M_{k,l}$ of [@GZ00] equals $M_{m,n}$ of [@CE03; @Go20a] when $m=-l$, $n=k+l$. In what follows all results are rephrased in notations of [@GZ00].
According to Section \[sec: cheeger\] $M_{k,l}$ can be described as $P_{k,l}/H$ where $H$ is the diagonal subgroup in ${\mathbb{S}}^3\times {\mathbb{S}}^3$, cf. also Key Observation in [@GKS20]. Thus $M_{k,l}$ is the base of a principal ${\mathbb{S}}^3$-bundle with total space $P_{k,l}$. Our strategy hinges on the following:
Find all $k, l$ such that the principal $H$-bundle $P_{k,l}\to P_{k,l}/H=M_{k,l}$ is non-trivial.
Some partial solutions are presented below. An especially interesting case (which we could not resolve in this paper) is when $|k+l|=1$, or equivalently, $M_{k,l}$ is a homotopy sphere.
If $kl=0$, the principal $H$-bundle $P_{k,l}\to M_{k,l}$ is trivial.
The principal ${\mathbb{S}}^3\times{\mathbb{S}}^3$-bundle $P_{k,0}$ is isomorphic to $P\times {\mathbb{S}}^3$ for some principal ${\mathbb{S}}^3$-bundle $P$ over ${\mathbb{S}}^4$. The inclusion $i\co P\to P\times {\mathbb{S}}^3$ given by $i(p)=(p,1)$ is transverse to the $H$-orbits, hence it descends to an immersion $\bm{i}\co P\to (P\times {\mathbb{S}}^3)/H$, which is a diffeomorphism because both domain and co-domain are closed manifolds of the same dimension. Then $i\circ\bm{i}^{-1}$ is a section of $P\times {\mathbb{S}}^3\to (P\times {\mathbb{S}}^3)/H$, and any principal bundle with a section is trivial.
Lemma \[lem:Milnor\] below sheds some light on why the assumption “$\frac{p_1(\xi)}{2e(\xi)}$ is not odd” is relevant. Let us first restate the assumption:
\[lem: odd\] $\frac{k-l}{k+l}$ is an odd integer if and only if $\frac{k}{k+l}\in{\mathbb{Z}}$.
$\frac{k-l}{k+l}$ is odd if and only if $\frac{k-l}{k+l}+1=\frac{2k}{k+l}$ is even if and only if $\frac{k}{k+l}\in{\mathbb{Z}}$.
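To illustrate the dichotomy with concrete numbers (examples added by us): for $(k,l)=(3,-2)$ one has $$\frac{k-l}{k+l}=5\ \text{(odd)},\qquad \frac{k}{k+l}=3\in{\mathbb{Z}},$$ while for $(k,l)=(2,1)$ one has $$\frac{k-l}{k+l}=\frac{1}{3}\ \text{(not an odd integer)},\qquad \frac{k}{k+l}=\frac{2}{3}\notin{\mathbb{Z}}.$$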
\[lem:Milnor\] If $kl\neq 0$ and the principal ${\mathbb{S}}^3$-bundle $P_{k,l}\to P_{k,l}/H=M_{k,l}$ is trivial, then $k+l\neq 0$ and $\frac{k}{k+l}\in{\mathbb{Z}}$.
If the bundle is trivial, $P_{k,l}$ is diffeomorphic to ${\mathbb{S}}^3\times M_{k,l}$. By Künneth formula $H^4(P_{k,l})\cong H^4(M_{k,l})$ which is ${\mathbb{Z}}_{k+l}$ if $k+l\neq 0$, and ${\mathbb{Z}}$ if $k+l=0$. As was mentioned on [@GZ00 p.349], the quotient of the principal ${\mathbb{S}}^3\times {\mathbb{S}}^3$-bundle $P_{k,l}$ by the subgroup $1\times {\mathbb{S}}^3$ can be identified with $P_k$, the principal ${\mathbb{S}}^3$-bundle over ${\mathbb{S}}^4$ with Euler number $k$. Since $k\neq 0$, we get $H^4(P_k)\cong{\mathbb{Z}}_{k}$ [@GZ00 p.346]. The Gysin sequence for the ${\mathbb{S}}^3$-bundle $P_{k,l}\to P_k$ reads $$\xymatrix{
{\mathbb{Z}}_k\cong H^4(P_k)\ar[r] & H^4(P_{k,l})\ar[r] & H^1(P_k)=0,
}$$ which shows that $k+l$ is a nonzero integer that divides $k$.
The asymmetry in the conclusion of Lemma \[lem:Milnor\] is an illusion: $\frac{k}{k+l}\in{\mathbb{Z}}$ if and only if $\frac{l}{k+l}\in{\mathbb{Z}}$ because $\frac{k}{k+l}+\frac{l}{k+l}=1$.
By Proposition 3.11 of [@GZ00] each $P_{k,l}$ admits a cohomogeneity one action by $\mathbb{S}^3\times \mathbb{S}^3\times \mathbb{S}^3$ with codimension two singular orbits, and such that the action of the subgroup $G:=\mathbb{S}^3\times \mathbb{S}^3\times \{1\}$ coincides with the principal bundle action. Hence by [@GZ00 Theorem E] the space $P_{k,l}$ carries a $G$-invariant metric $\gamma_{k,l}$ of $K\ge 0$.
Let ${\mathbb{S}}^3(r)$ be the round $3$-sphere of radius $r$ on which ${\mathbb{S}}^3\times{\mathbb{S}}^3$ acts as in Section \[sec: 3-sphere\]. Let $h_{k,l, r}$ be the Riemannian submersion metrics on $M_{k,l}=P_{k,l}\times_{{\mathbb{S}}^3\times{\mathbb{S}}^3} {\mathbb{S}}^3$ induced by the product of $\gamma_{k,l}$ and ${\mathbb{S}}^3(r)$. Then $h_{k,l, r}$ has $K\ge 0$ and $\mathrm{scal}>0$ by [@Go20a Theorem 2.1].
An essential point is that there are infinitely many ways to represent $M$ as $M_{k,l}$. Indeed, by assumption $M=M_{k,l}$ for some $k,l\in{\mathbb{Z}}$ with $k+l\neq 0$ and such that $\frac{k-l}{k+l}$ is not an odd integer. The latter is equivalent to $\frac{l}{k+l}\notin{\mathbb{Z}}$ by Lemma \[lem: odd\]. For $i\in{\mathbb{Z}}$ let $$l_i=l-56(k+l)i\quad\text{and}\quad k_i=k+l-l_i=-l+(k+l)(56i+1).$$ Then $M_{k_i, l_i}$ are orientation-preserving diffeomorphic to $M$ [@CE03 Corollary 1.6]. By [@Go20a Section 3.1] there is $r$ and infinitely many values of $i$ for which the metrics $h_{k_i,l_i, r}$ lie in different connected components of $\mathcal{M}_{K\ge 0}(M)$.
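For later use we record two elementary identities for the family just defined: $$k_i+l_i=(k+l-l_i)+l_i=k+l,\qquad\quad \frac{l_i}{k_i+l_i}=\frac{l}{k+l}-56i\notin{\mathbb{Z}},$$ so every $M_{k_i,l_i}$ has $H^4\cong{\mathbb{Z}}_{k+l}$, and the non-integrality of $\frac{l}{k+l}$ is inherited by all members of the family; the latter is used below when Lemma \[lem:Milnor\] is applied.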
Let $g_{k,l}$ be the Riemannian submersion metric of $K\ge 0$ induced on $M_{k,l}=H\backslash P_{k,l}$ by $\gamma_{k,l}$. Proposition \[prop: cheeger\] implies that $g_{k,l}$ and $h_{k,l, r}$ lie in the same path-component of $\mathcal{M}_{K\ge 0}(M_{k,l})$. Thus for $k_i,l_i$ as in the previous paragraph $g_{k_i,l_i}$ lie in different connected components of $\mathcal{M}_{K\ge 0}(M)$.
Consider the associated vector bundle $P_{k,l}\times_{H}{\mathbb{R}}^3$ over $M_{k,l}$ where $H={\mathbb{S}}^3$ acts on ${\mathbb{R}}^3$ via the universal covering ${\mathbb{S}}^3\to SO(3)$. We give $P_{k,l}\times_{H}{\mathbb{R}}^3$ the Riemannian submersion metric induced by the product of $\gamma_{k,l}$ and the standard Euclidean metric. This is a complete metric of $K\ge 0$ with soul $P_{k,l}\times_{H}\{0\}$ which is isometric to $(M_{k,l}, g_{k,l})$.
Since $\frac{l_i}{k_i+l_i}\notin{\mathbb{Z}}$, the principal ${\mathbb{S}}^3$-bundle $P_{k_i,l_i}\to M_{k_i,l_i}$ is non-trivial by Lemma \[lem:Milnor\]. Consider the associated $3$-plane bundle $P_{k_i,l_i}\times_{{\mathbb{S}}^3}{\mathbb{R}}^3$ over $M_{k_i,l_i}$ where ${\mathbb{S}}^3$ acts on ${\mathbb{R}}^3$ via the universal covering ${\mathbb{S}}^3\to SO(3)$. Any such vector bundle is non-trivial by Lemma \[lem: class spaces\], and hence by Lemma \[lem: 3-plane bundles\] its total space is indecomposable. Pull back the vector bundles via diffeomorphisms $M\to M_{k_i,l_i}$. The pullback bundles fall into finitely many isomorphism classes by Lemma \[lem: euler pontr\], so after passing to a subsequence we can assume that the bundles are isomorphic, and hence share the same ten-dimensional total space, which we denote $V$.
In summary, $V$ is an indecomposable manifold with infinitely many complete metrics of $K\ge 0$ whose souls all equal the zero section, which is diffeomorphic to $M$, and such that the induced metrics on the souls lie in different connected components of $\mathcal{M}_{K\ge 0}(M)$. Theorem \[thm: BFK cont\] finishes the proof.
M. M. Alexandrino and R. G. Bettiol, *Lie groups and geometric aspects of isometric actions*, Springer, Cham, 2015.
I. Belegradek, *Pinching, [P]{}ontrjagin classes, and negatively curved vector bundles*, Invent. Math. **144** (2001), no. 2, 353–379.
, *[V]{}ector bundles with infinitely many souls*, Proc. Amer. Math. Soc. **131** (2003), no. 7, 2217–2221.
I. Belegradek, F. T. Farrell, and V. Kapovitch, *[S]{}pace of nonnegatively curved metrics and pseudoisotopies*, J. Differential Geom. **105** (2017), no. 3, 345–374.
B. Botvinnik and P. B. Gilkey, *The eta invariant and metrics of positive scalar curvature*, Math. Ann. **302** (1995), no. 3, 507–517.
E. Bahuaud, C. Guenther, and J. Isenberg, *Convergence stability for [R]{}icci flow*, J. Geom. Anal. **30** (2020), no. 1, 310–336.
I. Belegradek, S. Kwasik, and R. Schultz, *[M]{}oduli spaces of nonnegative sectional curvature and non-unique souls*, J. Differential Geom. **89** (2011), no. 1, 49–85.
, *[C]{}odimension two souls and cancellation phenomena*, Adv. Math. **275** (2015), 1–46.
S. Brendle, *Ricci flow and the sphere theorem*, American Mathematical Society, Providence, RI, 2010, Graduate Studies in Mathematics, Vol. 111.
J. Cheeger and D. Gromoll, *On the structure of complete manifolds of nonnegative curvature*, Ann. of Math. (2) **96** (1972), 413–443.
D. Crowley and C. M. Escher, *A classification of $S^3$-bundles over $S^4$*, Differential Geom. Appl. **18** (2003), no. 3, 363–380.
A. Dessai, *On the moduli space of nonnegatively curved metrics on Milnor spheres*, arXiv:1712.08821.
, *Moduli space of nonnegatively curved metrics on manifolds of dimension [$4k+1$]{}*, arXiv:2005.04741.
A. Dessai and D. González-Álvaro, *Moduli space of metrics of nonnegative sectional or positive Ricci curvature on homotopy real projective spaces*, Trans. Amer. Math. Soc., doi.org/10.1090/tran/8044.
A. Dessai, S. Klaus, and W. Tuschmann, *Nonconnected moduli spaces of nonnegative sectional curvature metrics on simply connected manifolds*, Bull. Lond. Math. Soc. **50** (2018), no. 1, 96–107.
D. G. Ebin, *On the space of [Riemannian]{} metrics*, ProQuest LLC, Ann Arbor, MI, 1967, Thesis (Ph.D.)–Massachusetts Institute of Technology.
, *The manifold of [R]{}iemannian metrics*, Global [A]{}nalysis ([P]{}roc. [S]{}ympos. [P]{}ure [M]{}ath., [V]{}ol. [XV]{}, [B]{}erkeley, [C]{}alif., 1968), Amer. Math. Soc., Providence, R.I., 1970, pp. 11–40.
S. Goette, M. Kerin, and K. Shankar, *Highly connected [$7$]{}-manifolds and non-negative sectional curvature*, Ann. of Math. (2) **191** (2020), no. 3, 829–892.
M. Gromov and H. B. Lawson, Jr., *Positive scalar curvature and the [D]{}irac operator on complete [R]{}iemannian manifolds*, Inst. Hautes Études Sci. Publ. Math. (1983), no. 58, 83–196 (1984).
D. González-Álvaro and M. Zibrowius, *Open manifolds with non-homeomorphic positively curved souls*, Math. Proc. Cambridge Philos. Soc., doi.org/10.1017/S0305004119000227.
M. J. Goodman, *On the moduli spaces of metrics with nonnegative sectional curvature*, Ann. Global Anal. Geom. **57** (2020), no. 2, 305–320.
, *Moduli spaces of Ricci positive metrics in dimension five*, arXiv:2002.00333.
K. Grove and W. Ziller, *Curvature and symmetry of Milnor spheres*, Ann. of Math. (2) **152** (2000), no. 1, 331–367.
C. J. Himmelberg, *Pseudo-metrizability of quotient spaces*, Fund. Math. **63** (1968), 1–6.
M. Kreck and S. Stolz, *Nonconnected moduli spaces of positive sectional curvature metrics*, J. Amer. Math. Soc. **6** (1993), no. 4, 825–850.
M. Kreck, *Differential algebraic topology. From stratifolds to exotic spheres*, Graduate Studies in Mathematics, vol. 110, American Mathematical Society, Providence, RI, 2010.
V. Kapovitch, A. Petrunin, and W. Tuschmann, *Non-negative pinching, moduli spaces and bundles with infinitely many souls*, J. Differential Geom. **71** (2005), no. 3, 365–383.
P. May and D. Nardin, *A fibration of classifying spaces*, mathoverflow.net/questions/182618.
J. Milnor, *On manifolds homeomorphic to the [$7$]{}-sphere*, Ann. of Math. (2) **64** (1956), 399–405.
R. E. Mosher and M. C. Tangora, *Cohomology operations and applications in homotopy theory*, Harper & Row, Publishers, New York-London 1968.
J. R. Munkres, *Topology*, Prentice Hall, Inc., Upper Saddle River, NJ, 2000, Second edition.
S. Ottenburger, *A classification of 5-dimensional manifolds, souls of codimension two and non-diffeomorphic pairs*, arXiv:1103.0099.
I. R. Porteous, *Clifford algebras and the classical groups*, Cambridge Studies in Advanced Mathematics, vol. 50, Cambridge University Press, Cambridge, 1995.
V. A. Sharafutdinov, *Convex sets in a manifold of nonnegative curvature*, (Russian) Mat. Zametki **26** (1979), no. 1, 129–136, 159.
W. Tuschmann and D. J. Wraith, *Moduli spaces of [R]{}iemannian metrics*, Oberwolfach Seminars, vol. 46, Birkhäuser Verlag, Basel, 2015, Second corrected printing.
W. Tuschmann and M. Wiemeler, *On the topology of moduli spaces of non-negatively curved Riemannian metrics*, arXiv:1712.07052.
D. J. Wraith, *On the moduli space of positive [R]{}icci curvature metrics on homotopy spheres*, Geom. Topol. **15** (2011), no. 4, 1983–2015.
M. Y. Wang and W. Ziller, *Einstein metrics on principal torus bundles*, J. Differential Geom. **31** (1990), no. 1, 215–248.
J.-W. Yim, *Space of souls in a complete open manifold of nonnegative curvature*, J. Differential Geom. **32** (1990), no. 2, 429–455.
W. Ziller, *On M. Mueter’s Ph.D. Thesis on Cheeger deformations*, arXiv:0909.0161.
[^1]: 2010 *Mathematics Subject classification. Primary 53C20.*
[^2]: This work was partially supported by the Simons Foundation grant 524838 (Belegradek) and by MINECO grant MTM2017-85934-C3-2-P (González-Álvaro).
---
abstract: 'We investigate pointwise multipliers on vector-valued function spaces over $\R^d$, equipped with Muckenhoupt weights. The main result is that in the natural parameter range, the characteristic function of the half-space is a pointwise multiplier on Bessel-potential spaces with values in a UMD Banach space. This is proved for a class of power weights, including the unweighted case, and extends the classical result of Shamir and Strichartz. The multiplication estimate is based on the paraproduct technique and a randomized Littlewood-Paley decomposition. An analogous result is obtained for Besov and Triebel-Lizorkin spaces.'
address:
- 'Institute of Mathematics, Martin-Luther-Universität Halle-Wittenberg, 06099 Halle (Saale), Germany'
- |
Delft Institute of Applied Mathematics\
Delft University of Technology\
P.O. Box 5031\
2600 GA Delft\
The Netherlands
author:
- Martin Meyries
- Mark Veraar
title: 'Pointwise multiplication on vector-valued function spaces with power weights'
---
[^1]
Introduction
============
It is a classical result of Shamir [@Shamir] and Strichartz [@Strich67] that for $p\in (1,\infty)$ the characteristic function ${{{\bf 1}}}_{\R_+^d}$ of the half-space $\R^d_+ = \{(x',t): x'\in \R^{d-1},\, t>0\}$ acts as a pointwise multiplier on the Bessel-potential space (or fractional Sobolev space) $H^{s,p}(\R^d)$ in the parameter range $$-\frac{1}{p'}<s<\frac1p,$$ where $p'$ is the dual exponent of $p$. This condition can be understood by recalling that the trace at the hyperplane $\{(x',0):x'\in \R^{d-1}\}$ is continuous on these spaces if and only if $s>1/p$. The case of negative smoothness follows from a duality argument. The corresponding result was proved some years earlier for the Slobodetskii spaces $W^{s,p}(\R^d)$ by Lions $\&$ Magenes [@LiMa] and Grisvard [@Gri63]. Further extensions to Besov spaces $B_{p,q}^s(\R^d)$ and Triebel-Lizorkin spaces $F_{p,q}^s(\R^d)$ were given by Peetre [@Peetre76], Triebel [@Tri83], Franke [@Franke86], Marschall [@Marschall] and Sickel [@Sickel87], see the monograph of Runst $\&$ Sickel [@RS96] for details. For more recent results we also refer to Sickel [@Sickel99b; @Sickel99a] and Triebel [@Triebel02].
The characteristic function serves as a natural extension operator for the half-space. Its multiplier property was one of the main ingredients for Seeley’s result [@Se] on complex interpolation of Bessel-potential spaces with boundary conditions. On the other hand, the multiplier property is also a direct consequence of Seeley’s result. In this sense the assertions are equivalent. They are further equivalent to the validity of Hardy’s inequality [@Tri83 Section 2.8.6].
In this paper we extend the multiplier result for the characteristic function to the weighted vector-valued case. We consider power weights $w_\gamma$ depending on the last coordinate only, i.e., $$w_\gamma(x',t) = |t|^\gamma, \qquad x'\in \R^{d-1},\qquad t\in \R.$$ These weights act at the same hyperplane as ${{{\bf 1}}}_{\R_+^d}$. Hence the parameter range where ${{{\bf 1}}}_{\R_+^d}$ is a multiplier will depend on the exponent $\gamma$. Here the dual exponent $\gamma' = - \frac{\gamma}{p-1}$ of $\gamma$ with respect to $p$ comes into play.
The following is our main result. It is proved in Section \[subs:multich\]. In the vector-valued case it seems to be new also in the unweighted case $\gamma = 0$.
\[thm:1\]Let $X$ be a *UMD* Banach space, $p\in (1,\infty)$ and $\gamma\in (-1,p-1)$. Then for $$-\frac{1+\gamma'}{p'} < s < \frac{1+\gamma}{p}$$ the characteristic function ${{{\bf 1}}}_{\R_+^d}$ of the half-space is a pointwise multiplier on $H^{s,p}(\R^d,w_\gamma;X)$.
To be precise, the theorem states that for all $f\in H^{s,p}(\R^d,w_\gamma;X)$ the product ${{{\bf 1}}}_{\R_+^d}f$ again belongs to $H^{s,p}(\R^d,w_\gamma;X)$ and there is a constant $C > 0$, independent of $f$, such that $$\|{{{\bf 1}}}_{\R_+^d}f\|_{H^{s,p}(\R^d,w_\gamma;X)} \leq C \|f\|_{H^{s,p}(\R^d,w_\gamma;X)}.$$ This multiplier result seems to close a gap in the literature. It has already been used in several works.
The spaces $H^{s,p}(\R^d,w_\gamma;X)$ are defined with the Bessel-potential in the usual way based on the weighted Lebesgue space $L^p(\R^d,w_\gamma;X)$, see Section \[subsec:spaces\]. For an exponent $\gamma\in (-1,p-1)$ as in the theorem, the weight $w_\gamma$ belongs to the Muckenhoupt class $A_p$, see Section \[sec:Mucken\]. The condition on $s$ shows the effect of the weight on the regularity of $H^{s,p}(\R^d,w_\gamma;X)$ at the hyperplane $\{(x',0):x'\in \R^{d-1}\}$: the range of $s$ where jumps are allowed is enlarged as $\gamma$ increases.
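In concrete terms, since $\gamma' = -\frac{\gamma}{p-1}$ and $p'=\frac{p}{p-1}$, the lower bound in Theorem \[thm:1\] can be rewritten as $$\frac{1+\gamma'}{p'}=\Big(1-\frac{\gamma}{p-1}\Big)\frac{p-1}{p}=\frac{p-1-\gamma}{p},$$ so that the condition on $s$ reads $-\frac{p-1-\gamma}{p}<s<\frac{1+\gamma}{p}$; for $\gamma=0$ this reduces to the classical range $-\frac{1}{p'}<s<\frac{1}{p}$.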
A Banach space $X$ has UMD if and only if the Hilbert transform extends continuously to $L^2(\R;X)$, see Section \[subsec:UMD\] for some details and references. For instance, Hilbert spaces and classical function spaces like $L^p$, $W^{s,p}$, $H^{s,p}$, $B_{p,q}^s$ and $F_{p,q}^s$ have UMD in their reflexive range. Many other fundamental operators in vector-valued harmonic analysis are bounded if and only if the underlying space has UMD. Since the 80’s, it has turned out that in UMD spaces one can develop vector-valued Fourier analysis (see [@Bou83; @Bou2; @Bu2; @Bu3; @McC84; @Zim89]). More recently, this has led to an extensive theory on operator-valued Fourier multipliers and singular integrals (see [@GW03; @HHN; @HytTb; @HytWeT1; @StrWe; @We]), which originally was motivated by regularity theory for parabolic PDEs (see [@DHP; @KuWe] and references therein).
Employing standard localization techniques, Theorem \[thm:1\] extends to the characteristic function of Lipschitz domains on spaces equipped with power weights based on the distance to the boundary.
\[cor:Lipschitz-indicator\] Let $X$ be a *UMD* Banach space, $p\in (1,\infty)$, $\gamma\in (-1,p-1)$ and $-\frac{1+\gamma'}{p'} < s < \frac{1+\gamma}{p}$. Assume $\Omega \subset \R^d$ is a bounded Lipschitz domain. Then the characteristic function ${{{\bf 1}}}_{\Omega}$ of $\Omega$ is a pointwise multiplier on $H^{s,p}(\R^d,\emph{\text{dist}}(\cdot,\partial \Omega)^\gamma;X)$.
Our second main result concerns Besov and Triebel-Lizorkin spaces and does not require the UMD property of the underlying Banach space. It is also proved in Section \[subs:multich\]. For weighted vector-valued $B$-spaces, the case $s> 0$ was already treated by Grisvard [@Gri63].
\[thm:2\]Let $X$ be a Banach space, $p\in (1,\infty)$, $q\in [1,\infty]$ and $\gamma\in (-1,p-1)$. Then for $-\frac{1+\gamma'}{p'} < s < \frac{1+\gamma}{p}$ the characteristic function ${{{\bf 1}}}_{\R_+^d}$ of the half-space is a pointwise multiplier on $B^{s}_{p,q}(\R^d,w_\gamma;X)$ and on $F^{s}_{p,q}(\R^d,w_\gamma;X)$, respectively.
Corollary \[cor:Lipschitz-indicator\] can be extended to the setting of $B$- and $F$-spaces as well.
Another result, which is also due to Strichartz [@Strich67] in the unweighted scalar case, is devoted to the pointwise multiplication with bounded $H^{s,p}$-functions and motivated by power nonlinearities. A special case of Theorem \[thm:lastone\] is the following, where we can allow for general weights $w\in A_p$. The notion of the type of a Banach space is explained in Section \[sec:type\]. For instance, in the theorem one can choose for $X$ one of the classical function spaces $L^r$, $W^{\alpha,r}$, $H^{\alpha,r}$, $B_{r,q}^\alpha$ or $F_{r,q}^\alpha$, provided $q,r\in [2,\infty)$.
\[thm:3\] Let $X$ be a *UMD* Banach space which has type $2$, and let $s>0$, $p\in(1,\infty)$ and $w\in A_p$. Then there is a constant $C> 0$ such that for all $m\in H^{s,p}(\R^d, w)\cap L^\infty(\R^d)$ and $f\in H^{s,p}(\R^d,w;X)\cap L^\infty(\R^d;X)$ one has $$\begin{aligned}
\|mf\|_{H^{s,p}(\R^d,w;X)} \leq C \big(\|m\|_{L^\infty(\R^d)} \|f\|_{H^{s,p}(\R^d,w;X)} + \|m\|_{H^{s,p}(\R^d, w)} \|f\|_{L^\infty(\R^d;X)}\big).\end{aligned}$$
In Proposition \[multiplication-Hinfty\] a variant of this estimate is given for operator-valued multipliers $m$, i.e., $m(x)\in {{\mathscr L}}(X,Y)$ for $x\in \R^d$ with UMD spaces $X,Y$. In this case we have to assume that the image of $m$ is $\mathcal R$-bounded (see Section \[subsec:UMD\] for more information), the sup-norm in the above estimate is replaced by its $\mathcal R$-bound $\mathcal R(m)$ and the $H$-norm of $m$ is replaced by an $F$-norm depending on the type of $Y$.
Our motivation to consider the weighted vector-valued setting is the $L^p$-$L^q$-maximal regularity approach to parabolic evolution equations, and further the approach based on the weights $\text{dist}(\cdot,\partial\Omega)^\gamma$ to treat problems with rough boundary data. In the forthcoming paper [@MeyVer4] we apply the multiplier results to extend Seeley’s characterization of complex and real interpolation spaces of Sobolev spaces with boundary conditions to the weighted vector-valued case. This makes it possible, for instance, to characterize the fractional power domains of the time derivative with zero initial conditions on $L^p(\R_+,w_\gamma;X)$ and on $F_{p,q}^0(\R_+,w_\gamma;X)$.
In the rest of this introduction we explain the techniques employed in the proofs of the above results and the difficulties arising in the vector-valued setting.
Strichartz’ proof of the multiplier assertion for the characteristic function on $H^{s,p}(\R^d)$ is based on a difference norm for these spaces, see [@Strich67 Section 2]. It generalizes to $H$-spaces with values in a Hilbert space, see [@Walker Section 6.1]. In the general vector-valued case, such a norm does not seem to be available for $H^{s,p}(\R^d;X)$, even if $X$ has UMD. The vector-valued analogue of the difference norm leads to the Triebel-Lizorkin space $F^{s}_{p,2}(\R^d;X)$, see [@Tri83 Section 2.5.10] and [@Triebel3 Theorem 6.9]. But one has $$H^{s,p}(\R^d;X) = F^{s}_{p,2}(\R^d;X),$$ i.e., the usual Littlewood-Paley decomposition for the $H$-spaces, if and only if $X$ can be renormed as a Hilbert space (see [@HaMe96], and Proposition \[prop:type-embed\] for a refinement of this assertion in terms of type and cotype of $X$). As a substitute, a randomized Littlewood-Paley decomposition is available if $X$ has UMD. This result is originally due to Bourgain [@Bou2] and McConnell [@McC84]. In Section \[section:UMD\] we derive such a decomposition for the weighted spaces $H^{s,p}(\R^d,w;X)$ with $A_p$-weights $w$, essentially as a consequence of [@HH12]. As a byproduct, we also obtain that these spaces form a complex interpolation scale.
In the general vector-valued case, difference norms are still available for $F$- and $B$-spaces with positive smoothness. As in [@Tri83 Section 2.8.6] one could use these norms to prove the multiplier property of ${{{\bf 1}}}_{\R_+^d}$. At least in the reflexive range, the case of negative smoothness then follows from a duality argument. However, this excludes the important cases $F_{p,1}^s$ and $F_{p,\infty}^s$ as well as non-reflexive underlying spaces $X$. For the Slobodetskii spaces $W$ and Besov spaces $B$, the multiplier result can also be derived as in [@Gri63] from real interpolation with Dirichlet boundary conditions. For real interpolation spaces quite convenient norms are available. However, the $H$- and the $F$- spaces cannot be obtained by real interpolation.
Theorem \[thm:1\] will be a consequence of the following estimate, which is valid under the assumptions on the parameters as in the theorem: $$\label{intro-est}
\|mf\|_{H^{s,p}(\R^d,w_\gamma;X)} \leq C \big( \|m\|_{B_{r,\infty}^{\frac{1+\mu}{r}}(\R,w_\mu)} + \|m\|_\infty\big)\|f\|_{H^{s,p}(\R^d,w_\gamma;X)}.$$ Here $r$ and $\mu$ can be chosen in a range which depends on the other parameters, and $m$ only depends on the last coordinate of $\R^d$. After a suitable cut-off, the characteristic function ${{{\bf 1}}}_{\R_+^d}$ belongs to the Besov space $B_{r,\infty}^{\frac{1+\mu}{r}}(\R,w_\mu)$ for all $r\in (1,\infty)$ and $\mu\in (-1,r-1)$ (see Lemma \[chi\]). In this context ${{{\bf 1}}}_{\R_+^d}$ is considered to depend on the last variable only. Together with (\[intro-est\]), this yields Theorem \[thm:1\].
The estimate (\[intro-est\]) is shown in Theorem \[multiplication-esti\]. Also here the more general case of an operator-valued $m$ is considered, where as before the sup-norm of $m$ is replaced by its $\mathcal R$-bound. The estimate is analogous to its counterpart for unweighted, scalar-valued $B$- and $F$-spaces, see [@Franke86; @Marschall; @RS96; @Sickel87; @Tri83] and in particular [@RS96 Section 4.6]. As in these references, its proof is based on the paraproduct technique as introduced by Bony (see e.g. [@Bony]). For pointwise multipliers this method was first employed by Peetre [@Peetre76] and Triebel [@Tri83] in order to treat the case of $B$- and $F$-spaces in the full parameter range $p,q\in (0,\infty]$. For more recent developments in the context of paraproducts in a UMD-valued setting we refer to [@HytWeis10; @MP12].
The idea of the paraproduct approach is as follows, see also [@RS96 Section 4.4]. For a function $\varphi$ with ${\widehat}\varphi \in C_c^\infty(\R^d)$ and ${\widehat}\varphi (0) = 1$ one sets $S^l f = {{\mathscr F}}^{-1}({\widehat}\varphi(2^{-l}\cdot) {\widehat}f)$, such that $S^l f \to f$ as $l\to \infty$ in the sense of distributions. One defines the product of two distributions $m$ and $f$ as $$mf = \lim_{l\to \infty} S^l m \cdot S^l f,$$ whenever this limit exists in the distributional sense. This extends the pointwise product of smooth functions. Observe that $S^l m \cdot S^l f$ is well-defined in a pointwise sense since the factors have compact Fourier support and are therefore smooth. Now one decomposes this limit into the sum of three series $\Pi_1(m,f)$, $\Pi_2(m,f)$ and $\Pi_3(m,f)$, the paraproducts, such that $$mf = \Pi_1(m,f) + \Pi_2(m,f) + \Pi_3(m,f),$$ see Section \[sec:paraproducts\] for details. These collect different sizes of Fourier supports of $m$ and $f$, respectively, and are thus estimated in different ways.
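Schematically, writing $S^l m \cdot S^l f = \sum_{0\le j,k\le l} S_j m\, S_k f$ and grouping the terms according to the relative size of the frequency blocks gives $$\sum_{0\le j,k\le l} S_j m\, S_k f=\sum_{\substack{j,k\le l\\ j\le k-2}} S_j m\, S_k f+\sum_{\substack{j,k\le l\\ |j-k|\le 1}} S_j m\, S_k f+\sum_{\substack{j,k\le l\\ j\ge k+2}} S_j m\, S_k f,$$ and letting $l\to\infty$ in the three groups leads, at least formally, to $\Pi_1(m,f)$, $\Pi_2(m,f)$ and $\Pi_3(m,f)$, respectively.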
The estimate of $\Pi_1(m,f)$, in which the $m$-factors have large Fourier supports, is based on the randomized Littlewood-Paley decomposition for $H^{s,p}(\R^d,w;X)$. It yields $$\label{intro-esti-1}
\|\Pi_1(m,f)\|_{H^{s,p}(\R^d,w;X)} \leq C \|m\|_\infty \|f\|_{H^{s,p}(\R^d,w;X)},$$ see Lemma \[multiplication1\]. An analogous result holds with $H^{s,p}$ replaced by $F_{p,q}^s$ and $B_{p,q}^s$, where one can directly use the Littlewood-Paley decomposition from the definition of these spaces and thus does not require $X$ to have UMD (see Lemma \[multiplication2\]).
The other two paraproducts are estimated in endpoint type Triebel-Lizorkin norms to the result $$\label{intro-esti-2}
\|\Pi_i(m,f)\|_{F^{s}_{p,1}(\R^d,w_\gamma;X)} \leq C \|m\|_{B_{r,\infty}^{\frac{1+\mu}{r}}(\R, w_\mu)} \|f\|_{F^{s}_{p,\infty}(\R^d,w_\gamma;X)}, \qquad i=2,3,$$ see the Lemmas \[multiplication3\] and \[multiplication4\]. As in [@Franke86] and [@RS96 Section 4.4], the proofs are based on Jawerth-Franke type embeddings and weighted estimates of series in spaces of entire analytic functions. These rather technical results are considered in detail in Appendix \[sec:analytic\].
Observe that in (\[intro-esti-2\]) there is a smoothing in the microscopic parameter $q$. Since $$F^{s}_{p,1} \hookrightarrow F^{s}_{p,q} \hookrightarrow F^{s}_{p,\infty}, \qquad q\in [1,\infty],$$ on the left-hand side of (\[intro-esti-2\]) we have the smallest $F$-space and on the right-hand side of (\[intro-esti-2\]) we have the largest $F$-space for fixed $s$ and $p$. The smoothing can be employed for the $H$-spaces as follows: since $$F^{s}_{p,1}(\R^d,w;X) \hookrightarrow H^{s,p}(\R^d,w;X)\hookrightarrow F^{s}_{p,\infty}(\R^d,w;X)$$ for arbitrary Banach spaces $X$ and weights $w\in A_p$ (see [@SchmSi05] and [@MeyVer1 Proposition 3.12]), the estimate (\[intro-esti-2\]) immediately gives $$\|\Pi_i(m,f)\|_{H^{s,p}(\R^d,w_\gamma;X)} \leq C \|m\|_{B_{r,\infty}^{\frac{1+\mu}{r}}(\R, w_\mu)} \|f\|_{H^{s,p}(\R^d,w_\gamma;X)}, \qquad i=2,3.$$ In particular, the smoothing effect in (\[intro-esti-2\]) on the microscopic scale allows one to avoid the randomized Littlewood-Paley decomposition in the estimates of $\Pi_2$ and $\Pi_3$.
The idea to treat vector-valued $H$-spaces by considering the corresponding $F$-spaces and employing that many of their properties are independent of the microscopic parameter $q$ is due to Schmeisser $\&$ Sickel [@SchmSiunpublished] in the context of traces, see also [@MeyVer2; @SSS].
This paper is organized as follows. In Section \[sec:prel\] we introduce the weighted function spaces and in Section \[section:UMD\] we consider the randomized Littlewood-Paley decomposition for weighted Bessel-potential spaces. The paraproducts are estimated in Section \[sec:Point\], and these results are applied in Section \[sec:p-mult\] to obtain our main results on pointwise multiplication. In Appendix \[sec:analytic\] we prove the required auxiliary results for spaces of entire analytic functions.
**Notations.** Generic positive constants are denoted by $C$. For $x\in \R^d$ we write $$x = (x',t), \qquad x'\in \R^{d-1}, \qquad t\in \R.$$ We let $\N = \{1, 2, 3, \ldots\}$ and $\N_0 = \N\cup\{0\}$. Throughout, $X$ and $Y$ are complex Banach spaces. It will explicitly be stated if further properties as UMD are assumed. The space of bounded linear operators from $X$ to $Y$ is denoted by ${{\mathscr L}}(X,Y)$, and ${{\mathscr L}}(X) = {{\mathscr L}}(X,X)$. The Schwartz class is denoted by ${{\mathscr S}}(\R^d;X)$, and we write ${{\mathscr S'}}(\R^d;X) = {{\mathscr L}}({{\mathscr S}}(\R^d);X)$ for the $X$-valued tempered distributions. The Fourier transform is denoted by $\widehat{f}$ or ${{\mathscr F}}f$. For $\sigma = k + \sigma_*$ with $k\in \N_0$ and $\sigma_*\in [0,1)$ we denote by $BC^\sigma(\R^d; X)$ the space of $C^k$-functions with bounded derivatives and $\sigma_*$-Hölder continuous $k$-th derivatives.
Preliminaries\[sec:prel\]
=========================
In this section we briefly recall some notions and facts from the Fourier analytic approach to function spaces (see [@Tri83], and for the weighted case [@Bui82; @HS08]). For the vector-valued setting we refer to [@SSS; @SchmSiunpublished; @Tr01] and [@MeyVer1 Sections 2 and 3].
Muckenhoupt weights {#sec:Mucken}
-------------------
A function $w:\R^d\to [0,\infty)$ is called a weight if $w\in L^1_{\text{loc}}(\R^d)$ and if it is positive almost everywhere on $\R^d$. For $p\in (1,\infty)$ the Muckenhoupt class of weights on $\R^d$ is denoted by $A_p$ or $A_p(\R^d)$, and $A_\infty = \bigcup_{p> 1} A_p$ (see [@GraModern Chapter 9] for the general theory). We are mainly interested in anisotropic power weights $w$ of the form $$w_{\gamma}(x',t) = |t|^\gamma, \qquad x=(x',t) \in \R^d, \qquad x'\in \R^{d-1},\qquad t \in \R.$$ This notation will be used throughout the rest of the paper. Here $w_{\gamma} \in A_p$ if and only if $\gamma \in (-1,p-1)$, see [@HS08 Example 1.5]. For $w\in A_\infty$ the norm of $L^p(\R^d,w;X)$ is defined by $$\|f\|_{L^p(\R^d,w;X)} = \left ( \int_{\R^d} \|f(x)\|_X^p w(x) \, dx\right)^{1/p}.$$ For $f\in L^1_{\text{loc}}(\R^d;X)$ the Hardy-Littlewood maximal operator $M$ is given by $$(Mf)(x) = \sup_{r>0} \frac{1}{|B(x,r)|} \int_{B(x,r)} \|f(y)\|_X\,dy, \qquad x\in \R^d.$$ The operator $M$ is bounded on $L^p(\R^d,w;X)$ if and only if $w\in A_p$. More generally, the weighted Fefferman-Stein maximal inequality (see [@AJ80 Theorem 3.1], and also [@MeyVer1 Proposition 2.2]) says that for $p\in (1,\infty)$, $q\in (1,\infty]$, $w\in A_p$ and any $(f_k)_{k\geq 0} \subset L^p(\R^d,w;\ell^q(X))$ we have $$\label{Fefferman-Stein}
\|(Mf_k)_{k\geq 0} \|_{L^p(\R^d,w;\ell^q)} \leq C\|(f_k)_{k\geq 0}\|_{L^p(\R^d,w;\ell^q(X))}.$$ In Lemma \[lem:maxoperator\] in the appendix we consider a version of this inequality for mixed-norm spaces.
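As an illustration of the condition $\gamma\in(-1,p-1)$ for the power weights $w_\gamma$ above, consider cubes whose projection onto the last coordinate is an interval $(-R,R)$ centered at the singularity of $w_\gamma$; for such cubes the $A_p$-quantity reduces to the one-dimensional expression $$\Big(\frac{1}{2R}\int_{-R}^{R}|t|^{\gamma}\,dt\Big)\Big(\frac{1}{2R}\int_{-R}^{R}|t|^{-\frac{\gamma}{p-1}}\,dt\Big)^{p-1} =\frac{1}{(\gamma+1)\big(1-\frac{\gamma}{p-1}\big)^{p-1}},$$ which is finite and independent of $R$ precisely when $-1<\gamma<p-1$. Cubes away from the hyperplane $\{t=0\}$ can be handled by comparing $w_\gamma$ with a constant; see [@HS08 Example 1.5] for the complete argument.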
Weighted function spaces {#subsec:spaces}
------------------------
Let $\Phi(\R^d)$ be the collection of all sequences $(\varphi_k)_{k\geq 0} \subset {{\mathscr S}}(\R^d)$ such that $$\begin{aligned}
{\widehat}{\varphi}_0 = {\widehat}{\varphi}, \qquad {\widehat}{\varphi}_1(\xi) = {\widehat}{\varphi}(\xi/2) - {\widehat}{\varphi}(\xi), \qquad {\widehat}{\varphi}_k(\xi) = {\widehat}{\varphi}_1(2^{-k+1} \xi), \quad k\geq 2, \qquad \xi\in \R^d,\end{aligned}$$ with a generator function $\varphi$ of the form $$ 0\leq {\widehat}{\varphi}(\xi)\leq 1, \quad \xi\in \R^d, \qquad {\widehat}{\varphi}(\xi) = 1 \ \text{ if } \ |\xi|\leq 1, \qquad {\widehat}{\varphi}(\xi)=0 \ \text{ if } \ |\xi|\geq \frac32.$$ Observe that ${\text{\rm supp\,}}{\widehat}{\varphi}_k \subseteq \{2^{k-1} \leq |\xi|\leq \frac{3}{2}2^{k}\}$ for $k\geq 1$. For $(\varphi_k)_{k\geq 0} \in \Phi(\R^d)$ and $f\in {{\mathscr S'}}(\R^d;X)$ we set $$S_k f = \varphi_k * f = {{\mathscr F}}^{-1} ( {\widehat}{\varphi}_k {\widehat}{f}).$$ The norms of the Besov space $B$, the Triebel-Lizorkin space $F$ and the Bessel-potential space $H$ are for $s\in \R$, $p\in (1,\infty)$, $q\in [1,\infty]$, $w\in A_\infty$ and $f\in {\mathscr S}'(\R^d;X)$ given by $$\|f\|_{B_{p,q}^s (\R^d,w;X)} = \Big\| \big( 2^{sk}S_k f\big)_{k\geq 0} \Big\|_{\ell^q(L^p(\R^d,w;X))},$$ $$\|f\|_{F_{p,q}^s (\R^d,w;X)} = \Big\| \big( 2^{sk}S_k f\big)_{k\ge 0} \Big\|_{L^p(\R^d,w;\ell^q(X))},$$ $$\|f\|_{H^{s,p}(\R^d,w;X)} = \|{{\mathscr F}}^{-1} [(1+|\cdot|^2)^{s/2} {\widehat}{f} ]\|_{L^p(\R^d,w;X)}.$$ Each choice of $(\varphi_k)_{k\geq 0} \in \Phi(\R^d)$ leads to an equivalent norm for the $B$- and $F$-spaces. For $m\in \N_0$ we also consider Sobolev spaces $W$, with norm $$\|f\|_{W^{m,p}(\R^d,w;X)} = \Big(\sum_{|\alpha|\leq m} \|D^{\alpha} f\|_{L^p(\R^d,w;X)}^p\Big)^{1/p}.$$ By [@MeyVer1 Lemma 3.8], the space ${{\mathscr S}}(\R^d;X)$ is dense in each of the above spaces if $q<\infty$. A useful substitute for the lack of density in case $q=\infty$ is the Fatou property. If $E = B_{p,q}^s (\R^d,w;X)$ or $E = F_{p,q}^s (\R^d,w;X)$, it says that for $(f_n)_{n\geq 0}\subset E$ we have $$\label{fatou}
\lim_{n\to \infty} f_n = f \text{ in }{{\mathscr S'}}(\R^d;X), \quad \liminf_{n\to \infty}\|f_n\|_{E}<\infty \quad \Longrightarrow \quad f\in E, \quad \|f\|_E \leq \liminf_{n\to \infty} \|f_n\|_E,$$ see [@SSS Proposition 2.18]. We have the elementary embeddings $$\label{eq:scaleBF}
B_{p,\min\{p,q\}}^{s} (\R^d,w;X)\hookrightarrow F_{p,q}^{s} (\R^d,w;X) \hookrightarrow B_{p,\max\{p,q\}}^{s} (\R^d,w;X),$$ and if $1\leq q_0\leq q_1\leq \infty$, then $$\label{eq:monotony}
B_{p,q_0}^s (\R^d,w;X)\hookrightarrow B_{p,q_1}^s (\R^d,w;X), \qquad F_{p,q_0}^s (\R^d,w;X)\hookrightarrow F_{p,q_1}^s (\R^d,w;X).$$ Moreover, for $w\in A_p$, $s\in \R$ and $m\in \N_0$, $$\begin{aligned}
\label{eq:TriebelLizorkinH}
F^{s}_{p,1}(\R^d,w;X) \hookrightarrow H^{s,p}(\R^d,w;X) \hookrightarrow F^{s}_{p,\infty}(\R^d,w;X),\\
\label{eq:TriebelLizorkinW}
F^{m}_{p,1}(\R^d,w;X) \hookrightarrow W^{m,p}(\R^d,w;X) \hookrightarrow F^{m}_{p,\infty}(\R^d,w;X),\end{aligned}$$ where the embeddings for $F^{s}_{p,1}$ and $F^{m}_{p,1}$ even hold in case $w\in A_\infty$.
\[HW\] Note that $L^p(\R^d,w;X) = H^{0,p}(\R^d,w;X) = W^{0,p}(\R^d,w;X)$. But $H^{1,p}(\R^d;X) = W^{1,p}(\R^d;X)$ if and only if $X$ has the UMD property (see [@McC84; @Zim89]), and $L^p(\R^d;X) = F_{p,2}^0(\R^d;X)$ if and only if $X$ can be renormed as a Hilbert space (see [@HaMe96] and [@SchmSiunpublished Remark 7]).
A difference norm for weighted Besov spaces
-------------------------------------------
For an integer $m\geq 1$ define $$\Delta_h^m f(x) = \sum_{l =0}^m {{m}\choose{l }} (-1)^l f(x+(m-l)h), \qquad x,h\in \R^d.$$ For $f\in L^p(\R^d,w;X)$ let $$[f]_{B^s_{p,q}(\R^d,w;X)}^{(m)} = \Big(\int_0^\infty t^{-sq} \Big\|t^{-d}\int_{|h|\leq t} \|\Delta_h^m f\|_X \, dh\Big\|_{L^p(\R^d,w)}^{q} \, \frac{dt}{t}\Big)^{1/q},$$ with the usual modification if $q=\infty$, and set $${|\!|\!|}f{|\!|\!|}_{B^s_{p,q}(\R^d,w;X)}^{(m)} = \|f\|_{L^p(\R^d,w;X)} + [f]_{B^s_{p,q}(\R^d,w;X)}^{(m)}.$$
One can extend a well-known result on the equivalence of norms to the weighted case (cf. [@SchmSiunpublished], [@Tri83 Section 2.5.10] and [@Triebel3 Theorem 6.9]). A similar result for weighted $F$-spaces is stated in [@MeyVer2 Proposition 2.3].
\[prop:Lpsmoothness-Besov\] Let $s>0$, $p\in (1, \infty)$, $q\in [1, \infty]$ and $w\in A_p$. Let $m \in \N$ be such that $m>s$. There is a constant $C>0$ such that for all $f\in L^p(\R^d,w;X)$ one has $$\label{eq:equivnorm-Besov}
C^{-1} \|f\|_{B^s_{p,q}(\R^d,w;X)}\leq {|\!|\!|}f{|\!|\!|}_{B^s_{p,q}(\R^d,w;X)}^{(m)} \leq C\|f\|_{B^s_{p,q}(\R^d,w;X)},$$ whenever one of these expressions is finite.
It is often more convenient to work with the $L^p(\R^d,w;X)$-modulus of smoothness, defined by $$\omega_{p,w}^m(f,t) = \sup_{|h|\leq t} \|\Delta^m_h f\|_{L^p(\R^d,w;X)}, \qquad t>0.$$ In the unweighted case $w\equiv 1$, for any integer $m> s$ the expression $$\| f\|_{B^{s}_{p,q}(\R^d,w;X)}^{(m)} = \|f\|_{L^p(\R^d,w;X)} + \Big(\int_0^\infty t^{-sq} \omega_{p,w}^m(f,t)^q \, \frac{dt}{t}\Big)^{1/q},$$ defines an equivalent norm on $B^{s}_{p,q}(\R^d,w;X)$ (modification if $q=\infty$). We do not know if this extends to the weighted setting. However, by Minkowski’s inequality one has $$\Big\|t^{-d} \int_{|h|\leq t} \|\Delta_h^m f\|_X \, dh\Big\|_{L^p(\R^d,w)} \leq t^{-d} \int_{|h|\leq t} \|\Delta_h^m f\|_{L^p(\R^d,w;X)} \, dh \leq C \sup_{|h|\leq t}\|\Delta_h^m f\|_{L^p(\R^d,w;X)}.$$ Therefore, one always has $$\label{remark-besov-norm}
{|\!|\!|}f{|\!|\!|}_{B^{s}_{p,q}(\R^d,w;X)}^{(m)} \leq C \| f\|_{B^{s}_{p,q}(\R^d,w;X)}^{(m)}.$$
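As a simple illustration, anticipating Lemma \[chi\] below, take $d=1$, $r\in(1,\infty)$, $\mu\in(-1,r-1)$ and $g=\chi\,{{{\bf 1}}}_{(0,\infty)}$ with a smooth compactly supported cut-off $\chi$. For $0<|h|\leq 1$ the difference $\Delta^1_h g$ is bounded and supported in an $|h|$-neighbourhood of the jump at $t=0$, up to a smooth error of size $O(|h|)$ supported in a fixed compact set, so that $$\|\Delta^1_h g\|_{L^r(\R,w_\mu)}^r \leq C\Big(\int_{-|h|}^{|h|}|t|^{\mu}\,dt + |h|^{r}\Big)\leq C|h|^{1+\mu}, \qquad\text{hence}\qquad \omega^1_{r,w_\mu}(g,t)\leq C t^{\frac{1+\mu}{r}}, \quad 0<t\leq 1,$$ while $\omega^1_{r,w_\mu}(g,t)\leq 2\|g\|_{L^r(\R,w_\mu)}$ for $t\geq 1$. In view of (\[remark-besov-norm\]) and Proposition \[prop:Lpsmoothness-Besov\] (note that $\frac{1+\mu}{r}<1$), this indicates that $g\in B^{\frac{1+\mu}{r}}_{r,\infty}(\R,w_\mu)$.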
UMD-valued Bessel-potential spaces {#section:UMD}
==================================
In this section we derive a Littlewood-Paley decomposition for the spaces $H^{s,p}(\R^d,w;X)$, where $X$ has UMD and $w\in A_p$. As preparations we first recall some notions in this context and record a Mihlin multiplier theorem for $L^p(\R^d,w;X)$, which follows from the results of [@HH12]. We then give a first multiplication estimate for Hölder continuous functions and $H^{s,p}(\R^d,w;X)$, which is based on bilinear complex interpolation.
UMD spaces, Rademacher functions and $\mathcal R$-boundedness {#subsec:UMD}
-------------------------------------------------------------
A Banach space $X$ is said to have UMD if for any probability space $(\Omega,\mathscr{A},{{\mathbb P}})$ and $p\in (1, \infty)$ martingale differences are unconditional in $L^p(\Omega;X)$ (see [@Ama95; @Bu3; @RF] for a survey on the subject). The UMD property of a Banach space turns out to be equivalent to the boundedness of the vector-valued extension of the Hilbert transform on $L^p(\R;X)$. For this reason UMD is sometimes also called of class $\mathcal{H} \mathcal{T}$. Many other Fourier multipliers are known to be bounded in $L^p(\R^d;X)$ and in particular, the classical Mihlin Fourier multiplier theorem holds in the vector-valued setting if and only if $X$ has UMD, see [@Bou2; @McC84; @Zim89] (and Proposition \[prop:weightedmihlin\] below).
Let us mention a few facts on UMD spaces (see [@Ama95 Section III.4]).
(a) Hilbert spaces have UMD.
(b) Closed subspaces and the dual of UMD spaces have UMD.
(c) If $X$ has UMD, then $L^p(\Omega; X)$ has UMD for each $\sigma$-finite measure space $\Omega$ and $p\in (1,\infty)$.
(d) The reflexive range of the classical function spaces such as $L^p$, $H^{s,p}$, $B_{p,q}^s$, $F_{p,q}^s$ have UMD.
(e) UMD spaces are reflexive. Hence $L^1$, $\ell^1$, $L^\infty$, $C([0,1])$ and $c_0$ do not have UMD.
A sequence of random variables $(r_k)_{k\geq 0}$ on $\Omega$ is called a Rademacher sequence if ${{\mathbb P}}(\{r_k = 1\}) = {{\mathbb P}}(\{r_k=-1\}) = 1/2$ for $k\geq 0$ and $(r_k)_{k\geq 0}$ are independent. For instance, one can take $\Omega = (0,1)$ with the Lebesgue measure and $r_k(\omega) = \text{sign}[\sin(2^{k+1}\pi \omega)]$ for $\omega\in \Omega$.
A family of operators $\mathcal{T} \subset {{\mathscr L}}(X,Y)$ is called $\mathcal R$-bounded if for some $p\in [1,\infty)$ there is a constant $C_p$ such that for all $N\geq 1$, for all $T_0,..., T_N \in \mathcal T$ and all $x_0,...,x_N\in X$ it holds that $$\Big \|\sum_{k=0}^N r_k T_k x_k \Big\|_{L^p(\Omega;Y)} \leq C_p \Big\|\sum_{k=0}^N r_k x_k \Big \|_{L^p(\Omega;X)}.$$ The infimum of all constants $C_p$ satisfying the above estimate is denoted by $\mathcal R_p(\mathcal T)$ and is called the $\mathcal R_p$-bound of $\mathcal T$. One can show that if the inequality is satisfied for one $p$, then it holds for all $p$. We often neglect the dependence of the $\mathcal R$-bound on $p$. For further information on $\mathcal R$-boundedness we refer to [@DHP; @KuWe].
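For example, if $X=Y=H$ is a Hilbert space, then by the orthogonality of the Rademacher variables in $L^2(\Omega)$ one has $$\Big\|\sum_{k=0}^N r_k T_k x_k\Big\|_{L^2(\Omega;H)}^2=\sum_{k=0}^N\|T_k x_k\|_H^2\leq \Big(\sup_{k}\|T_k\|_{{{\mathscr L}}(H)}\Big)^2\Big\|\sum_{k=0}^N r_k x_k\Big\|_{L^2(\Omega;H)}^2,$$ so that in this case $\mathcal R$-boundedness coincides with uniform boundedness; in general Banach spaces it is a strictly stronger notion.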
Fourier multipliers
-------------------
For a symbol $m\in L^\infty(\R^d)$ we define the operator $T_m$ by $$T_m: {{\mathscr S}}(\R^d;X) \to {{\mathscr S'}}(\R^d;X), \qquad T_m f = {{\mathscr F}}^{-1} (m {\widehat}f).$$ For $p\in [1,\infty)$ and $w\in A_\infty$ the Schwartz class ${{\mathscr S}}(\R^d;X)$ is dense in $L^p(\R^d,w;X)$, see [@MeyVer1 Lemma 3.8]. The following Mihlin type multiplier theorem provides a sufficient condition for the boundedness of $T_m$. It is a simple consequence of [@HH12 Corollary 2.10]. For the scalar case $X = \C$ we refer to [@GCRdF Section IV.3]. A version with operator-valued multiplier holds as well. For this one needs an $\mathcal R$-boundedness version of the condition (\[Mihlin-assum\]) (see [@HHN Theorems 3.6 and 3.7], [@StrWe Theorem 4.4] and [@We]).
\[prop:weightedmihlin\] Let $X$ have *UMD*, $p\in (1, \infty)$ and $w\in A_p$. Assume that $m\in C^{d+2}(\R^d\setminus\{0\})$ satisfies $$\label{Mihlin-assum}
C_m = \sup_{|\alpha| \leq d+2}\sup_{\xi \neq 0} |\xi|^{|\alpha|} |D^{\alpha} m(\xi)| <\infty.$$ Then $T_m$ extends to a bounded operator on $L^p(\R^d,w;X)$, and its operator norm only depends on $d$, $X$, $p$, $w$ and $C_m$.
By [@HH12 Corollary 2.10] we have to verify that $T_m$ is a vector-valued Calderón-Zygmund operator, in the sense of [@HH12 Definition 2.6]. Assumption (\[Mihlin-assum\]) for $\alpha\leq (1,\ldots,1)$ implies that $T_m$ belongs to ${{\mathscr L}}(L^p(\R^d;X))$, see [@Zim89 Proposition 3]. Further, ${{\mathscr F}}^{-1}m$ may be represented by a function $K\in C^1(\R^d\setminus\{0\})$ satisfying $|K(x)|\leq C|x|^{-d}$ and $|\nabla K(x)|\leq C|x|^{-(d+1)}$ for $x\neq 0$, see the proof of [@Stein93 Proposition VI.4.4.2]. Hence $T_m$ is represented by the convolution with a singular kernel. We conclude that [@HH12 Corollary 2.10] applies to $T_m$.
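As a typical application, for $t\in\R$ the symbol $m_t(\xi)=(1+|\xi|^2)^{it}$ satisfies (\[Mihlin-assum\]): by induction one checks that $$|D^{\alpha}m_t(\xi)|\leq C_{\alpha}(1+|t|)^{|\alpha|}(1+|\xi|^2)^{-\frac{|\alpha|}{2}}\leq C_{\alpha}(1+|t|)^{|\alpha|}|\xi|^{-|\alpha|},\qquad \xi\neq0,\quad |\alpha|\leq d+2,$$ so that the imaginary powers $(1-\Delta)^{it}=T_{m_t}$ are bounded on $L^p(\R^d,w;X)$ with norm growing at most polynomially in $|t|$; cf. the proof of Proposition \[interpol-complex\] below.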
Equivalent norms and Littlewood-Paley theory
--------------------------------------------
The following characterizations can be deduced from Proposition \[prop:weightedmihlin\]. We fix a Rademacher sequence $(r_k)_{k\geq 0}$ on a probability space $\Omega$, and a further a sequence $(\varphi_k)_{k\geq 0} \in \Phi(\R^d)$. Recall that $S_k f = \varphi_k*f$.
\[prop:UMDHisF\] Let $X$ have *UMD*, $p\in (1, \infty)$ and $w\in A_p$. Then $$H^{m,p}(\R^d,w;X) = W^{m,p}(\R^d,w;X) \qquad \text{ for all }\, m \in \N_0. \label{H=W}$$ Moreover, for $s\in \R$ we have that $f\in {{\mathscr S'}}(\R^d;X)$ belongs to $H^{s,p}(\R^d,w;X)$ if and only if $$\sup_{n\geq 0}\Big \| \sum_{k=0}^n r_k 2^{sk} S_k f\Big \|_{L^p({\Omega};L^p(\R^d,w;X))}<\infty.$$ In this case the series $\sum_{k\geq 0} r_k 2^{sk} S_k f$ converges in $L^p(\Omega; L^p(\R^d,w;X))$, and $$\label{eq:convergenceFrad}\|f\|_{F^{s}_{p,{\rm{rad}}}(\R^d,w;X)}= \Big\| \sum_{k\geq 0} r_k 2^{sk} S_k f\Big\|_{L^p({\Omega};L^p(\R^d,w;X))} = \sup_{n\geq 0}\Big \| \sum_{k=0}^n r_k 2^{sk} S_k f\Big \|_{L^p({\Omega};L^p(\R^d,w;X))}$$ defines an equivalent norm on $H^{s,p}(\R^d,w;X)$.
(i) For $H^{1,p}(\R^d;X) = W^{1,p}(\R^d;X)$ it is necessary that $X$ has UMD, see Remark \[HW\].
(ii) The extended real number $\|f\|_{F^{s}_{p,{\rm{rad}}}(\R^d,w;X)}$ is well-defined for every tempered distribution $f$ and therefore one could study the space $F^{s}_{p,{\rm{rad}}}(\R^d,w;X)$ on its own, see [@Ver12]. The result shows that if $X$ has UMD, then $F^{s}_{p,{\rm{rad}}}$ coincides with $H^{s,p}$. In particular, for $w\in A_p$ in the scalar case one has $$\|f\|_{F^s_{p,2}(\R^d,w)}\eqsim \|f\|_{F^{s}_{p,{\rm{rad}}}(\R^d,w)}.$$ The identity $F^s_{p,2}(\R^d,w) = H^{s,p}(\R^d,w)$ was proved in [@Ry01] for weights $w$ which satisfy only a local $A_p$-condition.
*Step 1.* Using Proposition \[prop:weightedmihlin\], the identity (\[H=W\]) can be shown as in the unweighted scalar case (see [@BeLo Theorem 6.2.3] or [@Tr1 Section 2.3.3]).
*Step 2.* Assume $\|f\|_{F^{s}_{p,{\rm{rad}}}(\R^d,w;X)}<\infty$. Since closed subspaces of UMD spaces have UMD and the sequence space $c_0$ does not have UMD, it follows that $X$ does not contain a copy of $c_0$. We therefore conclude from [@LeTa Theorem 9.29] that the series $\sum_{k=0}^\infty r_k 2^{sk} S_k f$ converges in $L^p({\Omega};L^p(\R^d,w;X))$. It follows from the properties of the Rademacher functions that $$\Big \|\sum_{k=0}^n r_k 2^{sk} S_k f\Big\|_{L^p({\Omega};L^p(\R^d,w;X))}\leq \Big\|\sum_{k=0}^\infty r_k 2^{sk} S_k f\Big\|_{L^p({\Omega};L^p(\R^d,w;X))},$$ which implies one inequality for the assertion in (\[eq:convergenceFrad\]). The other inequality is trivial.
*Step 3.* Let $f\in H^{s,p}(\R^d,w;X)$ and write $f_s = {{\mathscr F}}^{-1}[(1+|\cdot|^2)^{s/2} {\widehat}{f}]\in L^p(\R^d,w;X)$. Fix $n\geq 0$, $\omega\in {\Omega}$ and define the scalar symbol $m_n \in C^\infty(\R^d)$ by $$m_n(\xi) = \sum_{k=0}^n r_k(\omega) 2^{sk}(1+|\xi|^2)^{-s/2} \widehat{\varphi}_k(\xi).$$ For each $\xi\in \R^d$, here at most three summands are nonzero. Since $\widehat{\varphi}_k$ is supported around $|\xi| = 2^k$ and $\|D^\beta {\widehat}\varphi_k\|_\infty\leq C_\beta 2^{-k|\beta|}$, it follows that $$C_m = \sup_{n\geq 0}\sup_{|\alpha|\leq d+2} \sup_{\xi\neq 0}|\xi|^{|\alpha|} |D^{\alpha} m_n(\xi)|<\infty,$$ where $C_m$ is independent of $\omega$. By Proposition \[prop:weightedmihlin\], the corresponding operators $T_{m_n}$ are bounded on $L^p(\R^d,w;X)$, uniformly in $n$ and $\omega$. From this we obtain $$\begin{aligned}
\Big\|\sum_{k=0}^n r_k(\omega) 2^{sk} \varphi_k * f\Big\|_{L^p(\R^d,w;X)} &= \|T_{m_n}
f_s\|_{L^p(\R^d,w;X)} \\ & \leq C \|f_s\|_{L^p(\R^d,w;X)} = C \|
f\|_{H^{s,p}(\R^d,w;X)}.\end{aligned}$$ Taking the $L^p({\Omega})$-norm and the supremum over $n$ yields $\|f\|_{F^{s}_{p,{\rm{rad}}}(\R^d,w;X)}\leq C \|
f\|_{H^{s,p}(\R^d,w;X)}$.
*Step 4.* For the converse estimate, assume that $\|f\|_{F^{s}_{p,{\rm{rad}}}(\R^d,w;X)}<\infty$. As we have seen in Step 2, then $\sum_{k\geq 0} r_k 2^{sk} \varphi_k * f$ converges in $L^p({\Omega};L^p(\R^d,w;X))$. From [@LeTa Theorem 2.4] we get that $\sum_{k\geq 0} r_k(\omega) 2^{sk} \varphi_k * f$ converges in $L^p(\R^d,w;X)$ for almost every $\omega\in \Omega$. Choose $(\widehat{\psi}_k)_{k \geq 0}$ such that $0\leq {\widehat}{\psi}_k\leq 1$, ${\widehat}{\psi}_k = 1$ on $\text{supp}\, {\widehat}{\varphi}_k$, $\text{supp}\,{\widehat}{\psi}_0\subset \{0 \leq |\xi|\leq 2\}$ and $\text{supp}\, {\widehat}{\psi}_k\subset \{2^{k-2}\leq |\xi|\leq 2^{k+1}\}$ for $k\geq 1$. For $\omega\in \Omega$ we set $$m_\omega = \sum_{l\geq 0} r_l(\omega) 2^{-sl} (1+|\cdot|^2)^{s/2} {\widehat}{\psi}_l,\qquad g_\omega = \sum_{k\geq 0} r_k(\omega) 2^{sk} \varphi_k * f.$$ Let $f_s$ be as in Step 3. Then the independence and symmetry of the Rademacher random variables together with the support conditions on ${\widehat}\varphi_k, {\widehat}\psi_k$ imply that $f_s = \int_\Omega T_{m_\omega} g_\omega \, d{{\mathbb P}}(\omega)$. As before, $$C_m = \sup_{|\alpha|\leq d+2} \sup_{\xi\neq 0}|\xi|^{|\alpha|} |D^{\alpha} m_\omega(\xi)|<\infty$$ is independent of $\omega$. Thus $\|T_{m_\omega} g_\omega\|_{L^p(\R^d,w;X)}\leq C \|g_\omega\|_{L^p(\R^d,w;X)}$ for almost every $\omega$ by Proposition \[prop:weightedmihlin\]. Therefore, using also Jensen’s inequality and Fubini’s theorem, $$\begin{aligned}
\|f\|_{H^{s,p}(\R^d,w;X)}^p &\, =\|f_s\|_{L^p(\R^d,w;X)}^p = \Big \|\int_\Omega T_{m_\omega} g_\omega \, d{{\mathbb P}}(\omega)\Big \|_{L^p(\R^d,w;X)}^p\\
&\, \leq \int_\Omega \big \| T_{m_\omega} g_\omega \big \|_{L^p(\R^d,w;X)}^p \, d{{\mathbb P}}(\omega) \leq C \int_\Omega \big \| g_\omega \big \|_{L^p(\R^d,w;X)}^p \, d{{\mathbb P}}(\omega) = \|f\|_{F^{s}_{p,{\rm{rad}}}(\R^d,w;X)}^p.\end{aligned}$$ Hence $f\in H^{s,p}(\R^d,w;X)$ and the required estimate follow.
Another equivalent norm for UMD-valued $H$-spaces is given as follows.
\[prop:differentiation2\] Let $X$ have *UMD*, $s\in \R$, $p\in (1, \infty)$ and $w\in A_{p}$. Then for each $m\in \N$, $$\label{eq:Besseldifferentiation}
\sum_{|\alpha|\leq m} \|D^\alpha f\|_{H^{s-m,p}(\R^d,w;X)}$$ defines an equivalent norm on $H^{s,p}(\R^d,w;X)$.
This is a consequence of (\[H=W\]) and the fact that $D^{\alpha}$ and the Bessel-potential commute on ${{\mathscr S'}}(\R^d;X)$.
Duality, functional calculus and complex interpolation
------------------------------------------------------
Let $X$ be a Banach space such that its dual space $X^*$ has the Radon-Nikodym property RNP, cf. [@DU77 Definition III.1/3]. For instance, reflexive Banach spaces and thus UMD spaces have RNP, see [@DU77 Corollary III.2/12].
If $X^*$ has RNP then it follows from [@DU77 Theorem IV.1/1] that for a $\sigma$-finite measure space $(S,\Sigma,\mu)$ and $p\in (1,\infty)$ with dual exponent $p' = \frac{p}{p-1}$ one has $L^p(S,\mu;X)^* = L^{p'}(S,\mu;X^*)$, induced by the pairing $\int_{S} {\langle}f(x), g(x){\rangle}_{X, X^*} d\mu$.
Since this pairing does not respect the $A_p$-classes, in the context of weights it is more convenient to work with $${\langle}f,g{\rangle}= \int_{\R^d} {\langle}f(x), g(x){\rangle}_{X,X^*} \,dx.$$ Recall from [@GraModern] that for $w\in A_p$ the dual weight $w' = w^{-\frac{1}{p-1}}$ with respect to $p$ belongs to $A_{p'}$.
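For the power weights of the introduction this is consistent with the notation $\gamma'=-\frac{\gamma}{p-1}$: one has $$w_\gamma' = w_\gamma^{-\frac{1}{p-1}} = |t|^{-\frac{\gamma}{p-1}} = w_{\gamma'}, \qquad\text{and}\qquad \gamma\in(-1,p-1)\ \Longleftrightarrow\ \gamma'\in(-1,p'-1),$$ so that $w_{\gamma'}\in A_{p'}$ precisely for the exponents considered in Theorem \[thm:1\].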
\[dual-H\] Let $X$ be a Banach space such that $X^*$ has *RNP*, let $s\in \R$, $p\in (1,\infty)$ and let $w\in A_p$. Then $$|{\langle}f,g{\rangle}| \leq \|f\|_{H^{s,p}(\R^d,w;X)} \|g\|_{H^{-s,p'}(\R^d,w';X^*)}, \qquad f\in {{\mathscr S}}(\R^d;X), \quad g\in {{\mathscr S}}(\R^d,X^*),$$ such that the pairing ${\langle}\cdot,\cdot{\rangle}$ extends continuously to $H^{s,p}(\R^d,w;X) \times H^{-s,p'}(\R^d,w';X^*)$. Every element of $H^{s,p}(\R^d,w;X)^*$ is of the form ${\langle}\cdot, g{\rangle}$ with $g\in H^{-s,p'}(\R^d,w';X^*)$. In this sense, $$H^{s,p}(\R^d,w;X)^* = H^{-s,p'}(\R^d,w';X^*).$$
For $s=0$, the weighted case can easily be deduced from the unweighted case. For general $s\in \R$ we have ${\langle}J_s f,g{\rangle}= {\langle}f, J_s g{\rangle}$, such that the same arguments as in [@Cal61 Theorem 9] for the unweighted scalar case apply.
To prove that UMD-valued $H$-spaces form a complex interpolation scale we record the following result on bounded $\mathcal H^\infty$-calculi. For a definition and the properties of this functional calculus we refer to [@DHP; @KuWe].
\[prop:Hinfty\] Let $X$ have *UMD*, $p\in (1, \infty)$ and $w\in A_p$. The following assertions hold true.
1. The operator $\partial_t$ with domain $H^{1,p}(\R,w;X)$ on $L^p(\R,w;X)$ has a bounded $\mathcal H^\infty$-calculus of angle $\frac{\pi}{2}$.
2. The operator $-\Delta$ with domain $H^{2,p}(\R^d,w;X)$ on $L^p(\R^d,w;X)$ has a bounded $\mathcal H^\infty$-calculus of angle zero.
Using Proposition \[prop:weightedmihlin\], one can argue as in [@KuWe Example 10.2].
The complex interpolation functor is denoted by $[\cdot,\cdot]_\theta$. We refer to [@MeyVer1 Proposition 6.1] for real interpolation of vector-valued $H$-spaces.
\[interpol-complex\] Let $X$ have *UMD*, $p\in (1, \infty)$ and $w\in A_p$. Assume $s_0 < s_1$, $\theta\in (0,1)$ and $s = (1-\theta)s_0 + \theta s_1$. Then $$[H^{s_0,p}(\R^d,w;X), H^{s_1,p}(\R^d,w;X)]_{\theta} = H^{s,p}(\R^d,w;X).$$
By Proposition \[prop:Hinfty\], the operator $1-\Delta$ with domain $D(A) = H^{2,p}(\R^d,w;X)$ on $L^p(\R^d,w;X)$ has a bounded $\mathcal H^\infty$-calculus of angle zero. This also implies the boundedness of its imaginary powers. Since $(1-\Delta)^{s_0/2}$ commutes with $1-\Delta$, the same is true for the realization $A_{s_0}$ of $1-\Delta$ on $H^{s_0,p}(\R^d,w;X)$. Therefore, by [@Tr1 Theorem 1.15.3], $$[H^{s_0,p}(\R^d,w;X), D(A_{s_0}^{(s_1-s_0)/2})]_{\theta} = D(A_{s_0}^{\theta(s_1-s_0)/2}).$$ Since $D(A_{s_0}^{\tau/2}) = H^{s_0+\tau,p}(\R^d,w;X)$ for any $\tau > 0$, the assertion follows.
Multiplication by Hölder continuous functions
---------------------------------------------
Using bilinear interpolation, we give a first result on pointwise multiplication. An analogous result for $F$- and $B$-spaces is obtained in Proposition \[prop:mult-smooth\]. For $s <0$ the product is interpreted as an extension via density from the usual pointwise product of smooth functions.
\[thm:mult-smooth2\] Let $X$ and $Y$ have *UMD*, $s\in \R$, $p\in (1,\infty)$ and $w\in A_p$. Assume $\sigma > |s|$. Then $$\|mf\|_{H^{s,p}(\R^d,w;Y)} \leq C\|m\|_{BC^{\sigma}(\R^d; {{\mathscr L}}(X,Y))} \|f\|_{H^{s,p}(\R^d,w;X)}.$$
By (\[H=W\]), the result for $s \in \N_0$ follows immediately from Leibniz’ formula. For noninteger $s > 0$ it follows from the integer case and bilinear complex interpolation, see [@BeLo Theorem 4.4.1]. Here the $H$-spaces are interpolated with Proposition \[interpol-complex\]. For the interpolation of the $BC^m$-spaces with $m\in \N_0$ we note that for $\theta\in (0,1)$ and $\varepsilon > 0$ one has $$BC^{m+\theta+\varepsilon} \hookrightarrow B_{\infty,1}^{m+\theta} = [B_{\infty,1}^m, B_{\infty,1}^{m+1}]_{\theta} \hookrightarrow [BC^m, BC^{m+1}]_\theta,$$ see the Sections 2.4.7 and 2.5.7 of [@Tri83] for the scalar case.
Let finally $s<0$. Then for $f\in H^{s,p}(\R^d,w;X)$ and $g\in H^{-s,p'}(\R^d,w';Y^*)$ one has $$\begin{aligned}
|{\langle}mf, g{\rangle}| &= |{\langle}f, m^* g{\rangle}|\leq \|f\|_{H^{s,p}(\R^d,w;X)} \|m^* g\|_{H^{-s,p'}(\R^d,w';X^*)}
\\ & \leq C \|f\|_{H^{s,p}(\R^d,w;X)} \|m^*\|_{BC^{\sigma}(\R^d; {{\mathscr L}}(Y^*,X^*))} \|g\|_{H^{-s,p'}(\R^d,w';Y^*)}.\end{aligned}$$ Taking the supremum over all $g$ with norm smaller than one and recalling that $\|m(x)\|_{{{\mathscr L}}(X,Y)} = \|m(x)^*\|_{{{\mathscr L}}(Y^*,X^*)}$, the required estimate follows from Proposition \[dual-H\].
The above result also holds with $H^{s,p}$ replaced by $B^{s}_{p,q}$, general $X$ and $Y$ and $s>0$. This follows from the $W^{m,p}$-case and real interpolation with parameter $q$. For reflexive spaces the case $s<0$ can be obtained by duality under restrictions on the parameters $p$ and $q$.
Estimates of paraproducts\[sec:Point\]
======================================
To investigate pointwise multipliers we follow [@Franke86; @RS96; @Tri83] and use the decomposition of a product into paraproducts. The basis for their estimates and convergence are the results in Appendix \[sec:analytic\] on weighted spaces of entire analytic functions.
Preliminaries
-------------
We fix a Rademacher sequence $(r_k)_{k\geq 0}$ on a probability space $\Omega$ and a sequence $(\varphi_k)_{k\geq 0} \in \Phi(\R^d)$ with the corresponding operators $S_k f = \varphi_k * f$.
\[Sk-Rbounded\] Let $X$ have *UMD*, $p\in (1,\infty)$ and $w\in A_p$. Then $(S_k)_{k\geq 0}$ is an $\mathcal R$-bounded subset of ${{\mathscr L}}(L^p(\R^d,w;X))$.
Let $(r'_l)_{l\geq 0}$ be an independent copy of $(r_k)_{k\geq 0}$ on $\Omega' = \Omega$. For any Banach space $Y$, as in [@GW03 Lemma 3.12] one can prove that for $(y_{kl})_{k,l=0}^N \subset Y$ one has $$\label{eq:diagonalbdd}
\Big\|\sum_{k=0}^N r_k y_{kk}\Big\|_{L^p({\Omega};Y)}\leq \Big\|\sum_{k,l=0}^N r_k r_l' y_{kl}\Big\|_{L^p({\Omega}\times{\Omega}';Y)}.$$ Now let $f_0,..., f_N \in L^p(\R^d,w;X)$. Since $X$ has UMD, this is also true for $X_\Omega = L^p(\Omega;X)$, see [@Ama95 Theorem III.4.5.2]. Using (\[eq:diagonalbdd\]) with $y_{kl} = S_l f_k$ on $Y =L^p(\R^d,w;X)$ and Proposition \[prop:UMDHisF\] with $s = 0$ on $L^p(\R^d,w; X_\Omega)$ we obtain $$\begin{aligned}
\Big \|\sum_{k=0}^N r_k S_k f_k\Big\|_{L^p(\Omega; L^p(\R^d,w;X))} & \leq \Big \|\sum_{k,l=0}^N r_k r_l' S_l f_k\Big\|_{L^p({\Omega}'\times \Omega; L^p(\R^d,w;X))}
\\ & = \Big \|\sum_{l=0}^N r_l' S_l \Big(\sum_{k=0}^N r_k f_k\Big)\Big\|_{L^p({\Omega}'; L^p(\R^d,w;X_\Omega))}
\\ & \leq C \Big \|\sum_{k=0}^N r_k f_k\Big\|_{L^p({\Omega}; L^p(\R^d,w;X))}.\end{aligned}$$ This shows the $\mathcal R$-boundedness of $(S_k)_{k\geq 0}$.
On ${{\mathscr S'}}(\R^d;X)$ we define the operators $$S^l := \sum_{k=0}^l S_k, \quad l\in \N_0, \qquad S^{-l} :=0, \quad l\in \N.$$ Since ${\widehat}\varphi_k = {\widehat}\varphi_0(2^{-k}\cdot) - {\widehat}\varphi_0(2^{-k+1}\cdot)$ for $k\geq 1$, we have $S^l f = {{\mathscr F}}^{-1} ({\widehat}\varphi_0(2^{-l}\cdot){\widehat}f)$ and thus $$S^l f \to f \qquad \text{in }{{\mathscr S'}}(\R^d;X)\;\;\text{ as }l\to\infty.$$
The next result is useful for operator-valued pointwise multipliers on $H$-spaces.
\[Slm-Rbounded\]Let $X$ and $Y$ be Banach spaces. Let $m:\R^d\to {{\mathscr L}}(X,Y)$ be strongly measurable and assume that the image of $m$ is $\mathcal R$-bounded by $\mathcal R(m)$. Then $\mathcal M = \{(S^lm)(x)\,:\, l\in \N_0,\; x\in \R^d\}$ is $\mathcal R$-bounded in ${{\mathscr L}}(X,Y)$ with $\mathcal R(\mathcal M) \leq 2\|\varphi_0\|_{L^1(\R^d)} \mathcal R(m)$.
For all $l$ and $x$ we have $$(S^lm)(x) = \int_{\R^d} 2^{ld}\varphi_0(2^l(x-y))m(y)\, dy, \qquad \|2^{ld}\varphi_0(2^l(x-\cdot))\|_{L^1(\R^d)} = \|\varphi_0\|_{L^1(\R^d)}.$$ Thus the result follows from [@KuWe Corollary 2.14].
The following simple fact is analogous to [@RS96 Lemma 4.4.2]. We consider the mixed-norm spaces $$L^{p(r)}(\R^d,w;X) = L^p(\R^{d-1}; L^r(\R,w;X)),$$ for a weight $w\in A_\infty(\R)$ depending only on the last coordinate $t$. See also Appendix \[sec:analytic\].
\[lem:para1\] Let $s<0$, $p,r\in (1,\infty)$, $q\in [1,\infty]$ and $w\in A_\infty(\R)$. Then for all $f\in {{\mathscr S'}}(\R^d;X)$ one has $$\|(2^{sl} S^{l} f)_{l\geq 0} \|_{\ell^{q}(L^{p(r)}(\R^d,w;X))} \leq C \|(2^{sk} S_{k} f)_{k\geq 0}\|_{\ell^{q}(L^{p(r)}(\R^d,w;X))}.$$
We consider $q<\infty$, the case $q=\infty$ is analogous. Writing $Y=L^{p(r)}(\R^d,w;X)$, it follows from Young’s inequality for discrete convolutions that $$\begin{aligned}
\|(2^{sl} S^{l} f)_{l\geq 0} \|_{\ell^{q}(Y)} &\, \leq \Big(\sum_{l=0}^\infty \Big( \sum_{k=0}^l 2^{s(l-k)} 2^{sk}\|S_k f\|_Y \Big)^q\Big)^{1/q}
\\ &\, \leq \Big( \sum_{l=0}^\infty 2^{s l} \Big)\Big(\sum_{k=0}^\infty \big(2^{sk}\|S_k f\|_Y\big)^q \Big)^{1/q}
\leq C \|(2^{sk} S_{k} f)_{k\geq 0}\|_{\ell^{q}(Y)},\end{aligned}$$ where $C= \sum_{l\geq 0} 2^{sl}$ is finite by the assumption $s < 0$.
Paraproducts {#sec:paraproducts}
------------
Let $X,Y$ be Banach spaces. As in [@RS96 Section 4.2] we define the product $mf \in {{\mathscr S'}}(\R^d;Y)$ of $m\in {{\mathscr S'}}(\R^d; {{\mathscr L}}(X,Y))$ and $f\in {{\mathscr S'}}(\R^d;X)$ by $$mf = \lim_{l\to\infty} S^l m \cdot S^l f,$$ provided this limit exists in ${{\mathscr S'}}(\R^d;Y)$. If one factor is smooth with bounded derivatives or if $m\in L^r$ and $f\in L^{r'}$, then this definition yields the usual product of a function and a distribution or the pointwise product of functions, respectively (see [@RS96 Section 4.2.1]).
As in [@RS96 Section 4.4], if the paraproducts $$\Pi_1(m,f) = \sum_{k=2}^\infty (S^{k-2}m) (S_k f),
\qquad
\Pi_2(m,f) = \sum_{k=0}^\infty\sum_{j=-1}^1 (S_{k+j}m) (S_k f),$$ $$\Pi_3(m,f)= \sum_{k=2}^\infty (S_k m) (S^{k-2} f),$$ exist in ${{\mathscr S'}}(\R^d;Y)$, then $mf$ exists as well and one has $$mf = \Pi_1(m,f) + \Pi_2(m,f) + \Pi_3(m,f).$$ Since ${\text{\rm supp\,}}{\widehat}{\varphi}_k \subset \{2^{k-1} \leq |\xi|\leq \frac{3}{2}2^{k}\}$ for $k\geq 1$, for the Fourier supports of the summands we have $$\label{FsupportPi2}
{\text{\rm supp\,}}{{\mathscr F}}[(S_{k+j}m) (S_k f)] \subset \{ |\xi|\leq 5\cdot 2^{k}\}, \qquad k\geq 0, \quad j\in \{-1,0,1\},$$ $$\label{FsupportPi3}
{\text{\rm supp\,}}{{\mathscr F}}[(S_k m) (S^{k-2} f)] \subset \{ 2^{k-3} \leq |\xi|\leq 2^{k+1}\}, \qquad k \geq 2.$$
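For instance, the inclusion (\[FsupportPi3\]) follows from the fact that the Fourier support of a product is contained in the sum of the Fourier supports of the factors: for $k\geq 2$, $${\text{\rm supp\,}}{{\mathscr F}}[(S_k m) (S^{k-2} f)] \subset \{2^{k-1} \leq |\xi|\leq \tfrac{3}{2}2^{k}\}+\{|\xi|\leq \tfrac{3}{2}2^{k-2}\}\subset \{2^{k-3}\leq |\xi|\leq \tfrac{15}{8}2^{k}\}\subset \{2^{k-3}\leq |\xi|\leq 2^{k+1}\},$$ and (\[FsupportPi2\]) is obtained in the same way.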
Estimates of $\Pi_1$
--------------------
The paraproducts are estimated in different ways. We start with $\Pi_1$. For the Bessel-potential spaces we use the Littlewood-Paley decomposition from Proposition \[prop:UMDHisF\] and therefore require $X$ and $Y$ to have UMD.
\[multiplication1\] Let $X$ and $Y$ have *UMD*, $s\in \R$, $p\in (1,\infty)$ and $w\in A_p$. Let $m:\R^d \to {{\mathscr L}}(X,Y)$ be strongly measurable and assume that the image of $m$ is $\mathcal R$-bounded by $\mathcal R(m)$. Then for all $f\in H^{s,p}(\R^d,w;X)$ the limit $\Pi_1(m,f)$ exists in ${{\mathscr S'}}(\R^d;Y)$ and $$\|\Pi_1(m,f)\|_{H^{s,p}(\R^d,w;Y)} \leq C \mathcal R(m)\|f\|_{H^{s,p}(\R^d,w;X)}.$$
\[rem:R-bound-scalar\] If $m$ is scalar-valued we have $\mathcal R(m) \leq 2\|m\|_\infty$, see [@KuWe Proposition 2.5]. So in this case the assumptions on $m$ reduce to $m\in L^\infty(\R^d)$.
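This can also be seen from the Kahane contraction principle: for scalars $a_0,\ldots,a_N$ with $|a_n|\leq 1$ and $x_0,\ldots,x_N\in X$ one has $$\Big\|\sum_{n=0}^N r_n a_n x_n\Big\|_{L^p(\Omega;X)} \leq 2 \Big\|\sum_{n=0}^N r_n x_n\Big\|_{L^p(\Omega;X)},$$ where the factor $2$ can be omitted for real scalars. Applying this with $a_n = m(x^{(n)})/\|m\|_\infty$ for arbitrary points $x^{(n)}\in \R^d$, and viewing the values $m(x^{(n)})$ as scalar multiples of the identity, yields $\mathcal R(m)\leq 2\|m\|_\infty$.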
We write $ \Pi_1(m,f) = \sum_{k\geq 2} f_k$ with $f_k = S^{k-2} m S_k f$. For each $n$, the support condition implies $$S_n f_k \neq 0 \qquad \text{at most for }\;k= n-1,..., n+3.$$ For $N,K,L\in \N$ with $L\leq K<N-3$ the support condition and the $\mathcal R$-boundedness of $(S_n)_{n\geq 0}$ in $\mathscr L(L^p(\R^d,w;Y))$ as shown in Lemma \[Sk-Rbounded\] yield $$\begin{aligned}
\Big \|\sum_{n=0}^N r_n 2^{sn} S_n \sum_{k=L}^Kf_k \Big\|_{L^p(\Omega; L^p(\R^d,w;Y))}
&\,\leq \sum_{j=-1}^3 \Big \|\sum_{n=L}^K r_n 2^{sn} S_n f_{n+j} \Big\|_{L^p(\Omega; L^p(\R^d,w;Y))}\\
&\, \leq C\sum_{j=-1}^3 \Big \|\sum_{n=L}^K r_n 2^{sn} f_{n+j} \Big\|_{L^p(\Omega; L^p(\R^d,w;Y))}.
\end{aligned}$$ Fix $j\in \{-1,...,3\}$. Then by Fubini’s theorem, Lemma \[Slm-Rbounded\] and Proposition \[prop:UMDHisF\], $$\begin{aligned}
\Big\|\sum_{n=L}^K r_n 2^{sn} &\, f_{n+j} \Big\|_{L^p(\Omega; L^p(\R^d,w;Y))}^p \\
&\, = \int_{\R^d} \Big\|\sum_{n=L}^K r_n 2^{sn} S^{n-2+j} m(x) S_{n+j} f(x) \Big\|_{L^p(\Omega;Y)}^p w(x)\,dx\\
&\, \leq \mathcal R(\{S^l m(x):x\in \R^d, l\in \N\})^p \int_{\R^d} \Big \|\sum_{n=L}^K r_n 2^{sn} S_{n+j} f(x)\Big \|_{L^p(\Omega;X)}^p w(x)\,dx\\
&\, \leq (C\mathcal R(m))^p\Big \|\sum_{n=L}^\infty r_n 2^{sn} S_{n+j} f \Big\|_{L^p(\Omega; L^p(\R^d,w;X))}^p=:(C\mathcal R(m))^p A_L^p.\end{aligned}$$ Here $\sum_{n=L}^\infty r_n 2^{sn} S_{n+j} f$ converges in $L^p(\Omega; L^p(\R^d,w;X))$ by Proposition \[prop:UMDHisF\], and thus $A_L \to 0$ as $L\to \infty$. It follows from Proposition \[prop:UMDHisF\] that $\sum_{k=L}^K f_k \in H^{s,p}(\R^d,w;Y)$ and $$\Big\|\sum_{k=L}^K f_k\Big\|_{H^{s,p}(\R^d,w;Y)}\leq C\sup_{N\geq 0}\Big \|\sum_{n=0}^N r_n 2^{sn} S_n \sum_{k=L}^K f_k \Big \|_{L^p(\Omega; L^p(\R^d,w;Y))}\leq C \mathcal R(m) A_L.$$ We conclude that $\big(\sum_{k=0}^N f_k\big)_{N\geq 0}$ is a Cauchy sequence in $H^{s,p}(\R^d,w;Y)$. Hence $\Pi_1(m,f) = \sum_{k=0}^\infty f_k$ converges in $H^{s,p}(\R^d,w;Y)$ and, again by Proposition \[prop:UMDHisF\], $$\big \| \Pi_1(m,f)\big\|_{H^{s,p}(\R^d,w;Y)}\leq C\mathcal R(m) A_0 \leq C \mathcal R(m) \|f\|_{H^{s,p}(\R^d,w;X)}.\qedhere$$
The corresponding estimate of $\Pi_1$ for $F$-spaces is more elementary and does not need the UMD property of the underlying Banach spaces. Here and in the sequel, for $m:\R^d\to {{\mathscr L}}(X,Y)$ we write $$\|m\|_\infty = \sup_{x\in \R^d} \|m(x)\|_{{{\mathscr L}}(X,Y)}.$$ We will make use of a convergence criterion from Lemma \[para2\] in the appendix.
\[multiplication2\] Let $X$ and $Y$ be Banach spaces, $s\in \R$, $p\in (1,\infty)$, $q\in [1,\infty]$ and $w\in A_\infty$. Let $m:\R^d \to {{\mathscr L}}(X,Y)$ be strongly measurable and assume that the image of $m$ is bounded. Then for all $f\in F^{s}_{p,q}(\R^d,w;X)$ the limit $\Pi_1(m,f)$ exists in ${{\mathscr S'}}(\R^d;Y)$ and $$\|\Pi_1(m,f)\|_{F^{s}_{p,q}(\R^d,w;Y)} \leq C \|m\|_{\infty}\|f\|_{F^{s}_{p,q}(\R^d,w;X)}.$$
Let again $\Pi_1(m,f) = \sum_{k\geq 2} f_k$ with $f_k = S^{k-2}m S_k f$. We apply the estimate of Lemma \[para2\]. The Fourier supports of the $f_k$ are contained in dyadic annuli, so the corresponding support condition of Lemma \[para2\] is satisfied; in particular, $q=1$ and $w\in A_{\infty}$ are admissible. To check that the corresponding right-hand side in Lemma \[para2\] is finite we estimate $$\big\| \big( 2^{sk} f_k\big)_{k\geq 2}\big\|_{L^p(\R^d,w; \ell^q(Y))} \leq C \sup_{k\geq 0}\|S^{k}m\|_{\infty} \|f\|_{F^{s}_{p,q}(\R^d,w;X)}.$$ Using $S^km = 2^{kd} \varphi_0(2^k\cdot)*m$ and $\|2^{kd} \varphi_0(2^k\cdot)\|_{L^1(\R^d)} = \|\varphi_0\|_{L^1(\R^d)}$, Young’s inequality implies $$\|S^{k}m\|_{\infty} \leq \|\varphi_0\|_{L^1(\R^d)} \|m\|_{\infty}, \qquad k\geq 0.$$ Hence $\Pi_1(m,f)$ exists by Lemma \[para2\] and the asserted estimate holds true.
Special estimates of $\Pi_2$ and $\Pi_3$
----------------------------------------
We now estimate $\Pi_2$ and $\Pi_3$ as it is needed for the multiplication with the characteristic function ${{{\bf 1}}}_{\R_+^d}$ of the half-space. Here we specialize to power weights of the form $$w_\gamma(x',t) = |t|^\gamma, \qquad \gamma\in (-1,p-1),$$ and consider functions $m$ which depend on the last coordinate $t$ only. Following the considerations of [@Franke86] and [@RS96 Section 4.6.2], the main tools are Jawerth-Franke embeddings and convergence criteria for weighted spaces of entire analytic functions, as presented in Appendix \[sec:analytic\]. In the rest of this subsection we can allow for general Banach spaces $X$ and $Y$.
To explain the parameters below, recall from [@GraModern Proposition 9.1.5] that for $w_{\gamma}$ the dual weight with respect to $p\in (1,\infty)$ is given by $w_{\gamma'}$, where $$\label{6016}
\gamma' = - \frac{\gamma}{p-1}, \qquad \frac{1+\gamma'}{p'} = 1-\frac{1+\gamma}{p}.$$
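Explicitly, for $w_\gamma(t)=|t|^\gamma$ one has $w_\gamma^{-\frac{1}{p-1}} = w_{\gamma'}$ and $$\frac{1+\gamma'}{p'} = \frac{p-1}{p}\Big(1-\frac{\gamma}{p-1}\Big) = \frac{p-1-\gamma}{p} = 1-\frac{1+\gamma}{p};$$ in particular, $\gamma\in (-1,p-1)$ if and only if $\gamma'\in (-1,p'-1)$.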
\[multiplication3\]Let $X$ and $Y$ be Banach spaces, $p\in (1,\infty)$, $\gamma \in (-1,p-1)$ and $- \frac{1+\gamma'}{p'} < s < \frac{1+\gamma}{p}.$ Let the numbers $r$ and $\mu$ satisfy $$\label{6013}
1<r< \infty,\qquad \mu = 0, \quad \qquad \text{ in case } \; 0\leq s <\frac{1+\gamma}{p},$$ $$\label{6014}
1<r< \frac{1}{-s},\qquad \mu = 0, \quad \qquad \text{ in case }\; -\frac{1}{p'} < s < 0,$$ $$\label{6015}
1<r<p',\qquad \frac{\mu}{r} = -s - \frac{1}{p'} +\varepsilon,\quad \qquad \text{ in case }\; - \frac{1+\gamma'}{p'} < s \leq -\frac{1}{p'},$$ for some $\varepsilon > 0$. Let $m \in B_{r,\infty}^{\frac{1+\mu}{r}}(\R, w_\mu;{{\mathscr L}}(X,Y))$ and consider it as a distribution on $\R^d$ which only depends on the last coordinate. Then for all $f\in F_{p,\infty}^s(\R^d,w_{\gamma};X)$ the limit $\Pi_2(m,f)$ exists in ${{\mathscr S'}}(\R^d;Y)$ and $$\|\Pi_2(m,f)\|_{F_{p,1}^s(\R^d,w_{\gamma};Y)} \leq C \|m\|_{B_{r,\infty}^{\frac{1+\mu}{r}}(\R, w_\mu;{{\mathscr L}}(X,Y))} \|f\|_{F_{p,\infty}^s(\R^d,w_{\gamma};X)}.$$
\[rem:improve\] In the estimate, for the microscopic parameters we have $q =1$ on the left-hand side and $q = \infty$ on the right-hand side. Such a microscopic improvement is possible because only special frequencies of $mf$ are in $\Pi_2$. Combined with $F_{p,1}^s\hookrightarrow H^{s,p}\hookrightarrow F_{p,\infty}^s$, it immediately gives an estimate of $\Pi_2$ in the Bessel-potential spaces.
For a clearer presentation we assume that the sums $\sum_{k=0}^\infty S_{k+j}m S_k f$, $j\in \{-1,0,1\}$, exist in ${{\mathscr S'}}(\R^d;Y)$, so that $\Pi_2(m,f)$ exists as well. This will be justified by means of Lemma \[para2\] and the estimates in Step 3. In Step 4 we will show how the numbers $p_1$, $p_2$, $\gamma_1$ and $\gamma_2$ introduced in the first two steps can be chosen.
Recall the mixed-norm spaces $L^{p(p_1)}(\R^d,w;X) = L^p(\R^{d-1}; L^{p_1}(\R,w_{\gamma_1};X))$.
*Step 1.* Suppose $p_1$ and $\gamma_1$ satisfy $$\label{Pi2_cond_1}
1<p_1<p, \qquad -1 < \gamma_1 < p_1-1, \qquad \frac{\gamma_1}{p_1}\geq \frac{\gamma}{p}, \qquad s - \frac{1+\gamma}{p} + \frac{1+\gamma_1}{p_1} > 0.$$ For each $n$ the Fourier support of $S_n\big( \sum_{k=0}^\infty S_{k+j}m S_k f\big)$ is contained in $\{|\xi|\leq 3\cdot 2^{n}\}$. The Jawerth-Franke embedding thus gives $$\begin{aligned}
\|\Pi_2(m,f)\|_{F_{p,1}^s(\R^d,w_{\gamma};Y)}&\, \leq \sum_{j=-1}^1 \Big \| \Big ( 2^{sn} S_n\sum_{k=0}^\infty S_{k+j}m S_k f\Big)_{n \geq 0} \Big \|_{L^p(\R^d,w_{\gamma}; \ell^1(Y))}\\
&\, \leq C \sum_{j=-1}^1 \Big \| \Big ( 2^{(s - \frac{1+\gamma}{p} + \frac{1+\gamma_1}{p_1})n} S_n \sum_{k=0}^\infty S_{k+j}m S_k f\Big)_{n \geq 0} \Big \|_{\ell^p(L^{p(p_1)}(\R^d,w_{\gamma_1};Y))}.\end{aligned}$$ Fix $j\in \{-1,0,1\}$. The Fourier supports of $(S_{k+j}m S_k f)_{k\geq 0}$ are contained in the balls $\{|\xi|\leq 5\cdot 2^k\}$. Since $w_{\gamma_1}\in A_{p_1}$ and $s - \frac{1+\gamma}{p} + \frac{1+\gamma_1}{p_1} > 0$, we may apply Lemma \[para2\] with $q = p > 1$ to obtain $$\begin{aligned}
\Big \| \Big ( 2^{(s - \frac{1+\gamma}{p} + \frac{1+\gamma_1}{p_1})n} S_n \sum_{k=0}^\infty &\, S_{k+j}m S_k f\Big)_{n \geq 0} \Big \|_{\ell^p(L^{p(p_1)}(\R^d,w_{\gamma_1};Y))}\nonumber\\
&\,\leq C \Big \| \Big ( 2^{(s - \frac{1+\gamma}{p} + \frac{1+\gamma_1}{p_1})k} S_{k+j}m S_k f\Big)_{k \geq 0} \Big \|_{\ell^p(L^{p(p_1)}(\R^d,w_{\gamma_1};Y))}.\label{701}\end{aligned}$$
*Step 2.* Suppose $p_2$ and $\gamma_2$ satisfy $$\label{Pi2_cond_2}
p < p_2 < \infty, \qquad -1 < \gamma_2 < p_2-1, \qquad \frac{\gamma}{p}\geq \frac{\gamma_2}{p_2}.$$ Define the numbers $r$ and $\mu$ by $$\frac1r = \frac{1}{p_1} - \frac{1}{p_2}, \qquad \frac{\mu}{r} = \frac{\gamma_1}{p_1} - \frac{\gamma_2}{p_2}.$$ It follows from Hölder’s inequality, applied in the last coordinate $t$ with exponent $\frac{p_2}{p_1} > 1$, that $$\begin{aligned}
\Big \| \Big ( 2^{(s - \frac{1+\gamma}{p} + \frac{1+\gamma_1}{p_1})k}&\, S_{k+j}m S_k f\Big)_{k \geq 0} \Big \|_{\ell^p(L^{p(p_1)}(\R^d,w_{\gamma_1};Y))}\notag\\
&\, \leq \Big \| \Big( \Big \| 2^{(\frac{1+\gamma_1}{p_1}- \frac{1+\gamma_2}{p_2})k} S_k m \Big \|_{L^{r}(\R, w_\mu; {{\mathscr L}}(X,Y))} \Big)_{k\geq 0} \Big \|_{\ell^\infty(L^\infty(\R^{d-1}))}\label{667} \\
&\, \qquad \qquad \times \Big \| \Big ( 2^{(s- \frac{1+\gamma}{p} + \frac{1+\gamma_2}{p_2})n} S_{n} f\Big)_{n\geq 0}\Big\|_{\ell^p(L^{p(p_2)}(\R^d,w_{\gamma_2};X))}.\notag\end{aligned}$$ For the second factor we use the Jawerth-Franke embedding , which gives $$\begin{aligned}
\Big \| \Big ( 2^{(s- \frac{1+\gamma}{p} + \frac{1+\gamma_2}{p_2})n} S_{n} f\Big)_{n\geq 0} \Big\|_{\ell^p(L^{p(p_2)}(\R^d,w_{\gamma_2};X))} &\leq C \Big \|(2^{sn} S_nf)_{n \geq 0}\Big \|_{L^p(\R^d,w_{\gamma};\ell^\infty(X))} \\ & = C \|f\|_{F_{p,\infty}^s(\R^d,w_{\gamma};X)}.\end{aligned}$$ Consider the first factor. Since $m$ does not depend on $x'\in \R^{d-1}$, it is elementary to see that $$S_k m = \mathcal F^{-1} ( {\widehat}{\varphi}_k {\widehat}{m}) = \mathcal F_t^{-1} ( {\widehat}{\varphi}_k(0,\cdot) \mathcal F_t m).$$ Observe further that $(\mathcal F_t^{-1} {\widehat}{\varphi}_k(0,\cdot))_{k\geq 0} \in \Phi(\R)$. Therefore $$\begin{aligned}
\Big \| \Big (\Big \|2^{(\frac{1+\gamma_1}{p_1}- \frac{1+\gamma_2}{p_2})k} &\,S_k m \Big\|_{L^{r}(\R, w_\mu; {{\mathscr L}}(X,Y))} \Big)_{k\geq 0} \Big \|_{\ell^\infty(L^\infty(\R^{d-1}))}= \|m\|_{B_{r,\infty}^{\sigma}(\R,w_\mu;{{\mathscr L}}(X,Y))},\end{aligned}$$ where we have set $$\sigma = \frac{1+\mu}{r} = \frac{1+\gamma_1}{p_1}- \frac{1+\gamma_2}{p_2}.$$
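The application of Hölder’s inequality above can be written out as follows. Since $\frac{1}{p_1} = \frac{1}{r}+\frac{1}{p_2}$ and $\frac{\gamma_1}{p_1} = \frac{\mu}{r}+\frac{\gamma_2}{p_2}$, we have $|t|^{\gamma_1/p_1} = |t|^{\mu/r}\,|t|^{\gamma_2/p_2}$, and Hölder’s inequality with the conjugate exponents $\frac{r}{p_1}$ and $\frac{p_2}{p_1}$ gives $$\|gh\|_{L^{p_1}(\R,w_{\gamma_1};Y)} \leq \|g\|_{L^{r}(\R,w_\mu;{{\mathscr L}}(X,Y))}\, \|h\|_{L^{p_2}(\R,w_{\gamma_2};X)}$$ for $g:\R\to {{\mathscr L}}(X,Y)$ and $h:\R\to X$. Moreover, the two expressions for $\sigma$ agree, since $$\frac{1+\mu}{r} = \frac{1}{r}+\frac{\mu}{r} = \Big(\frac{1}{p_1}-\frac{1}{p_2}\Big) + \Big(\frac{\gamma_1}{p_1}-\frac{\gamma_2}{p_2}\Big) = \frac{1+\gamma_1}{p_1}-\frac{1+\gamma_2}{p_2}.$$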
*Step 3.* In the next step we find $p_1$, $\gamma_1$, $p_2$ and $\gamma_2$ satisfying the conditions of Steps 1 and 2. It then follows from the estimates above that $$\big(2^{(s- \frac{1+\gamma}{p} + \frac{1+\gamma_1}{p_1})k} S_{k+j}m S_kf \big)_{k\geq 0} \in \ell^p(L^{p(p_1)}(\R^d,w_{\gamma_1};Y)).$$ Thus $\sum_{k\geq 0} S_{k+j}m S_k f$ exists in ${{\mathscr S'}}(\R^d;Y)$ for $j\in \{-1,0,1\}$ by Lemma \[para2\] and the estimate is valid. Hence also $\Pi_2(m,f)$ exists, and the considerations of Step 1 show that it can be estimated as asserted.
*Step 4.* Here and in the sequel, by $a\searrow b$ we mean that $a$ is chosen larger than but arbitrarily close to $b$; similarly for $a\nearrow b$. We seek parameters $p_1$, $\gamma_1$, $p_2$, $\gamma_2$ satisfying the conditions of Steps 1 and 2 such that $B_{r,\infty}^{\sigma}(\R,w_\mu;{{\mathscr L}}(X,Y))$ becomes as large as possible. For each admissible choice of these parameters we have $\sigma = \frac{1+\mu}{r}$ and $\frac{\mu}{r} = \frac{\gamma_1}{p_1} - \frac{\gamma_2}{p_2}\geq 0.$ In view of the necessary and sufficient conditions for Sobolev embeddings from [@MeyVer1 Theorem 1.1], we thus aim to minimize $\sigma$ and $\frac{\mu}{r}$. In any case, the choices $$p_2 \searrow p, \qquad \gamma_2 = \frac{\gamma}{p} p_2,$$ are optimal in this sense and admissible.
*Substep 4.1.* Let $0\leq s < \frac{1+\gamma}{p}$ as in . Here the choices $$p_1 \nearrow p, \qquad \gamma_1 = \frac{\gamma}{p} p_1,$$ satisfy . This leads to $\mu = 0$ and that $r$ may be arbitrarily large.
*Substep 4.2.* Let $\frac{1}{p}-1 < s< 0$ as in . Choosing $ \gamma_1 = \frac{\gamma}{p} p_1$, to satisfy $s -\frac{1+\gamma}{p} + \frac{1+\gamma_1}{p_1}>0$ we have to restrict to $1<p_1 < \frac{1}{\frac{1}{p}-s}$. It is possible to choose such $p_1$ by assumption in this substep. For $p_1 \nearrow \frac{1}{\frac{1}{p}-s}$ the condition is indeed satisfied. This results in $\mu = 0$ and $r < \frac{1}{-s}$.
*Substep 4.3.* Let $\frac{1+\gamma}{p} - 1< s \leq \frac{1}{p}-1$ as in . This is only possible for $\gamma < 0$. Here $\frac{\gamma_1}{p_1} = \frac{\gamma}{p}$ is not allowed, since there is no $p_1 > 1$ with $s - \frac{1}{p} + \frac{1}{p_1} > 0$. So we choose $$p_1\searrow 1, \qquad \gamma_1 \searrow \Big(\frac{1+\gamma}{p} -s\Big)p_1 - 1.$$ First this gives $r<p'$. Write $p_1 = 1+\varepsilon_1$ and $\frac{\gamma_1}{p_1} = \frac{1+\gamma}{p} -s - \frac{1}{p_1} + \varepsilon_2$, where $\varepsilon_1,\varepsilon_2>0$. Then $\frac{\mu}{r} = \frac{\gamma_1}{p_1} - \frac{\gamma_2}{p_2} = -s -\frac{1}{p'} + \varepsilon_1+ \varepsilon_2$. Setting $\varepsilon = \varepsilon_1 + \varepsilon_2$, we may thus choose $r$ and $\mu$ as asserted.
The estimate of $\Pi_3$ is similar. Again there is a microscopic improvement, see Remark \[rem:improve\].
\[multiplication4\] Let $X$ and $Y$ be Banach spaces, $p\in (1,\infty)$, $\gamma \in (-1,p-1)$ and $- \frac{1+\gamma'}{p'} < s < \frac{1+\gamma}{p}.$ Let the numbers $r $ and $\mu$ satisfy $$\label{6018}
1<r< \infty,\qquad \frac{\mu}{r} = s - \frac{1}{p} +\varepsilon, \quad\qquad \text{ in case }\; \frac{1}{p} \leq s < \frac{1+\gamma}{p},$$ $$\label{6019}
1<r< \frac{1}{s},\qquad \mu = 0,\quad \qquad \text{ in case }\; 0< s <\frac{1}{p},$$ $$\label{6020}
1<r<\infty,\qquad \mu =0, \quad \qquad \text{ in case }\; - \frac{1+\gamma'}{p'} < s \leq 0,$$ for some $\varepsilon > 0$. Let $m \in B_{r,\infty}^{\frac{1+\mu}{r}}(\R, w_\mu;{{\mathscr L}}(X,Y))$ and consider it as a distribution on $\R^d$ which only depends on the last coordinate. Then for all $f\in F_{p,\infty}^s(\R^d,w_{\gamma};X)$ the limit $\Pi_3(m,f)$ exists in ${{\mathscr S'}}(\R^d;Y)$ and $$\|\Pi_3(m,f)\|_{F_{p,1}^s(\R^d,w_{\gamma};Y)} \leq C \|m\|_{B_{r,\infty}^{\frac{1+\mu}{r}}(\R, w_\mu;{{\mathscr L}}(X,Y))} \|f\|_{F_{p,\infty}^s(\R^d,w_{\gamma};X)}.$$
*Step 1.* As in the previous lemma we assume that $\Pi_3(m,f)$ exists in ${{\mathscr S'}}(\R^d;Y)$ from the beginning and justify this afterwards by means of Lemma \[para2\].
Let $p_1$ and $\gamma_1$ be such that $$\label{Pi3_cond_1}
1<p_1 < p, \qquad -1 < \gamma_1 < p_1-1, \qquad \frac{\gamma_1}{p_1}\geq \frac{\gamma}{p}.$$ Since the Fourier supports of the summands of $\Pi_3(m,f)$ are contained in dyadic annuli, we may use the estimate of Lemma \[para2\], where we can allow for $q= 1$, and then the Jawerth-Franke embedding to obtain $$\begin{aligned}
\|\Pi_3(m,f) \|_{F_{p,1}^s(\R^d,w_{\gamma};Y)}
&\, \leq C \Big \| \big ( 2^{sk} S_{k}m S^{k-2} f\big)_{k \geq 2} \Big \|_{L^p(\R^d,w_{\gamma}; \ell^1(Y))}\\
&\, \leq C \Big \| \Big ( 2^{(s- \frac{1+\gamma}{p} +\frac{1+\gamma_1}{p_1})k} S_{k}m S^{k-2} f \Big)_{k \geq 0} \Big\|_{\ell^p(L^{p(p_1)}(\R^d,w_{\gamma_1};Y))}.\end{aligned}$$ Now let $p_2$ and $\gamma_2$ satisfy $$\label{Pi3_cond_2}
p<p_2 < \infty, \qquad -1 < \gamma_2 < p_2-1, \qquad \frac{\gamma}{p}\geq \frac{\gamma_2}{p_2},\qquad s + \frac{1+\gamma_2}{p_2} - \frac{1+\gamma}{p} < 0,$$ and set $w_{\gamma_2}(x',t) = |t|^{\gamma_2}$. Then, by Hölder’s inequality, $$\begin{aligned}
\Big \| \Big ( 2^{(s - \frac{1+\gamma}{p} +\frac{1+\gamma_1}{p_1})k}&\, S_{k}m S^{k-2}f \Big)_{k \geq 0} \big\|_{\ell^p(L^{p(p_1)}(\R^d,w_{\gamma_1};Y))}\\
&\, \leq \Big \| \big ( \big \| 2^{\sigma k} S_k m \big \|_{L^{r}(\R, w_\mu; {{\mathscr L}}(X,Y))}\big)_{k\geq 0} \Big \|_{\ell^\infty(L^\infty(\R^{d-1}))} \\
&\, \qquad \qquad \times \Big \| \Big ( 2^{(s- \frac{1+\gamma}{p} + \frac{1+\gamma_2}{p_2})n} S^{n} f\Big)_{n\geq 0}\Big\|_{\ell^p(L^{p(p_2)}(\R^d,w_{\gamma_2};X))},\end{aligned}$$ where as before $\frac{1}{r} = \frac{1}{p_1} - \frac{1}{p_2}$, $\mu = (\frac{\gamma_1}{p_1} - \frac{\gamma_2}{p_2}) r$ and $\sigma = \frac{1+\gamma_1}{p_1} - \frac{1+\gamma_2}{p_2}$. Since $s + \frac{1+\gamma_2}{p_2} - \frac{1+\gamma}{p} < 0$ we can apply Lemma \[lem:para1\] to replace $S^{n}$ by $S_n$ in the second factor, which can then be estimated by $C\|f\|_{F_{p,\infty}^s(\R^d,w_{\gamma};X)}$ in the same way as in the previous lemma. Also the first factor can be treated in the same way to obtain $$\Big \| \big ( \big \| 2^{\sigma k} S_k m \big \|_{L^{r}(\R, w_\mu; {{\mathscr L}}(X,Y))}\big)_{k\geq 0} \Big \|_{\ell^\infty(L^\infty(\R^{d-1}))} = \|m\|_{B_{r,\infty}^{\sigma}(\R,w_\mu;{{\mathscr L}}(X,Y))}.$$
*Step 2.* We enlarge $B_{r,\infty}^{\sigma}(\R,w_\mu;{{\mathscr L}}(X,Y))$ by choosing optimal parameters subject to the conditions of Step 1. In any case $p_1\nearrow p$ and $\gamma_1 = \frac{\gamma}{p}p_1\in (-1,p_1-1)$ satisfies these conditions and is the best choice.
*Substep 2.1* Let $\frac{1+\gamma}{p} - 1 < s \leq 0$ as in . Then $p_2\searrow p$ and $\gamma_2 = \frac{\gamma}{p} p_2$ are admissible, which leads to $\mu = 0$ and that $r$ may be arbitrarily large.
*Substep 2.2* Let $0< s < \frac{1}{p}$ as in . We still take $\gamma_2 = \frac{\gamma}{p} p_2$, but then we have to restrict to $p_2 \nearrow \frac{1}{\frac{1}{p} -s}$. This gives $\mu = 0$ and $r\nearrow \frac{1}{s}$.
*Substep 2.3* Let $\frac{1}{p}\leq s < \frac{1+\gamma}{p}$ as in . Then we cannot take $\gamma_2 = \frac{\gamma}{p} p_2$, since $s - \frac{1+\gamma}{p} + \frac{1+\gamma_2}{p_2} < 0$ becomes impossible. Instead we let $p_2\nearrow \infty$ and $\gamma_2 \searrow (\frac{1+\gamma}{p} -s)p_2 -1$, which satisfies . Writing $\frac{\gamma_2}{p_2} = \frac{1+\gamma}{p} -s -\varepsilon$ with $\varepsilon>0$, we get $\frac{\mu}{r} = s-\frac{1}{p} + \varepsilon$ and $r\nearrow p$.
Pointwise multiplication {#sec:p-mult}
========================
Irregular functions {#subsec:irr}
-------------------
In this section we combine the estimates for the paraproducts to obtain sufficient conditions for the boundedness of $f\mapsto mf$ for irregular $m$ and vector-valued functions $f$ in Besov spaces, Triebel-Lizorkin spaces and Bessel-potential spaces. The result extends [@Franke86 Theorem 3.4.2] to the weighted vector-valued setting, see also [@RS96 Corollary 4.6.2/1] and [@Tri83 Section 2.8]. In these works also the cases $p,q\leq 1$ are considered.
Recall that the product of distributions is given by $mf = \lim_{l\to\infty} S^l m \cdot S^l f$ (if the limit exists).
\[multiplication-esti\] Let $X$ and $Y$ be Banach spaces, $p\in (1,\infty)$, $q\in[1,\infty]$, $\gamma \in (-1,p-1)$ and $- \frac{1+\gamma'}{p'} < s < \frac{1+\gamma}{p}.$ Let the numbers $r$ and $\mu$ satisfy $$1<r< \frac{1}{|s|},\qquad \mu =0, \qquad \text{ in case }\;- \frac{1}{p'} < s < \frac{1}{p},$$ $$1 < r < p, \qquad \frac{\mu}{r} = s- \frac{1}{p} + \varepsilon, \qquad \text{ in case }\;\frac{1}{p}\leq s < \frac{1+\gamma}{p},$$ $$1 < r < p', \qquad \frac{\mu}{r} = - s- \frac{1}{p'} + \varepsilon, \qquad \text{ in case }\;-\frac{1+\gamma'}{p'} < s \leq -\frac{1}{p'},$$ for some $\varepsilon > 0$. Let $m \in B_{r,\infty}^{\frac{1+\mu}{r}}(\R, w_\mu; {{\mathscr L}}(X,Y))\cap L^\infty(\R;{{\mathscr L}}(X,Y))$ and consider it as a distribution on $\R^d$ which only depends on the last coordinate. Then the following holds true.
- For $\mathcal A\in \{F,B\}$ and $f\in \mathcal A_{p,q}^s(\R^d,w_{\gamma};X)$ the product $mf$ exists in ${{\mathscr S'}}(\R^d;Y)$ and $$\|mf\|_{\mathcal A_{p,q}^s(\R^d,w_{\gamma};Y)} \leq C \big(\|m\|_{B_{r,\infty}^{\frac{1+\mu}{r}}(\R, w_\mu; {{\mathscr L}}(X,Y))} + \|m\|_{\infty} \big)\|f\|_{\mathcal A_{p,q}^s(\R^d,w_{\gamma};X)}.$$
- If $X$ and $Y$ have UMD and if the image of $m$ is $\mathcal R$-bounded by $\mathcal R(m)$, then for all $f\in H^{s,p}(\R^d,w_{\gamma};X)$ the product $mf$ exists in ${{\mathscr S'}}(\R^d;Y)$ and $$\|mf\|_{H^{s,p}(\R^d,w_{\gamma};Y)} \leq C \big( \|m\|_{B_{r,\infty}^{\frac{1+\mu}{r}}(\R, w_\mu; {{\mathscr L}}(X,Y))} + \mathcal R(m)\big)\|f\|_{H^{s,p}(\R^d,w_{\gamma};X)}.$$
Consider Assertion (a) for $\mathcal A = F$. The paraproducts exist in ${{\mathscr S'}}(\R^d;Y)$ by the Lemmas \[multiplication2\], \[multiplication3\] and \[multiplication4\], hence $mf$ exists as a distribution. The estimate follows from the monotonicity of the $F$-spaces with respect to $q\in [1,\infty]$. For $\mathcal A = B$, the estimate is now a consequence of real interpolation, see [@MeyVer1 Proposition 6.1]. Assertion (b) follows from the Lemmas \[multiplication1\], \[multiplication3\] and \[multiplication4\], combined with the elementary embeddings $F_{p,1}^s\hookrightarrow H^{s,p}\hookrightarrow F_{p,\infty}^s$.
\[R-bound-crit\]
(i) If $m$ is scalar-valued, then $\mathcal R(m) \leq 2 \|m\|_{L^\infty(\R^d)}$ by [@KuWe Proposition 2.5]. However, even in this case our methods do not allow us to remove the UMD property of the underlying Banach spaces. If $X$ and $Y$ are Hilbert spaces, then a family of linear operators is $\mathcal R$-bounded if and only if it is uniformly bounded, see e.g. [@DHP Remark 3.2].
(ii) A scaling argument shows that an estimate as above is only possible with a space $B_{r,\infty}^\sigma(\R,w_\mu)$ satisfying $\sigma = \frac{1+\mu}{r}$. The conditions on the parameters cannot be improved by duality arguments in case of reflexive spaces.
(iii) Since $ B_{r,\infty}^{\frac{1}{r}}$ is not embedded into $L^\infty$, in the sharp case the boundedness of $m$ does not follow from the Besov regularity and must be imposed as an extra condition. Sufficient conditions for the $\mathcal R$-boundedness of the image of $m$ in terms of Besov regularity are provided in [@HytVer Theorem 5.1]. In particular, if $m\in B_{r,1}^{1/r}(\R; \mathscr L(X,Y))$ for some $r$ which depends on type and cotype of $X$ and $Y$ (see Section \[sec:type\]), then the image of $m$ is automatically $\mathcal R$-bounded.
Analogous arguments yield multiplication estimates for radial power weights of the form $$v_\gamma(x) = |x|^\gamma.$$
\[multiplication-esti-d\] Let $X$ and $Y$ be Banach spaces, $p\in (1,\infty)$, $q\in[1,\infty]$, $\gamma\in (-d,d(p-1))$ and $- \frac{d+\gamma'}{p'} < s < \frac{d+\gamma}{p}$. Let the numbers $r$ and $\mu$ satisfy $$1<r< \frac{d}{|s|},\qquad \mu =0, \quad \qquad \text{ in case}\;- \frac{d}{p'} < s < \frac{d}{p},$$ $$1 < r < p, \qquad \frac{\mu}{r} = s- \frac{d}{p} + \varepsilon, \qquad \text{ in case }\;\frac{d}{p}\leq s < \frac{d+\gamma}{p},$$ $$1 < r < p', \qquad \frac{\mu}{r} = - s- \frac{d}{p'} + \varepsilon, \qquad \text{ in case }\;- \frac{d+\gamma'}{p'} < s \leq -\frac{d}{p'},$$ for some $\varepsilon > 0$. Suppose that $m \in B_{r,\infty}^{\frac{d+\mu}{r}}(\R^d, v_\mu; {{\mathscr L}}(X,Y))\cap L^\infty(\R^d;{{\mathscr L}}(X,Y))$. Then for $\mathcal A\in \{F,B\}$ and $f\in \mathcal A_{p,q}^s(\R^d,v_{\gamma};X)$ the product $mf$ exists in ${{\mathscr S'}}(\R^d;Y)$ and $$\|mf\|_{\mathcal A_{p,q}^s(\R^d,v_\gamma;Y)} \leq C \big(\|m\|_{B_{r,\infty}^{\frac{d+\mu}{r}}(\R^d, v_\mu; {{\mathscr L}}(X,Y))} + \|m\|_{\infty} \big)\|f\|_{\mathcal A_{p,q}^s(\R^d,v_\gamma;X)}.$$ Moreover, if $X$ and $Y$ have UMD and if the image of $m$ is $\mathcal R$-bounded by $\mathcal R(m)$, then for all $f\in H^{s,p}(\R^d,v_\gamma;X)$ the product $mf$ exists in ${{\mathscr S'}}(\R^d;Y)$ and $$\|mf\|_{H^{s,p}(\R^d,v_\gamma;Y)} \leq C \big(\|m\|_{B_{r,\infty}^{\frac{d+\mu}{r}}(\R^d, v_\mu; {{\mathscr L}}(X,Y))} + \mathcal R(m) \big)\|f\|_{H^{s,p}(\R^d,v_\gamma;X)}.$$
As before one decomposes $mf$ into the paraproducts. For the estimate of $\Pi_1(m,f)$ one can apply the Lemmas \[multiplication1\] and \[multiplication2\]. To estimate $\Pi_2(m,f)$ one argues as in Lemma \[multiplication3\]. Instead of and one directly uses the Jawerth-Franke embeddings from [@MeyVer1 Theorem 6.4] for radial weights. One further uses instead of . The optimal choice of the parameters is analogous. In a similar way one modifies the proof of Lemma \[multiplication4\] to estimate $\Pi_3(m,f)$.
Hölder continuous functions
---------------------------
In this section we investigate the boundedness of $f\mapsto m f$ for smooth and bounded functions $m$ on $B$- and $F$-spaces. The case of $H$-spaces was already considered in Proposition \[thm:mult-smooth2\]. Together with Theorem \[multiplication-esti\] this provides the right ingredients to prove the Theorems \[thm:1\] and \[thm:2\] later on.
\[prop:mult-smooth\] Let $X$ and $Y$ be Banach spaces, $s\in \R$, $p\in (1,\infty)$, $q\in [1,\infty]$ and $w\in A_p$. Assume that $m\in BC^\sigma(\R^d; {{\mathscr L}}(X,Y))$ for some $\sigma>|s|$. Then for $\mathcal A\in \{F, B\}$ and all $f\in \mathcal A_{p,q}^s(\R^d,w;X)$ the product $mf$ exists in ${{\mathscr S'}}(\R^d;Y)$ and $$\|mf\|_{\mathcal A_{p,q}^s(\R^d,w;Y)} \leq C \|m\|_{BC^\sigma(\R^d; {{\mathscr L}}(X,Y))}\|f\|_{\mathcal A_{p,q}^s(\R^d,w;X)}.$$
By Lemma \[multiplication2\] one has $$\|\Pi_1(m,f)\|_{F_{p,q}^s(\R^d,w;Y)} \leq C \|m\|_{L^\infty(\R^d;{{\mathscr L}}(X,Y))}\|f\|_{F_{p,q}^s(\R^d,w;X)}.$$ The $B$-case follows from real interpolation (see [@MeyVer1 Proposition 5.1]). To estimate $\Pi_2(m,f)$ we use $B_{p,\infty}^{s+\sigma}\hookrightarrow \mathcal A_{p,q}^s$ and that $s+\sigma >0$ to apply the Besov space estimate of Lemma \[para2\], which gives $$\begin{aligned}
\|\Pi_2(m,f)\|_{\mathcal A_{p,q}^s(\R^d,w;Y)} &\, \leq C \|\Pi_2(m,f)\|_{B_{p,\infty}^{s+\sigma}(\R^d,w;Y)} \\
&\, \leq C \sum_{j=-1}^1\big\|\big(2^{(s+\sigma)k} S_{k+j}m S_k f)_{k\geq 0}\big\|_{\ell^\infty(L^p(\R^d,w;Y))}.\end{aligned}$$ Then for fixed $j$ we obtain $$\begin{aligned}
\big\|\big(2^{(s+\sigma)k} S_{k+j}&\,m S_k f)_{k\geq 0}\big\|_{\ell^\infty(L^p(\R^d,w;Y))}\\
&\, \leq C \big\|\big(2^{\sigma k} S_{k+j}m)_{k\geq 0}\big\|_{\ell^\infty(L^\infty(\R^d;{{\mathscr L}}(X,Y)))} \big\|\big(2^{sk} S_k f)_{k\geq 0}\big\|_{\ell^\infty(L^p(\R^d,w;X))}\\
&\, \leq C \|m\|_{B_{\infty,\infty}^{\sigma}(\R^d; {{\mathscr L}}(X,Y))}\|f\|_{B_{p,\infty}^s(\R^d,w;X)}\\
&\, \leq C \|m\|_{BC^{\sigma}(\R^d; {{\mathscr L}}(X,Y))}\|f\|_{\mathcal A_{p,q}^s(\R^d,w;X)}.\end{aligned}$$ In the last line we have used that $BC^{\sigma} \hookrightarrow B_{\infty,\infty}^{\sigma}$, see [@Tri83 Proposition 2.5.7] for the scalar case, and $\mathcal A_{p,q}^s\hookrightarrow B_{p,\infty}^s$. For $\Pi_3(m,f)$ we use $B_{p,1}^s \hookrightarrow \mathcal A_{p,q}^s$ and apply the Besov space estimate of Lemma \[para2\] to get $$\begin{aligned}
\|\Pi_3(m,f)\|_{\mathcal A_{p,q}^s(\R^d,w;Y)} &\, \leq C \|\Pi_3(m,f)\|_{B_{p,1}^s(\R^d,w;Y)} \leq C \big\|\big(2^{sk} S_{k}m S^{k-2}f)_{k\geq 0}\big\|_{\ell^1(L^p(\R^d,w;Y))}\\
&\, \leq C \big\|\big(2^{\sigma k} S_km)_{k\geq 0}\big\|_{\ell^\infty(L^\infty(\R^d;{{\mathscr L}}(X,Y)))} \big\|\big(2^{(s-\sigma)k} S^{k} f)_{k\geq 0}\big\|_{\ell^1(L^p(\R^d,w;X))}\\
&\, \leq C \|m\|_{BC^{\sigma}(\R^d; {{\mathscr L}}(X,Y))} \|\big(2^{(s-\sigma)k} S_{k} f)_{k\geq 0}\|_{\ell^1(L^p(\R^d,w;X))}\\
&\, \leq C \|m\|_{BC^{\sigma}(\R^d; {{\mathscr L}}(X,Y))} \|f\|_{\mathcal A_{p,q}^s(\R^d,w;X)}.\end{aligned}$$ Here we also employed that $s-\sigma <0$ and applied Lemma \[lem:para1\] to replace $S^k$ by $S_k$ in the second to last line. The existence of the paraproducts and thus of $mf$ is a consequence of these estimates and Lemma \[para2\].
Characteristic functions\[subs:multich\]
----------------------------------------
It is well-known that the precise local regularity of ${{{\bf 1}}}_{\R_+^d}$ is $B_{r,\infty}^{\frac{1}{r}}$, see [@RS96 Lemma 4.6.3/2] and the references therein. This information is actually not sufficient to apply Theorem \[multiplication-esti\] in case $s\notin (- \frac{1}{p'}, \frac{1}{p})$.
\[chi\] For all $\phi\in C_c^\infty(\R^d)$, $p\in (1,\infty)$ and $\gamma \in (-1, p-1)$ one has $${{{\bf 1}}}_{\R_+^d} \phi \in B_{p,\infty}^{\frac{1+\gamma}{p}}(\R^d, w_{\gamma})\cap L^\infty(\R^d).$$
Let $Q = Q'\times (a,b)\subset \R^d$ be a cube with ${\text{\rm supp\,}}\phi \subset Q$. We write $g_h = g(\cdot + h)$ for a translation by $h\in \R^d$. Clearly, ${{{\bf 1}}}_{\R_+^d} \phi\in L^p(\R^d,w_\gamma)\cap L^\infty(\R^d)$. By the difference-norm characterization of Proposition \[prop:Lpsmoothness-Besov\], for ${{{\bf 1}}}_{\R_+^d}\phi \in B_{p,\infty}^{\frac{1+\gamma}{p}}(\R^d, w_{\gamma})$ it is sufficient to show that $$[{{{\bf 1}}}_{\R_+^d} \phi]_{B_{p,\infty}^{\frac{1+\gamma}{p}}(\R^d, w_{\gamma})} = \sup_{r>0} r^{-\frac{1+\gamma}{p}} \sup_{|h|\leq r} \|{{{\bf 1}}}_{\R_+^d,h}\phi_h - {{{\bf 1}}}_{\R_+^d}\phi\|_{L^p(\R^d,w_{\gamma})} < \infty.$$
*Step 1.* Let $r\leq 1$ and $h\in \R^d$ with $|h|\leq r$. Then $$\begin{aligned}
\|{{{\bf 1}}}_{\R_+^d,h}\phi_h - {{{\bf 1}}}_{\R_+^d} \phi \|_{L^p(\R^d,w_{\gamma})} \leq \|{{{\bf 1}}}_{\R_+^d,h} (\phi_h -\phi) \|_{L^p(\R^d,w_{\gamma})} + \|({{{\bf 1}}}_{\R_+^d,h} -{{{\bf 1}}}_{\R_+^d})\phi\|_{L^p(\R^d,w_\gamma)}.\end{aligned}$$ For the first summand we estimate $$\begin{aligned}
\|{{{\bf 1}}}_{\R_+^d,h} (\phi_h -\phi)\|_{L^p(\R^d,w_{\gamma})}^p &\,= \int_{\R^d} {{{\bf 1}}}_{\R_+^d}(t+h_d) |\phi(x+h) - \phi(x)|^p |t|^\gamma \,dt \,d x'\\
&\, \leq \int_{B(Q,1)}\|\phi'\|_\infty^p |h|^p |t|^\gamma \,dt \,d x' \leq C\, |h|^p,\end{aligned}$$ where $B(Q,1) = \{x\in \R^d: \text{dist}(x,Q)\leq 1\}$. For the second summand we have $$\begin{aligned}
\|({{{\bf 1}}}_{\R_+^d,h} -{{{\bf 1}}}_{\R_+^d})\phi\|_{L^p(\R^d,w_{\gamma})}^p &\, = \int_{\R^d} |{{{\bf 1}}}_{\R_+^d}(t+h_d) -{{{\bf 1}}}_{\R_+^d}(t)| |\phi(x)|^p|t|^\gamma \,d t\,d x' \\
&\, \leq C\int_{Q\cap \{|t|\leq |h_d|\}}|t|^\gamma\,d x'\,d t \leq C \int_{-|h_d|}^{|h_d|} |t|^\gamma\, dt \leq C |h|^{1+\gamma}.\end{aligned}$$ Therefore $$\sup_{r\leq 1} r^{-\frac{1+\gamma}{p}} \sup_{|h|\leq r} \|{{{\bf 1}}}_{\R_+^d,h}\phi_h - {{{\bf 1}}}_{\R_+^d}\phi\|_{L^p(\R^d,w_{\gamma})} \leq C \sup_{r\in (0,1)} r^{-\frac{1+\gamma}{p}} (r + r^{\frac{1+\gamma}{p}}) < \infty.$$
*Step 2.* Let $r\geq 1$ and $h\in \R^d$ with $|h|\leq r$. We have $$\begin{aligned}
\|{{{\bf 1}}}_{\R_+^d,h}\phi_h - {{{\bf 1}}}_{\R_+^d}\phi\|_{L^p(\R^d,w_{\gamma})}&\, \leq \|{{{\bf 1}}}_{\R_+^d,h}\phi_h\|_{L^p(\R^d,w_{\gamma})} + \| {{{\bf 1}}}_{\R_+^d}\phi\|_{L^p(\R^d,w_{\gamma})}.\end{aligned}$$ The second summand is independent of $h$. For the first summand we estimate $$\begin{aligned}
\|{{{\bf 1}}}_{\R_+^d,h}\phi_h\|_{L^p(\R^d,w_{\gamma})}^p &\, \leq \int_{Q'-h} \int_{a-h_d}^{b-h_d} |\phi(x+h)|^p |t|^\gamma\, dt\,dx'\\
&\, \leq C \int_{a-h_d}^{b-h_d} |t|^\gamma \,dt \leq C (1+|r|^\gamma).\end{aligned}$$ This yields $$\sup_{r\geq 1} r^{-\frac{1+\gamma}{p}} \sup_{|h|\leq r} \|{{{\bf 1}}}_{\R_+^d,h}\phi_h - {{{\bf 1}}}_{\R_+^d}\phi\|_{L^p(\R^d,w_{\gamma})}<\infty.$$
Combining this with Step 1, it follows that $[{{{\bf 1}}}_{\R_+^d} \phi]_{B_{p,\infty}^{\frac{1+\gamma}{p}}(\R^d, w_{\gamma})}$ is finite.
For $\gamma \geq 0$ and $\phi$ nonvanishing around the origin we have ${{{\bf 1}}}_{\R_+^d} \phi \in B_{p,q}^{\frac{1+\gamma}{p}}(\R^d,w_\gamma)$ if and only if $q= \infty$. In fact, ${{{\bf 1}}}_{\R_+^d} \phi \in B_{p,q}^{\frac{1+\gamma}{p}}(\R^d,w_\gamma)$ implies that ${{{\bf 1}}}_{\R_+^d}\phi \in B_{p,q}^{\frac{1}{p}}(\R^d)$ by [@MeyVer1 Theorem 1.1], and the latter is true if and only if $q = \infty$ by [@RS96 Lemma 4.6.3/2]. For $\gamma\in (-1,0)$ this argument does not work. Using the difference norm from Proposition \[prop:Lpsmoothness-Besov\], one can show that the characterization is true also for these powers.
We can now prove our main results Theorems \[thm:1\] and \[thm:2\] on the multiplier property of ${{{\bf 1}}}_{\R^d_+}$.
Recall that $-\frac{1+\gamma'}{p'} = \frac{1+\gamma}{p} - 1$. Let $\phi\in C_c^\infty(\R)$ be equal to $1$ for $|t|\leq 1$ and equal to zero for $|t|\geq 2$. Then $ {{{\bf 1}}}_{\R_+^d} (1-\phi)$ belongs to $BC^\infty(\R^d)$ and is thus a pointwise multiplier by the Propositions \[thm:mult-smooth2\] and \[prop:mult-smooth\]. Considering ${{{\bf 1}}}_{\R_+^d} \phi$ to depend on the last coordinate $t$ only, Lemma \[chi\] shows that $${{{\bf 1}}}_{\R_+^d} \phi\in B_{r,\infty}^{\frac{1+\mu}{r}}(\R,w_\mu) \cap L^\infty(\R)$$ for all $r\in (1,\infty)$ and all $\mu\in (-1,r-1)$. Now Theorem \[multiplication-esti\] applies to ${{{\bf 1}}}_{\R_+^d} \phi$. Indeed, one can choose arbitrary $r\in (1, \frac{1}{|s|})$ for $s\in (-\frac{1}{p'},\frac{1}{p})$, $r$ close to $p$ for $s\in [\frac{1}{p}, \frac{1+\gamma}{p})$ and $r$ close to $p'$ for $s\in (-\frac{1+\gamma'}{p'},-\frac{1}{p'}]$. Since ${{{\bf 1}}}_{\R_+^d} \phi$ is scalar-valued, its image is $\mathcal R$-bounded, see Remark \[R-bound-crit\].
Multiplication algebras, type and cotype {#sec:type}
----------------------------------------
The following classical result holds for $s>0$, $p,q\in [1,\infty]$ and $\mathcal{A}\in \{B,F\}$ (see [@RS96 Section 4.6.4]), $$\label{eq:classicalcase1}
\|mf\|_{{\mathcal A}^{s}_{p,q}(\R^d)} \leq C \big(\|m\|_{L^\infty(\R^d)} \|f\|_{{\mathcal A}^{s}_{p,q}(\R^d)} + \|m\|_{{\mathcal A}^{s}_{p,q}(\R^d)} \|f\|_{L^\infty(\R^d)}\big).$$ In other words, ${\mathcal A}^{s}_{p,q}\cap L^\infty$ is a multiplicative algebra. Of course, if $s$ is large enough, then ${\mathcal A}^{s}_{p,q}\cap L^\infty = {\mathcal A}^{s}_{p,q}$ by Sobolev embedding. Since in the scalar-valued case one has $H^{s,p} = F^{s}_{p,2}$ for $p\in (1,\infty)$, this includes an estimate for Bessel-potential spaces, i.e., $$\label{eq:classicalcase2}
\|mf\|_{H^{s,p}(\R^d)} \leq C\big( \|m\|_{L^\infty(\R^d)} \|f\|_{H^{s,p}(\R^d)} + \|m\|_{H^{s,p}(\R^d)} \|f\|_{L^\infty(\R^d)}\big).$$
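For instance, if $s>d/p$, then $H^{s,p}(\R^d)\hookrightarrow L^\infty(\R^d)$ by Sobolev embedding, and the above estimate yields the algebra property $$\|mf\|_{H^{s,p}(\R^d)} \leq C \|m\|_{H^{s,p}(\R^d)}\|f\|_{H^{s,p}(\R^d)}.$$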
Using the convergence criteria from Lemma \[para2\], the following extension of to the weighted vector-valued case can be proved as in [@RS96 Section 4.6.4].
\[prop:algebra1\] Let $X$ and $Y$ be Banach spaces, $s>0$, $p\in(1,\infty)$, $q\in [1, \infty]$ and $w\in A_p$. Then for $\mathcal{A}\in \{B,F\}$ we have $$\begin{aligned}
\|m f\|_{\mathcal{A}^{s}_{p,q}(\R^d,w;Y)} \leq C \big(\|m\|_{L^\infty(\R^d;{{\mathscr L}}(X,Y))} \|f\|_{\mathcal{A}^{s}_{p,q}(\R^d,w;X)} + \|m\|_{\mathcal{A}^{s}_{p,q}(\R^d,w;{{\mathscr L}}(X,Y))} \|f\|_{L^\infty(\R^d;X)}\big).\end{aligned}$$
In the vector-valued case one has $H^{s,p}(\R^d; X) = F_{p,2}^s(\R^d;X)$ if and only if $X$ can be renormed as a Hilbert space (see Remark \[HW\] and Proposition \[prop:type-embed\] below). Hence a vector-valued version of the Bessel-potential space estimate above is not contained in Proposition \[prop:algebra1\]. To obtain a result in this direction for UMD-valued Bessel-potential spaces we make use of the notions of type and cotype. These notions measure how far a space $X$ is from being a Hilbert space.
Let a Rademacher sequence $(r_k)_{k\geq 0}$ on a probability space $\Omega$ be given, see Section \[subsec:UMD\]. Then $X$ is said to have type $\tau\in [1,2]$ if there is $C > 0$ such that for all $N\in \N$ and $x_0,...,x_N\in X$ we have $$\Big\| \sum_{n=0}^N r_n x_n \Big\|_{L^\tau(\Omega;X)} \leq C \Big( \sum_{n=0}^N \|x_n\|_X^\tau \Big)^{1/\tau}.$$ Similarly, $X$ is said to have cotype $q\in [2,\infty]$ if $$\Big( \sum_{n=0}^N \|x_n\|_X^q \Big)^{1/q} \leq C \Big\| \sum_{n=0}^N r_n x_n \Big\|_{L^q(\Omega;X)}.$$ For a general overview on this topic we refer to [@DJT Chapter 11]. Some basic facts are as follows:
(a) Every Banach space has type $\tau = 1$ and cotype $q=\infty$.
(b) If $X$ has type $\tau$, then it has type $\sigma$ for all $\sigma\in [1, \tau]$.
(c) If $X$ has cotype $q$, then it has cotype $r$ for all $r\in [q, \infty]$.
(d) A space $X$ can be renormed as a Hilbert space if and only if it has type $2$ and cotype $2$; that a Hilbert space has type $2$ and cotype $2$ is verified in the computation after this list.
(e) If $(S,\mu)$ is a $\sigma$-finite measure space, then $L^r(S)$ has type $\min\{2,r\}$ and cotype $\max\{2,r\}$.
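To illustrate (d), let $H$ be a Hilbert space. Since the Rademacher variables are orthonormal in $L^2(\Omega)$, expanding the square gives $$\Big\|\sum_{n=0}^N r_n x_n\Big\|_{L^2(\Omega;H)}^2 = \sum_{n=0}^N \|x_n\|_H^2,$$ so that $H$ has type $2$ and cotype $2$ with constant $C=1$.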
The connection of these notions to $X$-valued function spaces is as follows.
\[prop:type-embed\] Let $X$ have *UMD*, $s\in \R$, $p\in (1,\infty)$ and $w\in A_p$. Assume $X$ has type $\tau\in [1,2]$ and cotype $q\in [2,\infty]$. Then $$\label{eq:typetauemb}
F^{s}_{p,\tau}(\R^d,w;X) \hookrightarrow H^{s,p}(\R^d,w;X) \hookrightarrow F^{s}_{p,q}(\R^d,w;X).$$
Using Proposition \[prop:UMDHisF\], this can be shown in the same way as in [@Ver12 Proposition 3.1].
We have the following product estimate.
\[multiplication-Hinfty\] Let $X$ and $Y$ have *UMD*, $s > 0$, $p\in (1,\infty)$ and $w\in A_p$. Assume $Y$ has type $\tau\in (1,2]$ and that $m \in F^{s}_{p,\tau}(\R^d, w; {{\mathscr L}}(X,Y))$ has $\mathcal R$-bounded image. Then $$\begin{aligned}
\|mf\|_{H^{s,p}(\R^d,w;Y)} \leq C \big(\mathcal R(m) \|f\|_{H^{s,p}(\R^d,w;X)} + \|m\|_{F^{s}_{p,\tau}(\R^d, w; {{\mathscr L}}(X,Y))} \|f\|_{L^\infty(\R^d;X)}\big).\end{aligned}$$
We estimate the paraproducts $\Pi_{i}(m,f)$ for $i=1, 2, 3$. It follows from Lemma \[multiplication1\] that $$\|\Pi_1(m,f)\|_{H^{s,p}(\R^d,w;Y)} \leq C \mathcal R(m) \|f\|_{H^{s,p}(\R^d,w;X)}.$$ The summands of $\Pi_2(m,f)$ have Fourier supports in balls of radius comparable to $2^k$. We use the embedding from Proposition \[prop:type-embed\] and the estimate from Lemma \[para2\] to get $$\begin{aligned}
\|\Pi_2(m,f)\|_{H^{s,p}(\R^d,w;Y)} & \leq C\|\Pi_2(m,f)\|_{F^{s}_{p,\tau}(\R^d,w;Y)}
\\ & \leq C\sum_{j=-1}^1 \Big \| \Big ( 2^{sn} S_{n+j}m S_n f\Big)_{n \geq 0} \Big \|_{L^p(\R^d,w; \ell^\tau(Y))}
\\ & \leq C\sum_{j=-1}^1 \Big \| \Big ( 2^{sn} S_{n+j}m \Big)_{n \geq 0} \Big \|_{L^p(\R^d,w; \ell^\tau({{\mathscr L}}(X,Y)))} \sup_{n\geq 0} \|S_n f\|_{L^\infty(\R^d;X)}
\\ & \leq C \|m\|_{F^{s}_{p,\tau}(\R^d, w; {{\mathscr L}}(X,Y))} \|f\|_{L^\infty(\R^d;X)}.\end{aligned}$$ The estimate for $\Pi_3(m,f)$ is proved in the same way, using the annulus Fourier supports of its summands and the corresponding estimate from Lemma \[para2\].
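In the last step of the display for $\Pi_2(m,f)$ we also used that $\sup_{n\geq 0}\|S_nf\|_{L^\infty(\R^d;X)}\leq 2\|\varphi_0\|_{L^1(\R^d)}\|f\|_{L^\infty(\R^d;X)}$, which follows from Young’s inequality and $\|\varphi_n\|_{L^1(\R^d)}\leq 2\|\varphi_0\|_{L^1(\R^d)}$ for all $n\geq 0$.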
As a special case of this result we extend the classical estimate to the weighted vector-valued setting with a scalar-valued multiplier. It in particular applies in case $X = L^r$ with $r \geq 2$, which is often the range of interest in the context of nonlinear partial differential equations.
\[thm:lastone\] Let $X$ be a *UMD*-Banach space with type $\tau = 2$, let $s>0$, $p\in(1,\infty)$ and $w\in A_p$. Then $$\begin{aligned}
\|mf\|_{H^{s,p}(\R^d,w;X)} \leq C \big(\|m\|_{L^\infty(\R^d)} \|f\|_{H^{s,p}(\R^d,w;X)} + \|m\|_{H^{s,p}(\R^d, w)} \|f\|_{L^\infty(\R^d;X)}\big).\end{aligned}$$
This follows from Proposition \[multiplication-Hinfty\] applied with $\tau = 2$, the fact that $\mathcal R(m) \leq 2 \|m\|_\infty$ for scalar-valued $m$ (see Remark \[rem:R-bound-scalar\]) and that $F_{p,2}^s(\R^d,w) = H^{s,p}(\R^d,w)$ for an $A_p$-weight $w$ (see Proposition \[prop:type-embed\]).
Spaces of entire analytic functions {#sec:analytic}
===================================
In this appendix we consider weighted spaces with mixed norms of entire analytic functions, which are the key to convergence and estimates of the paraproducts. The results are weighted extensions of the corresponding assertions in [@Franke86]. Since some of the proofs differ from the unweighted case, we give all details in the proofs below.
For $A > 0$, $s\in \R$, $p\in (1,\infty)$, $q\in [1,\infty]$ and $w\in A_\infty(\R^d)$ we set $$\begin{aligned}
L^p_A(\R^d,w;\ell^{s,q}(X)) = \big \{&\, (f_k)_{k\geq 0}\subset {{\mathscr S'}}(\R^d;X): \;{\text{\rm supp\,}}{\widehat}{f}_k \subset \{|\xi|\leq A 2^k\},\\
&\; \|(f_k)_{k\geq 0}\|_{L^p_A(\R^d,w;\ell^{s,q}(X))} = \|(2^{sk} f_k)_{k\geq 0}\|_{L^p(\R^d,w; \ell^q(X))} < \infty \big \}, \\
\ell^{s,q}(L^p_A(\R^d,w;X)) = \big \{&\, (f_k)_{k\geq 0}\subset {{\mathscr S'}}(\R^d;X):\; {\text{\rm supp\,}}{\widehat}{f}_k \subset \{|\xi|\leq A 2^k\},\\
&\; \|(f_k)_{k\geq 0}\|_{\ell^{s,q}(L^p_A(\R^d,w;X))} = \|(2^{sk} f_k)_{k\geq 0}\|_{\ell^q(L^p(\R^d,w;X))} < \infty \big \}.\end{aligned}$$ For $p,r\in (1,\infty)$ and $w\in A_{\infty}(\R)$ we further consider the mixed-norm space $$L^{p(r)}(\R^d,w;X) = L^p(\R^{d-1}; L^r(\R, w;X)),$$ where the weight $w$ is understood to depend on the last coordinate only. The norm in this space is given by $$\|f\|_{L^{p(r)}(\R^d,w;X)}^p = \int_{\R^{d-1}} \|f(x',\cdot)\|_{L^r(\R,w;X)}^p \,dx'.$$ Then, with parameters as before, $$\begin{aligned}
\ell^{s,q}(L_A^{p(r)}(\R^d,w;X)) = \big \{&\, (f_k)_{k\geq 0}\subset {{\mathscr S'}}(\R^d;X):\; {\text{\rm supp\,}}{\widehat}f_k \subset \{|\xi|\leq A 2^k\},\\
&\; \|(f_k)_{k\geq 0}\|_{\ell^{s,q}(L_A^{p(r)}(\R^d,w;X))} = \|(2^{sk} f_k)_{k\geq 0}\|_{\ell^q(L^{p(r)}(\R^d,w;X))} < \infty \big \}.\end{aligned}$$
A maximal inequality {#sec:max-ineq}
--------------------
Let $M$ be the Hardy-Littlewood maximal operator introduced in Section \[sec:Mucken\]. The following extension of the Fefferman-Stein inequality to spaces with mixed norms is straightforward to prove.
\[lem:maxoperator\] Let $p,r \in (1, \infty)$, $q\in (1, \infty]$ and $w\in A_{r}(\R)$. Then $$\|(M f_n)_{n\geq 0}\|_{ L^{p(r)}(\R^{d},w;\ell^q)} \leq C \|(f_n)_{n\geq 0}\|_{ L^{p(r)}(\R^{d},w;\ell^q)}.$$
*Step 1*. Assume $1<q<\infty$. Let $M'$ denote the maximal operator with respect to $x'\in \R^{d-1}$ and $M''$ the maximal operator with respect to $t\in \R$. It is elementary to check that there is $C > 0$ such that the pointwise estimate $$M g \leq C M'' M' g, \qquad g\in L^1_{\text{loc}}(\R^{d}),$$ holds true. Now let $(f_n)_{n\geq 0}\in L^{p(r)}(\R^{d},w;\ell^q)$. For almost all fixed $x'$ we estimate, using the Fefferman-Stein inequality on $L^r(\R,w;\ell^q)$ with respect to $M''$, $$\begin{aligned}
\big \|\big(M f_n(x',\cdot)\big)_{n\geq 0}\big\|_{L^{r}(\R,w;\ell^q)}&\, \leq C \big\|\big(M''M' f_n(x',\cdot)\big)_{n\geq 0}\big\|_{L^{r}(\R,w;\ell^q)}\\
&\, \leq C \big\|\big(M' f_n(x',\cdot)\big)_{n\geq 0}\big\|_{L^{r}(\R,w;\ell^q)}.\end{aligned}$$ Applying the $L^{p}(\R^{d-1})$-norm, we find $$\label{bla}
\|(M f_n)_{n\geq 0}\|_{L^{p(r)}(\R^{d},w;\ell^q)}\leq C\|(M' f_n)_{n\geq 0}\|_{L^{p(r)}(\R^{d},w;\ell^q)}.$$ Now let $Y = L^{r}(\R,w;\ell^q)$. Then $Y$ is a UMD Banach lattice, see [@RF Proposition 3]. We may therefore apply the maximal inequality from [@RF Theorem 3] on $L^{p}(\R^{d-1}; Y) = L^{p(r)}(\R^{d},w;\ell^q)$ to obtain $$\|(M' f_n)_{n\geq 0}\|_{L^{p}(\R^{d-1}; Y)} \leq C \|(f_n)_{n\geq 0}\|_{L^{p}(\R^{d-1}; Y)}.$$ Combining this with gives the asserted maximal inequality.
*Step 2*. For the case $q=\infty$ we note that $$\|(M f_n)_{n\geq 0}\|_{L^{p(r)}(\R^{d},w;\ell^\infty)} \leq \|M g\|_{L^{p(r)}(\R^{d},w)},$$ where $g(x) = \sup_n|f_n(x)|$. Now one can argue as in Step 1 on $L^p(\R^{d-1};L^{r}(\R,w))$.
Embeddings of Jawerth-Franke type {#sec:JF}
---------------------------------
The following weighted Jawerth-Franke type embeddings for spaces over the real line are proved in [@MeyVer1 Theorem 6.4]. The striking point is the independence of the microscopic parameter $q$.
\[prop:JF\] Let $s_0> s_1$, $1<p_0<p_1<\infty$, $\gamma_0\in (-1,p_0-1)$ and $\gamma_1\in (-1,p_1-1)$. Assume $$\frac{\gamma_0}{p_0} \geq \frac{\gamma_1}{p_1}, \qquad s_0 - \frac{1+\gamma_0}{p_0} \geq s_1 - \frac{1+\gamma_1}{p_1}.$$ Then for $q\in [1,\infty]$ one has the continuous embeddings $$\label{jf-BF}
B_{p_0,p_1}^{s_0}(\R,w_{\gamma_0};X) \hookrightarrow F_{p_1,q}^{s_1}(\R,w_{\gamma_1};X),$$ $$\label{jf-FB}
F_{p_0,q}^{s_0}(\R,w_{\gamma_0};X) \hookrightarrow B_{p_1,p_0}^{s_1}(\R,w_{\gamma_1};X).$$
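For instance, in the unweighted case $\gamma_0=\gamma_1=0$ with $p_0=2$ and $p_1=4$ these embeddings read $$B_{2,4}^{s_0}(\R;X)\hookrightarrow F_{4,q}^{s_1}(\R;X), \qquad F_{2,q}^{s_0}(\R;X)\hookrightarrow B_{4,2}^{s_1}(\R;X), \qquad q\in [1,\infty],$$ whenever $s_0\geq s_1+\tfrac14$; the microscopic parameter $q$ on the $F$-side can be chosen arbitrarily.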
As in [@Franke86; @RS96], we need discrete versions of these embeddings on the spaces of entire analytic functions. We follow [@Franke86 Section 2.3], see also [@RS96 Section 2.6.3]. As a preparation we state the following elementary result on Fourier supports.
\[lem:Fourier-supp\] Let $f\in {{\mathscr S'}}(\R^d;X)$ be such that ${\text{\rm supp\,}}{\widehat}f\subseteq \{|\xi|\leq A\}$ for some $A > 0$. Denote by ${{\mathscr F}}_t$ the Fourier transform with respect to the last coordinate $t\in \R$. Then for each $x'\in \R^{d-1}$ we have ${\text{\rm supp\,}}{{\mathscr F}}_t(f(x',\cdot))\subseteq \{|\lambda|\leq A\}$.
We have the following extension of Proposition \[prop:JF\] to spaces of entire analytic functions; the two discrete embeddings below correspond to the two embeddings of Proposition \[prop:JF\], respectively. The corresponding results in the unweighted case can be found in [@Franke86 Theorem 2.4.1(IV)] and [@RS96 Theorem 2.6.3/3]. We argue as in [@Franke86], where the proof is only indicated.
\[prop:jf\_discrete\] Let $A > 0$, $s_0 > s_1$, $1<p_0<p_1< \infty$, $\gamma_0 \in (-1,p_0-1)$ and $\gamma_1\in (-1,p_1-1)$. Assume $$\frac{\gamma_0}{p_0} \geq \frac{\gamma_1}{p_1}, \qquad s_0 - \frac{1+\gamma_0}{p_0} \geq s_1 - \frac{1+\gamma_1}{p_1}.$$ Then for $q\in [1,\infty]$ one has the continuous embeddings $$\label{jf_discrete_BF}
\ell^{s_0,p_1}(L_A^{p_1(p_0)}(\R^d,w_{\gamma_0};X)) \hookrightarrow L^{p_1}_A(\R^d,w_{\gamma_1};\ell^{s_1,q}(X)),$$ $$\label{jf_discrete_FB}
L^{p_0}_A(\R^d,w_{\gamma_0};\ell^{s_0,q}(X)) \hookrightarrow \ell^{s_1,p_0}(L_A^{p_0(p_1)}(\R^{d},w_{\gamma_1};X)).$$
*Step 1.* Take the smallest integer $N$ such that $A\leq 2^N$ and set $\text{e}_k(t) = e^{\text{i}2^{N+3+k}t}$. For $s\in \R$, $p\in (1,\infty)$, $q\in [1,\infty]$, $\gamma\in (-1,p-1)$ and $(f_k)_{k\geq 0} \subset {{\mathscr S'}}(\R;X)$ with ${\text{\rm supp\,}}{\widehat}f_k \subset\{|\xi|\leq 2^{N+k}\}$ we claim that $$\label{eq:equiv-F}
\|(f_k)_{k\geq 0}\|_{L^{p}_A(\R,w_{\gamma};\ell^{s,q}(X))} \eqsim \Big \|\sum_{k\geq 0} \text{e}_k f_k\Big\|_{F_{p,q}^{s}(\R,w_{\gamma};X)},$$ $$\label{eq:equiv-B}
\|(f_k)_{k\geq 0}\|_{\ell^{s,q}(L_A^{p}(\R,w_{\gamma};X))} \eqsim \Big \|\sum_{k\geq 0} \text{e}_k f_k\Big\|_{B_{p,q}^{s}(\R,w_{\gamma};X)}.$$ Here $a\eqsim b$ means $C^{-1} a\leq b\leq Ca$. Let us prove the first equivalence; the proof of the second is similar. Observe that ${\text{\rm supp\,}}{{\mathscr F}}(\text{e}_kf_k) \subseteq \{7\cdot 2^{N+k} \leq |\xi| \leq 9 \cdot 2^{N+k}\}$. Let $(S_n)_{n\geq 0}$ be defined with respect to $(\varphi_n)_{n\geq 0}\in \Phi(\R)$. Then, for $n,k\geq 1$, $$S_n (\text{e}_kf_k) \neq 0 \qquad \text{only if }\;\; k+l_0 \leq n \leq k+l_1,$$ where $l_0,l_1\in \N$ are independent of $n$ and $k$. We use this, [@MeyVer1 Proposition 2.4] and that ${\widehat}{\varphi}_n = {\widehat}{\varphi}_1 (2^{-n+1}\cdot)$ to obtain (setting $f_k = 0$ for negative $k$) $$\begin{aligned}
\Big \|\sum_{k\geq 0} &\,\text{e}_k f_k\Big\|_{F_{p,q}^{s}(\R,w_{\gamma};X)} \leq \sum_{l= l_0}^{l_1} \big \| \big( S_{n} (\text{e}_{n-l} f_{n-l})\big)_{n\geq 0}\big\|_{L^{p}_A(\R,w_{\gamma};\ell^{s,q}(X))}\\
&\, \leq C\sum_{l= l_0}^{l_1} \sup_{n\geq 0} \| (1+|\cdot|^{2}) {{\mathscr F}}^{-1}\big( {\widehat}{\varphi}_n (2^{N+n+l+1}\cdot)\big)\|_{L^1(\R)} \big \| ( \text{e}_{n-l} f_{n-l})_{n\geq 0}\big\|_{L^{p}_A(\R,w_{\gamma};\ell^{s,q}(X))}\\
&\, \leq C \big \| ( f_{n})_{n\geq 0}\big\|_{L^{p}_A(\R,w_{\gamma};\ell^{s,q}(X))}.\end{aligned}$$ For the converse we note that the Fourier supports of the $\text{e}_kf_k$ are pairwise disjoint. Take a function $\psi\in {{\mathscr S}}(\R)$ such that ${\widehat}{\psi} \equiv 1$ on $ \{7\cdot 2^N \leq |\xi| \leq 9 \cdot 2^N \}$ and ${\widehat}{\psi} \equiv 0$ on $\{|\xi|\leq 6 \cdot 2^N\}\cup \{|\xi|\geq 10 \cdot 2^N\}$. Define $\psi_k$ by ${\widehat}{\psi}_k = {\widehat}{\psi}(2^{-k}\cdot)$. Using again [@MeyVer1 Proposition 2.4], we get $$\begin{aligned}
\|(f_k)_{k\geq 0} &\,\|_{L^{p}_A(\R,w_{\gamma};\ell^{s,q}(X))}\leq \sum_{l=l_0}^{l_1} \| \big (S_{k+l} (\text{e}_k f_k)\big)_{k\geq 0}\|_{L^{p}_A(\R,w_{\gamma};\ell^{s,q}(X))}\\
&\, = \sum_{l=l_0}^{l_1}\Big \| \Big( \psi_k* S_{k+l} \sum_{j\geq 0} \text{e}_j f_j\Big)_{k\geq 0}\Big\|_{L^{p}_A(\R,w_{\gamma};\ell^{s,q}(X))}\\
&\, \leq C\sum_{l=l_0}^{l_1} \sup_{k\geq 0} \big \| (1+|\cdot|^{2}) {{\mathscr F}}^{-1}\big( {\widehat}{\psi}_k (2^{N+k+1}\cdot)\big)\big \|_{L^1(\R)} \Big \| \Big( S_{k+l}\sum_{j\geq 0} \,\text{e}_j f_j\Big)_{k\geq 0}\Big\|_{L^{p}_A(\R,w_{\gamma};\ell^{s,q}(X))}\\
&\, \leq C \Big \|\sum_{j\geq 0} \text{e}_j f_j\Big\|_{F_{p,q}^{s}(\R,w_{\gamma};X)}.\end{aligned}$$
*Step 2.* To prove the first embedding, let $(f_k)_{k\geq 0} \in \ell^{s_0,p_1}(L_A^{p_1(p_0)}(\R^d,w_{\gamma_0};X))$. Then ${\text{\rm supp\,}}{{\mathscr F}}_t (f_k(x',\cdot)) \subseteq \{|\lambda|\leq A 2^k\}$ for $x'\in \R^{d-1}$ and each $k$ by Lemma \[lem:Fourier-supp\], where ${{\mathscr F}}_t$ is the Fourier transform with respect to $t\in \R$. We may thus use the equivalences from Step 1 together with Proposition \[prop:JF\] to estimate $$\begin{aligned}
\|(f_k)_{k\geq 0}\|_{L^{p_1}_A(\R^d,w_{\gamma_1};\ell^{s_1,q}(X))}^{p_1} &\, = \int_{\R^{d-1}}\| (f_k(x',\cdot))_{k\geq 0}\|_{L^{p_1}_A(\R,w_{\gamma_1};\ell^{s_1,q}(X))}^{p_1}\, dx'\\
&\, \leq C \int_{\R^{d-1}} \Big\|\sum_{k\geq 0} \text{e}_k f_k(x',\cdot)\Big\|_{F_{p_1,q}^{s_1}(\R,w_{\gamma_1};X)}^{p_1} \, dx'\\
&\, \leq C \int_{\R^{d-1}} \Big\|\sum_{k\geq 0} \text{e}_k f_k(x',\cdot)\Big\|_{B_{p_0,p_1}^{s_0}(\R,w_{\gamma_0};X)}^{p_1} \, dx'\\
&\, \leq C \,\int_{\R^{d-1}} \| (\| f_k(x',\cdot)\|_{L^{p_0}(\R,w_{\gamma_0};X)})_{k\geq 0} \|_{\ell^{s_0,p_1}}^{p_1} \, dx'\\
&\, = C \|(f_k)_{k\geq 0}\|_{\ell^{s_0,p_1}(L_A^{p_1(p_0)}(\R^d,w_{\gamma_0};X))}^{p_1}.\end{aligned}$$ The derivation of the second embedding is analogous.
Convergence criteria for series {#subsec:criteria}
-------------------------------
The following result provides sufficient conditions for the convergence of series in weighted mixed-norm spaces of entire analytic functions. We refer to [@RS96 Section 2.3.2] and [@JS08 Section 3.6] for the unweighted cases.
\[para2\] Let $p,p_0,p_1\in (1,\infty)$, $q\in (1,\infty]$, $w\in A_p(\R^d)$ and $w_1\in A_{p_1}(\R)$, where $w_1$ is understood to depend on the last coordinate $t\in \R$. Suppose that for some $k_0\in \N$ the sequence $(f_k)_{k\geq 0}\subset {{\mathscr S'}}(\R^d;X)$ and $s\in \R$ satisfy $$\label{6000} \text{either}\quad s\in \R\quad\text{and}\quad {\text{\rm supp\,}}{\widehat}{f_0} \subset \{|\xi|\leq 2^{k_0}\},\quad {\text{\rm supp\,}}{\widehat}{f_k} \subset \{2^{k-k_0} \leq |\xi|\leq 2^{k+k_0}\}\,;$$ $$\label{6001} \text{or}\quad s>0 \quad\text{and}\quad{\text{\rm supp\,}}{\widehat}{f_k} \subset \{|\xi|\leq 2^{k+k_0}\}\;.$$ Then the following holds true. If $(2^{sk} f_k)_{k\geq 0} \in L^{p_0(p_1)}(\R^d,w_1; \ell^q(X))$, then $f = \sum_{k=0}^\infty f_k$ converges in ${{\mathscr S'}}(\R^d;X)$ and $$\label{6005}
\|(2^{sn}S_nf)_{n\geq 0}\|_{L^{p_0(p_1)}(\R^d,w_1; \ell^q(X))} \leq C \|(2^{sk} f_k)_{k\geq 0}\|_{L^{p_0(p_1)}(\R^d,w_1; \ell^q(X))}.$$ In the same sense we have the estimates $$\label{6010}
\|(2^{sn}S_nf)_{n\geq 0}\|_{\ell^q(L^{p_0(p_1)}(\R^d,w_1; X))} \leq C \|(2^{sk} f_k)_{k\geq 0}\|_{\ell^q(L^{p_0(p_1)}(\R^d, w_1; X))},$$ $$\label{6011}
\|f\|_{F_{p,q}^s(\R^d,w;X)} \leq C \|(2^{sk} f_k)_{k\geq 0}\|_{L^p(\R^d, w; \ell^q(X))},$$ $$\label{6012}
\|f\|_{B_{p,q}^s(\R^d,w;X)} \leq C \|(2^{sk} f_k)_{k\geq 0}\|_{\ell^q(L^p(\R^d, w;X))}.$$ Assuming , all assertions hold true also for $q = 1$ and $A_\infty$-weights. Assuming , the estimates and hold true also for $q = 1$.
*Step 1.* First assume $q \in (1,\infty]$ and that the weights are in $A_{p}$ and $A_{p_1}$, respectively. Throughout we set $f_k = 0$ for $k < 0$. Suppose that is satisfied. We show the convergence of the series and the estimate .
Fix $N\in \N$. Then for each $n$ the support condition for the ${\widehat}f_k$ implies $$S_n \sum_{k=0}^N f_k = S_n \sum_{k=n-k_0}^N f_k\quad \text{if }n \leq N+k_0, \qquad S_n \sum_{k=0}^N f_k = 0\quad \text{if }n > N+k_0.$$ Therefore, since $s> 0$, $$\begin{aligned}
\Big\|\Big (2^{sn}S_n\sum_{k=0}^N f_k\Big)_{n\geq 0}\Big\|_{L^{p_0(p_1)}(\R^d,w_1; \ell^q(X))} &\,= \Big \| \Big(2^{sn}S_n \sum_{l=-k_0}^{N-n} f_{n+l} \Big)_{n\geq 0} \Big \|_{L^{p_0(p_1)}(\R^d,w_1;\ell^q(X))}\nonumber\\
&\,\leq \sum_{l=-k_0}^\infty 2^{-sl}\Big \| \Big(2^{s(n+l)}S_n f_{n+l} \Big)_{n\geq 0} \Big \|_{L^{p_0(p_1)}(\R^d,w_1;\ell^q(X))}\label{6003}\\
&\, \leq C\,\sup_{l\geq -k_0}\Big \| \Big(2^{s(n+l)}S_n f_{n+l} \Big)_{n\geq 0} \Big \|_{L^{p_0(p_1)}(\R^d,w_1;\ell^q(X))}, \label{6999}\end{aligned}$$ where we set $\sum_{l=-k_0}^{N-n}$ equal to zero whenever $N-n<-k_0$.
To estimate the right-hand side of , define $\psi_n$ by $\psi_n(x) = \sup_{|y|\geq |x|} |\varphi_n(y)|$ and set $g_{n+l} = 2^{s(n+l)}\|f_{n+l}\|$. Applying [@GraClass Theorem 2.1.10] we find that for every $n\geq 0$, $$\begin{aligned}
\|2^{s(n+l)} S_n f_{n+l}(x)\| \leq \|\psi_n\|_{L^1(\R^d)} Mg_{n+l}(x) \leq C Mg_{n+l}(x),\end{aligned}$$ where $M$ is the Hardy-Littlewood maximal operator. Lemma \[lem:maxoperator\] gives $$\begin{aligned}
\big\| \big(2^{s(n+l)}S_n f_{n+l} \big)_{n\geq 0} \big\|_{L^{p_0(p_1)}(\R^d,w_1;\ell^q(X))} & \leq C \big\| (M g_{n+l} )_{n\geq 0} \big \|_{L^{p_0(p_1)}(\R^d,w_1;\ell^q)}
\\ & \leq C \big\| (g_{n+l})_{n\geq 0} \big\|_{L^{p_0(p_1)}(\R^d,w_1;\ell^q)}
\\ & = C\big\| \big(2^{s(n+l)}f_{n+l} \big)_{n\geq 0} \big \|_{L^{p_0(p_1)}(\R^d,w_1;\ell^q(X))}.\end{aligned}$$ Combining this estimate with , we obtain $$\label{abc1}
\Big\|\Big (2^{sn}S_n\sum_{k=0}^N f_k\Big)_{n\geq 0}\Big\|_{L^{p_0(p_1)}(\R^d,w_1; \ell^q(X))} \leq C \big\| (2^{sk} f_{k})_{k\geq 0} \big\|_{L^{p_0(p_1)}(\R^d,w_1;\ell^q(X))},$$ with a constant $C$ independent of $N$. Now set $$\|g\|_{F_{p_0(p_1),q}^s(\R^d,w_1;X)} = \big\|\big (2^{sn}S_n g\big)_{n\geq 0}\big\|_{L^{p_0(p_1)}(\R^d,w_1; \ell^q(X))}.$$ This defines a complete space of distributions, which embeds continuously into ${{\mathscr S'}}(\R^d;X)$ (see the proofs of [@Tri83 Theorem 2.3.3] and [@JS07 Proposition 10], and use [@MeyVer1 Lemma 4.5]). It follows from the above estimate that $(\sum_{k=0}^N f_k)_{N\geq 0}$ is a Cauchy sequence in $F_{p_0(p_1),1}^{s-\varepsilon}(\R^d,w_1;X)$ for $\varepsilon> 0$. Hence it converges in ${{\mathscr S'}}(\R^d;X)$. A Fatou argument applied to this estimate then yields the asserted bound.
The other estimates can be derived in a similar way. In case when the Fourier supports satisfy , the sum $\sum_{l=-k_0}^\infty$ in can be replaced by $\sum_{l=-k_0}^{k_0}$. Then the restriction on $s$ is not necessary.
*Step 2.* Consider the case $q = 1$. Assume . Then and can be shown as before, where instead of Lemma \[lem:maxoperator\] it suffices to use the boundedness of $M$ on $L^{p_1}(\R,w_1)$ and on $L^p(\R^d,w)$, respectively.
Assume and $w_1\in A_\infty$. We prove , the arguments for the other estimates are similar. Arguing as before, we get $$\Big\|\Big (2^{sn}S_n\sum_{k=0}^N f_k\Big)_{n\geq 0}\Big\|_{L^{p_0(p_1)}(\R^d,w_1; \ell^1(X))} \leq C\,\sum_{|l|\leq k_0}\Big \| \Big(2^{s(n+l)}S_n f_{n+l} \Big)_{n\geq 0} \Big \|_{L^{p_0(p_1)}(\R^d,w_1;\ell^1(X))}.$$ Choose $r\in (0,1)$ such that $w_1\in A_{p_1/r}(\R)$. For $x\in \R^d$ we have $$\begin{aligned}
\|S_n f_{n+l}(x)\| \leq \sup_{z\in \R^d} \frac{\|f_{n+l}(x-z)\|}{1+ |2^nz|^{d/r}} \int_{\R^d} (1+ |2^ny|^{d/r}) |\varphi_n(y)|\,d y.\end{aligned}$$ Here the second factor is bounded independently of $n$ since $\varphi_n = 2^{(n-1)d} \varphi_1(2^{n-1}\cdot)$. The diameter of the Fourier support of $f_{n+l}$ is comparable to $2^n$. We thus obtain from the proof of [@Tri83 Theorem 1.6.2] that $$\begin{aligned}
2^{s(n+l)} \|S_n f_{n+l}(x)\| &\, \leq C 2^{s(n+l)} \sup_{z\in \R^d} \frac{\|f_{n+l}(x-z)\|}{1+ |2^nz|^{d/r}} \\
&\, \leq C 2^{s(n+l)}(M\|f_{n+l}\|^r(x))^{1/r} = C (Mg_{n+l}^r(x))^{1/r},\end{aligned}$$ where as above $g_{n+l} = 2^{s(n+l)}\|f_{n+l}\|$. Since $1/r> 1$ and $w_1\in A_{p_1/r}(\R)$, we can use Lemma \[lem:maxoperator\] to estimate $$\begin{aligned}
\Big \| \Big(2^{s(n+l)}S_n f_{n+l} \Big)_{n\geq 0} \Big \|_{L^{p_0(p_1)}(\R^d,w_1;\ell^1(X))} &\leq
C \big\|(Mg_{n+l}^r)_{n\geq 0}\big\|_{L^{p_0/r(p_1/r)}(\R^d,w_1;\ell^{1/r})}^{1/r}
\\ & \leq C\big\|(g_{n+l}^r)_{n\geq 0}\big\|_{L^{p_0/r(p_1/r)}(\R^d,w_1;\ell^{1/r})}^{1/r}
\\ & = C\big \|(2^{s(n+l)} f_{n+l})_{n\geq 0}\big\|_{L^{p_0(p_1)}(\R^d,w_1;\ell^{1}(X))}.\end{aligned}$$ Now the proof can be finished as before.
We do not know how to prove the two remaining estimates (those with inner $\ell^q$-norm) under the second support condition for $q = 1$ and $A_{\infty}$-weights. The above argument does not work since the supports of the ${\widehat}f_{n}$ are too large.
[10]{}
H. Amann. Linear and quasilinear parabolic problems. Vol. I: Abstract linear theory. Birkhäuser, Basel, 1995.
K.F. Andersen and R.T. John. Weighted inequalities for vector-valued maximal functions and singular integrals. , 69(1):19–31, 1980/81.
J. Bergh and J. L[ö]{}fstr[ö]{}m. Interpolation spaces. An introduction. Springer-Verlag, Berlin, 1976.
J.-M. [Bony]{}. Calcul symbolique et propagation des singularités pour les équations aux dérivées partielles non linéaires. Ann. Sci. École Norm. Sup. (4), 14:209–246, 1981.
J. Bourgain. Some remarks on [B]{}anach spaces in which martingale difference sequences are unconditional. , 21(2):163–168, 1983.
J. Bourgain. Vector-valued singular integrals and the [$H\sp 1$]{}-[BMO]{} duality. In [*Probability theory and harmonic analysis (Cleveland, Ohio, 1983)*]{}, volume 98 of [*Monogr. Textbooks Pure Appl. Math.*]{}, pages 1–19. Dekker, New York, 1986.
H.-Q. Bui. Weighted [B]{}esov and [T]{}riebel spaces: interpolation by the real method. , 12(3):581–605, 1982.
D.L. Burkholder. Martingales and [F]{}ourier analysis in [B]{}anach spaces. In [*Probability and analysis (Varenna, 1985)*]{}, volume 1206 of [*Lecture Notes in Math.*]{}, pages 61–108. Springer, Berlin, 1986.
D.L. Burkholder. Martingales and singular integrals in [B]{}anach spaces. In [*Handbook of the geometry of Banach spaces, Vol. I*]{}, pages 233–269. North-Holland, Amsterdam, 2001.
A. P. Calderón. , 4:33–49, 1961.
R. Denk, M. Hieber, and J. Pr[ü]{}ss. $\mathcal R$-boundedness, [F]{}ourier multipliers and problems of elliptic and parabolic type. Mem. Amer. Math. Soc., 166(788), 2003.
J. Diestel, H. Jarchow, and A. Tonge. Absolutely summing operators, volume 43 of [*Cambridge Studies in Advanced Mathematics*]{}. Cambridge University Press, Cambridge, 1995.
J. Diestel and J. J. Uhl, Jr. . American Mathematical Society, Providence, R.I., 1977.
J. Franke. On the spaces [${\bf F}_{pq}^s$]{} of [T]{}riebel-[L]{}izorkin type: pointwise multipliers and spaces on domains. , 125:29–68, 1986.
J. Garc[í]{}a-Cuerva and J.L. Rubio de Francia. , volume 116 of [*North-Holland Mathematics Studies*]{}. North-Holland Publishing Co., Amsterdam, 1985.
M. Girardi and L. Weis. Operator-valued [F]{}ourier multiplier theorems on [$L_p(X)$]{} and geometry of [B]{}anach spaces. , 204(2):320–354, 2003.
L. Grafakos. , volume 249 of [*Graduate Texts in Mathematics*]{}. Springer, New York, second edition, 2008.
L. Grafakos. , volume 250 of [*Graduate Texts in Mathematics*]{}. Springer, New York, second edition, 2009.
P. Grisvard. , 17:255–296, 1963.
R. Haller, H. Heck, and A. Noll. Mikhlin’s theorem for operator-valued [F]{}ourier multipliers in [$n$]{} variables. , 244:110–130, 2002.
Y.-S. Han and Y. Meyer. A characterization of [H]{}ilbert spaces and the vector-valued [L]{}ittlewood-[P]{}aley theorem. , 3(2):228–234, 1996.
T. S. Hänninen and T. P. Hytönen. The [$A_2$]{} theorem and the local oscillation decomposition for [Banach]{} space valued functions. , to appear. Preprint, arXiv:1210.6236.
D.D. Haroske and L. Skrzypczak. Entropy and approximation numbers of embeddings of function spaces with [M]{}uckenhoupt weights. [I]{}. , 21(1):135–177, 2008.
T. Hyt[ö]{}nen. An operator-valued [$Tb$]{} theorem. , 234(2):420–463, 2006.
T. Hyt[ö]{}nen and M.C. Veraar. -boundedness of smooth operator-valued functions. , 63(3):373–402, 2009.
T. Hyt[ö]{}nen and L. Weis. A [$T1$]{} theorem for integral transformations with operator-valued kernel. , 599:155–200, 2006.
T. [Hytönen]{} and L. [Weis]{}. , 16(4):495–513, 2010.
J. Johnsen and W. Sickel. , 5(2):183–198, 2007.
J. Johnsen and W. Sickel. On the trace problem for [L]{}izorkin-[T]{}riebel spaces with mixed norms. , 281(5):669–696, 2008.
P.C. Kunstmann and L. Weis. Maximal [$L\sb p$]{}-regularity for parabolic equations, [F]{}ourier multiplier theorems and [$H\sp \infty$]{}-functional calculus. In [*Functional analytic methods for evolution equations*]{}, volume 1855 of [*Lecture Notes in Math.*]{}, pages 65–311. Springer, Berlin, 2004.
M. Ledoux and M. Talagrand. , volume 23 of [*Ergebnisse der Mathematik und ihrer Grenzgebiete*]{}. Springer-Verlag, Berlin, 1991.
J.-L. Lions and E. Magenes. Problèmes aux limites non homogènes. [IV]{}. , 15:311–326, 1961.
J. [Marschall]{}. , 87:79–92, 1987.
T.R. McConnell. On [F]{}ourier multiplier transformations of [B]{}anach-valued functions. , 285(2):739–757, 1984.
M. Meyries and M.C. Veraar. . In preparation.
M. Meyries and M.C. Veraar. Sharp embedding results for spaces of smooth functions with power weights. , 208(3):257–293, 2012.
M. Meyries and M.C. Veraar. Traces and embeddings of anisotropic function spaces. Online first in [*Math. Ann.*]{} 2014.
P.F.X. M[ü]{}ller and M. Passenbrunner. A decomposition theorem for singular integral operators on spaces of homogeneous type. , 262(4):1427–1465, 2012.
J. [Peetre]{}. Duke Univ. Math. Series, Duke Univ., Durham, 1976.
J.L. Rubio de Francia. Martingale and integral transforms of [B]{}anach space valued functions. In [*Probability and Banach spaces (Zaragoza, 1985)*]{}, volume 1221 of [*Lecture Notes in Math.*]{}, pages 195–222. Springer, Berlin, 1986.
T. Runst and W. Sickel. , 1996.
V.S. Rychkov. Littlewood-[P]{}aley theory and function spaces with [$A^{\rm loc}_p$]{} weights. , 224:145–180, 2001.
B. [Scharf]{}, H.-J. [Schmeisser]{}, and W. [Sickel]{}. , 285(8-9):1082–1106, 2012.
H.-J. Schmeisser and W. Sickel. . Jena manuscript, 2004.
H.-J. Schmeisser and W. Sickel. Vector-valued [S]{}obolev spaces and [G]{}agliardo-[N]{}irenberg inequalities. In [*Nonlinear elliptic and parabolic problems*]{}, volume 64 of [ *Progr. Nonlinear Differential Equations Appl.*]{}, pages 463–472. Birkhäuser, Basel, 2005.
R. Seeley. Interpolation in [$L\sp{p}$]{} with boundary conditions. , 44:47–60, 1972.
E. [Shamir]{}. , 255:448–449, 1962.
W. [Sickel]{}.
W. [Sickel]{}. , 176:209–250, 1999.
W. [Sickel]{}. In [*[The Maz’ya anniversary collection. Vol. 2: Rostock conference on functional analysis, partial differential equations and applications, Rostock, Germany, August 31–September 4, 1998]{}*]{}, pages 295–321. Basel: Birkhäuser, 1999.
E.M. Stein. , volume 43 of [*Princeton Mathematical Series, Monographs in Harmonic Analysis, III*]{}. Princeton University Press, Princeton, NJ, 1993.
R.S. Strichartz. Multipliers on fractional [S]{}obolev spaces. , 16:1031–1060, 1967.
. [Š]{}trkalj and L. Weis. On operator-valued [F]{}ourier multiplier theorems. , 359(8):3529–3547 (electronic), 2007.
H. Triebel. , volume 78 of [*Monographs in Mathematics*]{}. Birkhäuser Verlag, Basel, 1983.
H. Triebel. . Johann Ambrosius Barth, Heidelberg, second edition, 1995.
H. Triebel. , volume 97 of [*Monographs in Mathematics*]{}. Birkhäuser Verlag, Basel, 2001.
H. [Triebel]{}. , 15(2):475–524, 2002.
H. Triebel. , volume 100 of [*Monographs in Mathematics*]{}. Birkhäuser Verlag, Basel, 2006.
M.C. Veraar. Embedding results for $\gamma$-spaces. In [*Recent Trends in Analysis: proceedings of the conference in honor of Nikolai Nikolski (Bordeaux, 2011)*]{}, [Theta series in Advanced Mathematics]{}, pages 209–220. [The Theta Foundation]{}, Bucharest, 2013.
Ch. Walker. PhD thesis, University of Z[ü]{}rich, 2003.
L. Weis. Operator-valued [F]{}ourier multiplier theorems and maximal [$L\sb
p$]{}-regularity. , 319(4):735–758, 2001.
F. Zimmermann. On vector-valued [F]{}ourier multiplier theorems. , 93(3):201–222, 1989.
[^1]: The first author was partially supported by the Deutsche Forschungsgemeinschaft (DFG)
|
---
abstract: 'Recent work has shown that a simple chain of interacting spins can be used as a medium for high-fidelity quantum communication. We describe a scheme for quantum communication using a spin system that conserves $z$-spin, but otherwise is arbitrary. The sender and receiver are assumed to directly control several spins each, with the sender encoding the message state onto the larger state-space of her control spins. We show how to find the encoding that maximises the fidelity of communication, using a simple method based on the singular-value decomposition. Also, we show that this solution can be used to increase communication fidelity in a rather different circumstance: where no encoding of initial states is used, but where the sender and receiver control exactly two spins each and vary the interactions on those spins over time. The methods presented are computationally efficient, and numerical examples are given for systems having up to 300 spins.'
author:
- 'Henry L. Haselgrove'
title: Optimal state encoding for quantum walks and quantum communication over spin systems
---
Introduction
============
Quantum communication, the transfer of a quantum state from one place or object to another, is an important task in quantum information science [@DiVincenzo00a]. The problem of communicating quantum information is profoundly different to the classical case [@Preskill98c; @Nielsen00a]. For example, quantum communication could not possibly be achieved by just measuring an unknown state in one place and reconstructing it in another. Rather, an entire system of source, target, and medium must evolve in a way that maintains quantum coherence.
In this paper we consider an idealised system of interacting spin-1/2 objects, isolated from the environment. The aim is to use the system’s natural evolution to communicate a qubit state from one part of the system to another. The motivation is that such a system could be used as a simple “quantum wire” in future quantum information-processing devices. The most obvious configuration to choose is a simple one-dimensional open-ended chain, with interactions between nearest-neighbour spins, in which case we want the chain’s evolution to transfer a qubit state from one end to the other. The methods in this paper apply to this simple type of chain, and also to spin networks of arbitrary graph.
A number of interesting proposals exist for quantum communication through spin chains. In [@Bose02a], the 1D Heisenberg chain was considered, with coupling strengths constant over the length of the chain and with time. The idea was to initialise all spins in the “down” state, except the first spin, which was given the state of the qubit to be sent. After the system was allowed to evolve, the spin at the far end of the chain would then contain the sent state, to some level of fidelity. Simulations were carried out for a range of chain lengths, and it was shown that the fidelity was high only for very small chains.
In [@Christandl03a], a 1D spin chain with $XY$ couplings was considered. Here, the coupling strengths were constant over time, but were made to vary over the length of the chain in a specific way. Like [@Bose02a], the first spin was initialised in the state to be sent, with all other spins initialised to “down”. It was shown that this scheme allows a [*perfect*]{} state transfer to the far spin site, for any length of chain.
In [@Osborne03b], a scheme was presented for high-fidelity quantum communication over a [*ring*]{} of spins with nearest-neighbour Heisenberg couplings, using coupling strengths constant over the length of the ring and over time. The sender and receiver are located diametrically opposite to one another. The authors showed that excitations travel around the ring in a way that can be described using a concept from classical wave theory, the dispersion relation. Using this insight, they constructed a scheme where the sender, who controls several adjacent spins, constructs an initial state that is a Gaussian pulse having a particular group velocity chosen to minimise the broadening of the pulse over time. Using this state for the encoding of the $|1\rangle$ basis of the qubit message, and the all-down state as the encoding of $|0\rangle$, an arbitrary qubit can be sent with high fidelity over rings of any size, so long as the number of spins that the sender controls is at least the cube root of the total number.
Motivated by the results in [@Osborne03b], we pose the following problem. Say we are given the Hamiltonian for a system of interacting spins, where the graph of the interactions is not necessarily a ring structure, but is completely arbitrary. Also, the strength and type of interaction along each graph edge is arbitrary (so long as total $z$ spin is conserved). The sender Alice controls some given subset of the spins, and the receiver Bob controls some other given subset. How does Alice encode the qubit to be sent onto the spins she controls, in order to maximise the fidelity of communication? We know from [@Osborne03b] that the Gaussian pulse provides a near-optimal fidelity for the case of a Heisenberg ring (and is optimal in the limit of large ring sizes). What about other shapes of spin network? Can we find a general solution?
We provide a simple and efficient method for finding the maximum-fidelity encoding of the $|1\rangle$ message basis state, for a general $z$-spin-conserving spin system. (We assume that the encoding for the $|0\rangle$ basis state is fixed to the all-down state). So, unlike the schemes in [@Bose02a], [@Christandl03a], and [@Osborne03b], which use systems with interactions that have specific strengths and conform to a specific graph, our scheme is designed to “make the most” of whatever arbitrary system is given to us. We give a numerical example of our method, for a system of 300 spins (where Alice and Bob each control 20 spins), showing a near-perfect average fidelity.
We give a second scheme for increasing fidelity, that does not use encoding of initial states, but relies on Alice and Bob dynamically controlling the interactions on their control spins. Here, the number of control spins is fixed at two each for Alice and Bob. We give a straightforward method for deriving control functions, that give a fidelity (and communication time) equal to the values that would result if Alice and Bob had instead each controlled many more spins (with static interactions) and used the optimal initial-state encoding scheme. This method has the combined benefits of being applicable to arbitrary $z$-spin-conserving spin-chains, yet having a fixed two-spin “interface” with Alice and Bob. We give numerical examples, and plot the derived control functions, for a 104-spin and a 29-spin system, showing a near-perfect fidelity in each case.
In the remainder of this introductory section, we briefly describe the assumptions behind our schemes, and define our notation. Sec. \[firstsec\] describes our method of deriving the optimal message encoding. Sec. \[secondsec\] describes our scheme for increasing fidelity via dynamic control. Concluding remarks are made in Sec. \[conclusion\].
Assumptions and notation
------------------------
The solution presented in this paper relies on two main assumptions, which we now list. Firstly, the system Hamiltonian must commute with ${Z^{\mathrm{tot}}}$, which we define to be the $z$-component of the total spin operator $${\vec{\sigma}^\mathrm{tot} }\equiv \left({X^{\mathrm{tot}}},{Y^{\mathrm{tot}}},{Z^{\mathrm{tot}}}\right) \equiv \sum_j
\vec{\sigma}_j,$$ where $\vec{\sigma}_j$ is the vector of Pauli operators $(\sigma^x,\sigma^y,\sigma^z)$ acting on the $j$-th spin. The Pauli operators in the basis “down” $|\downarrow\rangle$ and “up” $|\uparrow\rangle$ are $$\sigma^x=\left[\begin{array}{rr} 0&1\\1&\phantom{-}0
\end{array} \right];
\quad
\sigma^y=\left[\begin{array}{rr} 0&-i\\i&0 \end{array}\right]
;\quad
\sigma^z=\left[\begin{array}{rr} 1&0\\0&-1 \end{array}
\right].$$ Secondly, the spin system must be initialised to the all-down state $|\downarrow\rangle \otimes \dots \otimes
|\downarrow\rangle$, before the communication is carried out. Note that the schemes in [@Bose02a],[@Christandl03a] and [@Osborne03b] also make use of these two assumptions. It could be argued that the first condition is reasonable because it follows from rotational invariance. Of course, any external magnetic field will destroy this invariance, and in particular any magnetic field which is not in the $z$ direction will mean that the $z$-component of total spin is no longer conserved. The Heisenberg and $XY$ interactions are examples of interactions that conserve ${Z^{\mathrm{tot}}}$. The second constraint might be rather difficult to achieve in practice. One possibility would be to apply a strong polarising magnetic field in the $z$ direction, over the entire system, and let the system relax to its ground state.
In the remainder of the paper, in place of the notation $|\downarrow\rangle$ and $|\uparrow\rangle$ for the eigenstates of $\sigma^z$, we will use the equivalent but more convenient notation $|0\rangle$ and $|1\rangle$. A [*computational basis state*]{} of the system is defined to be one where each spin is in either a $|0\rangle$ state or a $|1\rangle$ state. Note that the computational basis states are all eigenstates of ${Z^{\mathrm{tot}}}$, and the eigenvalue has one of $N+1$ possible values, given by the number of $|0\rangle$s minus the number of $|1\rangle$s. So in a system of $N$ spin-1/2 objects, we can break the state space into $N+1$ subspaces of different well-defined $z$-component of total spin. We use ${\mathcal{H}}^{(n)}$, $n=0,\dots,N$, to denote these subspaces. ${\mathcal{H}}^{(n)}$ is the eigenspace of ${Z^{\mathrm{tot}}}$ that is spanned by the $N \choose n$ computational basis states that have $n$ qubits in the $|1\rangle$ state and the rest in the $|0\rangle$ state.
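For concreteness, a basis of ${\mathcal{H}}^{(n)}$ can be enumerated by choosing which $n$ spins are in the $|1\rangle$ state. The following minimal Python sketch (our own illustration, not taken from the paper) represents each computational basis state as a bit mask and checks the dimension count:

```python
from itertools import combinations
from math import comb

def basis_Hn(N, n):
    """Computational basis of H^(n): bit j set in the mask means spin j is in |1>."""
    return [sum(1 << j for j in up_sites) for up_sites in combinations(range(N), n)]

assert len(basis_Hn(6, 2)) == comb(6, 2)   # dim H^(n) = N choose n
```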
Since the system Hamiltonian $H$ commutes with ${Z^{\mathrm{tot}}}$, a state in ${\mathcal{H}}^{(n)}$ will remain in ${\mathcal{H}}^{(n)}$ under the evolution of $H$. ${\mathcal{H}}^{(0)}$ is one-dimensional; it is spanned by the all-zero state $|0\rangle \otimes $…$\otimes|0\rangle$. So this state is a stationary state of $H$.
The optimal encoding scheme {#firstsec}
===========================
Say that Alice wishes to send the qubit state $\alpha|0\rangle +
\beta |1\rangle$. In our scheme, she does so by preparing the state $\alpha |\mathbf{0}\rangle_A + \beta |1_{ENC}\rangle_A$, where $|\mathbf{0}\rangle_A$ is the all-zero state on her spins, and $|1_{ENC}\rangle_A$ is some state orthogonal to $|\mathbf{0}\rangle_A$ (the “[*ENC*]{}” stands for “encoded”). (Note that Alice doesn’t necessarily know $\alpha$ and $\beta$. She would presumably prepare the state by some unitary operation acting on her spins and some external spin containing the state $\alpha|0\rangle+\beta|1\rangle$.) We assume that the entire spin chain is initialised to the all-zero state, so immediately after Alice prepares the abovementioned state on her spins, the state of the whole system is $$|\Psi(0)\rangle \equiv (\alpha |\mathbf{0}\rangle_A + \beta
|1_{ENC}\rangle_A) \otimes |\mathbf{0}\rangle_{\bar{A}},
{\label{init}}$$ where $\bar{A}$ refers to all spins that Alice does not control. The whole spin system is allowed to evolve for a time $T$, giving the state $|\Psi(T)\rangle=e^{-iHT}|\Psi(0)\rangle$. Using the fact that $|0\rangle\otimes\dots\otimes|0\rangle$ is a stationary state, $|\Psi(T)\rangle$ can be written (up to some global phase) as $$\begin{aligned}
|\Psi(T)\rangle &=& \beta
\sqrt{1-{\mathcal{C}}_B(T)}|\eta(T)\rangle + \nonumber\\
&&
\hspace{0.5cm}|\mathbf{0}\rangle_{\bar{B}}(\alpha|\mathbf{0}\rangle_B+\beta\sqrt{{\mathcal{C}}_B(T)}|\gamma(T)\rangle_B)
{\label{fin}} ,\end{aligned}$$ for some nonnegative ${\mathcal{C}}_B(T)$, some normalised $|\gamma(T)\rangle_{B}$ orthogonal to $|\mathbf{0}\rangle_B$, and for some normalised $|\eta(T)\rangle$ that is orthogonal to all states of the form $|0\rangle_{\bar{B}}\otimes|v\rangle_B$.
We now show that ${\mathcal{C}}_B(T)$ can be used as a measure of success. Comparing Eqs. (\[init\]) and (\[fin\]), we see that ${\mathcal{C}}_B(0)=0$. If ${\mathcal{C}}_B(T)$ reaches 1 for some later $T$, a perfect-fidelity quantum communication has resulted. This is because Bob will then have the state $\alpha|\mathbf{0}\rangle_B+\beta|\gamma\rangle_B$ on the qubits he controls, which can be “decoded” by a unitary operation into the state $\alpha|0\rangle + \beta |1\rangle$ of a single spin, since $|\mathbf{0}\rangle_B$ and $|\gamma\rangle_B$ are orthogonal. If ${\mathcal{C}}_B(T)$ is less than $1$, the unitary decoding by Bob will leave him with a qubit state $\rho$ that is generally different to the message state. That is, the measure of state fidelity $F\equiv( \alpha|0\rangle + \beta |1\rangle )^\dagger
\rho ( \alpha|0\rangle + \beta |1\rangle )$ between the message $\alpha|0\rangle +\beta|1\rangle$ and $\rho$, will generally be less than one whenever ${\mathcal{C}}_B(T)<1$. However, the value of $F$ is highly dependent on the message state — for example, if $\alpha=1$ then $F=1$ regardless of the value of ${\mathcal{C}}_B(T)$.
${\mathcal{C}}_B(T)$, on the other hand, is a message-independent measure of the fidelity of communication. Consider $\bar{F}$, defined to be the state fidelity $F$ averaged over all message states. For encodings $|1_{ENC}\rangle$ that belong to the ${\mathcal{H}}^{(1)}$ subspace, we have $$\bar{F}=\frac{1}{2}+\frac{1}{3}\sqrt{{\mathcal{C}}_B(T)} +
\frac{1}{6}{\mathcal{C}}_B(T), {\label{fbar}}$$ which is a monotonic function of ${\mathcal{C}}_B(T)$ [@Osborne03b]. So, in this case maximising the [*average*]{} state fidelity is equivalent to maximising ${\mathcal{C}}_B(T)$. More generally, for $|1_{ENC}\rangle$ not in ${\mathcal{H}}^{(1)}$, the expression in [Eq. (\[fbar\])]{} provides a reasonably tight lower bound on $\bar{F}$: $$\begin{aligned}
\frac{1}{2}+\frac{1}{3}\sqrt{{\mathcal{C}}_B(T)} + \frac{1}{6}{\mathcal{C}}_B(T) \leq
\bar{F} &\leq& \frac{1}{2}+\frac{1}{3}\sqrt{{\mathcal{C}}_B(T)} + \frac{1}{6}
\nonumber\\
&=&\frac{2}{3}+\frac{1}{3}\sqrt{{\mathcal{C}}_B(T)}.\end{aligned}$$ The precise value of $\bar{F}$ will then depend on $|\eta(T)\rangle$ and the full specification of Bob’s decoding unitary.
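In code, [Eq. (\[fbar\])]{} and the bound above are one-liners. This is a trivial sketch of ours, with `CB` standing for ${\mathcal{C}}_B(T)$:

```python
import numpy as np

def fbar_h1(CB):
    """Average fidelity of Eq. (fbar); exact when |1_ENC> lies in H^(1), a lower bound otherwise."""
    return 0.5 + np.sqrt(CB) / 3.0 + CB / 6.0

def fbar_upper(CB):
    """Upper bound on the average fidelity for a general encoding."""
    return 2.0 / 3.0 + np.sqrt(CB) / 3.0
```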
A further argument for using ${\mathcal{C}}_B(T)$ as a measure of communication fidelity comes from considering the system’s ability to transfer [*quantum entanglement*]{} from Alice to Bob. Suppose that Alice, instead of sending a message which is a pure quantum state $\alpha|0\rangle + \beta|1\rangle$, sends a state which is maximally entangled with some additional spin that Alice possesses. (The additional spin does not interact when the system evolves). If the communication is perfect, the result must be that Bob’s decoded message becomes maximally entangled with Alice’s additional spin. So more generally, when the communication is not perfect, the amount of entanglement generated between Alice and Bob would be a good measure of communication fidelity. In fact, the entanglement generated, measured by the [*concurrence*]{}, is equal to $\sqrt{{\mathcal{C}}_B(T)}$ (a proof of this fact is outlined in Appendix A). This is independent of $|\eta(T)\rangle$, or the full specification of Bob’s decoding unitary, or whether $|1\rangle_{ENC}$ belongs to ${\mathcal{H}}^{(1)}$.
To recap, when the Hamiltonian commutes with ${Z^{\mathrm{tot}}}$ and the state is initialized to $|\mathbf{0}\rangle$, the problem of achieving a high communication fidelity can be boiled down to choosing an appropriate initial encoding $|1_{ENC}\rangle$ for the $|1\rangle$ qubit basis state. We seek a state $|1_{ENC}\rangle_A \otimes
|\mathbf{0}\rangle_{\bar{A}}$ that has the property that it evolves to (or near to) a state of the form $|\mathbf{0}\rangle_{\bar{B}} \otimes |\gamma\rangle_B$, or in other words such that ${\mathcal{C}}_B(T)\approx 1$ for some $T$. Alice’s choice for the “encoding” of the $|0\rangle$ qubit basis state is fixed to the all-zero state. With perfect fidelity that basis state will evolve to the all-zero state on Bob’s spins. (Note that in some cases it may be possible to increase fidelity further by allowing an encoding for $|0\rangle$ other than the all-zero state. We ignore such a possibility, in order to keep the method for finding the encoding simple and efficient. The simplification is used likewise in [@Osborne04a].)
We now show that the encoding $|1_{ENC}\rangle$ which maximises ${\mathcal{C}}_B(T)$ for a given $T$ can be found by performing the singular value decomposition on a modified version of the evolution matrix $e^{-iHT}$.
Let ${\mathcal{A}}$ be the vector subspace of states of the form $|1_{ENC}\rangle_A \otimes |\mathbf{0}\rangle_{\bar{A}}$, such that $_A\langle\mathbf{0}|1_{ENC}\rangle_A=0$. Similarly, let ${\mathcal{B}}$ be the vector subspace of states of the form $|\mathbf{0}\rangle_{\bar{B}} \otimes |\gamma\rangle_B$, such that $_B\langle\mathbf{0}|\gamma\rangle_B=0$. In other words, ${\mathcal{A}}$ reflects all the possible encodings that Alice could use for the $|1\rangle$ qubit basis state (regardless of the fidelity they would achieve). ${\mathcal{B}}$ is the set of states that we would like some state in ${\mathcal{A}}$ to evolve to; a state in ${\mathcal{A}}$ that evolves to one in ${\mathcal{B}}$ represents an encoding for $|1\rangle$ that gives ${\mathcal{C}}_B(T)=1$ and thus a perfect average fidelity.
Let $P_{\mathcal{A}}$ and $P_{\mathcal{B}}$ be the projectors onto the subspaces ${\mathcal{A}}$ and ${\mathcal{B}}$. Let $U(T)\equiv e^{-iHT}$ be the time-evolution operator. From [Eqs. (\[init\])]{} and (\[fin\]), we can write $${\mathcal{C}}_B(T)= \| \hspace{1mm} P_{\mathcal{B}}U(T) |1_{ENC}\rangle_A {\otimes}
|\mathbf{0}\rangle_{\bar{A}} \hspace{1mm} \|^2,$$ where $\| \cdot \|$ denotes the $l_2$-norm. This means that for a particular total communication time $T$, choosing the optimal initial encoding for the $|1\rangle$ state is a matter of finding the normalised $|\psi\rangle \in \mathbb{C}^{2^N}$ that maximises $\| P_{\mathcal{B}}U(T) P_{\mathcal{A}}|\psi\rangle \|$. The maximum value is given by the largest singular value of $\tilde{U}(T)\equiv P_{\mathcal{B}}U(T)
P_{\mathcal{A}}$, and the corresponding optimal $|\psi\rangle$ is the first right-singular-vector of $\tilde{U}(T)$ [@Horn91a]. Recall, the SVD (singular value decomposition) of $\tilde{U}(T)$ is $$\begin{aligned}
\tilde{U}(T)&=&V S W^\dagger \\
&=& \left( \begin{array}{ccc}
\vec{v}_1 \hspace{0.5mm} & \vec{v}_2 \hspace{0.5mm} & \ldots \\
\downarrow & \downarrow & \ldots \\
{\rule{0em}{1.55em}} & {} & {}
\end{array}
\right) \left( \begin{array}{ccc}
s_1 & {} & {} \\
{} & s_2 & {} \\
{} & {} & \ddots
\end{array}
\right) \left( \begin{array}{ccc}
\vec{w}_1^* & \rightarrow & {\hspace{3mm}} \\
\vec{w}_2^* & \rightarrow & {} \\
\vdots & \vdots & {}
\end{array}
\right), \nonumber\end{aligned}$$ where $s_1 \ge s_2 \ge \dots \ge 0$ are the [*singular values*]{}, the orthonormal $\vec{w}_j$ are the [*right singular vectors*]{}, and the orthonormal $\vec{v}_j$ are the [*left singular vectors*]{} of $\tilde{U}(T)$. Numerical packages such as Matlab have in-built routines for easily calculating the SVD. So, we have that ${\mathcal{C}}_B(T)$ has its maximum value, $s_1^2$, when Alice chooses the initial state $|1_{ENC}\rangle_A \otimes
|\mathbf{0}\rangle_{\bar{A}}=\vec{w}_1$ to encode $|1\rangle$.
Other parts of the decomposition could be useful as well. Say Alice wants to transmit two qubits [*simultaneously*]{} to Bob. If she uses the all-down state to encode the $|00\rangle$ basis state, then she should use the encodings $\vec{w}_1$, $\vec{w}_2$, and $\vec{w}_3$ for the other three basis states $|01\rangle$, $|10\rangle$, and $|11\rangle$. Then, so long as $s_3\approx 1$, the two qubits would be simultaneously communicated with high fidelity.
The vectors $\vec{w}_j$ are also the eigenvectors, and the values $s_j$ the square roots of the eigenvalues, of $(P_B U(T) P_A)^\dagger P_B U(T) P_A = P_A U^\dagger(T) P_B U(T) P_A$. Now, ${Z^{\mathrm{tot}}}$ commutes with $P_A U^\dagger(T) P_B U(T) P_A$ because it commutes with each of $P_A$, $P_B$, $U(T)$, and $U^\dagger(T)$ separately. So the $\vec{w}_j$ will all have well-defined total $Z$ spin (or can be chosen to, wherever ambiguities exist because of degeneracies in the $s_j$). This is important when it comes to calculating these solutions efficiently. Instead of performing the full $2^N$ by $2^N$ matrix exponential and SVD, the calculation can be done separately for each of the smaller subspaces ${\mathcal{H}}^{(n)}$, starting each calculation with the ${N \choose n}$-by-${N \choose n}$ part of the Hamiltonian that acts on the ${\mathcal{H}}^{(n)}$ subspace.
Alice can’t create a state with more than $|A|$ qubits in the “one” state, where $|A|$ is the number of spins she controls. So, in fact the calculation only needs to be done over the ${\mathcal{H}}^{(1)}, \dots, {\mathcal{H}}^{(|A|)}$ subspaces (in other words, the singular values corresponding to states in other subspaces will always be zero).
In practice we have found that the optimal solution $\vec{w}_1$ often belongs to the ${\mathcal{H}}^{(1)}$ subspace. (In particular, we calculated the optimal solution for a range of different values of $T$ for various 8 and 9-spin systems, and found that only for a very small minority of the values of $T$, for each system, was the solution [*not*]{} in the ${\mathcal{H}}^{(1)}$ subspace ). A rudimentary argument for this can be made as follows. Looking for solutions in ${\mathcal{H}}^{(n)}$ means optimising over Alice’s ${|A|} \choose n$ degrees of freedom (of the space ${\mathcal{A}}\cap {\mathcal{H}}^{(n)}$), in order to make the final state land in or near a ${|B|} \choose
n$-dimensional target space ${\mathcal{B}}\cap {\mathcal{H}}^{(n)}$. This must be achieved despite the fact that the Hamiltonian is “trying” to move the state through a much larger $N \choose n$-dimensional space ${\mathcal{H}}^{(n)}$. Over the various values of $n=1,\dots,|A|$, the dimensionality of ${\mathcal{A}}\cap {\mathcal{H}}^{(n)}$ and ${\mathcal{B}}\cap {\mathcal{H}}^{(n)} $ as a fraction of the dimensionality of ${\mathcal{H}}^{(n)}$ is largest when $n=1$. In other words, when $n=1$, the size of the target space, and amount of control available of the initial state, is largest as a fraction of the dimensionality of the entire subspace ${\mathcal{H}}^{(n)}$.
So, in general we can restrict all the calculations to the $N$-dimensional subspace ${\mathcal{H}}^{(1)}$, and there will still be a good chance that we will arrive exactly at the globally-optimal encoding $\vec{w}_1$. Ignoring solutions in the other subspaces will increase the efficiency of calculation considerably, especially for large chains.
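To make the recipe concrete, the following is a minimal sketch of the SVD step restricted to ${\mathcal{H}}^{(1)}$, written in Python/NumPy rather than the Matlab used for the calculations below; the function and variable names are ours. Given the $N\times N$ block $H^{(1)}$, the lists of sites controlled by Alice and Bob, and a time $T$, it returns the singular values and the right singular vectors, whose first column is the optimal encoding $\vec{w}_1$:

```python
import numpy as np
from scipy.linalg import expm, svd

def optimal_encoding_h1(H1, alice_sites, bob_sites, T):
    """Singular values/vectors of P_B exp(-i H^(1) T) P_A in the single-excitation sector.

    H1 is the N x N Hamiltonian block acting on H^(1); alice_sites and bob_sites list
    the (0-based) spin indices controlled by Alice and Bob.
    """
    N = H1.shape[0]
    PA = np.zeros((N, N)); PA[alice_sites, alice_sites] = 1.0
    PB = np.zeros((N, N)); PB[bob_sites, bob_sites] = 1.0
    Utilde = PB @ expm(-1j * H1 * T) @ PA
    V, s, Wh = svd(Utilde)
    # s[0]**2 is the best achievable C_B(T); the first column of W holds the amplitudes
    # psi_j of the optimal |1_ENC>, which are nonzero only on Alice's sites.
    return s, Wh.conj().T
```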
The evolution of a state in the ${\mathcal{H}}^{(1)}$ subspace can also be interpreted as a [*continuous quantum walk*]{} of a particle over a graph. ( For an introduction to quantum walks, see for example [@Kempe03b] and references therein). The graph is simply the graph of interactions between spins in the Hamiltonian $H$, and the state $|\mathbf{0}\rangle_{\bar{j}}\otimes|1\rangle_j$ corresponds to the particle being at vertex $j$ of that graph. So our methods for increasing communication fidelity are, equivalently, methods for guiding a quantum walk from one part of a graph to another. We point out this connection because of the significant interest currently in using quantum walks for solving computational problems (see for example [@Childs03c; @Ambainis03a; @Osborne04a] and references therein).
To demonstrate the use of the SVD optimal-encoding technique, we now consider a numerical example. Imagine that Alice and Bob are joined by a $300$-site open-ended chain with nearest-neighbour couplings given by the antiferromagnetic Heisenberg interaction, with coupling strengths all equal to 1. That is, $$H=\sum_{j=1}^{299} \vec{\sigma}_j\cdot\vec{\sigma}_{j+1}.
{\label{hexamp}}$$ Assume that Alice and Bob control the first and last 20 spins respectively.
In light of the earlier discussion, we restrict our optimisation to the ${\mathcal{H}}^{(1)}$ subspace, and thus ignore all singular vectors in other subspaces. A Matlab program is used to carry out the following calculations. First, the 300 by 300 matrix $H^{(1)}$, defined to be the part of $H$ that acts on ${\mathcal{H}}^{(1)}$, is constructed. Then, the SVD of $$\tilde{U}^{(1)}(T)=P_{{\mathcal{B}}\cap {\mathcal{H}}^{(1)}} e^{-i H^{(1)} T} P_{{\mathcal{A}}\cap {\mathcal{H}}^{(1)}}$$ is calculated for a range of values of $T$. The optimal value for communication time is not known beforehand, so this repetition of the calculation for different values of $T$ is needed in order to find a reasonable tradeoff between communication time and fidelity.
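The steps just described translate into a short script; the sketch below is our Python/NumPy reconstruction (reusing `optimal_encoding_h1` from above), not the author's Matlab code, and the scan window for $T$ is an arbitrary choice of ours. The one-excitation block of [Eq. (\[hexamp\])]{} has hopping amplitude $2$ between neighbouring sites (from the $xx+yy$ terms) and a diagonal from the $zz$ terms that differs only between end and bulk sites:

```python
import numpy as np

N, nA, nB = 300, 20, 20

# single-excitation block of H = sum_j sigma_j . sigma_{j+1} on the open chain
H1 = np.zeros((N, N))
for j in range(N - 1):
    H1[j, j + 1] = H1[j + 1, j] = 2.0          # xx + yy hopping
for k in range(N):
    touching = 1 if k in (0, N - 1) else 2     # number of bonds containing site k
    H1[k, k] = (N - 1 - touching) - touching   # zz terms: +1 per aligned bond, -1 per broken bond

alice = list(range(nA))
bob = list(range(N - nB, N))
for T in np.arange(60.0, 90.0, 0.25):          # scan T for the fidelity/time tradeoff
    s, W = optimal_encoding_h1(H1, alice, bob, T)
    print(round(T, 2), s[:4])                  # four largest singular values, as in Figure [fig:a]
```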
The four largest singular values, $s_1$, …, $s_4$, of $\tilde{U}^{(1)}(T)$ are plotted in Figure \[fig:a\]. Over the range of $T$ shown, $s_1(T)$ has its maximum of $0.99999$ at $T =
75.75$. So, this system can transmit a qubit with near-perfect fidelity, over a time interval of $75.75$. The graph shows that $s_2(75.75)$ and $s_3(75.75)$ are also very close to 1, so in fact [*two qubits*]{} could be transmitted simultaneously with high fidelity in this example, using the encodings $|\bf{0}\rangle$, $\vec{w}_1(75.75)$, $\vec{w}_2(75.75)$ and $\vec{w}_3(75.75)$ for the two-qubit basis states.
Let’s look at the actual optimally encoded states that are generated in this example. We visualise a state in ${\mathcal{H}}^{(1)}$ by plotting the square magnitude of the coefficients $\psi_j$, where $\psi_j$ is the coefficient of the basis state that has the $j$-th spin in the $|1\rangle$ state: $${\mathcal{H}}^{(1)} \ni |\psi\rangle = \sum_{j=1,\dots,N} \psi_j
\hspace{1mm} |1\rangle_j \otimes |\mathbf{0}\rangle_{\bar{j}}.
{\label{expansion}}$$
In Figure \[fig:b\] we show the evolution of the state $\vec{w}_1(75.75)$. That is, we set $|\psi(0)\rangle=\vec{w}_1(75.75)$, and $|\psi(t)\rangle=e^{-iH^{(1)}t} |\psi(0)\rangle$, and plot the magnitudes $|\psi_j|^2$ for a sequence of equally-spaced times $t$. As is necessarily the case, the $t=0$ state has non-zero coefficients only on Alice’s spins, $j=1\dots20$. The state deforms itself into a Gaussian shape quite quickly. This is interesting in comparison with the results in [@Osborne03b]. Whilst Gaussian initial states were shown to optimise fidelity on a Heisenberg ring, the best initial states for an open-ended Heisenberg chain are ones that deform into Gaussians. From the total communication time in this example, the group velocity of the pulse is roughly $3.95$ (defining the distance between neighbouring spins to be 1). Thus, in the open-ended Heisenberg chain we have found the same phenomenon that appeared in the Heisenberg ring in [@Osborne03b], notably that the system has a preferred group velocity that gives a minimum dispersion and thus maximum fidelity. This explains the fact that in Figure \[fig:a\] the singular values drop for $T$ greater than $75.75$, and rise again to a near-maximum at $T\approx 225 \approx 3 \times
75.75$: the high-fidelity communication for $T\approx 225$ is also operating at the preferred group velocity, but the wave packet is traversing the chain three times, after bouncing from each end.
Curiously, the lower solutions $\vec{w}_2(75.75)$ and $\vec{w}_3(75.75)$ seem to evolve into a sum of two and three Gaussians respectively (see Figure \[fig:c\]). Animations of these evolutions are available online [@Haselgrove04z].
Dynamic control {#secondsec}
===============
The scheme we presented in Sec. [\[firstsec\]]{} utilises the evolution of a system having completely static interactions. The control that Alice and Bob have over the chain is only for an instant at the beginning and end of the procedure, and so their only degrees of freedom for increasing the fidelity lie in the encoding they use. For very long chains, the number of control sites needed to give a high fidelity might become impractically large, as suggested by the results in [@Osborne03b]. In this section, we consider the advantage that can be gained by allowing Alice and Bob to control their spins throughout the procedure, by modulating the strength of interactions on those spins. The advantages of the scheme are that the number of control spins is fixed at four, and that suitable functions for Alice and Bob to use to vary the interaction strengths are easily derived from a simple extension of the SVD approach already described.
This type of control scheme is an example of a fundamental problem in quantum information processing, that of determining how to use the limited physical control that one has of a quantum system, in such a way as to achieve the dynamics that are required. For the task at hand, our method provides a practical and efficient way of finding an appropriate dynamical control.
A general schematic for the system is shown in Figure \[fig:d\]. Alice is now in control of just two spins, labeled $A_1$ and $A_2$, and, likewise, Bob controls two spins $B_1$ and $B_2$. All the other spins in the system are collectively denoted $C$. The graph of the interactions connecting Alice and Bob’s spins can be arbitrary, except that $A_1$ must directly couple only to $A_2$, and $B_1$ must directly couple only to $B_2$. Like the previous scheme, we require that the system Hamiltonian $H$ commutes with ${Z^{\mathrm{tot}}}$, and that all spins are initialised to $|0\rangle$ before the procedure starts. So again the problem is that of finding a way of sending the $|1\rangle$ basis state with high fidelity.
The protocol works as follows. At time $t=0$, Alice transfers the qubit state she wishes to send onto the spin $A_1$. Then she varies the coupling strength between $A_1$ and $A_2$ according to some function which we denote $J_A(t)$, and varies the $z$ magnetic field on $A_1$ and $A_2$ according to functions $B_{A1}(t)$ and $B_{A2}(t)$. At the same time, Bob varies his coupling strength and $z$ magnetic fields according to the functions $J_B(t)$, $B_{B1}(t)$ and $B_{B2}(t)$. This process is continued over some time interval $0<t<T$. At $t=T$ Bob’s spin $B_1$ will contain the sent qubit state, to a level of fidelity that depends on the six control functions.
How do we choose the control functions in a way that gives us a high average communication fidelity? The trick is to imagine a modified version of the system, where a number of “phantom” spins have been added to both Alice and Bob’s set of control spins, but all couplings are now fixed (see Figure \[fig:e\]).
The SVD method is applied to this modified system, to find the optimal initial state on Alice’s extended set of control spins. The evolution of the encoded state through the modified system is then simulated on a (classical) computer, and the results of the simulation are used to derive appropriate control functions for the actual physical system, using a method which we describe below. Since, over the bulk of the physical system, the initial state and interactions are identical to those at the corresponding regions of the modified system, the problem reduces to finding control functions which make the state on $A_2$ and $B_2$ in the physical system evolve in the same way as those corresponding spins in the modified chain. When that is achieved, the state on the bulk of the physical chain will evolve in the same way as in the modified chain, which means that we can communicate a qubit with the same fidelity as for the optimally encoded state in the modified system.
For the sake of clarity, we describe the method in detail for a less general configuration, the 1D $XY$ chain. The derivation is simpler in this case because, as we shall see, the magnetic control is not needed. The system Hamiltonian is given by $$\begin{aligned}
H(t)&=&\sum_{j=2}^{N-2} J_j \left[ \sigma^x_j \sigma^x_{j+1} +
\sigma^y_j \sigma^y_{j+1}\right] \nonumber \\
&& \hspace{0.5cm} + J_A(t)\left(
\sigma^x_1\sigma^x_2+\sigma^y_1\sigma^y_2\right) \nonumber \\
&& \hspace{0.5cm} + J_B(t) \left(
\sigma^x_{N-1}\sigma^x_N+\sigma^y_{N-1}\sigma^y_N \right) .
{\label{orig}}\end{aligned}$$ That is, there are some arbitrary fixed $J_j$ that specify the strengths of the $XY$ couplings over the bulk of the chain. The strengths at the first and last links can be varied over time by Alice and Bob.
We write down a Hamiltonian $\tilde{H}$ of a modified system, where we have extended the length of the chain in both directions by adding $N_P$ phantom spins to both Alice and Bob’s sides. The new coupling strengths are chosen to be $1$, and the two strengths that were time-varying in the original system are now also fixed at one. So that we can use the same numbering system for the spins as in [Eq. (\[orig\])]{}, we let the indices of the spins range into the negative in the modified system, running from $1-N_P$ to $N+N_P$. The modified Hamiltonian is written simply as $$\tilde{H}=\sum_{j=1-{N_P}}^{N+N_P-1} J_j \left[ \sigma^x_j
\sigma^x_{j+1} + \sigma^y_j \sigma^y_{j+1}\right],$$ where we have extended the definition of $J_j$ so that it equals $1$ for $j=(1-N_P),\dots, 1$ and for $j=(N-1),\dots,(N+N_P-1)$.
Next, using the SVD method, we find the best encoded initial state on the set of spins from index $(1-N_P)$ to $1$, while assuming that the target set of spins ranges from $N$ to $(N+N_P-1)$. That is, we are imagining a “modified Alice” that controls the first $(N_P+1)$ spins and a “modified Bob” that controls the last $(N_P+1)$ spins of this extended chain. Recall that the SVD method depends on a choice of total communication time $T$. As in the example earlier, we may wish to search over a range of values of $T$ to find the most suitable value. It is important for the procedure at hand that we restrict ourselves to states in the ${\mathcal{H}}^{(1)}$ subspace (whereas before this restriction was just a way of making the solution much faster to compute). So, we calculate $\vec{w}_1(T)$, the first right-singular-vector of $P_{{\mathcal{B}}\cap {\mathcal{H}}^{(1)}} e^{-i \tilde{H}^{(1)} T} P_{{\mathcal{A}}\cap
{\mathcal{H}}^{(1)}}$, where $\tilde{H}^{(1)}$ is the part of $\tilde{H}$ that acts on the ${\mathcal{H}}^{(1)}$ subspace.
Then, we need to be able to calculate the evolution of the state $\vec{w}_1$, over a range of times $t$ from $0$ to $T$. Let $|\psi(0)\rangle$=$\vec{w}_1(T)$, and $|\psi(t)\rangle=e^{-iH^{(1)}t}|\psi(0)\rangle$. As earlier, the evolving state is a series of complex coefficients $\psi_j(t)$, where $j$ is the index to a spin site, ranging from $(1-N_P)$ to $(N+N_P-1)$. We need to know $\psi_1(t)$ and $\psi_N(t)$ for every value of $t$ that we wish to calculate $J_A(t)$ and $J_B(t)$ for.
Similarly we use $\phi_j(t)$, $j=1,\dots,N$ to denote the evolution of the $|1\rangle$ qubit state over the original physical chain. Recall that in this scheme, Alice places the qubit state to be sent, unencoded, onto spin number 1, after all other spins have been initialised to zero. So, initially we have $\phi_j(0)=\delta_{j,1}$. The functions $\phi_j(t)$ depend on the control functions $J_A(t)$ and $J_B(t)$ (whereas the $\psi_j(t)$ do not).
The aim is to choose control functions $J_A(t)$ and $J_B(t)$ in such a way as to force $\phi_j(t)=\psi_j(t)$, for all the spins in the range $j=2,\dots,N-1$, and for all $t$ in the interval $[0,T]$. That is, we know the way the optimal encoded state evolves over the modified chain, and we want to make the $|1\rangle$ state in the physical system evolve in exactly the same way, over all spins except $1$ and $N$. In this way, the physical system will carry a qubit across its length with the same fidelity as the encoded modified system does.
The interactions on the spins from site 3 to site $N-2$ are the same in the physical chain as in the modified chain. So, the differential equations for the $\psi_j(t)$ are the same as those for the $\phi_j(t)$, for $j=3,\dots,N-2$. Specifically, $$\begin{aligned}
\frac{d\psi_j(t)}{dt} &=& -2i \left[ J_{j-1}\psi_{j-1}(t) +
J_{j}\psi_{j+1}(t) \right] \hspace{0.5cm} \mathrm{and}
\nonumber \\
\frac{d\phi_j(t)}{dt} &=& -2i \left[ J_{j-1}\phi_{j-1}(t) +
J_{j}\phi_{j+1}(t)\right],\end{aligned}$$ for $j=3,\dots,N-2$. Also, the initial conditions are the same between the $\psi_j$ and the $\phi_j$, for $j=2,\dots,(N-1)$: $\psi_j(0)=\phi_j(0)=0$ .
It follows that if we can use our control functions to force $\frac{d\phi_2(t)}{dt}=\frac{d\psi_2(t)}{dt}$, and $\frac{d\psi_{N-1}(t)}{dt}=\frac{d\phi_{N-1}(t)}{dt}$, over the time range $t=0,\dots,T$, then we will have $\psi_j(t)=\phi_j(t)$ for all $t$ in that time range, and for [*all*]{} $j=2,\dots,N-1$, as desired.
Now, $$\frac{d\psi_2(t)}{dt}=-2i\left[ \psi_1(t) + J_2\psi_3(t) \right]$$ and $$\frac{d\phi_2(t)}{dt}=-2i\left[ J_A(t) \phi_1(t) + J_2\phi_3(t)
\right].$$
So, assuming that at time $t$ $\phi_j(t)=\psi_j(t)$ for $j=2,\dots, N-1$, then $\frac{d\phi_2(t)}{dt}=\frac{d\psi_2(t)}{dt}$ by setting $$J_A(t)=\frac{\psi_1(t)}{\phi_1(t)}, {\label{JA}}$$ and $\frac{d\phi_{N-1}(t)}{dt}=\frac{d\psi_{N-1}(t)}{dt}$ by setting $$J_B(t)=\frac{\psi_N(t)}{\phi_N(t)}. {\label{JB}}$$
Thus, the practical task of numerically calculating $J_A(t)$ and $J_B(t)$ involves simulating the evolution of both the $\phi_j$ and $\psi_j$ states on the original and modified systems respectively, over the time interval $[0,T]$, and evaluating [Eqs. (\[JA\])]{} and (\[JB\]).
The functions $J_A(t)$ and $J_B(t)$ must of course be real-valued, for the Hamiltonian to be Hermitian. Equations (\[JA\]) and (\[JB\]) will indeed be real for the $XY$ chain. The expressions for the $\frac{d\phi_j}{dt}$ are all given by a purely imaginary linear combination of the nearest-neighbour values $\phi_{j-1}$ and $\phi_{j+1}$. Then, considering the initial conditions, $\phi_j(0)=\delta_{1,j}$, it’s clear that the $\phi_j(t)$ are real for odd $j$ and imaginary for even $j$, for all values of $t$. The values $\psi_j(t)$ also have this property of alternating real and imaginary values. Again, the time derivatives of $\psi_j(t)$ are purely imaginary linear combinations of the values $\psi_{j-1}(t)$ and $\psi_{j+1}(t)$. Thus, by performing the change of variables $$\psi_j'(t) = \left\{
\begin{array}{ll}
\psi_j(t) & \textrm{if $j$ is
odd, and} \\
i\psi_j(t) \hspace{0.5cm} & \textrm{if $j$ is even,}
\end{array}
\right.$$ the differential equations for $\psi'_j(t)$ will all have real coefficients. So, the entries of the evolution matrix $e^{-i
\tilde{H}^{(1)}T}$ must be real, after that change of variables. Thus, so must be the entries of $P_{{\mathcal{B}}\cap {\mathcal{H}}^{(1)}} e^{-i
\tilde{H}^{(1)} T} P_{{\mathcal{A}}\cap {\mathcal{H}}^{(1)}}$. So, $\vec{w}(T)$, which is the right-singular vector of $P_{{\mathcal{B}}\cap {\mathcal{H}}^{(1)}}
e^{-i \tilde{H}^{(1)} T} P_{{\mathcal{A}}\cap {\mathcal{H}}^{(1)}}$, will also have all real coefficients with respect to the changed variables. Changing variables back, the initial encoded state $\psi_j(0)=\vec{w}_j(T)$ will thus have the property of having real values for odd $j$ and imaginary values for even $j$, and so will $\psi_j(t)$ for all $t$. So, [Eqs. (\[JA\])]{} and (\[JB\]) will be real-valued as required.
[Eqs. (\[JA\])]{} and (\[JB\]) will never be infinite. In fact, $|J_A(t)|$ and $|J_B(t)|$ will be at most 1. This is a simple consequence of conservation of probability. Since $\psi_j(t)=\phi_j(t)$ over the bulk of the chain ($j=2,\dots,N-1$) and Alice and Bob’s sides only interact via the bulk of the chain for both the physical and modified systems, we have that $$\begin{aligned}
\sum_{j=1-N_P}^1 |\psi_j|^2 &=& |\phi_1|^2\textrm{, and} \\
\sum_{j=N}^{N+N_P} |\psi_j|^2 &=& |\phi_N|^2,\end{aligned}$$ from which it follows that $|\psi_1(t)|\le|\phi_1(t)|$ and $|\psi_N(t)|\le|\phi_N(t)|$. So, $|J_A(t)|\le 1$ and $|J_B(t)|\le
1$, if they are defined. If $J_A(t)$ (or $J_B(t)$) is undefined ($0/0$), it means that the requirement of $\frac{d\phi_2(t)}{dt}=\frac{d\psi_2(t)}{dt}$ ( respectively $\frac{d\phi_{N-1}(t)}{dt}=\frac{d\psi_{N-1}(t)}{dt}$ ) is satisfied [*regardless*]{} of the value of $J_A(t)$ (respectively $J_B(t)$) for that $t$, in which case the value of the control function can be chosen arbitrarily at that time.
We now plot the derived control functions $J_A(t)$ and $J_B(t)$ for two simple example $XY$ chain systems. We used numerical integration in these examples, in calculating the evolution of the $\phi_j(t)$ due to the time-varying Hamiltonian. We divided the total evolution into a number of discrete time steps, where the approximation was made that the Hamiltonian remains constant throughout each step. The value of $J_A(t)$ and $J_B(t)$ for a step was calculated from the state of the system at the previous step. We used 2000 time steps, which gave a final fidelity in the physical chain within two significant figures of the correct value given by the evolution of the static modified system.
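The stepping procedure can be sketched as follows, here for a uniform chain of the size used in the first example below. This is our own NumPy reconstruction (reusing `optimal_encoding_h1` and assuming the conventions above), not the author's code; in particular, the step count and the way the global phase of $\vec{w}_1$ is fixed are assumptions on our part:

```python
import numpy as np
from scipy.linalg import expm

def xy_h1(J):
    """Single-excitation block of an open XY chain with coupling J[j] on bond (j, j+1)."""
    n = len(J) + 1
    H = np.zeros((n, n))
    for j, Jj in enumerate(J):
        H[j, j + 1] = H[j + 1, j] = 2.0 * Jj
    return H

N, NP, T, steps = 104, 20, 36.0, 2000
J_bulk = np.ones(N - 1)                       # physical couplings; entries 0 and N-2 are the controllable links

# modified chain: NP phantom spins on each side, added couplings and the two end links fixed at 1
J_mod = np.ones(N + 2 * NP - 1)
J_mod[NP:NP + N - 1] = J_bulk
J_mod[NP] = J_mod[N + NP - 2] = 1.0           # the links that are time-varying in the physical chain
H_mod = xy_h1(J_mod)

# optimal encoding for the "modified Alice" (first NP+1 sites) aimed at the last NP+1 sites
s, W = optimal_encoding_h1(H_mod, list(range(NP + 1)), list(range(N + NP - 1, N + 2 * NP)), T)
psi = W[:, 0]
psi = psi * np.exp(-1j * np.angle(psi[NP]))   # fix the global phase so that psi_1(0) is real (see text)

phi = np.zeros(N, dtype=complex); phi[0] = 1.0    # unencoded |1> placed on spin A_1
dt = T / steps
U_mod_dt = expm(-1j * H_mod * dt)
JA, JB = [], []
for _ in range(steps):
    ja = (psi[NP] / phi[0]).real if abs(phi[0]) > 1e-12 else 0.0            # Eq. (JA)
    jb = (psi[NP + N - 1] / phi[-1]).real if abs(phi[-1]) > 1e-12 else 0.0  # Eq. (JB)
    JA.append(ja); JB.append(jb)
    J_phys = J_bulk.copy(); J_phys[0], J_phys[-1] = ja, jb
    phi = expm(-1j * xy_h1(J_phys) * dt) @ phi    # physical chain, Hamiltonian held constant over the step
    psi = U_mod_dt @ psi                          # modified (static) chain
```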
The first example is a chain 104 spins long (i.e. 100 non-controlled spins, plus the four control spins), with all the non-controlled coupling strengths set to the same value, 1. The control functions were derived by using a modified chain 144 spins long (that is, 20 phantom spins added to each side) with all coupling strengths set to 1, and the total communication time $T$ chosen to be $36$. Figure (\[fig:const\]) shows that the resulting $J_A(t)$ and $J_B(t)$ are quite simple and well behaved. The fidelity measure ${\mathcal{C}}_B(T)$ is $1.0$, to 6 decimal places. This can be compared with the fidelity in the same 104-spin system but without the time-dependent control, that is with $J_A(t)$ and $J_B(t)$ fixed at 1: over the time interval $0<t<1000$, the value of ${\mathcal{C}}_B(t)$ is at most $0.2809$.
The second example is an $XY$ chain 29 spins long, but where the non-controlled coupling strengths are randomly sampled uniformly from the interval $[0.95,1.05]$. This is as if the chain has been manufactured with random imperfections in the coupling strengths, but these coupling strengths have been somehow measured after the manufacturing process and are known to Alice and Bob. (A shorter chain was chosen in this example, compared with the previous example, in order that a near-perfect fidelity would still result. We have observed that when random couplings are used, the achievable fidelity will decrease as a function of the chain length). We derived control functions using a modified system with 25 phantom spins added to each side, where the new arbitrary coupling strengths are set to 1. The communication time $T$ was chosen to be $19.5$. Figure (\[fig:rand\]) shows control functions which are a little more complicated in this case, but still rather smooth. The fidelity measure is ${\mathcal{C}}_B(T)=0.99625$. In comparison, in the non-controlled version of this system, with $J_A(t)=J_B(t)=1$, the value of ${\mathcal{C}}_B(t)$ does not exceed $0.496$ over the interval $0<t<1000$. Animations of both examples are available online [@Haselgrove04z].
What about a system that is not simply an $XY$ chain, but any configuration conforming to Figure (\[fig:d\]) and conserving ${Z^{\mathrm{tot}}}$? Then, the ideas and methods are almost the same, but with the added complication that we need to control the $z$ magnetic fields on Alice and Bob’s qubits as well as controlling $J_A(t)$ and $J_B(t)$. There are two reasons why the magnetic control is needed, which we explain for Alice’s side. First, the simple phase relation between $\psi_1(t)$ and $\phi_1(t)$ that we saw in the $XY$ chain does not occur in general. So, $B_{A1}(t)$ is chosen simply to keep $\phi_1(t)$ in constant relative phase to $\psi_1(t)$. Second, the type of interaction that the $J_A(t)$ is modulating may contain its own magnetic-field-like interactions (that is, non-equal diagonal elements in the Hamiltonian) that need to be cancelled by $B_{A1}(t)$ and $B_{A2}(t)$. General expressions for $B_{A1}(t)$ and $B_{A2}(t)$ are straightforward to derive, but not particularly illuminating, so will not be given here.
Conclusion
==========
We have considered the problem of communicating a quantum state over an arbitrary ${Z^{\mathrm{tot}}}$-conserving spin system. Our first scheme used a static system Hamiltonian, and utilised the fact that the sender and receiver control several spins each, to increase fidelity by performing state encoding. We showed that choosing the optimal state encoding is a simple matter of performing a SVD on a modified evolution matrix.
We have also shown that if the sender and receiver have control of just two spins each, but can vary the interactions on these four spins over time, then they can achieve a fidelity equal to the one they would obtain if they each controlled many more spins on a static system and used the optimal state encoding. We have given a practical method of deriving suitable control functions. The advantage of this scheme is the “fixed interface” that Alice and Bob have with the chain. That is, if the chain is altered, the only change that Alice and Bob need make is to their control functions, rather than to the number of spins they control.
It should be noted that the systems we have considered are idealised to a high degree. In particular, we haven’t considered the effects of external noise, or the effect of having a Hamiltonian that only [*approximately*]{} commutes with ${Z^{\mathrm{tot}}}$, or the case where Alice and Bob have only an approximate knowledge of the system Hamiltonian. These issues will be the subject of future work by the author.
Acknowledgements {#acknowledgements .unnumbered}
================
I would like to thank Tobias Osborne and Michael Nielsen for their detailed comments on the original manuscript, and for helpful and enlightening discussions relating to this work.
Here we outline a proof of the claim in Sec. \[firstsec\] regarding the connection between ${\mathcal{C}}_B(T)$ and the system’s ability to transmit entanglement from Alice to Bob. This connection helps establish ${\mathcal{C}}_B(T)$ as a good measure of communication fidelity.
Suppose that Alice sends a state which is maximally entangled with some additional spin that Alice possesses. (After the maximally entangled state is created, the additional spin is assumed to not interact during the remainder of the communication procedure). Then, after the communication procedure of Sec. \[firstsec\] is carried out, the entanglement (measured by concurrence) between Alice’s additional spin and Bob’s decoded message, equals $\sqrt{{\mathcal{C}}_B(T)}$.
**Proof:** Note that it doesn’t matter which maximally-entangled state is used — all such states are equivalent up to a local unitary on the additional spin, and such a local unitary could not possibly affect the way entanglement is transferred through the system.
Let the additional spin “${+}$” and the spin “$M$” containing the message have the maximally entangled state $\frac{1}{\sqrt{2}}(|0\rangle_{{+}}|0\rangle_M+|1\rangle_{{+}}|1\rangle_M)$. Thus, after Alice performs her encoding, the entire state is: $$|\Phi(0)\rangle=\frac{1}{\sqrt{2}}\left[
|0\rangle_{{+}}|\mathbf{0}\rangle_A|\mathbf{0}\rangle_{\bar{A}} +
|1\rangle_{{+}}|1_{ENC}\rangle_A|\mathbf{0}\rangle_{\bar{A}}\right].$$ After the system evolves for time $T$, the state becomes $$\begin{aligned}
|\Phi(T)\rangle&=&\frac{1}{\sqrt{2}}\Big[
|0\rangle_{{+}}|\mathbf{0}\rangle_{\bar{B}}|\mathbf{0}\rangle_{B}
+ |1\rangle_{{+}} ( \sqrt{1-{\mathcal{C}}_B(T)}|\eta(T)\rangle \nonumber\\
&&+ \sqrt{{\mathcal{C}}_B(T)}|\mathbf{0}\rangle_{\bar{B}}|\gamma(T)
\rangle_B) \Big].\end{aligned}$$ Then Bob performs a decoding unitary, denoted ${U_{\mathrm{dec}}}$, on the spins he controls. ${U_{\mathrm{dec}}}$ is defined to act as follows: ${U_{\mathrm{dec}}}|\mathbf{0}\rangle_B = |\mathbf{0}\rangle_B$ and ${U_{\mathrm{dec}}}|\gamma(T)\rangle_B = |0\dots01\rangle_B$, where $|0\dots01\rangle_B$ is the $|1\rangle$ state on spin $N$ and the all-zero state on Bob’s other spins. After Bob’s decoding, the joint state of Alice’s additional spin and Bob’s decoded spin is: $$\begin{aligned}
\rho_{{+}/N}&=&{\mathrm{tr}}_{\overline{{+}/N}}({U_{\mathrm{dec}}}{\mathrm{tr}}_{\bar{B}}(|\Phi(T)\rangle\langle\Phi(T)|){U_{\mathrm{dec}}}^\dagger)
\nonumber \\
&=& \frac{1}{2}\Big[(1-{\mathcal{C}}_B(T))|1\rangle\langle 1|\otimes
{\tilde{\rho}}+ {\mathcal{C}}_B(T)|11\rangle\langle 11|
\nonumber\\
&&\hspace{1em}+ |00\rangle\langle00|+ \sqrt{{\mathcal{C}}_B(T)} (
|00\rangle\langle11|
\nonumber\\&&
\hspace{1em}+ |11\rangle\langle00|)\Big],\end{aligned}$$ where ${\tilde{\rho}}\equiv{\mathrm{tr}}_{\overline{{+}/N}}({U_{\mathrm{dec}}}{\mathrm{tr}}_{\bar{B}}(|\eta(T)\rangle\langle\eta(T)|){U_{\mathrm{dec}}}^\dagger)$, and where ${\mathrm{tr}}_{(\cdot)}(\cdot)$ is the partial trace performed over the spins indicated.
Concurrence is a measure of entanglement between two qubits [@Wootters98a]. The value of concurrence for a density matrix $\rho_{{+}/N}$ is equal to $$E(\rho_{{+}/N}) = \max\{ 0,
\sqrt{\lambda_1}-\sqrt{\lambda_2}-\sqrt{\lambda_3}-\sqrt{\lambda_4}\},$$ where the $\lambda_j$s are the eigenvalues, in nonincreasing order, of the matrix $\rho_{{+}/N}(\sigma^y\otimes\sigma^y)\rho_{{+}/N}^*(\sigma^y\otimes\sigma^y)$, where $^*$ represents complex conjugation in the computational basis. It can be shown that $$\begin{aligned}
\lambda_1&=&\frac{1}{4}\left(\sqrt{{\tilde{\rho}}_{11}{\mathcal{C}}_B(T) + 1 - {\tilde{\rho}}_{11}}+\sqrt{{\mathcal{C}}_B(T)}\right)^2 \nonumber\\
\lambda_2&=&\frac{1}{4}\left(\sqrt{{\tilde{\rho}}_{11}{\mathcal{C}}_B(T) + 1 -
{\tilde{\rho}}_{11}}-\sqrt{{\mathcal{C}}_B(T)}\right)^2
\nonumber\\
\lambda_3&=&0 \nonumber\\
\lambda_4&=&0,\end{aligned}$$ where ${\tilde{\rho}}_{11}=\langle 0 | {\tilde{\rho}}| 0\rangle$. Thus, using the fact that ${\mathcal{C}}_B(T)$ and ${\tilde{\rho}}_{11}$ each lie in the interval $[0,1]$, we have $$E(\rho_{{+}/N}) = \sqrt{{\mathcal{C}}_B(T)},$$ as required.
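As a quick numerical sanity check of this identity, one can build $\rho_{{+}/N}$ for arbitrary values of ${\mathcal{C}}_B(T)$ and ${\tilde{\rho}}$ and evaluate the Wootters concurrence directly. The sketch below is ours; a diagonal ${\tilde{\rho}}$ is chosen purely for simplicity, and the test values are arbitrary:

```python
import numpy as np

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix."""
    sy = np.array([[0, -1j], [1j, 0]])
    YY = np.kron(sy, sy)
    lam = np.sort(np.real(np.linalg.eigvals(rho @ YY @ rho.conj() @ YY)))[::-1]
    lam = np.sqrt(np.clip(lam, 0.0, None))
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

CB, r11 = 0.7, 0.3                                   # arbitrary test values
tilde_rho = np.diag([r11, 1.0 - r11])                # stands in for Bob's decoded trace of |eta(T)>
rho = np.zeros((4, 4), dtype=complex)                # basis order |00>, |01>, |10>, |11>
rho[0, 0] = 0.5
rho[3, 3] = 0.5 * CB
rho[0, 3] = rho[3, 0] = 0.5 * np.sqrt(CB)
rho[2:, 2:] += 0.5 * (1.0 - CB) * tilde_rho          # the |1><1| (x) tilde_rho block
print(concurrence(rho), np.sqrt(CB))                 # the two numbers agree
```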
|
---
abstract: 'We show that the widely used density dependent magnetic field prescriptions, necessary to account for the variation of the field intensity from the crust to the core of neutron stars violate one of the Maxwell equations. We estimate how strong the violation is when different equations of state are used and check for which cases the pathological problem can be cured.'
author:
- 'Débora P. Menezes'
- 'Marcelo D. Alloy'
title: Maxwell equation violation by density dependent magnetic fields in neutron stars
---
Motivation
==========
The physics underlying the quantum chromodynamics (QCD) phase diagram has still not been probed at all temperatures and densities. While some aspects can be confirmed either by lattice QCD or experimentally in heavy ion collisions for instance, other aspects depend on extraterrestrial information. One of them is the possible constitution and nature of neutron stars (NS), which are compact objects related to the low temperature and very high density portion of the QCD phase diagram. Astronomers and astrophysicists can provide a handful of information obtained from observations and infer some macroscopic properties, namely NS masses, radii, rotation period and external magnetic fields, which have been guiding the theory involving microscopic equations of state (EOS) aimed to describe this specific region of the QCD phase diagram.
#### {#section-1 .unnumbered}
There are different classes of NS, and three of them have been shown to be compatible with highly magnetised compact objects, known as magnetars [@Duncan; @Usov], namely, the soft gamma-ray repeaters, the anomalous X-ray pulsars and, more recently, the repeating fast radio bursts [@frb]. The quest towards explaining these NS with strong surface magnetic fields has led to a prescription that certainly violates one of the Maxwell equations [@Chakrabarty]. The aim of this letter is to show how strong this violation is and to check whether the density dependent magnetic field prescriptions can or cannot be justified. Magnetars are likely to bear magnetic fields of the order of $10^{15}$ G on their surfaces, which are three orders of magnitude larger than the magnetic fields of standard NS. In recent years, many papers dedicated to the study of these objects have shown that the EOS are only sensitive to magnetic fields as large as $10^{18}$ G or stronger [@originalB; @originalNS]. The Virial theorem and the fact that some NS can be quark (also known as strange) stars allow these objects to support central magnetic fields as high as $3 \times 10^{18}$ G if they contain hadronic constituents and up to $10^{19}$ G if they are quark stars. To take into account this possibly varying magnetic field strength, which increases towards the centre of the star, the following prescription was proposed [@Chakrabarty]: $$B_z(n) = B^{surf} + B_0\bigg [ 1 - \exp \bigg \{ - \beta \bigg (
\frac{n}{n_0} \bigg )^{\gamma} \bigg \} \bigg ],
\label{brho}$$ where $B^{surf}$ is the magnetic field on the surface of the neutron star, taken as $10^{14}$ G in the original paper, $n$ is the total number density, and $n_0$ is the nuclear saturation density. In subsequent papers [@Mao; @Rabhi; @Menezes1; @Ryu; @Rabhi2; @Mallick; @Lopes1; @Dex; @Benito1; @Benito2; @Mallick2; @Ro; @Dex2], the above prescription was extensively used, with many variations in the values of $B^{surf}$, generally taken as $10^{15}$ G on the surface, and of $\beta$ and $\gamma$, arbitrary parameters that cannot be tested by astronomical observations. The high degree of arbitrariness was already examined in [@Chakrabarty] and later in [@Lopes2015], where variations of more than 50% in the maximum stellar mass and of 25% in the corresponding radius were found.
#### {#section-2 .unnumbered}
With the purpose of reducing the number of free parameters from two to one and consequently the arbitrariness in the results, another prescription was then proposed in [@Lopes2015]: $$B_z(\epsilon) = B^{surf} + B_0 \bigg ({\frac{\epsilon}{\epsilon_c}} \bigg
)^{\alpha},
\label{bepsilon}$$ where $\epsilon_c$ is the energy density at the centre of the maximum mass neutron star with zero magnetic field, $\alpha$ is any positive number and $B_0$ is the fixed value of the magnetic field. With this recipe, all magnetic fields converge to a certain value at some large energy density, regardless of the value of $\alpha$ used.
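For orientation, both prescriptions are simple monotonic functions that interpolate between the surface field and roughly $B^{surf}+B_0$. The sketch below only illustrates eqs. (\[brho\]) and (\[bepsilon\]) numerically; the field values and exponents echo those quoted later in the figure captions ($B_0=10^{18}$ G, $\beta=5\times 10^{-4}$, $\gamma=3$), while $B^{surf}=10^{15}$ G, $n_0=0.16$ fm$^{-3}$, $\alpha=3$ and $\epsilon_c=7.7$ fm$^{-4}$ are assumed, illustrative inputs.

```python
import numpy as np

def B_original(n, n0, B_surf, B0, beta, gamma):
    # eq. (brho): original density dependent prescription
    return B_surf + B0 * (1.0 - np.exp(-beta * (n / n0) ** gamma))

def B_LL(eps, eps_c, B_surf, B0, alpha):
    # eq. (bepsilon): LL's single-exponent prescription
    return B_surf + B0 * (eps / eps_c) ** alpha

# illustrative inputs (fields in gauss); n0 and eps_c are assumed values
n = np.linspace(0.0, 8 * 0.16, 5)            # number density, fm^-3
eps = np.linspace(0.0, 7.7, 5)               # energy density, fm^-4
print(B_original(n, 0.16, 1e15, 1e18, beta=5e-4, gamma=3.0))
print(B_LL(eps, 7.7, 1e15, 1e18, alpha=3.0))
```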
#### {#section-3 .unnumbered}
One of the Maxwell equations tells us that $\nabla\cdot\vec B=0$. In most neutron star calculations, the magnetic field is chosen as static and constant in the $z$ direction, as proposed in the first application to quark matter [@originalB]. In this case, the energy associated with the circular motion in the $x-y$ plane is quantised (in units of $2qB$, $q$ being the electric charge) and the energy along $z$ is continuous. The desired EOS is then obtained and the energy density, pressure and number density depend on the filling of the Landau levels. Apart from the original works [@originalB; @originalNS], detailed calculations for different models can also be found, for instance, in [@Menezes1; @Benito1], and we will not enter into details here. However, it is important to stress that the magnetic field is taken as constant in the $z$ direction, which results in different contributions in the $r$ and $\theta$ directions when one writes the magnetic field in spherical coordinates.
#### {#section-4 .unnumbered}
According to [@Goldreich; @Danielle], and assuming a perfectly conducting neutron star ($B_r(R)=0$) that bears a magnetic dipole moment aligned with the rotation axis such that $\mu=B_p R^3/2$, where $R$ is the radius of the star and $B_p$ the magnetic field intensity at the pole, the components of the magnetic field in spherical coordinates are given by: $$B_r=B_P \cos \theta \left(\frac{R}{r}\right)^3, \quad
B_\theta=\frac{B_P}{2} \sin \theta \left(\frac{R}{r}\right)^3,
\label{daniele}$$ and in this case, it is straightforward to show that $\nabla \cdot \vec
B=0$.
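For completeness, one can check this with the divergence in spherical coordinates, $\nabla \cdot \vec B = \frac{1}{r^2}\frac{\partial (r^2 B_r)}{\partial r} + \frac{1}{r \sin \theta}\frac{\partial (\sin \theta\, B_\theta)}{\partial \theta} + \frac{1}{r \sin \theta}\frac{\partial B_\phi}{\partial \phi}$: the radial and polar contributions cancel exactly, $$\frac{1}{r^2}\frac{\partial}{\partial r}\left(B_P \cos \theta \frac{R^3}{r}\right) + \frac{1}{r \sin \theta}\frac{\partial}{\partial \theta}\left(\frac{B_P}{2} \sin^2 \theta \frac{R^3}{r^3}\right) = -\frac{B_P R^3 \cos \theta}{r^4} + \frac{B_P R^3 \cos \theta}{r^4} = 0.$$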
#### {#section-5 .unnumbered}
If one casts eqs. (\[brho\]) and (\[bepsilon\]), from now on called respectively the original and LL’s prescriptions, in spherical coordinates, they acquire the form $B_r=\cos \theta B_z$, $B_\theta=-\sin \theta B_z$ and $B_\phi=0$, and the resulting divergence reads $$\nabla \cdot \vec B= \cos \theta \frac{\partial B}{\partial r},
\label{divB}$$ where the magnetic field in the radial direction can be obtained from the solution of the TOV equations [@tov], where $r$ runs from the centre to the radius of the star. As a simple conclusion, $\nabla\cdot\vec B$ is generally not zero, except for some specific values of the parameters that we discuss in the next Section.
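The divergence quoted in eq. (\[divB\]) follows from the same spherical-coordinate formula: with $B_r = B\cos \theta$, $B_\theta = -B\sin \theta$, $B_\phi=0$ and $B=B(r)$, $$\nabla \cdot \vec B = \frac{1}{r^2}\frac{\partial}{\partial r}\left(r^2 B \cos \theta\right) - \frac{1}{r \sin \theta}\frac{\partial}{\partial \theta}\left(B \sin^2 \theta\right) = \cos \theta \frac{\partial B}{\partial r} + \frac{2 B \cos \theta}{r} - \frac{2 B \cos \theta}{r} = \cos \theta \frac{\partial B}{\partial r},$$ so a field that is everywhere parallel to $z$ but varies with $r$ can only be divergence free where $\partial B/\partial r=0$ or on the equatorial plane ($\cos \theta = 0$).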
Results and Discussion
======================
#### {#section-6 .unnumbered}
In what follows we analyse how much $\nabla\cdot\vec B$ deviates from zero when $B$ is allowed to vary either with the original prescription as in eq. (\[brho\]) or with LL’s proposal, as in eq. (\[bepsilon\]) with two different models, the NJL and the MIT bag model. These two models have been extensively used to describe stellar matter in the interior of quark stars. It is important to point out that the same test could be performed with hadronic models used to account for magnetised NS with hadronic constituents, as in [@Mao; @Rabhi; @Ryu; @Rabhi2; @Mallick; @Lopes1; @Dex; @Benito1; @Mallick2; @Ro], for instance.
#### {#section-7 .unnumbered}
Before we analyse the behaviour of $\nabla\cdot\vec B /B$ that depends on $\frac{\partial B}{\partial r} = \frac{\partial B}{\partial n}
\frac{\partial n}{\partial r}$, we show in Fig. \[fig0\] how the magnetic field varies with the star radius for one specific case, i.e., the MIT bag model with $B=10^{18}$ G and the prescription given in eq.(\[bepsilon\]). All other cases studied next present a very similar behaviour. It is interesting to notice that the curve changes concavity around half the stellar radius.
![ Magnetic field versus r for the MIT model, bag constant 148 MeV$^{1/4}$, $M_{max}=1.4~M_\odot$ and $R=10.04$ km.[]{data-label="fig0"}](b-r.eps)
#### {#section-8 .unnumbered}
In Fig. \[fig1\] we plot $\nabla\cdot\vec B /B$ as a function of the star radius for different latitude ($\theta$) angles and a magnetic field equal to $B_0=10^{18} G$. In both cases the equations of state were obtained with the Nambu-Jona-Lasinio model [@Menezes1; @Luiz2016]. The violation is quite strong and $\nabla\cdot\vec B /B$ can reach 70% for small angles.
![Equations of state obtained with the NJL model and $B_0=10^{18}$ G. Chakrabarty’s prescription was calculated with $M_{max}=1.44~M_\odot$ and $R=8.88$ km, $\beta=5 \times 10^{-4}$ and $\gamma=3$. LL’s prescription was calculated with $M_{max}=1.46~M_\odot$, $R=8.83$ km and $\zeta=3$.[]{data-label="fig1"}](NJL-c-LL-1e18.eps)
#### {#section-9 .unnumbered}
In Figure \[fig2\] we plot the same quantity as in Figure \[fig1\] for LL’s prescription and the much simpler and more widely used MIT bag model, for different latitude angles. Again the violation amounts to the same values as the ones obtained within the NJL model. Finally, in Figure \[fig3\], we show how large the deviation can be for different values of the magnetic field intensity, using the original prescription and a fixed angle $\theta=45$ degrees. We see that the deviation reaches approximately the same percentages, independently of the field intensity.
![Equations of state obtained with the MIT model, bag constant 148 MeV$^{1/4}$, $M_{max}=1.4~M_\odot$ and $R=10.04$ km for different latitude angles. []{data-label="fig2"}](mit1e18a.eps)
![ Quark stars described by the NJL model with different values of $B_0$ and $\theta=45$ degrees. For $B_0=10^{18}$ G: $M_{max}=1.44~M_\odot$, $R=8.88$ km and central energy density $\epsilon_c=7.67~fm^{-4}$. For $B_0=3\times10^{18}$ G: $M_{max}=1.45~M_\odot$, $R=8.88$ km and central energy density $\epsilon_c=7.65~fm^{-4}$. For $B_0=10^{19}$ G: $M_{max}=1.50~M_\odot$, $R=8.80$ km and central energy density $\epsilon_c=8.11~fm^{-4}$. []{data-label="fig3"}](NJL-c-45.eps)
#### {#section-10 .unnumbered}
Now, we turn our attention to a possible generalisation of eqs. (\[brho\]) and (\[bepsilon\]) in spherical coordinates in order to verify under which conditions the divergence becomes zero and to check whether the situation can be circumvented. The generalised magnetic field components are given by $$B_r= B_0 \cos \theta \left( f(r)\right)^\eta, \quad
B_\theta=-\frac{B_0}{\zeta} \sin \theta \left( f(r)\right)^\eta,$$ and the corresponding divergence reads: $$\nabla \cdot \vec B= B_0 \cos \theta \left[
\frac{2 f(r)^{\eta}}{r} + \eta f(r)^{\eta-1} \frac{df}{dr} -
\frac{2 f(r)^{\eta}}{r \zeta} \right],
\label{divBLL}$$ which is zero either for the trivial solution $\cos \theta=0$ or if $$\frac{2}{r} + \frac{\eta}{f(r)} \frac{df}{dr} -
\frac{2}{r \zeta}=0,$$ for which a general solution has the form $f(r)=A r^{\frac{2-2 \zeta}{\eta \zeta}}$. When $\zeta=-2$, $\eta=3$, $A=R$ and $B_0=B_p$, eqs. (\[daniele\]) are recovered. When $\zeta=1$, the numerator of the exponent becomes zero and $f(r)$ is simply a constant ($A$), not depending on $r$. Another possibility is to assume that $f$ is a function of the density, $f(n(r))$, or of the energy density, $f(\varepsilon(r))$. In these cases, $$\frac{df}{d n} \frac{d n}{dr} = \frac{2~f}{r~\eta} \left(
\frac{1-\zeta}{\zeta}\right), \quad
\frac{df}{d \varepsilon} \frac{d \varepsilon}{dr} = \frac{2~f}{r~\eta} \left(
\frac{1-\zeta}{\zeta}\right).$$
If we take $\zeta=1$, which is the value that reproduces the form of the original (LL’s) prescriptions, we must have $ \frac{df}{d n} =0$ ($\frac{df}{d \varepsilon} =0$), because $\frac{d n}{dr}$ ($\frac{d \varepsilon}{dr}$) is obtained from the TOV equations and is never zero. For $\zeta \ne 1$, solutions can be obtained from numerical integration.\
\
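It is straightforward to verify that the power-law solution quoted above indeed satisfies the radial condition: for $f(r)=A\, r^{\frac{2-2\zeta}{\eta \zeta}}$, $$\frac{\eta}{f}\frac{df}{dr} = \frac{2-2\zeta}{\zeta r}, \qquad \frac{2}{r} + \frac{2-2\zeta}{\zeta r} - \frac{2}{r \zeta} = \frac{2\zeta + 2 - 2\zeta - 2}{\zeta r} = 0.$$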
Final Remarks
=============
#### {#section-11 .unnumbered}
We have shown that both existing prescriptions for a density dependent magnetic field widely used in the study of magnetised neutron star matter equations of state strongly violate one of the Maxwell equations, revealing a pathological problem. However, the problem can be cured if the prescriptions are used only with well defined functional forms that guarantee that the Maxwell equation is not violated. We have shown the results obtained from EOS used to describe quark stars. Had we shown the same quantities for neutron stars constituted of hadronic matter only, the qualitative results would be the same. As a final word of caution, we would like to comment that the divergence problem in general relativity goes beyond the simple analysis we have performed and will deserve an attentive look in the future.
This work was partially supported by CNPq under grant 300602/2009-0. We would like to thank very fruitful discussions with Prof. Constança Providência.
[99]{}
R. C. Duncan, C. Thompson, Astrophys. J. $\textbf{392}$, L9 (1992); R. C. Duncan, C. Thompson, Mon. Not. R. Astron. Soc. $\textbf{275}$, 255 (1995); R. C. Duncan, C. Thompson, Astrophys. J. $\textbf{473}$, 322 (1996).
V. V. Usov, Nature $\textbf{357}$, 472 (1992).
L. G. Spitler et al., Nature $\textbf{531}$, 202 (2016).
D. Bandyopadhyay, S. Chakrabarty, S. Pal, Phys. Rev. Lett. $\textbf{79}$, 2176 (1997).
S. Chakrabarty, Phys. Rev. D 54, 1306 (1996).
A. Broderick, M. Prakash, and J. M. Lattimer, Astrophys. J. 537, 351 (2000).
G. J. Mao, C. J. Mao, A. Iwamoto, Z. X. Li, Chin. J. Astron. Astrophys. $\textbf{3}$, 359 (2003).
A. Rabhi et al., J. Phys. G $\textbf{36}$, 115204 (2009).
D. P. Menezes et al, Phys. Rev. C, 79, 035807 (2009); D. P. Menezes et al., Phys. Rev. C $\textbf{80}$, 065805 (2009).
C. Y. Ryu, K. S. Kim, M. Ki Cheoun, Phys. Rev. C $\textbf{82}$, 025804 (2010).
A. Rabhi, P. K. Panda, C. Providencia, Phys. Rev. C $\textbf{84}$, 035803 (2011).
R. Mallick, M. Sinha, Mon. Not. R. Astron. Soc. $\textbf{414}$, 2702 (2011).
L. L. Lopes, D. P. Menezes, Braz. J. Phys. $\textbf{42}$, 428 (2012).
V. Dexheimer, R. Negreiros, S. Schramm, Eur. Phys. J. A $\textbf{48}$, 189 (2012).
R. H. Casali, L. B. Castro, D. P. Menezes, Phys. Rev. C $\textbf{89}$, 015805 (2014).
D. P. Menezes et al., Phys. Rev. C $\textbf{89}$ 055207 (2014).
R. Mallick, S. Schramm, Phys. Rev. C $\textbf{89}$, 045805 (2014).
R. O. Gomes, V. Dexheimer, C. A. Z. Vasconcellos, Astron. Nachr. $\textbf{335}$, 666 (2014).
V. Dexheimer, D. P. Menezes M. Strickland, J. Phys. G $\textbf{41}$, 015203 (2014).
Luiz Lopes and Debora Menezes, J. Cosmology and Astroparticle Physics 08 (2015) 002.
Daniele Viganò, [*Magnetic Fields in Neutron Stars*]{}, Ph.D. Thesis, Universidad de Alicante, 2013.
P. Goldreich and W.H. Julian, ApJ 157, 869 (1969).
R. C. Tolman, Phys. Rev. $\textbf{55}$, 364 (1939); J. R. Oppenheimer, G. M. Volkoff, Phys. Rev. $\textbf{55}$, 374 (1939).
Débora Peres Menezes and Luiz Laércio Lopes, Eur. Phys. J. A (2016) 52:17.
|
---
abstract: 'Many asteroids are rubble piles with irregular shapes. While the irregular shapes of large asteroids may be attributed to collisional events, those of small asteroids may result from not only impact events but also rotationally induced failure, a long-term consequence of small torques caused by, for example, solar radiation pressure. A better understanding of shape deformation induced by such small torques will allow us to give constraints on the evolution process of an asteroid and its structure. However, no quantitative study has been reported to provide the relationship between an asteroid’s shape and its failure mode due to its fast rotation. Here, we use a finite element model (FEM) technique to analyze the failure modes and conditions of 24 asteroids with diameters less than 30 - 40 km, which were observed at high resolution by ground radar or asteroid exploration missions. Assuming that the material distribution is uniform, we investigate how these asteroids fail structurally at different spin rates. Our FEM simulations describe the detailed deformation mode of each irregularly shaped asteroid at fast spin. The failed regions depend on the original shape. Spheroidal objects structurally fail from the interior, while elongated objects experience structural failure on planes perpendicular to the minimum moment of inertia axes in the middle of their structure. Contact binary objects have structural failure across their most sensitive cross sections. We further investigate if our FEM analysis is consistent with earlier works that theoretically explored a uniformly rotating triaxial ellipsoid. The results show that global shape variations may significantly change the failure condition of an asteroid. Our work suggests that it is critical to take into account the actual shapes of asteroids to explore their failure modes in detail.'
address: |
$^1$Department of Aerospace Engineering, Auburn University, 211 Davis Hall, Auburn, AL 36849-5338, United States\
$^2$Aerospace Engineering Sciences, University of Colorado, 429 UCB, Boulder, CO 80309-0429, United States
author:
- 'Masatoshi Hirabayashi$^1$ and Daniel J. Scheeres$^2$'
title: Rotationally induced failure of irregularly shaped asteroids
---
Asteroids ,Asteroids, dynamics ,Asteroids, rotation ,Asteroids, surfaces
Introduction {#Sec:Intro}
============
Over the last few decades, spacecraft explorations and ground observations have revealed that many small asteroids are loosely packed aggregates, so-called rubble piles, and have irregular shapes. These asteroids are subject to many different kinds of external forces that could change their spin states. Some forces may be small but act continuously, generating significant effects on their rotational and orbital evolution over the lifetime. Such forces include solar radiation pressure.
Solar radiation pressure generates small but continuous forces on sunlit surfaces of planetary objects. If asteroids are small enough to be affected by solar radiation pressure-driven forces, their asymmetric bodies experience torques and change their spin states [@Rubincam2000]. Ground observations have detected rotational acceleration/deceleration of small asteroids due to solar radiation pressure [e.g. @Lowry2007; @Taylor2007; @Durech2008]. The breakup of the active asteroid P/2013 R3 has been interpreted as the result of fast rotation caused by solar radiation pressure [@Jewitt2014R3; @Jewitt2017]. Spin-state variations due to solar radiation pressure, called the YORP effect, depend on an asteroid’s orientation towards the Sun [@Nesvorny2007] and its shape [@Scheeres2007B], causing complex rotational dynamics [@Scheeres2008Rotational]. The topographic sensitivity of the YORP effect plays a significant role in the rotational dynamics of small asteroids [@Statler2009]. The YORP effect is also responsible for the formation of binaries, triples, and pairs [e.g. @Cuk2005; @Goldreich2009; @Jacobson2011].
The YORP effect is considered to have caused small asteroids to reach their spin limits. The behavior of these asteroids at the spin limits is key to understanding their evolution. Friction affects shape equilibrium [@Holsapple2001], while cohesion may help asteroids keep their original shapes at fast rotation [@Holsapple2007]. Surface deformation processes also contribute to material shedding [@Scheeres2015Land]. A hard-sphere discrete element model showed that the equatorial ridge of a top-shaped object might result from the movement of materials toward the equator due to fast rotation [@Walsh2008; @Walsh2012]. Shape deformation changes the YORP-driven torque, causing stochastic variations in the rotational state of an asteroid [@Cotto2015]. Soft-sphere discrete element methods have shown that a randomly packed sphere might have internal deformation at fast spin [e.g. @Sanchez2012; @Sanchez2016], although heterogeneity in the internal structure would control the failure modes and conditions [@Zhang2017].
Substantial progress has been made in theoretical modeling of the internal deformation processes of asteroids. A key trend is that models assumed an asteroid to be a triaxial ellipsoid. This assumption allows for deriving the internal stress analytically, making problems clear and reasonably solvable [e.g. @Love1927; @Dobrovolskis1982; @Holsapple2001]. While we have seen many pioneering works that shed light on the deformation mechanism of asteroids [e.g. @Dobrovolskis1982; @Davidsson2001; @Holsapple2001; @Holsapple2004; @Holsapple2007; @Holsapple2010; @Sharma2009], we assert that the shape evolution due to rotationally induced deformation is still an open question. The main reason is that asteroids do not have ideal shapes; in other words, they are neither spheres nor ellipsoids.
The purpose of this study is to use a finite element model (FEM) analysis [@Hirabayashi2014DA; @Hirabayashi2016Itokawa; @Scheeres2016Bennu; @Hirabayashi2016Nature] to quantify the failure modes and conditions of 24 observed asteroids of which high-resolution polyhedron shape models were generated. We choose asteroids smaller than 30 - 40 km in diameter because asteroids in this size range could be spun up/down by the YORP effect [@Vokrouhlicky2015]. The plastic FEM technique developed by the authors is based on the work done by [@Holsapple2008A]. Here, we will investigate the stress conditions of these asteroids at different spin rates and evaluate their failure modes.
We outline the contents of this paper. First, we will discuss the strength model used. Second, we will categorize the 24 asteroids into four shape types: contact binary objects, elongated objects, spheroidal objects, and non-classified objects. Although this classification is subjective, it will help us quantify the failure modes and conditions of these asteroids. Third, we will review our plastic FEM technique. Fourth, using the FEM technique, we will compute the failure mode and condition of each asteroid. Finally, we will compare the results from our FEM technique with those from earlier works that used a volume-averaging technique to explore a uniformly rotating triaxial ellipsoid [e.g. @Holsapple2004; @Holsapple2007; @Holsapple2010; @Sharma2009; @Rozitis2014; @Hirabayashi2014Biaxial]. We finally note that we distinguish between “failure mode" and “failure condition." The failure mode refers to which regions in an asteroid would structurally fail, while the failure condition describes when the asteroid experiences such a failure mode.
Shear resistance of a material
==============================
Our model assumes granular materials in asteroids to be continuum media; in other words, we consider very fine-grained materials filling interstices in the continuum limit, the idea of which is consistent with [@Sanchez2011]. This section discusses the strength model used in this study. We use the Drucker-Prager model to describe the yield condition [@Chen1988]: $$\begin{aligned}
f = \alpha I_1 + \sqrt{J_2} - s \le 0, \label{Eq:DPcriterion}\end{aligned}$$ where $I_1$ is the first invariant of the stress tensor and $J_2$ is the second invariant of the deviatoric stress tensor. $\alpha$ and $s$ are defined such that the Drucker-Prager yield surface touches the Mohr-Coulomb yield surface at the compression meridian in the principal stress space. At the compression meridian, these parameters are given as [@Chen1988] $$\begin{aligned}
\alpha = \frac{2 \sin \phi}{\sqrt{3} (3 - \sin \phi)}, \:\:\: s = \frac{6 Y \cos \phi}{\sqrt{3} (3 - \sin \phi)}, \label{Eq:alpha&s}\end{aligned}$$ where $Y$ is the cohesive strength, and $\phi$ is the friction angle. The friction angle of a geological material is widely known to range from 30$^\circ$ to 40$^\circ$ [@Lambe1969]. Thus, in this study, we fix the friction angle at the mean value of this range, i.e., 35$^\circ$, which is consistent with earlier works [e.g. @Hirabayashi2016Nature].
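As a small numerical illustration (not part of the original analysis), the sketch below evaluates $\alpha$ and $s$ from Equation (\[Eq:alpha&s\]) for the adopted friction angle of 35$^\circ$ and checks the yield function of Equation (\[Eq:DPcriterion\]) for a placeholder stress state; the stress values and the 100 Pa cohesion are illustrative only.

```python
import numpy as np

def drucker_prager_params(phi_deg, Y):
    """alpha and s at the compression meridian, Eq. (alpha&s)."""
    phi = np.radians(phi_deg)
    denom = np.sqrt(3.0) * (3.0 - np.sin(phi))
    return 2.0 * np.sin(phi) / denom, 6.0 * Y * np.cos(phi) / denom

def yield_function(sigma, alpha, s):
    """f = alpha*I1 + sqrt(J2) - s for a 3x3 stress tensor."""
    I1 = np.trace(sigma)
    dev = sigma - I1 / 3.0 * np.eye(3)
    J2 = 0.5 * np.sum(dev * dev)
    return alpha * I1 + np.sqrt(J2) - s

alpha, s = drucker_prager_params(35.0, Y=100.0)   # illustrative 100 Pa cohesion
print(alpha, s)                                    # ~0.273 and ~1.17*Y

# placeholder stress state in Pa (tension taken positive)
sigma = np.diag([50.0, -20.0, -20.0])
print(yield_function(sigma, alpha, s) <= 0.0)      # True -> below yield
```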
Studies have shown that rubble pile asteroids might have cohesive strength. Van der Waals forces would generate a significant level of cohesive strength, compared to the gravity level, in an asteroid less than a few kilometers in diameter [@Scheeres2010]. The breakup event of P/2013 R3 gave constraints on the cohesive strength of this asteroid as 50 - 250 Pa [@Hirabayashi2014R3]. The fast rotation of 1950 DA indicated that this asteroid needs a cohesive strength higher than $\sim$80 Pa, which is consistent with that of P/2013 R3 [@Rozitis2014; @Hirabayashi2014DA]. Recent observations proposed that fast-rotating asteroids would have cohesive strengths higher than 100 Pa [@Polishook2016]. Since these studies show that the cohesive strength is a key physical property that characterizes the internal structure, we consider the cohesive strength to be a free parameter to quantify the failure modes and conditions.
Subjective shape classification of asteroids
============================================
We investigate the rotationally induced failure modes and conditions of 24 asteroids with mean diameters less than 30 - 40 km, which were observed at high resolution. The shape models of these asteroids were developed either by high-resolution images from asteroid exploration missions or by ground radar observations. We show the physical properties of these asteroids in Table \[Table:AsteroidList\]. The first column gives the asteroid name. The second column describes the current spin period of each asteroid. The third column shows the bulk density, and we only describe the well-estimated values; otherwise, we put dashes and assume the bulk density to be 2.5 g/cm$^3$. The fourth column indicates the volume of the shape model. The fifth column shows the shape class.
Before explaining the details of the shape classification, we introduce the literature of [*subjective*]{} shape classification, which was mainly based on observations. Asteroids observed at high resolution have usually been categorized into one of the following five classes: contact binaries, elongated bodies, spheroidal bodies, multiple asteroid systems, and non-classified asteroids[^1] [@Taylor2012]. For contact binaries, [@Benner2015] provided a clear (but still subjective) description that they consist of two lobes, which might once have been separated, currently resting on each other. More broadly, it can be considered that these objects have narrow necks. Elongated bodies have stretched shapes along one direction without narrow necks. Spheroidal bodies are more or less rounded and thus have relatively low aspect ratios. Multiple asteroid systems are bodies having satellites. Some binary systems consist of a spheroidal primary having an equatorial ridge, known as the “top" shape, and a relatively small secondary [@Margot2015; @Walsh2015]. Fourteen percent of NEAs imaged by radar are contact-binary candidates [@Taylor2012], while 16$\%$ of them are binary asteroids [@Margot2002]. Lastly, non-classified asteroids are those not categorized into these classes.
This study denotes the asteroid shapes using four alphabetic letters, C, E, S, and N, omitting multiple system objects. Asteroids are categorized into these four shapes in a subjective way based on the above arguments; therefore, the shapes of some asteroids might not be determined uniquely. Shape C means a contact binary object. There are six contact binaries in our asteroids (see Table \[Table:AsteroidList\]). For Mithra, based on an uncertainty of its pole direction, [@Brozovic2010] reported a prograde model and a retrograde model. Here, we use the prograde model. Shape E indicates an elongated body, including four objects. Shape S is a spheroidal body. More than half of the considered asteroids are spheroidal. The top-shaped asteroids are included in this class. For the shape of 1950 DA, we use the retrograde shape model by following [@Farnocchia2014], who showed a $99 \: \%$ likelihood that this object is a retrograde rotator based on the analysis of non-gravitational perturbation. We also mention that although the shape model of 2000 ET70 was updated [@Marshall2016], we refer to the model developed by [@Naidu2013]. Among these objects, 1999 KW4, 1994 CC, 2001 SN263, and 2000 DP107 are multiple system asteroids, and we analyze the current shapes of their primaries. Shape N stands for a non-classified object. Golevka, a tooth-like shape, is the only shape categorized into this type. Note that our alphabetic classification is not related to taxonomy classification [@Bus2002; @DeMeo2009].
\[Table:AsteroidList\]
Analysis
========
Normalization and notational definitions {#Sec:Normalization}
----------------------------------------
We use normalized parameters in this paper. We apply the definitions used by [@Hirabayashi2015internal] and [@Hirabayashi2015Sphere]. The density, mean radius, and gravitational constant are defined as $\rho$, $R$ and $G$, respectively. We normalize the lengths, forces, spin rates, and stresses (and thus cohesive strength) by $R$, $\pi \rho G R$, $\sqrt{\pi \rho G}$, and $\pi \rho^2 G R^2$, respectively. In our discussion, the normalized spin rate is defined as $\omega$. We also denote the normalized cohesive strength as $Y$. We introduce the key physical parameters used in our work in Table \[Table:param\].
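To make the normalization concrete, the helper below (an illustrative sketch, not taken from the original work) converts a spin period and a dimensional cohesive strength into the dimensionless quantities used here; the input values are placeholders.

```python
import numpy as np

G = 6.674e-11  # m^3 kg^-1 s^-2

def normalized_spin(P_hr, rho):
    """omega = (2*pi/P) / sqrt(pi*rho*G), with P in hours and rho in kg/m^3."""
    return (2.0 * np.pi / (P_hr * 3600.0)) / np.sqrt(np.pi * rho * G)

def normalized_cohesion(Y_pa, rho, R):
    """Y = Yhat / (pi*rho^2*G*R^2), with Yhat in Pa and R the mean radius in m."""
    return Y_pa / (np.pi * rho**2 * G * R**2)

# placeholder values: a 6 hr rotator, bulk density 2.5 g/cm^3, mean radius 500 m
print(normalized_spin(6.0, 2500.0))               # ~0.40
print(normalized_cohesion(100.0, 2500.0, 500.0))  # ~0.31
```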
We introduce four physical parameters to discuss the failure modes of the sample asteroids. The first parameter is the minimum cohesive strength, which is denoted as $Y^\ast$. This quantity represents the minimum value of the cohesive strength that can hold the original shape of the body. The second parameter is the normalized current spin rate, which is given as $\omega_c$. The third parameter, $\omega_0$, describes the normalized critical spin rate at which stresses transition from a compression-dominant mode to a tension-dominant mode. Thus, tension starts to control the failure mode at a spin rate higher than $\omega_0$. The fourth parameter, $\eta$, quantifies how steeply $Y^\ast$ increases at high spin rates. $\omega_0$ and $\eta$ will be discussed further in Section \[Sec:FMD\].
[X l l]{} Parameters & Symbols & Units\
Gravitational constant $(=6.674 \times 10^{-11})$ & $G$ & m$^3$ kg$^{-1}$ s$^{-2}$\
Current spin period & $P$ & hr\
Volume & $V$ & km$^3$\
Bulk density of material & $\rho$ & kg m$^{-3}$\
Friction angle & $\phi$ & deg\
Mean radius & $R$ & m\
Cohesive strength & $\hat Y$ & Pa\
Poisson’s ratio & $\nu$ & \[-\]\
Normalized cohesive strength & $Y$ & \[-\]\
Normalized minimum cohesive strength that holds the original shape & $Y^\ast$ & \[-\]\
Normalized current spin rate & $\omega_c$ & \[-\]\
Slope parameter & $\eta$ & \[-\]\
Spin rate parameter & $\omega_0$ & \[-\]\
Triaxial ellipsoid’s aspect ratio of the semi-intermediate axis to the semi-major axis & $\beta$ & \[-\]\
Triaxial ellipsoid’s aspect ratio of the semi-minor axis to the semi-major axis & $\gamma$ & \[-\]\
FEM technique {#Sec:stressAnalysis}
-------------
In this section, we explain our FEM technique for analyzing the failure conditions and modes of an irregularly shaped asteroid due to fast rotation [@Hirabayashi2014DA; @Hirabayashi2015Sphere; @Hirabayashi2016Itokawa; @Hirabayashi2016Nature]. In this work, we use an FEM solver of ANSYS Mechanical APDL (18.1) licensed by Auburn University’s College of Engineering. For the development of an FEM mesh, see Section \[Sec:FEMmesh\]. We assume that because rotational change due to the YORP effect is on the order of a few million years [@Rubincam2000], the evolution of the internal stress is nearly static [@Holsapple2010]. This assumption allows for eliminating the dynamical terms in the stress equations [@Holsapple2010].
The deformation process consists of elastic and plastic deformation. The elastic model uses Hooke’s law, assuming that the material deforms uniformly in all directions. On the other hand, the plastic model in this work uses the associated flow rule to characterize the plastic behavior of materials. The associated flow rule defines a relationship between stress and plastic strain based on the yield condition, which is given as $$\begin{aligned}
d {{\mbox{\boldmath{$\epsilon$}}}}_{ij}^p = d \lambda \frac{\partial f}{\partial {{\mbox{\boldmath{$\sigma$}}}}_{ij}},\end{aligned}$$ where $d {{\mbox{\boldmath{$\epsilon$}}}}_{ij}^p$ and ${{\mbox{\boldmath{$\sigma$}}}}_{ij}$ are the plastic strain increment and the stress, respectively. $i$ and $j$ are indices that describe the components. $d \lambda$ is a non-negative scalar multiplier. $f$ is the yield condition and, in the present work, corresponds to the Drucker-Prager yield criterion, which is given in Equation (\[Eq:DPcriterion\]). Similar to the elastic model, this plastic model describes uniform deformation. In addition, our model assumes no hardening or softening effects, guaranteeing that plastic deformation occurs under constant stress [@Chen1988].
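For the Drucker-Prager surface of Equation (\[Eq:DPcriterion\]), the gradient appearing in the associated flow rule takes the explicit form $$d {{\mbox{\boldmath{$\epsilon$}}}}_{ij}^p = d \lambda \left( \alpha \delta_{ij} + \frac{s_{ij}}{2 \sqrt{J_2}} \right),$$ where $\delta_{ij}$ is the Kronecker delta and $s_{ij}$ is the deviatoric stress, so the plastic strain increment contains both a volumetric (dilatant) part controlled by $\alpha$ and a shear part aligned with the deviatoric stress.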
We consider that a sample asteroid is uniformly rotating at a given spin period. To apply this condition to our FEM simulation, we employ the following method. First, we define a loading acting at each node. Because this loading consists of the gravitational effect and the centrifugal effect, it is necessary to consider the volume of each tetrahedral mesh element. For simplicity, we split its volume equally among its edge nodes so that the mass is concentrated at the nodes. Because of this setup operation, however, the location of the center of mass and the orientation of the principal axes change. Because this offset causes translational forces and rotational torques, the simulation quality becomes significantly degraded, causing unrealistic stress concentrations [@Hirabayashi2016Itokawa]. To fix this issue, we remove the residual forces induced by this misalignment. This process prevents the rotational and translational motion of a sample asteroid in our FEM simulation.
For a description of the deformation process in space, it would be ideal not to have any constraints on degrees of freedom. However, such an FEM setting does not allow for solving this problem numerically because there are not enough equations. Thus, the number of constraints on degrees of freedom is a key element in making our FEM simulation as realistic as possible. Here, we only constrain the translational and rotational motion. To do so, we constrain six degrees of freedom. The first three degrees of freedom are added at the node located at the center of mass to remove the translational motion. The next two degrees of freedom are given at one of two nodes located along the minimum principal axis to remove the rotational motion about the maximum and intermediate principal axes. The last degree of freedom is provided at one of two nodes located along the minimum principal axis. Note that our FEM technique uses the body-fixed frame.
To describe the regions that experience plastic deformation, we use the stress ratio, a ratio of the current stress to the yield stress, which is defined by the Drucker-Prager criterion. When the stress ratio is unity, materials at this location should experience plastic deformation. On the other hand, when the stress ratio does not have a unity value, plastic deformation does not occur.
Using these techniques, we conduct FEM simulation to determine the failure conditions and modes of the asteroids. We use the following iteration process. First, we choose a relatively high cohesive strength as an input parameter and conduct FEM simulation, while the friction angle is fixed at 35 deg through all the iteration processes. Then, we check if there are plastically deformed regions in the considered asteroid. If we do not observe them, we decrease the input value of the cohesive strength and conduct another FEM simulation. We iterate this process until we observe plastic deformation in the sample asteroid.
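A schematic of this iteration is sketched below. `run_fem_and_check_failure` is a hypothetical placeholder standing in for one full plastic FEM run, and the bisection refinement is a convenience added here; the text above describes a simple stepwise decrease of the input cohesion.

```python
def minimum_cohesion(omega, run_fem_and_check_failure, Y_start=10.0, tol=1e-3):
    """Estimate Y*(omega): the smallest normalized cohesion with no failed element.

    run_fem_and_check_failure(omega, Y) is a hypothetical wrapper around one
    plastic FEM run; it returns True if any element reaches a stress ratio of
    unity. The friction angle is held fixed (35 deg) inside that wrapper.
    """
    Y_hi = Y_start
    while run_fem_and_check_failure(omega, Y_hi):   # make sure we start above Y*
        Y_hi *= 2.0
    Y_lo = 0.0                                      # assumed to fail when omega > omega_0
    while Y_hi - Y_lo > tol:                        # bisection refinement
        Y_mid = 0.5 * (Y_hi + Y_lo)
        if run_fem_and_check_failure(omega, Y_mid):
            Y_lo = Y_mid                            # still fails -> Y* lies above
        else:
            Y_hi = Y_mid                            # holds shape -> Y* lies below
    return 0.5 * (Y_hi + Y_lo)
```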
We note that the associated flow model used in this analysis may be idealized; however, because we consider the structural failure process of an asteroid due to quasi-static spin-up, our plastic model can reasonably predict the locations of the failed regions [@Hirabayashi2015internal; @Hirabayashi2015Sphere; @Hirabayashi2016Nature]. We also mention that, while our study assumes uniformly distributed material, the deformation mode may be controlled by the internal structure. In fact, the internal structure distribution has been discussed in many earlier works (e.g., [@Hirabayashi2015internal], [@Sanchez2016], [@Zhang2017], and [@Bagatin2018]). Such a detailed analysis will be a subject of our future work.
FEM mesh of an irregularly shaped body {#Sec:FEMmesh}
--------------------------------------
The quality of an FEM mesh controls the numerical convergence[^2] when an asteroid experiences plastic deformation in calculation. In this section, we describe how to create FEM meshes of the considered asteroids. We use a 10-node tetrahedron FEM mesh. Here, we introduce two mesh-generation techniques. The first technique generates a low-resolution FEM mesh (Technique A), while the second technique produces a high-resolution FEM mesh (Technique B).
We introduce Technique A, which uses TetGen, a publicly available mesh generator [@Si2015], and ANSYS. The FEM meshes of most of the sample asteroids are generated using this technique. First, we generate a 4-node FEM mesh from a polyhedron shape model. Second, we convert this mesh to a 10-node FEM mesh, finding the middle points between the nodes on the edges of each element. A key issue is that TetGen and ANSYS use different measures to check the quality of an FEM-mesh tetrahedron. While TetGen uses the radius-edge ratio, which relates the shortest edge length to the radius of the circumscribed sphere of the tetrahedron, ANSYS applies the edge angle, an angle between two edges of the tetrahedron. This difference induces some inconsistencies; therefore, we refine the quality of the derived FEM mesh using ANSYS. In this work, we define the tolerance value of the edge angle on ANSYS as 165 deg. Finally, we add a node at the center of mass of the considered asteroid.
As seen in the supplemental text (Text S1), however, we have difficulties in obtaining the stress ratio distributions of 1996 HW1 and Nereus because of the quality of their FEM meshes obtained by Technique A. A strength of TetGen is a function that automatically varies the mesh resolution over the volume, resulting in a high-quality FEM mesh with fewer mesh elements than a mesh of the same quality developed at uniform resolution [@Si2015]. However, this function leads to difficulties in generating mesh elements in the neck region of 1996 HW1 and to a low mesh quality in the central region of Nereus.
To avoid these issues, we apply Technique B to generate the FEM meshes of these asteroids. In this technique, we only use ANSYS. We input the polyhedron shape models of these asteroids to ANSYS and directly generate a 10-node FEM mesh. Then, we add one node at the center of mass manually. We confirm that simulation convergence improves for these asteroids. We also find that when we use the two meshes developed by these techniques, the solutions are comparable under the same physical conditions (Text S1). We note, however, that the size of a mesh generated by Technique B is larger than that generated by Technique A, significantly increasing the computational burden.
Failure mode diagram {#Sec:FMD}
====================
This section explains how we evaluate the failure modes and conditions of the asteroids. If an asteroid experiences structural failure at a low spin rate, $Y^\ast$ of this asteroid should be low because the centrifugal force is small. On the other hand, if this asteroid structurally fails at a high spin rate, $Y^\ast$ should be high to keep its original shape. This fact implies that the failure mode depends on the spin rate, and $Y^\ast$ is a function of $\omega$. To describe the variation in the failure mode, we use a failure mode diagram (FMD), a technique for identifying a failure mode at a given spin rate [@Hirabayashi2016Itokawa]. The present study computes the FMDs for all the 24 asteroids; the main text shows the FMDs of four selected asteroids (Itokawa, Geographos, 2008 EV5, and Golevka), and the FMDs of the other asteroids are given in the supplemental information.
We discuss how to read the FMD. Here, as an example, we use Figure \[Fig:Itokawa\]**c**, which describes the FMD of Itokawa. The black line with markers indicates $Y^\ast$ derived from our FEM analysis, while the blue dashed line is a function fitted to our results. If the actual cohesive strength is within the gray region, Itokawa can physically exist. On the other hand, if the cohesive strength is outside this region, Itokawa should structurally fail. Therefore, the white region is considered to be a region in which the cohesive strength of Itokawa violates the spin and shape conditions.
We divide the trend of $Y^\ast$ into two regimes by the black dot-dashed line (again, see Figure \[Fig:Itokawa\]**c**). To the left of the black dot-dashed line is a compression-dominant region, which does not induce significant structural failure because $Y^\ast$ is nearly zero. On the other hand, to the right of the black dot-dashed line is a tension-dominant region in which the failure mode becomes significant. Because $Y^\ast$ monotonically increases as shown in Figure \[Fig:Itokawa\]**c**, Itokawa has to have a certain amount of cohesive strength to keep its original shape at a given spin rate.
To describe $Y^\ast$, we introduce two parameters, $\eta$ and $\omega_0$. In the compression-dominant region, $Y^\ast$ is negligible because tension-driven failure is not a critical factor. In the tension-dominant region, we assume $Y^\ast$ to be a function of a normalized spin rate, $\omega$, which is given as $$\begin{aligned}
Y^\ast = \eta (\omega^2 - \omega_0^2). \label{Eq:Yast}\end{aligned}$$ In this equation, $\omega_0$ is the normalized spin rate at which the peak centrifugal stress first balances the local gravity, and it depends on the shape. On the other hand, $\eta$ is the parameter that describes how quickly the peak centrifugal stress increases with spin rate. If the cross section perpendicular to the minimum principal axis is small, $\eta$ becomes larger. For Itokawa, because of its contact binary feature, we expect that this asteroid should have a higher $\eta$ value than other asteroids. We determine $\omega_0$ and $\eta$ by fitting Equation (\[Eq:Yast\]) to our FEM calculations.
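Since Equation (\[Eq:Yast\]) is linear in $\omega^2$, with slope $\eta$ and intercept $-\eta\omega_0^2$, the fit reduces to a linear regression of $Y^\ast$ against $\omega^2$. A minimal sketch with hypothetical sample points (not the paper's FEM output) is given below.

```python
import numpy as np

def fit_fmd(omega, Ystar):
    """Least-squares fit of Y* = eta*(omega^2 - omega0^2) in the tension regime."""
    A = np.vstack([omega**2, np.ones_like(omega)]).T
    eta, intercept = np.linalg.lstsq(A, Ystar, rcond=None)[0]
    return eta, np.sqrt(-intercept / eta)

# hypothetical sample points generated from eta = 1.4, omega0 = 0.5 plus scatter
rng = np.random.default_rng(1)
w = np.linspace(0.6, 1.5, 8)
Y = 1.4 * (w**2 - 0.5**2) + 0.02 * rng.standard_normal(w.size)
print(fit_fmd(w, Y))   # recovers approximately (1.4, 0.5)
```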
Results
=======
This section discusses how asteroids structurally fail at different spin rates. As discussed in Section \[Sec:FMD\], we will show the failure modes of four selected asteroids representing different shapes. For the C shape, we introduce the FMD of Itokawa. For the E shape, we describe the FMD of Geographos. For the S shape, we indicate the FMD of 2008 EV5. For the N shape, we give the FMD of Golevka. The FMDs of other shapes are described by $\omega_0$ and $\eta$ in Table \[Table:AsteriodAnalysis\] and are displayed in the supplemental information.
In the following, to describe the structurally failed regions clearly, we will compute the bimodal stress ratio distribution, i.e., failed regions described in yellow (a unity stress ratio) and non-failed regions given in green (a non-unity stress ratio). To create the bimodal stress ratio distribution, our visualization method colors each triangular tile, or face, uniformly. This method uses the node solutions from our FEM simulation and sorts them out. If there is at least one failed node on a triangular tile, this method considers the tile to have failed structurally. We note that this process causes the plastic failure region to look wider, depending on the size of a tetrahedron element. However, because our focus is on determining the failure condition and mode, this visualization does not affect our results.
Contact binary objects
----------------------
The failure mode of the C-shape asteroids is characterized as the internal failure of their necks. Figure \[Fig:Itokawa\] describes the failure mode of Itokawa [@Hirabayashi2016Itokawa]. Figures \[Fig:Itokawa\]**a** and \[Fig:Itokawa\]**b** show the distribution of the stress ratio on the surface and across the cross section at a normalized spin rate of 1.43. The cohesive strength is $Y^\ast = 2.3$[^3]. Figure \[Fig:Itokawa\]**c** shows the FMD of Itokawa. We show the spin rate condition used for the simulation above (the red dot-dashed line) and the normalized current spin rate, which is given as $\omega_c$ in Table \[Table:AsteriodAnalysis\] (the cyan dot-dashed line). The bottom-right equation indicates Equation (\[Eq:Yast\]) for Itokawa, providing $\eta$ and $\omega_0$ as 1.41 and 0.50, respectively (Table \[Table:AsteriodAnalysis\]). $\omega_0$ of this body is 0.5, which is given by the black dot-dashed line. As discussed above, at a spin rate less than $\omega_0$, $Y^\ast$ is nearly zero. On the other hand, if the spin rate is higher than $\omega_0$, $Y^\ast$ is described in Equation (\[Eq:Yast\]).
We also calculate the failure modes of other C-shape asteroids; see $\omega_0$ and $\eta$ in Table \[Table:AsteriodAnalysis\] and the failure modes in Figures S.4 through S.8 in the supplemental information. Again, this shape includes Eros, Toutatis, Mithra, Castalia, and 1996 HW1, as well as Itokawa. $\eta$ of this shape ranges from 0.90 to 8.10, giving a mean value of 2.42. 1996 HW1 has a remarkably high $\eta$ value of 8.10. On the other hand, $\omega_0$ ranges between 0.50 and 0.73, and thus the mean value is 0.60. A common feature of the failure mode in this shape class is that the YORP-driven spin-up eventually induces structural failure at their narrow cross sections. When an asteroid in this shape spins up along the maximum principal axis, the centrifugal force increases along the minimum principal axis. Because the cross section becomes smaller on the narrow neck, tensile stress acting on that section grows rapidly. Thus, the C-shape asteroids become sensitive to structural failure at a low spin rate. This mode leads to a breakup into two components.
![Failure mode and FMD of Itokawa. The failure mode is obtained by solving a case with a cohesive strength equal to $Y^\ast$ at a given spin rate. Figures \[Fig:Itokawa\]**a** and \[Fig:Itokawa\]**b** give the bimodal stress ratio distribution on the surface and across the cross-section. The yellow regions are failed regions, while the green regions are non-failed regions. The spin axis corresponds to the vertical direction. Figures \[Fig:Itokawa\]**c** indicates the FMD. The black solid line with markers shows the distribution of $Y^\ast$ computed by the plastic FEM analysis. The blue dashed line is a fitting function described by Equation (\[Eq:Yast\]), and the displayed equation is this fitting function. The gray area represents the region in which Itokawa can keep its current shape. The black dot-dashed line describes $\omega_0 = 0.5$. The cyan dot-dashed line gives the normalized current spin rate, $\omega_c$, while the red dot-dashed line indicates that in Figures \[Fig:Itokawa\]**a** and \[Fig:Itokawa\]**b**.[]{data-label="Fig:Itokawa"}](Figure1.eps){width="\textwidth"}
Elongated bodies
----------------
The E shape includes Geographos, Bacchus, Nereus, and 1992 SK. Figure \[Fig:Geographos\] shows the stress ratio distribution and the FMD of Geographos. Earlier works have shown that the shape of this object might result from a tidal effect during a planetary flyby [@Bottke1999; @Walsh2014]. Geographos is highly elongated but does not have a clear neck structure. The failure mode of this asteroid may be similar to that of the C shape asteroids. However, because there is no distinctive narrow neck, the failed region is located at the middle of the body (Figures \[Fig:Geographos\]**a** and \[Fig:Geographos\]**b**). This failure mode is attributed to the stress condition on a plane perpendicular to the minimum principal axis around the center of mass of the body.
We also discuss the FMD of Geographos. The fitting function of $Y^\ast$ at a spin rate higher than $\omega_0 = 0.67$ is given as $1.15 (-0.67^2 + \omega^2)$, leading to $\eta = 1.15$ and $\omega_0 = 0.67$ (Table \[Table:AsteriodAnalysis\]). It is found that $\omega_0$ of this object is higher than that of Itokawa, while $\eta$ of this object is smaller than that of Itokawa. This contrast results from whether or not there is a neck. Because Geographos has no clear neck feature, it does not experience the stress concentration that Itokawa has in its neck region. Thus, this asteroid can hold its structure at a higher spin rate than Itokawa, leading to a higher value of $\omega_0$. Also, even when Geographos experiences tension, it does not need a high cohesive strength at a high spin rate, which results in a lower value of $\eta$.
We show other E-shape asteroids (Table \[Table:AsteriodAnalysis\] and Figures S.9 through S.11 in the supplemental information). These objects have $\eta$ ranging between 0.33 and 1.15 (Table \[Table:AsteriodAnalysis\]). The mean value of $\eta$ of the E-shape asteroids is 0.66, which is lower than that of the C shape (2.42). $\omega_0$ is between 0.67 and 0.87, and its mean value is 0.74, which is higher than that of the C shape (0.60). The failure modes of these asteroids are similar to the failure mode of Geographos; commonly, they experience structural failure around the middle of their structure, which induces a breakup at a higher spin rate than the C shape asteroids do.
We finally point out that 1992 SK may have a different failure mode (Figure S.11). At a spin rate higher than $\omega_0 = 0.87$, the failure region spreads from the middle of the body to the thicker end, implying that the shape would structurally stretch in the left direction. Therefore, instead of breaking up, this asteroid would eventually have material shedding in that way. We suspect that 1992 SK is an example of an asteroid that could have a non-breakup failure mode due to its topological deviation from an elongated shape.
![Failure mode and FMD of Geographos. The format of this plot is similar to Figure \[Fig:Itokawa\]. Figures **a** and **b** indicate the stress ratio distribution when $Y^\ast = 1.04$ at $\omega = 1.21$. Figure **c** shows the FMD of this asteroid. The black dot-dashed line gives $\omega_0 = 0.67$. The cyan dot-dashed line describes $\omega_c = 0.45$, while the red dot-dashed line indicates $\omega = 1.21$.[]{data-label="Fig:Geographos"}](Figure2.eps){width="\textwidth"}
Spheroidal objects
------------------
In this section, we discuss the failure modes of the S-shape asteroids. An earlier work showed structural failure at the center of the body at a high spin rate [@Hirabayashi2015Sphere]. This deformation mode consists of vertically inward deformation at high latitudes and horizontally outward deformation at low latitudes [@Hirabayashi2014DA; @Scheeres2016Bennu]. In the case when materials were uniformly distributed, this failure mode was also observed by two independent Soft-Sphere Discrete Element Models [@Hirabayashi2015internal; @Zhang2017].
Figure \[Fig:EV5\] shows the failure mode and FDM of 2008 EV5. We note that this object has a concave feature in the equatorial region, which might result from an impact cratering event [@Busch2011] or detachment of a small chunk [@Tardivel2018]. However, because this feature is small compared to the entire volume, we consider this object to be in the S shape. Figures \[Fig:EV5\]**a** and **b** describe the stress ratio distribution when $Y^\ast = 0.13$ at $\omega = 1.10$. While we observe failed regions around the concave feature, the majority of the subsurface region is intact, and the central region experiences structural failure. This feature is consistent with the results from earlier theoretical and numerical works [@Hirabayashi2014DA; @Hirabayashi2015Sphere; @Hirabayashi2015internal; @Scheeres2016Bennu; @Zhang2017].
Figure \[Fig:EV5\] indicates the FMD of 2008 EV5. The fitting function of $Y^\ast$ is described as $0.26 (-0.85^2 + \omega^2)$; therefore, $\omega_0$ and $\eta$ are obtained as 0.85 and 0.26, respectively. We find that $\omega_0$ of this object is higher than that of the C and E-shape asteroids, while $\eta$ of this object is smaller than that of these two shapes. Because the shape is spheroidal, the centrifugal force along the minimum principal axis is limited, compared to other two shape classes. This fact allows $\omega_0$ to become higher. Also, the cross section is wide, so even if the spin rate is high, an asteroid in this type does not need a high cohesive strength to keep the original shape, leading to low $Y^\ast$.
We also show the failure modes of other S-shape asteroids (Table \[Table:AsteriodAnalysis\] and Figures S.12. through S.23 in the supplemental information). The derived failure mode of these asteroids is consistent with the mode of 2008 EV5; they have low $\eta$ and high $\omega_0$, compared to other shape classes.
![Failure mode and FMD of 2008 EV5. The format of this plot is similar to Figure \[Fig:Itokawa\]. Figures \[Fig:EV5\]:**a** and **b** display the stress ratio distribution in the case of $Y^\ast = 0.13$ at $\omega = 1.10$. Figure \[Fig:EV5\]**c** provides the FMD. The black dot-dashed line shows $\omega_0 = 0.85$. The cyan dot-dashed line describes $\omega_c = 0.59$, while the red dot-dashed line shows the case of Figure \[Fig:EV5\]**a** and **b**.[]{data-label="Fig:EV5"}](Figure3.eps){width="\textwidth"}
Non-classified objects
----------------------
This section discusses the failure modes of the N-shape asteroid. In our study, only Golevka is categorized into this shape class. This body looks like a tooth and is currently rotating with a spin period of 6.029 hr. Given an assumed bulk density of 2.5 g/cm$^3$, its normalized current spin rate, $\omega_c$, is 0.40 (Table \[Table:AsteriodAnalysis\]). Figure \[Fig:Golevka\] provides the stress ratio distribution and the FMD of this object. Figures \[Fig:Golevka\]**a** and \[Fig:Golevka\]**b** show that the interior fails at a normalized spin rate of 1.21 and $Y^\ast = 0.31$. This failure mode is similar to other shape classes, which had structural failure of the internal structure at a high spin rate. However, because of its tooth-like shape, structural failure appears at the edges of this body, implying that the shape plays an important role in the failure mode. Figure \[Fig:Golevka\]**c** displays the FMD of Golevka. $Y^\ast$ is given as $0.35 (-0.68^2 + \omega^2)$; therefore, $\omega_0$ and $\eta$ are 0.68 and 0.35, respectively.
![Failure mode and FMD of Golevka. Figure \[Fig:Golevka\]**a** and **b** indicate the stress ratio distribution in the case of $Y^\ast = 0.31$ at $\omega = 1.21$. **c** gives the FMD. The black dot-dashed line shows $\omega_0 = 0.68$. The cyan dot-dashed line is the normalized current spin rate, $\omega_c = 0.40$, while the red dot-dashed line is the case of **a** and **b**.[]{data-label="Fig:Golevka"}](Figure4.eps){width="\textwidth"}
Asteroid System $\eta$ $\omega_0$ $\omega_c$ $P(\omega_0)$ \[hr\] $Y^\ast = 1$ \[Pa\]
---------------------------------- -------- ------------ ------------ ---------------------- --------------------- --
[*Contact binary objects*]{}
\(433) Eros 1.57 0.58 0.44 4.03 $1.06 \times 10^2$
\(4179) Toutatis 1.11 0.55 0.026 4.39 $1.96 \times 10^3$
\(4486) Mithra 1.40 0.73 0.036 3.28 $9.37 \times 10^2$
\(4769) Castalia 0.90 0.73 0.59 3.33 $3.85 \times 10^2$
\(8567) 1996 HW1 8.10 0.52 0.28 5.16 $8.58 \times 10^2$
\(25143) Itokawa 1.41 0.50 0.24 5.73 $1.73 \times 10^1$
[*Elongated objects*]{}
\(1620) Geographos 1.15 0.67 0.45 3.60 $2.16 \times 10^3$
\(2063) Bacchus 0.73 0.67 0.16 3.57 $1.30 \times 10^2$
\(4660) Nereus 0.43 0.75 0.16 3.21 $3.64 \times 10^1$
\(10115) 1992 SK 0.33 0.87 0.33 2.97 $3.31 \times 10^2$
[*Spheroidal objects*]{}
\(1580) Betulia 0.37 0.83 0.39 2.89 $9.52 \times 10^3$
\(2100) Ra-Shalom 0.33 0.91 0.12 2.65 $1.70 \times 10^3$
\(29075) 1950 DA 0.35 0.78 1.38 3.08 $5.52 \times 10^2$
\(33342) 1998 WT24 0.32 0.95 0.65 2.54 $5.65 \times 10^1$
\(52760) 1998 ML14 0.27 0.76 0.16 3.16 $3.22 \times 10^2$
\(66391) 1999 KW4 0.30 0.99 0.98 2.74 $3.50 \times 10^2$
\(101955) Bennu 0.28 0.92 0.79 3.70 $2.01 \times 10^1$
\(136617) 1994 CC 0.25 0.90 0.99 2.94 $8.73 \times 10^1$
\(153591) 2001 SN263 0.24 0.87 1.06 4.20 $4.21 \times 10^2$
\(162421) 2000 ET70 0.25 0.72 0.27 3.73 $1.07 \times 10^3$
\(185851) 2000 DP107 0.28 0.95 1.17 3.54 $7.45 \times 10^1$
2002 CE26 0.30 1.00 1.22 4.00 $5.08 \times 10^2$
2008 EV5 0.26 0.85 0.59 2.60 $7.75 \times 10^1$
[*Irregularly shaped objects*]{}
\(6489) Golevka 0.35 0.68 0.40 3.52 $9.20 \times 10^1$
: Computationally derived $\eta$, $\omega_0$, and $\omega_c$ for all the asteroids. The fifth column, $P(\omega_0)$, describes the dimensional spin period \[hr\] at $\omega_0$. The sixth column indicates the dimensional cohesive strength \[Pa\] at $Y^\ast = 1$. These quantities are obtained by taking into account the size and bulk density of each asteroid given in Table \[Table:AsteroidList\].
\[Table:AsteriodAnalysis\]
Correlation of the shape and failure conditions
-----------------------------------------------
We discuss how the shape of an asteroid influences the failure condition, using $\omega_0$ and $\eta$. As discussed in Section \[Sec:FMD\], these parameters are dependent on the shape. We compare these quantities from our results with those from a well-accepted averaging theory for a uniformly rotating triaxial ellipsoid [e.g. @Holsapple2001; @Holsapple2004; @Holsapple2007; @Holsapple2010; @Sharma2009; @Rozitis2014; @Hirabayashi2014Biaxial]. If the irregularity of the shape does not control the failure conditions, these techniques should give consistent results. Here, to describe the elongation, we define $\beta$ and $\gamma$. $\beta$ is the ratio of the semi-intermediate axis to the semi-major axis, while $\gamma$ is that of the semi-minor axis to the semi-major axis.
Figures \[Fig:eta\] and \[Fig:Omega0\] describe the values of $\eta$ and $\omega_0$, respectively, as a function of $\gamma$. The circles describe the results from our FEM study (see Table \[Table:AsteriodAnalysis\]). On the other hand, the lines show the triaxial ellipsoid model with different $\beta$s and $\gamma$s. If $\beta = 1$, the asteroid is oblate, giving the upper bound of $\omega_0$ and the lower bound of $\eta$. If $\beta = \gamma$, it is a biaxial ellipsoid, providing the lower bound of $\omega_0$ and the upper bound of $\eta$. If this theoretical model can describe the failure conditions and modes, all the FEM results should be inside the regions sandwiched by these boundaries, which are given in gray.
We briefly explain how to derive $\omega_0$ and $\eta$ for the triaxial ellipsoid model. First, we compute the stress components averaged over the entire volume. Because of the symmetry of the ellipsoid, this averaging technique makes the shear stress components zero. Second, substituting the stress components into Equation (\[Eq:DPcriterion\]), we obtain the FMD for this ideal shape. Third, we numerically determine $\omega_0$ and $\eta$. Because of the complexity of Equation (\[Eq:DPcriterion\]), it is difficult to determine these quantities analytically. Thus, we develop a numerical algorithm that approximately finds them. In this algorithm, we choose two normalized spin rates in the tension region and use Equation (\[Eq:DPcriterion\]) to compute the normalized cohesive strengths at these spin rates. Then, we compute $\omega_0$ and $\eta$ such that these two points lie on the parabolic curve given in Equation (\[Eq:Yast\]).
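For concreteness, the two-point construction described above can be written in closed form under the simplifying assumption (made here only for illustration, and not taken from Equation (\[Eq:Yast\]) itself) that the curve has the quadratic form $Y^\ast=\eta\,(\omega^2-\omega^2_0)$. Given two normalized spin rates $\omega_1$ and $\omega_2$ in the tension region, with cohesive strengths $Y^\ast_1$ and $Y^\ast_2$ obtained from Equation (\[Eq:DPcriterion\]), the two parameters follow as $$\eta=\frac{Y^\ast_1-Y^\ast_2}{\omega^2_1-\omega^2_2},\qquad\omega^2_0=\omega^2_1-\frac{Y^\ast_1}{\eta}.$$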
The results show that our FEM analysis and the theoretical model give different failure conditions. The variations in $\eta$ indicate that the failure condition depends on the shape (Figure \[Fig:eta\]). The S-shape objects lie almost on the theoretical prediction curve, while the elongated objects are slightly off the line. The C-shape objects deviate strongly from that curve, indicating that the neck region of each asteroid has to hold strong tension to keep the original shape unchanged. In fact, 1996 HW1 has the highest $\eta$ among the considered asteroids, i.e., $> 8$ (see Table \[Table:AsteriodAnalysis\]), and has an extremely narrow neck (see Figure S8). Golevka, the N-shape asteroid, follows a trend similar to that of the spheroidal objects.
We also observe that the $\omega_0$ distribution of our results differs from that of the triaxial ellipsoid model (Figure \[Fig:Omega0\]). While all the shape types deviate from the triaxial ellipsoid prediction, we emphasize that the S-shape bodies deviate strongly from it. We attribute this trend to the fact that global irregularity in shape changes the loading condition, causing these asteroids to have different internal failure modes and conditions.
![Distribution of $\eta$ as a function of the elongation. The yellow circles describe the S-shape objects, the blue circles indicate the E-shape objects, the red circles show the C-shape objects, and the black circle draws the N-shape object. The blue line indicates the theoretical prediction.[]{data-label="Fig:eta"}](Figure5.eps){width="5in"}
![Distribution of $\omega_0$ as a function of the elongation. The format follows Figure \[Fig:eta\]. []{data-label="Fig:Omega0"}](Figure6.eps){width="5in"}
Discussion
==========
Our results provide several key insights into the failure modes of asteroids due to quasi-static spin-up. Importantly, we find that although the failure modes of irregularly shaped bodies differ from object to object, each shape type has its own distinctive features. In this section, we discuss what our results imply for the shape evolution of asteroids.
Influence of shapes on failure modes
-------------------------------------
Our results indicate that the “global” shape is a strong contributor to the failure modes and conditions. We divided the sample asteroids into four subjective shape categories, and each category showed different failure features. For the S-shape objects, structural failure starts from the internal part and spreads over the entire structure. Most of the E-shape objects have a failure mode in which the failed region spreads over the middle of the body while the edge regions usually remain intact. The failure mode of the C-shape objects is similar to that of the E-shape objects; however, the failed regions appear at their narrow necks. Because the cross-sectional area of the neck is smaller than that of other regions, the stress is usually concentrated on the neck, making an asteroid of this shape type more sensitive to structural failure than other asteroids. The failure mode of the N-shape object differs from those of the other categories.
We also compared $\eta$ and $\omega_0$ from our FEM technique with those from the volume-averaging technique, concluding that the results from these two techniques are not consistent. There are two reasons for this inconsistency. First, a uniformly rotating ellipsoid is too idealized a model for the shape of an asteroid. Global topographic variations change the self-gravity forces and the centrifugal forces, and thus the stress field, inducing different failure modes and conditions. Second, the triaxial ellipsoid technique might overestimate $\omega_0$. Again, $\omega_0$ is the spin rate at which the stress becomes tensile and causes the shape to fail without cohesion. This technique averages the stress distribution over the volume, simplifying the problem; however, by doing so, we lose the fact that some regions may experience compression while other regions are in tension. This simplification might produce the strong deviation of the triaxial ellipsoid model from our results. We conclude that consideration of the actual shape is critical for determining the failure condition of an irregularly shaped asteroid.
Interpretation into the shape evolution of asteroids
----------------------------------------------------
The failure modes of the C-shape asteroids and the E-shape asteroids give insight into a formation scenario in which such bodies settle into the C shape. For the C-shape asteroids, the narrow neck is the region most sensitive to failure, leading to fission. For the E-shape asteroids except for 1992 SK, the middle region experiences structural failure first. Similar to the C-shape asteroids, they are likely to split into two components. Once the body breaks up, the split components could eventually contact one another due to their mutual dynamics. Whether or not this reconfiguration process occurs depends on the mass ratios of the split components [@Scheeres2007A; @Pravec2010; @Jacobson2011; @Scheeres2018]. If the mass ratio of the smaller component to the larger component is higher than 0.2, the mutual orbit is unstable, leading to a soft contact of these components and a new contact binary configuration. [@Hirabayashi2016Nature] confirmed that the bilobate nucleus of 67P satisfied this condition.
To test this hypothesis, we calculate the predicted mass ratios after fission of the C-shape asteroids and the E-shape asteroids except for 1992 SK (Table \[Table:Massratio\]). To derive the mass ratios, we first find planes across the simulated failure regions of these asteroids. Then, we numerically cut the shape models along these planes and compute the mass ratios of the two resulting pieces. For the C-shape asteroids, the cutting planes are aligned with the narrow necks. For the E-shape asteroids, on the other hand, the cutting planes are located almost in the middle of their structure.
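This plane-cutting bookkeeping can be illustrated with a short script. The sketch below is not the procedure actually used to produce Table \[Table:Massratio\]; it assumes a closed, homogeneous shape model that is roughly star-shaped about the origin (taken at the center of mass), and the function name and arguments (`mass_ratio`, `vertices`, `faces`, `plane_point`, `plane_normal`) are hypothetical.

```python
import numpy as np

def mass_ratio(vertices, faces, plane_point, plane_normal, n_samples=200_000, seed=0):
    """Monte-Carlo estimate of the smaller-to-larger mass ratio of the two pieces
    obtained by cutting a homogeneous polyhedron with a plane.  The mesh is assumed
    closed and star-shaped about the origin (e.g. its center of mass), so that the
    tetrahedra (origin, triangular face) tile the interior without overlap."""
    rng = np.random.default_rng(seed)
    v = np.asarray(vertices, dtype=float)
    f = np.asarray(faces, dtype=int)
    p0 = np.asarray(plane_point, dtype=float)
    n_hat = np.asarray(plane_normal, dtype=float)
    a, b, c = v[f[:, 0]], v[f[:, 1]], v[f[:, 2]]
    vol = np.abs(np.einsum('ij,ij->i', a, np.cross(b, c))) / 6.0   # tetrahedron volumes
    idx = rng.choice(len(vol), size=n_samples, p=vol / vol.sum())  # volume-weighted sampling
    # uniform barycentric coordinates inside a tetrahedron (sorted-spacings method);
    # the leftover weight 1 - s[:, 2] multiplies the origin vertex and drops out
    s = np.sort(rng.random((n_samples, 3)), axis=1)
    w1, w2, w3 = s[:, 0], s[:, 1] - s[:, 0], s[:, 2] - s[:, 1]
    pts = w1[:, None] * a[idx] + w2[:, None] * b[idx] + w3[:, None] * c[idx]
    frac = ((pts - p0) @ n_hat > 0.0).mean()                       # mass fraction on one side
    return min(frac, 1.0 - frac) / max(frac, 1.0 - frac)
```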
We find that all the asteroids have mass ratios higher than 0.2, indicating that after fission, the split components of these asteroids have unstable mutual orbits and eventually contact each other at a low collision velocity. Thus, a possible formation process is that after asteroids become elongated due to processes such as YORP-driven spin-up [@Holsapple2010] or the tidal effect [@Bottke1999; @Walsh2014], they eventually split into multiple components and re-accrete through soft contact, ending up in contact binary configurations. We note that another possible scenario is that tidal disruption could directly turn an E-shape object into multiple C-shape objects. Finally, objects that deviate topographically from an elongated shape, like 1992 SK, may follow a different deformation process.
-------------------------------------------- ------------
Asteroid System                               Mass ratio
-------------------------------------------- ------------
[*Contact binary bodies*]{}
(433) Eros                                    0.95
(4179) Toutatis                               0.21
(4486) Mithra                                 0.70
(4769) Castalia                               0.67
(8567) 1996 HW1                               0.51
(25143) Itokawa                               0.24
[*Elongated bodies except for 1992 SK*]{}
(1620) Geographos                             0.66
(2063) Bacchus                                0.63
(4660) Nereus                                 0.77
-------------------------------------------- ------------

: Predicted mass ratios of the smaller to the larger component after fission, obtained by cutting each shape model through its simulated failure region.

\[Table:Massratio\]
Effects of heterogeneous structure on failure modes
---------------------------------------------------
An understanding of the failure modes of irregularly shaped asteroids will provide strong constraints on the internal structure. Our analysis showed that an S-shape asteroid could structurally fail from its central point when it rotates rapidly. This failure mode consists of vertical deformation along the spin axis and outward deformation on the equatorial plane. Thus, the shape becomes more oblate than before as the interior pushes the equatorial surface outward [@Hirabayashi2014DA; @Hirabayashi2015internal; @Scheeres2016Bennu].
On the contrary, earlier works argued that the oblateness of an S-shape body could result from landslides on its surface [@Walsh2008; @Minton2008; @Harris2009; @Walsh2012; @Scheeres2015Land]. [@Hirabayashi2015internal] argued that both mechanisms are possible, depending on the internal structure. If the structure were uniform, the central region would fail first. If the body were covered by loosely packed materials, landslides would be a likely contributor to the oblateness.
On the other hand, [@Tardivel2018] proposed that the cohesive strength of the body causes a small, aggregated mass to depart from the equatorial ridge of the body at a rapid rotation state, suggesting that this process could induce the observed concave regions on 2008 EV5 [@Busch2011] and 2000 DP107 [@Naidu2015]. The ongoing asteroid exploration missions, Hayabusa 2 [@Tsuda2013] and OSIRIS-REx [@Lauretta2012], will be able to shed light on the internal structure based on the surface morphology. We finally note that [@Statler2015] hypothesized that the top-shaped asteroids could be less affected by the YORP effect, challenging studies about the spin-up mechanism of S-shape asteroids due to the YORP effect.
Possible numerical artifacts in our FEM technique
-------------------------------------------------
In this section, we discuss possible artifacts in our numerical FEM technique. We first consider the possibility that, even though the numerical convergence criterion is satisfied in each iteration, the numerically obtained solution may not equal the true solution. Investigating this issue directly would require a large number of simulation cases and thus a prohibitive amount of time. To address it, however, we refer to [@Hirabayashi2015Sphere], who analyzed the failure condition and mode of a spherical object rotating at a constant spin rate and compared the theoretical solution with the FEM solution. It was found that while small variations in the stress distribution were observed in local areas, the two solutions were consistent. Therefore, while this possibility cannot be ruled out for all the cases, because our present FEM framework is based on [@Hirabayashi2015Sphere], we consider our technique to provide reasonable solutions.
Another point to be addressed is whether the resolution of the polyhedron shape model affects the FEM solutions. This issue is partially answered by [@Hirabayashi2015Sphere], in which the theoretical model was consistent with the ANSYS calculation even though the geometry of the FEM mesh was not exactly identical to that of the theoretical sphere because of the finite mesh resolution. Local topographic features might also affect the FEM solution: the higher the resolution of the polyhedron shape model, the more the shape model accounts for small topographic variations such as the detailed shapes of boulders. However, because the failure conditions on a global scale are controlled by the global distributions of the gravity force and the centrifugal force, such small topographic variations contribute little to global structural failure.
Conclusion
==========
In this study, we discussed how the shape of an irregularly shaped body would evolve due to the YORP effect, using a finite element model technique. Assuming that materials in these objects are homogeneously distributed, we investigated the YORP-driven failure conditions and modes of 24 asteroids ($<$ 40 km in diameter) observed at high resolution. Our results showed that the irregular shape of an asteroid is a critical factor controlling the failure mode and condition, revealing the limited capability of the well-accepted averaging technique. We used a subjective shape classification that divided asteroids into four shape classes: spheroidal objects, elongated objects, contact binary objects, and non-classified objects. We found distinctive trends of the failure mode for each shape type, shedding light on the shape formation processes of asteroids. Structural failure of the spheroidal objects always started from the interior, while the elongated objects failed in the middle of their structure. The contact binary objects, on the other hand, experienced structural failure around their neck region. The failure conditions were also strongly controlled by these shape features. Further investigations will give constraints on the formation processes of asteroids.
Acknowledgements
================
M.H. acknowledges support from the faculty startup fund from Department of Aerospace Engineering at Auburn University. The software package ANSYS 18.1, licensed by Samuel Ginn College of Engineering at Auburn University, was used for the stress analyses presented in this paper. M. H. thanks Dr. Shantanu Naidu (JPL), Dr. Marina Brozovi[ć]{} (JPL), Dr. Jean-Luc Margot (UCLA), Dr. Tracy Becker (SwRI), Dr. Ellen Howell (U. of Arizona), Dr. Patrick Taylor (Arecibo Observatory), and Dr. Michael Busch (SETI) for shape models and useful discussions.
Bagatin, A. C., Alema[ñ]{}, R. A., Benavidez, P. G., Richardson, D. C., 2018. Internal structure of asteroid gravitational aggregates. Icarus 302 (1), 343–359.
Becker, T. M., Howell, E. S., Nolan, M. C., Magri, C., Pravec, P., Taylor, P. A., Oey, J., Higgins, D., Vil[á]{}gi, J., Korno[š]{}, L., et al., 2015. [Physical modeling of triple near-Earth asteroid (153591) 2001 SN 263 from radar and optical light curve observations]{}. Icarus 248 (1), 499–515.
Benner, L. A., Busch, M. W., Giorgini, J. D., Taylor, P. A., Margot, J.-L., 2015. Radar observations of near-earth and main-belt asteroids. Asteroids IV, 165–182.
Benner, L. A. M., Hudson, R. S., Ostro, S. J., Rosema, K. D., Giorgini, J. D., Yeomans, D. K., Jurgens, R. F., Mitchell, D. L., Winkler, R., Rose, R., Slade, M. A., Thomas, M. L., Pravec, P., 1999. [Radar observations and asteroid 2063 Bacchus]{}. Icarus 139 (2), 309–327.
Bottke, W. F., Richardson, D. C., Michel, P., Love, S. G., 1999. [1620 Geographos and 433 Eros: Shaped by planetary tides?]{} The Astronomical Journal 117 (4), 1921.
Brozovic, M., Benner, L. A., Magri, C., Ostro, S. J., Scheeres, D. J., Giorgini, J. D., Nolan, M. C., Margot, J.-L., Jurgens, R. F., Rose, R., 2010. [Radar observations and a physical model of contact binary Asteroid 4486 Mithra]{}. Icarus 208 (1), 207–220.
Brozovic, M., Benner, L. A. M., Taylor, P. A., Nolan, M. C., Howell, E. S., Magri, C., Scheeres, D. J., Giorgini, J. D., Pollock, J. T., Pravec, P., Galad, A., Fang, J., Margot, J.-L., Busch, M. W., Shepard, M. K., Reichart, D. E., Ivarsen, K. M., Haislip, J. B., LaCluyze, A. P., Jao, J., Slade, M. A., Lawrence, K. J., Hicks, M. D., 2011. [Radar and optical observations and physical modeling of triple near-Earth asteroid (136617) 1994 CC]{}. Icarus 216 (1), 241–256.
Brozovic, M., Ostro, S. J., Benner, L. A. M., Giorgini, J. D., Jurgens, R. F., Rose, R., Nolan, M. C., Hine, A. A., Magri, C., Scheeres, D. J., Margot, J.-L., 2009. [Radar observations and a physical model of Asteroid 4660 Nereus, a prime space mission target]{}. Icarus 201 (1), 153–166.
Bus, S. J., Binzel, R. P., 2002. [Phase II of the small main-belt asteroid spectroscopic survey: A feature-based taxonomy]{}. Icarus 158 (1), 146–177.
Busch, M. W., Benner, L. A. M., Ostro, S. J., Giorgini, J. D., Jurgens, R. F., Rose, R., Scheeres, D. J., Magri, C., Margot, J.-L., Nolan, M. C., Hine, A. A., 2008. [Physical properties of near-Earth asteroid (33342) 1998 WT24]{}. Icarus 195 (2), 614–621.
Busch, M. W., Giorgini, J. D., Ostro, S. J., Benner, L. A. M., Jurgens, R. F., Rose, R., Hicks, M. D., Pravec, P., Kusnirak, P., Ireland, M., Scheeres, D. J., Broschart, S. B., Magri, C., Nolan, M. C., Hine, A. A., Margot, J.-L., 2007. [Physical modeling of near-Earth asteroid (29075) 1950 DA]{}. Icarus 190 (2), 608–621.
Busch, M. W., Ostro, S. J., Benner, L. A. M., Brozovic, M., Giorgini, J. D., Jao, J. S., Scheeres, D. J., Magri, C., Nolan, M. C., Howell, E. S., Taylor, P. A., Margot, J.-L., Brisken, W., 2011. [Radar observations and the shape of near-Earth asteroid 2008 EV5]{}. Icarus 212 (2), 649–660.
Busch, M. W., Ostro, S. J., Benner, L. A. M., Giorgini, J. D., Jurgens, R. F., Rose, R., Magri, C., Pravec, P., Scheeres, D. J., Brochart, S. B., 2006. [Radar and optical observations and physical modeling of near-Earth asteroid 10115 (1992 SK)]{}. Icarus 181 (1), 145–155.
Chen, W. F., Han, D. J., 1988. Plasticity for Structural Engineers. Springer-Verlag.
Chesley, S. R., Farnocchia, D., Nolan, M. C., Vokrouhlick[y]{}, D., Chodas, P. W., Milani, A., Spoto, F., Rozitis, B., Benner, L. A., Bottke, W. F., et al., 2014. [Orbit and bulk density of the OSIRIS-REx target Asteroid (101955) Bennu]{}. Icarus 235, 5–22.
Cotto-Figueroa, D., Statler, T. S., Richardson, D. C., Tanga, P., 2015. [Coupled spin and shape evolution of small rubble-pile asteroids: Self-limitation of the YORP effect]{}. The Astrophysical Journal 803 (1), 25.
[Ć]{}uk, M., Burns, J. A., 2005. [Effects of thermal radiation on the dynamics of binary NEAs]{}. Icarus 176 (2), 418–431.
Davidsson, B. J. R., 2001. Tidal splitting and rotational breakup of solid biaxial ellipsoids. Icarus 149 (2), 375–383.
DeMeo, F. E., Binzel, R. P., Slivan, S. M., Bus, S. J., 2009. [An extension of the Bus asteroid taxonomy into the near-infrared]{}. Icarus 202 (1), 160–180.
Dobrovolskis, A. R., 1982. [Internal stresses in Phobos and other triaxial bodies]{}. Icarus 52 (1), 136–148.
[Ď]{}urech, J., Vokrouhlick[y]{}, D., Kaasalainen, M., Higgins, D., Krugly, Y. N., Gaftonyuk, N., Shevchenko, V., Chiorny, V., Hamanowa, H., Reddy, V., et al., 2008. [Detection of the YORP effect in asteroid (1620) Geographos]{}. Astronomy & Astrophysics 489 (2), L25–L28.
Fang, J., Margot, J.-L., 2011. [Near-Earth binaries and triples: Origin and evolution of spin-orbital properties]{}. The Astronomical Journal 143 (1), 24.
Farnocchia, D., Chesley, S., 2014. [Assessment of the 2880 impact threat from asteroid (29075) 1950 DA]{}. Icarus 229, 321–327.
Fujiwara, A., Kawaguchi, J., Yeomans, D. K., Abe, M., Mukai, T., Okada, T., Saito, J., Yano, H., Yoshikawa, M., Scheeres, D. J., Barnouin-Jha, O., Cheng, A. F., Demura, H., Gaskell, R. W., Ikeda, H., Kominato, T., Miyamoto, H., Nakamura, A. M., Nakamura, R., Sasaki, S., Uesugi, K., 2006. [The rubble-pile asteroid Itokawa as observed by Hayabusa]{}. Science 312 (5778), 1330–1334.
Gaskell, R., 2008. [Gaskell Eros Shape Model V1.0. NEAR-A-MSI-5-EROSSHAPE-V1.0]{}. Tech. rep., NASA Planetary Data System.
Goldreich, P., Sari, R., 2009. Tidal evolution of rubble piles. The Astrophysical Journal 691 (1), 54 – 60.
Harris, A. W., Fahnestock, E. G., Pravec, P., 2009. [On the shapes and spins of “rubble pile” asteroids]{}. Icarus 199 (2), 310–318.
Hirabayashi, M., 2014. Structural failure of two-density-layer cohesionless biaxial ellipsoids. Icarus 236, 178–180.
Hirabayashi, M., 2015. [Failure modes and conditions of a cohesive, spherical body due to YORP spin-up]{}. Monthly Notices of the Royal Astronomical Society 454 (2), 2249–2257.
Hirabayashi, M., S[á]{}nchez, D. P., Scheeres, D. J., 2015. [Internal structure of asteroids having surface shedding due to rotational instability]{}. The Astrophysical Journal 808 (1), 63.
Hirabayashi, M., Scheeres, D. J., 2014. [Stress and failure analysis of rapidly rotating asteroid (29075) 1950 DA]{}. The Astrophysical Journal Letters 798 (1), L8.
Hirabayashi, M., Scheeres, D. J., 2016. Failure mode diagram of rubble pile asteroids: Application to (25143) asteroid itokawa. In: IAU Symposium. Vol. 318. pp. 122–127.
Hirabayashi, M., Scheeres, D. J., Chesley, S. R., Marchi, S., McMahon, J. W., Steckloff, J., Mottola, S., Naidu, S. P., Bowling, T., 2016. [Fission and reconfiguration of bilobate comets as revealed by 67P/Churyumov–Gerasimenko]{}. Nature 534, 352 – 355.
Hirabayashi, M., Scheeres, D. J., Gabriel, T., et al., 2014. [Constraints on the physical properties of main belt comet P/2013 R3 from its breakup event]{}. The Astrophysical Journal Letters 789 (1), L12.
Holsapple, K. A., 2004. [Equilibrium figures of spinning bodies with self-gravity]{}. Icarus 172 (1), 272–303.
Holsapple, K. A., 2007. [Spin limits of solar system bodies: From the small fast-rotators to 2003 EL61]{}. Icarus 187 (2), 500–509.
Holsapple, K. A., 2008. [Spinning rods, elliptical disks and solid ellipsoidal bodies: Elastic and plastic stresses and limit spins]{}. International journal of Non-Linear Mechanics 43 (2), 733–742.
Holsapple, K. A., 2010. [On YORP-induced spin deformations of asteroids]{}. Icarus 205 (2), 430 – 442.
Holsapple, K. A., 2001. Equilibrium configurations of solid cohesionless bodies. Icarus 154 (2), 432–448.
Hudson, R. S., Ostro, S. J., 1994. [Shape of asteroid 4769 Castalia (1989 PB) from inversion of radar images]{}. Science 263 (5149), 940–943.
Hudson, R. S., Ostro, S. J., 1995. [Shape and non-principal axis spin state of asteroid 4179 Toutatis]{}. Science 270 (5233), 84–86.
Hudson, R. S., Ostro, S. J., 1999. [Physical model of asteroid 1620 Geographos from radar and optical data]{}. Icarus 140 (2), 369–378.
Hudson, R. S., Ostro, S. J., Jurgens, R. F., Rosema, K. D., Giorgini, J. D., Winkler, R., Rose, R., Choate, D., Cormier, R. A., Franck, C. R., Frye, R., 2000. [Radar observations and physical model of asteroid 6489 Golevka]{}. Icarus 148 (1), 37–51.
Jacobson, S. A., Scheeres, D. J., 2011. Dynamics of rotationally fissioned asteroids: Source of observed small asteroid systems. Icarus 214 (1), 161–178.
Jewitt, D., Agarwal, J., Li, J., Weaver, H., Mutchler, M., Larson, S., 2014. [Disintegrating asteroid P/2013 R3]{}. The astrophysical journal letters 784 (1), L8.
Jewitt, D., Agarwal, J., Li, J., Weaver, H., Mutchler, M., Larson, S., 2017. [Anatomy of an Asteroid Break-Up: The Case of P/2013 R3]{}. The astrophysical journal 153 (5), 223 – 240.
Kaasalainen, M., Pravec, P., Krugly, Y. N., Sarounova, L., Torppa, J., Virtanen, J., Kaasalainen, S., Erikson, A., Nathues, A., Durech, J., Wolf, M., Lagerros, J. S. V., Lindgren, M., Lagerkvist, C.-I., Koff, R., Davies, J., Mann, R., Kusnirak, P., Gaftonyuk, N. M., Shevchenko, V. G., Chiorny, V. G., Belskaya, I. N., 2004. [Photometry and models of eight near-Earth asteroids]{}. Icarus 167 (1), 178–196.
Kohnke, P., 2009. Theory Reference for the Mechanical APDL and Mechanical Applications. ANSYS, Inc., Southpointe 275 Technology Drive Canonsburg, PA 15317, 12th Edition.
Lambe, T. W., Whitman, R. V., 1969. [Soil Mechanics (Series in soil engineering)]{}. John Wiley and Sons, Inc.
Lauretta, D. S., [The OSIRIS-REx Team]{}, 2012. [An overview of the OSIRIS-REx asteroid sample return mission]{}. In: Lunar and Planetary Science Conference. Vol. 43. p. 2491.
Love, A. E. H., 1927. A treatise of the mathematical theory of elasticity. MacMillan Company.
Lowry, S. C., Fitzsimmons, A., Pravec, P., Vokrouhlick[ý]{}, D., Boehnhardt, H., Taylor, P. A., Margot, J.-L., Gal[á]{}d, A., Irwin, M., Irwin, J., Kusnir[á]{}k, P., 2007. [Direct detection of the asteroidal YORP effect]{}. Science 316 (5822), 272–274.
Magnusson, P., Dahlgren, M., Barucci, M. A., Jorda, L., Binzel, R. P., Slivan, S. M., Blanco, C., Riccioli, D., Buratti, B. J., Colas, F., Berthier, J., de Angelis, G., di Martino, M., Dotto, E., Drummond, J. D., Fink, U., Hicks, M., Grundy, W., Winsniewski, W., Gaftonyuk, N. M., Geyer, E. H., Bauer, T., Hoffmann, M., Ivanova, V., Komitov, B., Donchev, Z., Denchev, P., Krugly, Y. N., Velichko, F. P., Chiorny, V., Lupishko, D. F., Shevchenko, V. G., Kwiatkowski, T., Kryszczynska, A., Lahulla, J. F., Licandro, J., Mendez, O., Mottola, S., Erikson, A., Ostro, S. J., Pravec, P., Pych, W., Tholen, D. J., Whiteley, R., Wild, W. J., Wolf, M., Sarounova, L., 1996. [Photometric observations and modeling of asteroid 1620 Geographos]{}. Icarus 123 (1), 227–244.
Magri, C., Howell, E. S., Nolan, M. C., Taylor, P. A., Fernandez, Y. R., Mueller, M., Jr., R. J. V., Benner, L. A. M., Giorgini, J. D., Ostro, S. J., Scheeres, D. J., Hicks, M. D., Rhoades, H., Somers, J. M., Gaftonyuk, N. M., Kouprianov, V. V., Krugly, Y. N., Molotov, I. E., Busch, M. W., Margot, J.-L., Benishek, V., Protitch-Benishek, V., Galad, A., Higgins, D., Kusnirak, P., Pray, D. P., 2011. [Radar and photometric observations and shape modeling of contact binary near-Earth asteroid (8567) 1996 HW1]{}. Icarus 214 (1), 210–227.
Magri, C., Ostro, S. J., Scheeres, D. J., Nolan, M. C., Giorgini, J. D., Benner, L. A. M., Margot, J.-L., 2007. [Radar observations and a physical model of asteroid 1580 Betulia]{}. Icarus 186 (1), 152–177.
Margot, J.-L., Nolan, M., Benner, L., Ostro, S., Jurgens, R., Giorgini, J., Slade, M., Campbell, D., 2002. [Binary asteroids in the near-Earth object population]{}. Science 296 (5572), 1445–1448.
Margot, J.-L., Pravec, P., Taylor, P., Carry, B., Jacobson, S., 2015. Asteroid systems: binaries, triples, and pairs. Asteroids IV. Univ. Arizona Press, Tucson, 355–374.
Marshall, S. E., Howell, E. S., Magri, C., Vervack Jr, R. J., Campbell, D. B., Fern[á]{}ndez, Y. R., Nolan, M. C., Crowell, J. L., Hicks, M. D., Lawrence, K. J., et al., 2017. [Thermal properties and an improved shape model for near-Earth asteroid (162421) 2000 ET70]{}. Icarus 292, 22 – 35.
Miller, J. K., Konopliv, A., Antreasian, P. G., Bordi, J. J., Chesley, S., Helfrich, C. E., Owen, W. M., Wang, T. C., Williams, B. G., Yeomans, D. K., Scheeres, D. J., 2002. [Determination of shape, gravity, and rotational state of asteroid 433 Eros]{}. Icarus 155, 3–17.
Minton, D. A., 2008. The topographic limits of gravitationally bound, rotating sand piles. Icarus 195 (2), 698–704.
Naidu, S. P., Margot, J.-L., Busch, M. W., Taylor, P. A., Nolan, M. C., Brozovic, M., Benner, L. A., Giorgini, J. D., Magri, C., 2013. [Radar imaging and physical characterization of near-Earth asteroid (162421) 2000 ET70]{}. Icarus 226 (1), 323–335.
Naidu, S. P., Margot, J.-L., Taylor, P. A., Nolan, M. C., Busch, M. W., Benner, L. A., Brozovic, M., Giorgini, J. D., Jao, J. S., Magri, C., 2015. [Radar imaging and characterization of the binary near-Earth asteroid (185851) 2000 DP107]{}. The Astronomical Journal 150 (2), 54.
Nesvorn[ý]{}, D., Vokrouhlick[ý]{}, D., 2007. [Analytic theory of the YORP effect for near-spherical objects]{}. The Astronomical Journal 134 (5), 1750 – 1768.
Nolan, M. C., Magri, C., Howell, E. S., Benner, L. A., Giorgini, J. D., Hergenrother, C. W., Hudson, R. S., Lauretta, D. S., Margot, J.-L., Ostro, S. J., et al., 2013. [Shape model and surface properties of the OSIRIS-REx target asteroid (101955) Bennu from radar and lightcurve observations]{}. Icarus 226 (1), 629–640.
Ostro, S. J., Chandler, J. F., Hine, A. A., Rosema, K. D., Shapiro, I. I., Yeomans, D. K., 1990. [Radar images of asteroid 1989 PB]{}. Science 248 (4962), 1523–1528.
Ostro, S. J., Hudson, R. S., Benner, L. A. M., Nolan, M. C., Giorgini, J. D., Scheeres, D. J., Jurgens, R. F., Rose, R., 2001. [Radar observations of asteroid 1998 ML14]{}. Meteoritics and Planetary Science 36 (9), 1225 – 1236.
Ostro, S. J., Jurgens, R. F., Rosema, K. D., Giorgini, J. D., Winkler, R., Yeomans, D. K., Choate, D., Rose, R., Slade, M. A., Howard, S. D., Scheeres, D. J., Mitchell, D. L., 1996. [Radar observations of asteroid 1620 Geographos]{}. Icarus 121 (1), 44–66.
Ostro, S. J., Margot, J.-L., Benner, L. A. M., Giorgini, J. D., Scheeres, D. J., Fahnestock, E. G., Broschart, S. B., Bellerose, J., Nolan, M. C., Magri, C., Pravec, P., Scheirich, P., Rose, R., Jurgens, R. F., Jong, E. M. D., Suzuki, S., 2006. [Radar imaging of binary near-Earth asteroid (66391) 1999KW4]{}. Science 314 (5803), 1276–1280.
Polishook, D., Moskovitz, N., Binzel, R., Burt, B., DeMeo, F., Hinkle, M., Lockhart, M., Mommert, M., Person, M., Thirouin, A., et al., 2016. [A 2km-size asteroid challenging the rubble-pile spin barrier–A case for cohesion]{}. Icarus 267, 243–254.
Pravec, P., Vokrouhlick[y]{}, D., Polishook, D., Scheeres, D. J., Harris, A. W., Galad, A., Vaduvescu, O., Pozo, F., Barr, A., Longa, P., et al., 2010. Formation of asteroid pairs by rotational fission. Nature 466 (7310), 1085–1088.
Rozitis, B., MacLennan, E., Emery, J. P., 2014. [Cohesive forces prevent the rotational breakup of rubble-pile asteroid (29075) 1950 DA]{}. Nature 512 (7513), 174–176.
Rubincam, D. P., 2000. Radiative spin-up and spin-down of small asteroids. Icarus 148 (1), 2–11.
S[á]{}nchez, D. P., Scheeres, D. J., 2012. [DEM simulation of rotation-induced reshaping and disruption of rubble-pile asteroids]{}. Icarus 218 (2), 876–894.
S[á]{}nchez, P., Scheeres, D. J., February 2011. Simulating asteroid rubble piles with a self-gravitating soft-sphere distinct element method model. The astrophysical journal 727 (2), 120 – 134.
S[á]{}nchez, P., Scheeres, D. J., 2016. [Disruption patterns of rotating self-gravitating aggregates: A survey on angle of friction and tensile strength]{}. Icarus 271, 453–471.
Scheeres, D., 2015. Landslides and mass shedding on spinning spheroidal asteroids. Icarus 247, 1–17.
Scheeres, D., Hesar, S., Tardivel, S., Hirabayashi, M., Farnocchia, D., McMahon, J., Chesley, S., Barnouin, O., Binzel, R., Bottke, W., et al., 2016. [The geophysical environment of Bennu]{}. Icarus 276, 116–140.
Scheeres, D. J., 2007. [Rotation fission of contact binary asteroids]{}. Icarus 189 (2), 370–385.
Scheeres, D. J., 2007. [The dynamical evolution of uniformly rotating asteroids subject to YORP]{}. Icarus 188 (2), 430–450.
Scheeres, D. J., 2018. [Stability of the Euler resting N-body relative equilibria]{}. [Accepted in Celestial Mechanics and Dynamical Astronomy]{}.
Scheeres, D. J., Hartzell, C. M., S[á]{}nchez, P., Swift, M., 2010. [Scaling forces to asteroid surfaces: The role of cohesion]{}. Icarus 210 (2), 968–984.
Scheeres, D. J., Mirrahimi, S., 2008. Rotational dynamics of a solar system body under solar radiation torques. Celestial Mechanics and Dynamical Astronomy 101 (1-2), 69–103.
Scheeres, D. J., Ostro, S. J., 1996. [Orbits close to asteroid 4769 Castalia]{}. Icarus 121 (1), 67–87.
Sharma, I., Jenkins, J. T., Burns, J. A., 2009. [Dynamical passage to approximate equilibrium shapes for spinning, gravitating rubble asteroids]{}. Icarus 200 (1), 304–322.
Shepard, M. K., Clark, B. E., Nolan, M. C., Benner, L. A. M., Ostro, S. J., Giorgini, J. D., Vilas, F., Jarvis, K., Lederer, S., Lim, L. F., McConnochie, T., Bell, J., Margot, J.-L., Rivkin, A. S., Magri, C., Scheeres, D. J., Pravec, P., 2008. [Multi-wavelength observations of asteroid 2100 Ra-Shalom]{}. Icarus 193 (1), 20–38.
Shepard, M. K., Margot, J.-L., Magri, C., Nolan, M. C., Schlieder, J., Estes, B., Bus, S. J., Volquardsen, E. L., Rivkin, A. S., Benner, L. A. M., Giorgini, J. D., Ostro, S. J., Busch, M. W., 2006. [Radar and infrared observations of binary near-Earth asteroid 2002 CE26]{}. Icarus 184 (1), 198–210.
Si, H., 2015. Tetgen, a delaunay-based quality tetrahedral mesh generator. [ACM Transactions on Mathematical Software (TOMS)]{} 41 (11), 1 – 36.
Spencer, J. R., Akimov, L. A., Angeli, C., Angelini, P., Barucci, M. A., Birch, P., Blanco, C., Buie, M. W., Caruso, A., Chiornij, V. G., et al., 1995. [The lightcurve of 4179 Toutatis: Evidence for complex rotation]{}. Icarus 117 (1), 71–89.
Statler, T. S., 2009. [Extreme sensitivity of the YORP effect to small-scale topography]{}. Icarus 202 (2), 502–513.
Statler, T. S., 2015. [Obliquities of “top-shaped” asteroids may not imply reshaping by YORP spin-up]{}. Icarus 248, 313–317.
Tardivel, S., S[á]{}nchez, P., Scheeres, D., 2018. Equatorial cavities on asteroids, an evidence of fission events. Icarus 304, 192 – 208.
Taylor, P. A., Howell, E., Nolan, M., Thane, A., 2012. The shape and spin distributions of near-earth asteroids observed with the arecibo radar system. In: American Astronomical Society Meeting Abstracts\# 220. Vol. 220.
Taylor, P. A., Margot, J.-L., Vokrouhlick[ý]{}, D., Scheeres, D. J., Pravec, P., Lowry, S. C., Fitzsimmons, A., Nolan, M. C., Ostro, S. J., Benner, L. A. M., Giorgini, J. D., Magri, C., 2007. [Spin rate of asteroid (54509) 2000 PH5 increasing due to the YORP effect]{}. Science 316 (5822), 274–277.
Tsuda, Y., Yoshikawa, M., Abe, M., Minamino, H., Nakazawa, S., 2013. [System design of the Hayabusa 2 – asteroid sample return mission to 1999 JU3]{}. Acta Astronautica 91, 356–362.
Vokrouhlick[ý]{}, D., Bottke, W. F., Chesley, S. R., Scheeres, D. J., Statler, T. S., 2015. [The Yarkovsky and YORP effect]{}. Asteroids IV, 509–531.
Walsh, K. J., Jacobson, S. A., 2015. Formation and evolution of binary asteroids. Asteroids IV. Univ. Arizona Press, Tucson, 375–393.
Walsh, K. J., Richardson, D. C., Michel, P., 2008. [Rotational breakup as the origin of small binary asteroids]{}. Nature 454, 188–191.
Walsh, K. J., Richardson, D. C., Michel, P., 2012. [Spin-up of rubble-pile asteroids: Disruption, satellite formation, and equilibrium shapes]{}. Icarus 220 (2), 514–529.
Walsh, K. J., Richardson, D. C., Schwartz, S. R., 2014. Tidal disruption revisited-creating bifurcated shapes among rubble pile asteroids. In: AAS/Division for Planetary Sciences Meeting Abstracts. Vol. 46.
Zhang, Y., Richardson, D. C., Barnouin, O. S., Maurel, C., Michel, P., Schwartz, S. R., Ballouz, R.-L., Benner, L. A., Naidu, S. P., Li, J., 2017. [Creep stability of the proposed AIDA mission target 65803 Didymos: I. Discrete cohesionless granular physics model]{}. Icarus 294, 98 – 123.
[^1]: [@Taylor2012] called these asteroids irregularly shaped asteroids.
[^2]: Our FEM simulation uses a criterion in which the $L2$ norm of a difference between an actual load and a restored load based on deformation becomes within $0.01 \%$ of the actual load. Note that the actual load is the set of forces acting on the defined nodes that we input based on the effects of gravity and centrifugal forces; on the other hand, the restored load is computed from the solution of deformation [@ANSYSThr]. We observe that if the numerical convergence occurs correctly, the convergence rate during each iteration with a constant load step is almost constant, and the $L2$-norm value constantly becomes close to its threshold. If we do not observe this behavior, we do not accept the solution as a proper result.
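For reference only (not the authors' implementation — the check is performed internally by ANSYS [@ANSYSThr]), the criterion described in the footnote above amounts to a relative $L2$-norm test of the following form, where `actual_load` and `restored_load` stand for hypothetical nodal force vectors:

```python
import numpy as np

def load_converged(actual_load, restored_load, tol=1e-4):
    """Relative L2-norm convergence test: the residual between the applied nodal
    loads and the loads restored from the displacement solution must stay within
    0.01 % (tol = 1e-4) of the applied loads."""
    actual_load = np.asarray(actual_load, dtype=float)
    restored_load = np.asarray(restored_load, dtype=float)
    return np.linalg.norm(actual_load - restored_load) <= tol * np.linalg.norm(actual_load)
```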
[^3]: The dimensional cohesive strength can be computed by multiplying $Y^\ast$ by the values in the sixth column of Table \[Table:AsteriodAnalysis\]; for example, if an object requires $Y^\ast = 0.5$ at a given spin rate, the entry $1.06 \times 10^2$ Pa for Eros corresponds to a dimensional cohesive strength of about 53 Pa.
|
---
abstract: 'We present a supersymmetric model of quarks and leptons based on $S_4\times Z_3\times Z_4$ flavor symmetry. The $S_4$ symmetry is broken down to Klein four and $Z_3$ subgroups in the neutrino and the charged lepton sectors respectively. Tri-Bimaximal mixing and the charged lepton mass hierarchies are reproduced simultaneously at leading order. Moreover, a realistic pattern of quark masses and mixing angles is generated, with the exception of the mixing angle between the first two generations, which requires a small accidental enhancement. It is remarkable that the mass hierarchies are controlled by the spontaneous breaking of flavor symmetry in our model. The next-to-leading-order contributions are studied; all the fermion masses and mixing angles receive corrections of relative order $\lambda^2_c$ with respect to the leading order results. The phenomenological consequences of the model are analyzed: the neutrino mass spectrum can have normal or inverted hierarchy, and the combined measurement of the $0\nu2\beta$ decay effective mass $m_{\beta\beta}$ and the lightest neutrino mass can distinguish the normal hierarchy from the inverted hierarchy.'
---
[**Fermion Masses and Flavor Mixings in a Model with $S_4$ Flavor Symmetry** ]{}
[Gui-Jun Ding]{} [^1]\
.2cm [*Department of Modern Physics,*]{}\
[*University of Science and Technology of China, Hefei, Anhui 230026, China*]{}\
Introduction
============
Neutrinos provide a good window onto new physics beyond the Standard Model. Neutrino oscillation experiments have provided solid evidence that neutrinos have small but non-zero masses. Global fits to the current neutrino oscillation data demonstrate that the mixing pattern in the leptonic sector is very different from the one in the quark sector. Two independent fits for the mixing angles and the mass squared differences are listed in Table \[tab:data\_fit\].
[|c|cc|cc|]{} & &\
parameter & best fit$\pm 1\sigma$ & 3$\sigma$ interval & best fit$\pm 1\sigma$ & 3$\sigma$ interval\
$\Delta m^2_{21}\: [10^{-5}{\rm eV^2}]$ & $7.65^{+0.23}_{-0.20}$ & 7.05–8.34 & $7.67^{+0.22}_{-0.21}$ & 7.07–8.34\
$\Delta m^2_{31}\: [10^{-3}{\rm eV^2}]$ & $\pm 2.40^{+0.12}_{-0.11}$ & $\pm$(2.07–2.75) & $-2.39 \pm 0.12$, $+2.49 \pm 0.12$ & $-$(2.02–2.79), $+$(2.13–2.88)\
$\sin^2\theta_{12}$ & $0.304^{+0.022}_{-0.016}$ & 0.25–0.37 & $0.321^{+0.023}_{-0.022}$ & 0.26–0.40\
$\sin^2\theta_{23}$ & $0.50^{+0.07}_{-0.06}$ & 0.36–0.67 & $0.47^{+0.07}_{-0.06}$ & 0.33–0.64\
$\sin^2\theta_{13}$ & $0.01^{+0.016}_{-0.011}$ & $\leq$ 0.056 & $0.003\pm 0.015$ & $\leq$ 0.049\

: \[tab:data\_fit\] Three flavour neutrino oscillation parameters from two global data fits [@Schwetz:2008er; @GonzalezGarcia:2007ib]. For $\Delta m^2_{31}$ in the second fit, the two entries correspond to the two possible signs of the mass squared difference.
As is obvious, the current neutrino oscillation data is remarkably compatible with the so called Tri-Bimaximal (TB) mixing pattern [@TBmix], which suggests the following mixing pattern $$\label{1}\sin^2\theta_{12,TB}=\frac{1}{3},~~\sin^2\theta_{23,TB}=\frac{1}{2},~~\sin^2\theta_{13,TB}=0$$ These values lie in the $1\sigma$ range of global data analysis shown in Table \[tab:data\_fit\][^2]. Correspondingly, the leptonic Pontecorvo-Maki-Nakagawa-Sakata (PMNS) mixing matrix is given by $$\label{2}U^{TB}_{PMNS}=U_{TB}\;{\rm diag}(1,{\rm
e}^{i\alpha_{21}/2},{\rm e^{i\alpha_{31}/2}})$$ where $\alpha_{21}$ and $\alpha_{31}$ are the Majorana CP violating phases, and $U_{TB}$ is given by $$\label{3}U_{TB}=\left(\begin{array}{ccc}
\sqrt{\frac{2}{3}}&\frac{1}{\sqrt{3}}&0\\
-\frac{1}{\sqrt{6}}&\frac{1}{\sqrt{3}}&\frac{1}{\sqrt{2}}\\
-\frac{1}{\sqrt{6}}&\frac{1}{\sqrt{3}}&-\frac{1}{\sqrt{2}}
\end{array}\right)$$ The mixing in the quark sector is described by the famous CKM matrix [@Cabibbo:1963yz], and there are large mass hierarchies within the quark and charged lepton sectors [@pdg]. The origin of the observed fermion mass hierarchies and flavor mixings is a great puzzle in particle physics. Promising candidates for understanding this issue are models based on spontaneously broken flavor symmetries; various models based on discrete or continuous flavor symmetries have been proposed so far [@continuous; @horizontal]. Recently it was found that flavor symmetries based on discrete groups are particularly suitable for reproducing specific mixing patterns at leading order [@review]. The $A_4$ models are especially attractive and have received considerable interest in the recent past [@TBModel; @Altarelli:2005yp; @Altarelli:2005yx; @Altarelli:2008bg; @Branco:2009by; @Altarelli:2009kr]. So far various $A_4$ flavor models have been proposed, and their phenomenological consequences have been analyzed [@Bertuzzo:2009im; @Hagedorn:2009jy; @AristizabalSierra:2009ex; @Felipe:2009rr]. These models assume that the $A_4$ symmetry is realized at a high energy scale, that the lepton fields transform nontrivially under the symmetry group, and that the flavor symmetry is spontaneously broken by a set of flavons with vacuum expectation values (VEVs) along specific directions. The misalignment in flavor space between the charged lepton and the neutrino sectors results in the TB lepton mixing.
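As a quick numerical cross-check of the TB pattern in Eqs. (1) and (3) (an illustration only, not part of the model construction), the snippet below verifies that $U_{TB}$ is orthogonal and reproduces $\sin^2\theta_{12}=1/3$, $\sin^2\theta_{23}=1/2$ and $\sin^2\theta_{13}=0$ through the standard identifications $\sin^2\theta_{13}=|U_{e3}|^2$, $\sin^2\theta_{12}=|U_{e2}|^2/(1-|U_{e3}|^2)$ and $\sin^2\theta_{23}=|U_{\mu3}|^2/(1-|U_{e3}|^2)$.

```python
import numpy as np

# Tri-Bimaximal mixing matrix of Eq. (3)
U = np.array([[ np.sqrt(2/3), 1/np.sqrt(3),  0],
              [-1/np.sqrt(6), 1/np.sqrt(3),  1/np.sqrt(2)],
              [-1/np.sqrt(6), 1/np.sqrt(3), -1/np.sqrt(2)]])

print(np.allclose(U @ U.T, np.eye(3)))   # unitarity (U is real and orthogonal)
s13 = abs(U[0, 2])**2
s12 = abs(U[0, 1])**2 / (1 - s13)
s23 = abs(U[1, 2])**2 / (1 - s13)
print(s12, s23, s13)                     # -> 1/3, 1/2, 0 as in Eq. (1)
```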
If the $A_4$ symmetry is extended to the quark sector, the quark mixing matrix $V_{CKM}$ turns out to be the unit matrix at leading order [@Altarelli:2005yx]. However, the subleading contributions of the higher dimensional operators are too small to provide large enough deviations of $V_{CKM}$ from the identity matrix. Possible ways of resolving this issue are to consider new sources of symmetry breaking or to enlarge the symmetry group. Two discrete groups, $T'$ [@Ding:2008rj; @Carr:2007qw; @Feruglio:2007uu; @Chen:2007afa; @Frampton:2007et; @Aranda:2007dp; @Frampton:2008bz; @Eby:2008uc] and $S_4$ [@Ma:2005pd; @Bazzocchi:2008ej; @s4; @Bazzocchi:2009da; @Ishimori:2008fi; @Altarelli:2009gn], are found to be promising; both groups have a two-dimensional irreducible representation, which is very useful for describing the quark sector. The $S_4$ symmetry is particularly interesting: $S_4$ as a horizontal symmetry group was proposed long ago [@Pakvasa:1978tx], and some models with different purposes have been built [@Hagedorn:2006ug]. Recently it was claimed to be the minimal flavor group capable of yielding the TB mixing without fine tuning [@Lam:2008rs; @Lam:2008sh; @Lam:2009hn]. However, Grimus et al. argued against this point [@Grimus:2009pg].
In this work, we build a SUSY model based on $S_4\times Z_3\times
Z_4$ flavor group, in which the neutrino masses are generated via the conventional type I see-saw mechanism [@seesaw]. Our model naturally produces the TB mixing and the charged lepton mass hierarchy at leading order. Furthermore, we extend the model to the quark sector, where realistic patterns of quark masses and mixing angles are generated. In our model the mass hierarchies are controlled by the spontaneous breaking of the flavor symmetry instead of the Froggatt-Nielsen (FN) mechanism [@Froggatt:1978nt].
This article is organized as follows. Section 2 reviews the group theory of $S_4$, presenting its subgroups, conjugacy classes, and representations. In section 3 we justify the vacuum alignment of our model in the supersymmetric limit. In section 4 we present our model in both the lepton and quark sectors, and discuss its basic features and theoretical predictions. In section 5 we analyze the phenomenological implications of the model in detail, including the mass spectrum, neutrinoless double beta decay, the Majorana CP violating phases, etc. The corrections induced by the next to leading order terms are studied in section 6. Finally we summarize our results in the conclusion section.
The discrete group $S_4$
========================
$S_4$ is the permutation group of 4 objects. The group has 24 distinct elements, and it can be generated by two elements $S$ and $T$ obeying the relations $$\label{4} S^4=T^3=1,~~~ST^2S=T$$ Without loss of generality, we could choose $$\label{5}S=(1234),~~~~~T=(123)$$ where the cycle (1234) denotes the permutation $(1,2,3,4)\rightarrow(2,3,4,1)$, and (123) means $(1,2,3,4)\rightarrow(2,3,1,4)$. The 24 elements belong to 5 conjugate classes and are generated from $S$ and $T$ as follows $$\begin{aligned}
&&{\cal C}_1:1\\
&&{\cal
C}_2:\;STS^2=(12),\;TSTS^2=(13),\;ST^2=(14),\;S^2TS=(23),\;TST=(24),\;T^2S=(34)\\
&&{\cal C}_3:\;TS^2T^2=(12)(34),\;S^2=(13)(24),\;T^2S^2T=(14)(23)\\
&&{\cal
C}_4:\;T=(123),\;T^2=(132),\;T^2S^2=(124),\;S^2T=(142),\;S^2TS^2=(134),\;STS=(143),\\
&&~~~~~S^2T^2=(234),\;TS^2=(243)\\
&&{\cal
C}_5:\;S=(1234),\;T^2ST=(1243),\;ST=(1324),\;\;TS=(1342),\;TST^2=(1423),\;S^3=(1432)\\\end{aligned}$$ The structure of the group $S_4$ is rather rich: it has thirty subgroups of orders 1, 2, 3, 4, 6, 8, 12 or 24. Concretely, the subgroups of $S_4$ are as follows
1. [The trivial group only consisting of the unit element.]{}
2. [Six two-element subgroups generated by a transposition of the form $\{1,(ij)\}$ with $i\neq j$]{}
$H^{(1)}_2=\{1,STS^2\}$, $H^{(2)}_2=\{1,TSTS^2\}$, $H^{(3)}_2=\{1,ST^2\}$, $H^{(4)}_2=\{1,S^2TS\}$, $H^{(5)}_2=\{1,TST\}$ and $H^{(6)}_2=\{1,T^2S\}$
3. [Three two-element subgroups generated by a double transition of the form $\{1,(ij)(kl)\}$ with $i\neq j\neq k\neq l$ ]{}
$H^{(7)}_2=\{1,TS^2T^2\}$, $H^{(8)}_2=\{1,S^2\}$, $H^{(9)}_2=\{1,T^2S^2T\}$
4. [Four subgroups of order three, which is spanned by a three-cycle]{}
$H^{(1)}_3=\{1,T,T^2\}$,$H^{(2)}_3=\{1,T^2S^2,S^2T\}$,$H^{(3)}_3=\{1,S^2TS^2,STS\}$,$H^{(4)}_3=\{1,S^2T^2,TS^2\}$
5. [The four-element subgroups generated by a four-cycle, they are of the form $\{1,g,g^2,g^3\}$ with $g$ any four-cycle]{}
$H^{(1)}_4=\{1,S,S^2,S^3\}$, $H^{(2)}_4=\{1,TS,T^2ST,T^2S^2T\}$,$H^{(3)}_4=\{1,
TST^2,ST,TS^2T^2\}$
6. [The four-element subgroups generated by two disjoint transpositions, which is isomorphic to Klein four group]{}
$H^{(4)}_4=\{1,STS^2,T^2S,TS^2T^2\}$, $H^{(5)}_4=\{1,TSTS^2,TST,S^2\}$, $H^{(6)}_4=\{1,ST^2,S^2TS,T^2S^2T\}$
7. [The order four subgroup comprising of the identity and three double transitions, which is isomorphic to Klein four group]{}
$H^{(7)}_4=\{1,TS^2T^2,S^2,T^2S^2T\}$
8. [Four subgroups of order six, which is isomorphic to $S_3$. They are the permutation groups of any three of the four objects, leaving the fourth invariant]{}
$H^{(1)}_6=\{1, STS^2,TSTS^2,S^2TS,T,T^2\}$, $H^{(2)}_6=\{1,STS^2,ST^2,TST,T^2S^2,S^2T\}$,\
$H^{(3)}_6=\{1,TSTS^2,ST^2,T^2S,S^2TS^2,STS\}$, $H^{(4)}_6=\{1,S^2TS,TST,T^2S,S^2T^2,TS^2\}$
9. [Three eight-element subgroups, which is isomorphic to $D_4$ ]{}
$H^{(1)}_8=\{1,TSTS^2,TST,S,S^3,TS^2T^2,S^2,T^2S^2T\}$,\
$H^{(2)}_8=\{1,STS^2,T^2S,ST,TST^2,TS^2T^2,S^2,T^2S^2T\}$\
$H^{(3)}_8=\{1,ST^2,S^2TS,T^2ST,TS,TS^2T^2,S^2,T^2S^2T\}$
10. [The alternating group $A_4$]{}
$A_4=\{1,TS^2T^2,S^2,T^2S^2T,T,T^2,T^2S^2,S^2T,S^2TS^2,STS,S^2T^2,TS^2\}$
11. [The whole group]{}
In particular, $H^{(7)}_4$ and $A_4$ are the invariant subgroups of $S_4$. Since the number of inequivalent irreducible representations is equal to the number of classes, the $S_4$ group has five irreducible representations: $1_1$, $1_2$, 2, $3_1$ and $3_2$. $1_1$ is the identity representation and $1_2$ is the antisymmetric one. The Young diagram for the two dimensional representation is self-associated, and the Young diagrams corresponding to the three dimensional representations $3_1$ and $3_2$ are associated Young diagrams. For the same group element, the representation matrices of $3_1$ and $3_2$ are exactly the same if the element is an even permutation, whereas the overall signs are opposite if the group element is an odd permutation. It is notable that $S_4$ together with $T'$ is the smallest group containing one, two and three dimensional representations. The character table of the $S_4$ group is shown in Table \[tab:character\].
------------------ -------------- -------------- -------------- -------------- --------
                   ${\cal C}_1$   ${\cal C}_2$   ${\cal C}_3$   ${\cal C}_4$   ${\cal C}_5$
$n_{{\cal C}_i}$ 1 6 3 8 6
$h_{{\cal C}_i}$ 1 2 2 3 4
$1_1$ 1 1 1 1 1
$1_2$ 1 -1 1 1 -1
2 2 0 2 -1 0
$3_1$ 3 1 -1 0 -1
$3_2$ 3 -1 -1 0 1
------------------ -------------- -------------- -------------- -------------- --------
: \[tab:character\]Character table of the $S_4$ group. $n_{{\cal C}_i}$ denotes the number of the elements contained in the class ${\cal C}_i$, and $h_{{\cal C}_i}$ is the order of the elements of ${\cal C}_i$.
From the character table of the $S_4$ group, we can straightforwardly obtain the multiplication rules between the various representations $$\begin{aligned}
\nonumber&&1_i\otimes1_j=1_{((i+j)\;{\rm mod}\;
2)+1},~~~~1_i\otimes2=2,~~~~1_i\otimes3_j=3_{((i+j)\;{\rm mod}\;
2)+1}\\
\nonumber&&2\otimes2=1_1\oplus1_2\oplus2,~~~~2\otimes3_i=3_1\oplus3_2,~~~~3_i\otimes3_i=1_1\oplus2\oplus3_1\oplus3_2,\\
\label{6}&&3_1\otimes3_2=1_2\oplus2\oplus3_1\oplus3_2, ~~~{\rm
with}~ i,j=1,2\end{aligned}$$ The explicit representation matrices of the generators $S$, $T$ and other group elements for the five irreducible representations are listed in Appendix A. From these representation matrices, one can explicitly calculate the Clebsch-Gordan coefficients for the decomposition of the product representations, and the same results as those in Ref.[@s4] are obtained.
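The defining relations in Eq.(4) and the order of the group can be verified directly with a short computation. The sketch below is an illustration only (not part of the model construction); it uses sympy's permutation classes, with the cycles written on $\{0,1,2,3\}$ rather than $\{1,2,3,4\}$, and checks $S^4=T^3=1$, $ST^2S=T$ and that $S$ and $T$ indeed generate a group of 24 elements.

```python
from sympy.combinatorics import Permutation, PermutationGroup

S = Permutation([[0, 1, 2, 3]])        # S = (1234), written on {0,1,2,3}
T = Permutation([[0, 1, 2]], size=4)   # T = (123)

print((S**4).is_Identity, (T**3).is_Identity)  # True True  ->  S^4 = T^3 = 1
print(S*T*T*S == T)                            # True       ->  S T^2 S = T
print(PermutationGroup([S, T]).order())        # 24         ->  the full group S_4
```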
\[sec:alignment\]Field content and the vacuum alignment
=======================================================
The model is supersymmetric and based on the discrete symmetry $S_4\times Z_3\times Z_4$. Supersymmetry is introduced in order to simplify the discussion of the vacuum alignment. The $S_4$ component controls the mixing angles, the auxiliary $Z_3$ symmetry guarantees the misalignment in flavor space between the neutrino and the charged lepton mass eigenstates, and the $Z_4$ component is crucial for eliminating unwanted couplings and reproducing the observed mass hierarchies. The fields of the model and their classification under the flavor symmetry are shown in Table \[tab:trans\], where the two Higgs doublets $h_{u,d}$ of the minimal supersymmetric standard model are present. If the $S_4$ flavor symmetry were preserved down to the electroweak scale, all the fermions would be massless. Therefore the $S_4$ symmetry should be broken by suitable flavon fields, which are standard model singlets. Another critical issue of flavor model building is the vacuum alignment; a global continuous $U(1)_R$ symmetry is exploited to simplify the vacuum alignment problem. This symmetry is broken to the discrete R parity once we include the gaugino mass in the model. The matter fields carry +1 R-charge, while the Higgses and the flavon supermultiplets have R-charge 0. The spontaneous breaking of the $S_4$ symmetry can be implemented by introducing a new set of multiplets, the driving fields, carrying two units of R-charge. Consequently the driving fields enter linearly into the superpotential. The suitable driving fields and their transformation properties are shown in Table \[tab:driving\]. In the following, we discuss the minimization of the scalar potential in the supersymmetric limit. At the leading order, the most general superpotential dependent on the driving fields, which is invariant under the flavor symmetry group $S_4\times Z_3\times Z_4$, is given by
$\ell$ $e^{c}$ $\mu^{c}$ $\tau^{c}$ $\nu^c$ $Q_L$ $Q_3$ $u^{c}$ $c^{c}$ $t^{c}$ $d^c$ $s^c$ $b^{c}$ $h_{u,d}$ $\varphi$ $\chi$ $\theta$ $\eta$ $\phi$ $\Delta$
-------------- ---------- ------------ ------------ ------------ --------- ------- ------- --------- --------- --------- ---------- ------- --------- ----------- ----------- -------- ---------- ------------ ------------ ------------
$\rm{S_4}$ $3_1$ $1_1$ $1_2$ $1_1$ $3_1$ 2 $1_1$ $1_1$ $1_2$ $1_1$ $1_1$ $1_2$ $1_2$ $1_1$ $3_1$ $3_2$ $1_2$ 2 $3_1$ $1_2$
$\rm{Z_{3}}$ $\omega$ $\omega^2$ $\omega^2$ $\omega^2$ $1$ 1 1 1 1 1 $\omega$ 1 1 1 1 1 1 $\omega^2$ $\omega^2$ $\omega^2$
$\rm{Z_{4}}$ 1 i -1 -i 1 -1 1 i 1 1 1 1 1 1 i i 1 1 1 -1
: \[tab:trans\]The transformation rules of the matter fields and the flavons under the symmetry groups $S_4$, $Z_3$ and $Z_4$. $\omega$ is a cube root of unity, i.e. $\omega=e^{i\frac{2\pi}{3}}=(-1+i\sqrt{3})/2$. We denote $Q_L=(Q_1,Q_2)^t$, which is a doublet of $S_4$, where $Q_1=(u,d)^t$ and $Q_2=(c,s)^t$ are the electroweak SU(2) doublets of the first two generations. $Q_3=(t,b)^t$ is the electroweak SU(2) doublet of the third generation.
Fields $\varphi^{0}$ $\xi^{'0}$ $~\theta^{0}~$ $~\eta^{0}~$ $~\phi^{0}~$ $~\Delta^{0}~$
-------------- --------------- ------------ ---------------- -------------- -------------- ----------------
$\rm{S_4}$ $3_1$ $1_2$ $1_1$ 2 $3_2$ $1_1$
$\rm{Z_{3}}$ 1 1 1 $\omega^2$ $\omega^2$ $\omega^2$
$\rm{Z_{4}}$ -1 -1 1 1 1 1
: \[tab:driving\] The driving fields and their transformation properties under the flavor group $S_4\times Z_3\times
Z_4$.
$$\begin{aligned}
\nonumber &&w_v=g_1(\varphi^{0}(\varphi\varphi)_{3_1})_{1_1}+g_2(\varphi^{0}(\chi\chi)_{3_1})_{1_1}+g_3(\varphi^{0}(\varphi\chi)_{3_1})_{1_1}+g_4\xi^{'0}(\varphi\chi)_{1_2}+M^2_{\theta}\theta^{0}+\kappa\theta^{0}\theta^2+\\
\label{7}&&f_1(\eta^0(\eta\eta)_2)_{1_1}+f_2(\eta^{0}(\phi\phi)_2)_{1_1}+f_3(\phi^{0}(\eta\phi)_{3_2})_{1_1}+h_1\Delta^{0}\Delta^2+h_2\Delta^{0}(\eta\eta)_{1_1}+h_3\Delta^{0}(\phi\phi)_{1_1}\end{aligned}$$
where the subscript $1_1$ denotes the contraction in $1_1$, and a similar rule applies to the other subscripts $1_2$, $2$, $3_1$ and $3_2$. In the SUSY limit, the vacuum configuration is determined by the vanishing of the derivative of $w_v$ with respect to each component of the driving fields $$\begin{aligned}
\nonumber&&\frac{\partial w_v}{\partial
\varphi^{0}_1}=2g_1(\varphi^2_1-\varphi_2\varphi_3)+2g_2(\chi^2_1-\chi_2\chi_3)+g_3(\varphi_2\chi_3-\varphi_3\chi_2)=0\\
\nonumber&&\frac{\partial
w_v}{\partial\varphi^{0}_2}=2g_1(\varphi^2_2-\varphi_1\varphi_3)+2g_2(\chi^2_2-\chi_1\chi_3)+g_3(\varphi_3\chi_1-\varphi_1\chi_3)=0\\
\nonumber&&\frac{\partial w_v}{\partial
\varphi^{0}_3}=2g_1(\varphi^2_3-\varphi_1\varphi_2)+2g_2(\chi^2_3-\chi_1\chi_2)+g_3(\varphi_1\chi_2-\varphi_2\chi_1)=0\\
\nonumber&&\frac{\partial w_v}{\partial
\xi^{'0}}=g_4(\varphi_1\chi_1+\varphi_2\chi_3+\varphi_3\chi_2)=0\\
\label{8}&&\frac{\partial w_v}{\partial
\theta^{0}}=M^2_{\theta}+\kappa\theta^2=0\end{aligned}$$ This set of equations admits the solution $$\begin{aligned}
\label{9}&&\langle\varphi\rangle=(0,v_{\varphi},0),~~~\langle\chi\rangle=(0,v_{\chi},0),~~~\langle\theta\rangle=v_{\theta}\end{aligned}$$ with $$\label{10}v^2_{\varphi}=-\frac{g_2}{g_1}v^2_{\chi},
~~~~v^2_{\theta}=-\frac{M^2_{\theta}}{\kappa},~~~v_{\chi} {\rm~
undetermined}$$ From the driving superpotential $w_v$, we can also derive the equations from which to extract the vacuum expectation values of $\eta$, $\phi$ and $\Delta$ $$\begin{aligned}
\nonumber&&\frac{\partial w_v}{\partial
\eta^{0}_1}=f_1\eta^2_1+f_2(\phi^2_3+2\phi_1\phi_2)=0\\
\nonumber&&\frac{\partial w_v}{\partial
\eta^{0}_2}=f_1\eta^2_2+f_2(\phi^2_2+2\phi_1\phi_3)=0\\
\nonumber&&\frac{\partial w_v}{\partial
\phi^{0}_1}=f_3(\eta_1\phi_2-\eta_2\phi_3)=0\\
\nonumber&&\frac{\partial w_v}{\partial
\phi^{0}_2}=f_3(\eta_1\phi_1-\eta_2\phi_2)=0\\
\nonumber&&\frac{\partial w_v}{\partial
\phi^{0}_3}=f_3(\eta_1\phi_3-\eta_2\phi_1)=0\\
\label{11}&&\frac{\partial w_v}{\partial
\Delta^0}=h_1\Delta^2+2h_2\eta_1\eta_2+h_3(\phi^2_1+2\phi_2\phi_3)=0\end{aligned}$$ The solution to the above six equations is $$\begin{aligned}
\label{12}\langle\eta\rangle=(v_{\eta},v_{\eta}),~~~~\langle\phi\rangle=(v_{\phi},v_{\phi},v_{\phi}),~~~~\langle\Delta\rangle=v_{\Delta}\end{aligned}$$ with the conditions $$\label{13}v^2_{\phi}=-\frac{f_1}{3f_2}v^2_{\eta},~~~~v^2_{\Delta}=\frac{f_1h_3-2f_2h_2}{f_2h_1}v^2_{\eta},~~~~v_{\eta} {\rm~
undetermined}$$ The vacuum expectation values (VEVs) of the flavons can be very large, much larger than the electroweak scale, and we expect that all the VEVs are of a common order of magnitude. This is a very common assumption in flavor model building, which ensures the validity of the subsequent perturbative expansion in inverse powers of the cutoff scale $\Lambda$. Acting on the vacuum configurations of Eq.(\[9\]) and Eq.(\[12\]) with the elements of the flavor symmetry group $S_4$, we can see that the VEVs of $\eta$ and $\phi$ are invariant under the four elements 1, $TST$, $TSTS^2$ and $S^2$, which exactly constitute the Klein four group $H^{(5)}_4$. On the contrary, the VEVs of $\varphi$ and $\chi$ break $S_4$ completely. Under the action of $T$ or $T^2$, the directions of $\langle\varphi\rangle$ and $\langle\chi\rangle$ are invariant up to an overall phase. Considering the enlarged group $S_4\times
Z_3$, the vacuum configuration Eq.(9) preserves the subgroup $Z_3$ generated by $\omega T$, which is defined as the simultaneous transformation of $T\in S_4$ and $\omega\in Z_3$. As we shall see later, the $S_4$ flavor symmetry is spontaneously broken by the VEVs of $\eta$ and $\phi$ in the neutrino sector at leading order (LO), and it is broken by the VEVs of $\varphi$ and $\chi$ in the charged lepton sector, whereas both $\eta$, $\phi$ and $\varphi$, $\chi$ are involved in generating the quark masses. At LO the $S_4$ flavor symmetry is thus broken to the Klein four symmetry $H^{(5)}_4$ in the neutrino sector and to the $Z_3$ symmetry generated by $T$ in the charged lepton sector. This symmetry breaking chain is crucial for generating the TB mixing.
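As an independent cross-check of the alignment in Eq.(12) and Eq.(13) (a minimal sympy sketch, not part of the model itself), the snippet below verifies that $\langle\eta\rangle=(v_{\eta},v_{\eta})$, $\langle\phi\rangle=(v_{\phi},v_{\phi},v_{\phi})$ and $\langle\Delta\rangle=v_{\Delta}$, with $v^2_{\phi}$ and $v^2_{\Delta}$ as given in Eq.(13), annihilate all six F-term conditions of Eq.(11).

```python
import sympy as sp

v, f1, f2, f3, h1, h2, h3 = sp.symbols('v f1 f2 f3 h1 h2 h3')

w  = sp.sqrt(-f1/(3*f2)) * v                     # v_phi of Eq. (13)
vD = sp.sqrt((f1*h3 - 2*f2*h2)/(f2*h1)) * v      # v_Delta of Eq. (13)
eta, phi = (v, v), (w, w, w)

eqs = [f1*eta[0]**2 + f2*(phi[2]**2 + 2*phi[0]*phi[1]),    # d w_v / d eta0_1
       f1*eta[1]**2 + f2*(phi[1]**2 + 2*phi[0]*phi[2]),    # d w_v / d eta0_2
       f3*(eta[0]*phi[1] - eta[1]*phi[2]),                 # d w_v / d phi0_1
       f3*(eta[0]*phi[0] - eta[1]*phi[1]),                 # d w_v / d phi0_2
       f3*(eta[0]*phi[2] - eta[1]*phi[0]),                 # d w_v / d phi0_3
       h1*vD**2 + 2*h2*eta[0]*eta[1] + h3*(phi[0]**2 + 2*phi[1]*phi[2])]  # d w_v / d Delta0
print([sp.simplify(e) for e in eqs])             # all six entries reduce to 0
```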
The model with $S_4\times Z_3\times Z_4$ flavor symmetry
========================================================
In this section we shall propose a concise supersymmetric (SUSY) model based on $S_4\times Z_3\times Z_4$ flavor symmetry with the vacuum alignment of Eq.(9) and Eq.(12).
Charged leptons
---------------
The charged lepton masses are described by the following superpotential $$\begin{aligned}
\nonumber&&w_{\ell}=\frac{y_{e1}}{\Lambda^3}\;e^{c}(\ell\varphi)_{1_1}(\varphi\varphi)_{1_1}h_d+\frac{y_{e2}}{\Lambda^3}\;e^{c}((\ell\varphi)_2(\varphi\varphi)_2)_{1_1}h_d+\frac{y_{e3}}{\Lambda^3}\;e^{c}((\ell\varphi)_{3_1}(\varphi\varphi)_{3_1})_{1_1}h_d\\
\nonumber&&~~+\frac{y_{e4}}{\Lambda^3}\;e^{c}((\ell\chi)_2(\chi\chi)_2)_{1_1}h_d+\frac{y_{e5}}{\Lambda^3}\;e^{c}((\ell\chi)_{3_1}(\chi\chi)_{3_1})_{1_1}h_d+\frac{y_{e6}}{\Lambda^3}\;e^{c}(\ell\varphi)_{1_1}(\chi\chi)_{1_1}h_d\\
\nonumber&&~~+\frac{y_{e7}}{\Lambda^3}\;e^{c}((\ell\varphi)_2(\chi\chi)_2)_{1_1}h_d+\frac{y_{e8}}{\Lambda^3}\;e^{c}((\ell\varphi)_{3_1}(\chi\chi)_{3_1})_{1_1}h_d+\frac{y_{e9}}{\Lambda^3}\;e^{c}((\ell\chi)_2(\varphi\varphi)_2)_{1_1}h_d\\
\label{14}&&~~+\frac{y_{e10}}{\Lambda^3}\;e^{c}((\ell\chi)_{3_1}(\varphi\varphi)_{3_1})_{1_1}h_d+\frac{y'_{\mu}}{\Lambda^2}\mu^{c}(\ell(\varphi\chi)_{3_2})_{1_2}h_d+\frac{y_{\tau}}{\Lambda}\tau^{c}(\ell\varphi)_{1_1}h_d+...\end{aligned}$$ In the above superpotential $w_{\ell}$, for each charged lepton, only the lowest order operators in the expansion in powers of $1/\Lambda$ are displayed explicitly. Dots stand for higher dimensional operators. Note that the auxiliary $Z_4$ symmetry imposes different powers of $\varphi$ and $\chi$ for the electron, muon and tau terms. At LO only the tau mass is generated; the muon and electron masses are generated by higher order contributions. After the flavor symmetry breaking and the electroweak symmetry breaking, the charged leptons acquire masses, and $w_{\ell}$ becomes $$\begin{aligned}
\nonumber&&w_{\ell}=\Big[(y_{e2}-2y_{e3})\frac{v^3_{\varphi}}{\Lambda^3}+(-y_{e4}+2y_{e5})\frac{v^3_{\chi}}{\Lambda^3}+(y_{e7}-2y_{e8})\frac{v_{\varphi}v^2_{\chi}}{\Lambda^3}+(-y_{e9}+2y_{e10})\frac{v_{\chi}v^2_{\varphi}}{\Lambda^3}\Big]v_de^{c}e\\
\nonumber&&~~~~+2y'_{\mu}\frac{v_{\varphi}v_{\chi}}{\Lambda^2}\;v_{d}\mu^{c}\mu+y_{\tau}\frac{v_{\varphi}}{\Lambda}\;v_d\tau^{c}\tau\\
\label{15}&&~~\equiv
y_e\frac{v^3_{\varphi}}{\Lambda^3}\;v_de^{c}e+y_{\mu}\frac{v_{\varphi}v_{\chi}}{\Lambda^2}\;v_{d}\mu^{c}\mu+y_{\tau}\frac{v_{\varphi}}{\Lambda}\;v_d\tau^{c}\tau\end{aligned}$$ where $v_d=\langle h_d\rangle$, $y_e=y_{e2}-2y_{e3}+(-y_{e4}+2y_{e5})\frac{v^3_{\chi}}{v^3_{\varphi}}+(y_{e7}-2y_{e8})\frac{v^2_{\chi}}{v^2_{\varphi}}+(-y_{e9}+2y_{e10})\frac{v_{\chi}}{v_{\varphi}}$ and $y_{\mu}=2y'_{\mu}$. As a result, the charged lepton mass matrix is diagonal at LO $$\label{16}m_{\ell}=\left(\begin{array}{ccc}
y_e\frac{v^3_{\varphi}}{\Lambda^3}&0&0\\
0&y_{\mu}\frac{v_{\varphi}v_{\chi}}{\Lambda^2}&0\\
0&0&y_{\tau}\frac{v_{\varphi}}{\Lambda}
\end{array}\right)v_d$$ It is obvious that the hermitian matrix $m^{\dagger}_{\ell}m_{\ell}$ is invariant under both $T$ and $T^2$ displayed in Appendix A, i.e., $$\label{17}T^{\dagger}m^{\dagger}_{\ell}m_{\ell}T=m^{\dagger}_{\ell}m_{\ell}$$ Conversely, the general matrix invariant under $T$ and $T^{2}$ must be diagonal. Consequently the $S_4$ symmetry is broken to the $Z_3$ subgroup $H^{(1)}_1\equiv G_{\ell}$ in the charged lepton sector. The charged lepton masses can be read out directly as $$\label{18}m_e=\Big|y_e\frac{v^3_{\varphi}}{\Lambda^3}v_d\Big|,~~~m_{\mu}=\Big|y_{\mu}\frac{v_{\varphi}v_{\chi}}{\Lambda^2}v_d\Big|,~~~m_{\tau}=\Big|y_{\tau}\frac{v_{\varphi}}{\Lambda}v_d\Big|$$ We notice that the charged lepton mass hierarchies are naturally generated by the spontaneous breaking of the $S_4$ symmetry without exploiting the FN mechanism [@Altarelli:2005yx]. Using the experimental data on the ratios of the lepton masses, one can estimate the order of magnitude of $v_{\varphi}/\Lambda$ and $v_{\chi}/\Lambda$. Assuming that the coefficients $y_e$, $y_{\mu}$ and $y_{\tau}$ are of ${\cal O}(1)$, we obtain $$\begin{aligned}
\nonumber&&\frac{m_e}{m_{\tau}}\sim\frac{v^2_{\varphi}}{\Lambda^2}\simeq3\times10^{-4}\\
\label{19}&&\frac{m_{\mu}}{m_{\tau}}\sim\frac{v_{\chi}}{\Lambda}\simeq6\times10^{-2}\end{aligned}$$ Obviously the solution to the above equations is $$\label{20}(\frac{v_{\varphi}}{\Lambda},~\frac{v_{\chi}}{\Lambda})\sim(\pm1.73\times10^{-2},~6\times10^{-2})$$ We see that both $v_{\varphi}/\Lambda$ and $v_{\chi}/\Lambda$ are roughly of the same order, about ${\cal O}(\lambda^2_c)$, where $\lambda_c$ is the Cabibbo angle.
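For concreteness, the following minimal numerical sketch (assuming order-one coefficients $y_e$, $y_{\mu}$, $y_{\tau}$; the lepton mass values in the code are illustrative inputs) reproduces the estimates of Eq.(\[19\]) and Eq.(\[20\]):

```python
# Order-of-magnitude estimate of the symmetry breaking parameters from the
# charged lepton mass ratios, assuming y_e, y_mu, y_tau ~ O(1).
m_e, m_mu, m_tau = 0.511e-3, 0.1057, 1.777      # masses in GeV

v_varphi_over_Lambda = (m_e/m_tau)**0.5         # m_e/m_tau ~ (v_varphi/Lambda)^2
v_chi_over_Lambda    = m_mu/m_tau               # m_mu/m_tau ~ v_chi/Lambda

print(v_varphi_over_Lambda, v_chi_over_Lambda)  # ~1.7e-2 and ~6e-2, both ~ lambda_c^2
```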
Neutrinos
---------
The superpotential contributing to the neutrino mass is as follows $$\begin{aligned}
\label{21}w_{\nu}=\frac{y_{\nu1}}{\Lambda}((\nu^{c}\ell)_2\eta)_{1_1}h_u+\frac{y_{\nu2}}{\Lambda}((\nu^{c}\ell)_{3_1}\phi)_{1_1}h_u+\frac{1}{2}M(\nu^c\nu^c)_{1_1}+...\end{aligned}$$ where dots denote the higher order contributions, $M$ is a constant with dimension of mass, and the factor $\frac{1}{2}$ is a normalization factor for convenience. The first two terms in Eq.(\[21\]) determine the neutrino Dirac mass matrix, and the third term is the Majorana mass term. After electroweak and $S_4$ symmetry breaking, we obtain the following LO contributions to the neutrino Dirac and Majorana mass matrices $$\label{22}m^{D}_{\nu}=\left(\begin{array}{ccc}2b&a-b&a-b\\
a-b&a+2b&-b\\
a-b&-b&a+2b\end{array}\right)v_{u},~~~~M_N=\left(\begin{array}{ccc}
M&0&0\\
0&0&M\\
0&M&0\end{array}\right)$$ where $v_{u}=\langle h_u\rangle$, $a=y_{\nu1}\frac{v_{\eta}}{\Lambda}$ and $b=y_{\nu2}\frac{v_{\phi}}{\Lambda}$. We notice that the Dirac mass matrix is symmetric and it is controlled by two parameters $a$ and $b$. The eigenvalues of the Majorana matrix $M_N$ are given by $$\label{23}M_1=M,~~M_2=M,~~M_3=-M$$ The right handed neutrino masses are exactly degenerate, which is a remarkable feature of our model. Integrating out the heavy degrees of freedom, we get the light neutrino mass matrix, which is given by the famous See-Saw relation $$\label{24}m_{\nu}=-(m^{D}_{\nu})^{T}M^{-1}_{N}m^{D}_{\nu}=-\frac{v^2_u}{M}\left(\begin{array}{ccc}2a^2+6b^2-4ab&a^2-3b^2+2ab&a^2-3b^2+2ab\\
a^2-3b^2+2ab&a^2-3b^2-4ab&2a^2+6b^2+2ab\\
a^2-3b^2+2ab&2a^2+6b^2+2ab&a^2-3b^2-4ab
\end{array}\right)$$ The above light neutrino mass matrix $m_{\nu}$ is $2\leftrightarrow3$ invariant and it satisfies the magic symmetry $(m_{\nu})_{11}+(m_{\nu})_{13}=(m_{\nu})_{22}+(m_{\nu})_{23}$. Therefore it is exactly diagonalized by the TB mixing $$\label{25}U^{T}_{\nu}m_{\nu}U_{\nu}={\rm diag}(m_1,m_2,m_3)$$ The unitary matrix $U_{\nu}$ is written as $$\label{26}U_{\nu}=U_{TB}\,{\rm
diag}(e^{-i\alpha_1/2},e^{-i\alpha_2/2},e^{-i\alpha_3/2})$$ The phases $\alpha_{1}$, $\alpha_2$ and $\alpha_3$ are given by $$\begin{aligned}
\nonumber&&\alpha_1={\rm arg}(-(a-3b)^2/M)\\
\nonumber&&\alpha_2={\rm arg}(-4a^2/M)\\
\label{27}&&\alpha_3={\rm arg}((a+3b)^2/M)\end{aligned}$$ $m_{1}$, $m_{2}$ and $m_{3}$ in Eq.(\[25\]) are the light neutrino masses, $$\begin{aligned}
\nonumber&&m_1=|(a-3b)^2|\frac{v^2_u}{|M|}\\
\nonumber&&m_2=4|a^2|\frac{v^2_u}{|M|}\\
\label{28}&&m_3=|(a+3b)^2|\frac{v^2_u}{|M|}\end{aligned}$$ Concerning the neutrinos, the $S_4$ symmetry is spontaneously broken by the VEVs of $\eta$ and $\phi$ at LO. Since both $\langle\eta\rangle$ and $\langle\phi\rangle$ are invariant under the actions of $TSTS^2$, $TST$ and $S^2$, the flavor symmetry $S_4$ is broken down to the Klein four subgroup $G_{\nu}\equiv
H^{(5)}_4=\{1,TSTS^2,TST,S^2\}$ in the neutrino sector. We can straightforwardly check that the light neutrino mass matrix $m_{\nu}$ is indeed invariant under $TSTS^2$, $TST$ and $S^2$. Conversely, the most general neutrino mass matrix invariant under the Klein four group $G_{\nu}$ is given by $$\label{29}m_{\nu}=\left(\begin{array}{ccc}
m_{11}&m_{12}&m_{12}\\
m_{12}&m_{22}&m_{11}+m_{12}-m_{22}\\
m_{12}&m_{11}+m_{12}-m_{22}& m_{22}
\end{array}\right)$$ where $m_{11}$, $m_{12}$ and $m_{22}$ are arbitrary parameters. In the present model, the light neutrino mass matrix is given by Eq.(\[24\]), which is a particular version of the neutrino mass matrix in Eq.(\[29\]). Since only two parameters $a$ and $b$ are involved in our model, an additional constraint has to be satisfied, i.e., $3m^2_{11}+4m_{12}m_{11}-4m_{22}m_{11}-8m^2_{12}-4m^2_{22}=0$, which is generally not implied by the invariance under $G_{\nu}$. This is because in our model the fields which break $S_4$ are a doublet $\eta$ and a triplet $\phi$, and there are no further flavons transforming as $1_1$ or $3_2$ that couple to the neutrino sector.
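As a numerical illustration of the statements above, the following short sketch (with arbitrary illustrative values of $a$ and $b$, and $v_u=M=1$) checks that the see-saw mass matrix of Eq.(\[24\]) is indeed diagonalized by the TB matrix with the masses of Eq.(\[28\]):

```python
import numpy as np

# Check that U_TB^T m_nu U_TB is diagonal for the see-saw matrix of Eq.(24),
# using illustrative values a=1, b=0.3*exp(0.4i) and v_u = M = 1.
a, b = 1.0, 0.3*np.exp(0.4j)

m_nu = -np.array([
    [2*a**2 + 6*b**2 - 4*a*b, a**2 - 3*b**2 + 2*a*b,   a**2 - 3*b**2 + 2*a*b],
    [a**2 - 3*b**2 + 2*a*b,   a**2 - 3*b**2 - 4*a*b,   2*a**2 + 6*b**2 + 2*a*b],
    [a**2 - 3*b**2 + 2*a*b,   2*a**2 + 6*b**2 + 2*a*b, a**2 - 3*b**2 - 4*a*b],
])

U_TB = np.array([
    [ np.sqrt(2/3), 1/np.sqrt(3),  0],
    [-1/np.sqrt(6), 1/np.sqrt(3), -1/np.sqrt(2)],
    [-1/np.sqrt(6), 1/np.sqrt(3),  1/np.sqrt(2)],
])

diag = U_TB.T @ m_nu @ U_TB            # complex diag(-(a-3b)^2, -4a^2, (a+3b)^2)
print(np.round(diag, 12))
print(abs(a - 3*b)**2, 4*abs(a)**2, abs(a + 3*b)**2)   # m_1, m_2, m_3 of Eq.(28)
```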
In short, at LO the $S_4$ flavor symmetry is broken down to the $Z_3$ and the Klein four subgroups in the charged lepton and neutrino sectors, respectively. We have obtained a diagonal and hierarchical charged lepton mass matrix, degenerate heavy neutrino masses, and a neutrino mixing matrix that is exactly the TB matrix.
Effective operators
-------------------
In the previous subsection, the neutrinos acquire masses via the See-Saw mechanism. It is interesting to note that higher dimensional Weinberg operators could also contribute to the neutrino mass directly; they may correspond to the exchange of heavy particles other than the right handed neutrinos $\nu^{c}$. In the present model, these effective light neutrino mass operators are $$\begin{aligned}
\nonumber&&w^{eff}_{\nu}=\frac{x}{\Lambda^3}(\ell h_u\ell
h_u)_{1_1}\Delta^2+\frac{y_1}{\Lambda^3}(\ell h_u\ell
h_u)_{1_1}(\eta^2)_{1_1}+\frac{y_2}{\Lambda^3}((\ell h_u\ell
h_u)_2(\eta^2)_2)_{1_1}+\frac{z_1}{\Lambda^3}(\ell h_u\ell
h_u)_{1_1}(\phi^2)_{1_1}\\
\label{30}~~~~~~&&+\frac{z_2}{\Lambda^3}((\ell h_u\ell
h_u)_2(\phi^2)_2)_{1_1}+\frac{z_3}{\Lambda^3}((\ell h_u\ell
h_u)_{3_1}(\phi^2)_{3_1})_{1_1}+\frac{w}{\Lambda^3}((\ell h_u\ell
h_u)_{3_1}(\eta\phi)_{3_1})_{1_1}\end{aligned}$$ With the vacuum configurations displayed in Eq.(\[12\]), the higher dimensional operators in $w^{eff}_{\nu}$ lead to the following effective light neutrino mass matrix $$\label{31}m^{eff}_{\nu}=\left(\begin{array}{ccc}
\alpha+2\gamma&\beta-\gamma&\beta-\gamma\\
\beta-\gamma&\beta+2\gamma&\alpha-\gamma\\
\beta-\gamma&\alpha-\gamma&\beta+2\gamma\end{array}\right)\frac{v^2_u}{\Lambda}$$ where $$\begin{aligned}
\nonumber&&\alpha=2x\frac{v^2_{\Delta}}{\Lambda^2}+4y_1\frac{v^2_{\eta}}{\Lambda^2}+6z_1\frac{v^2_{\phi}}{\Lambda^2}\\
\nonumber&&\beta=2y_2\frac{v^2_{\eta}}{\Lambda^2}+6z_2\frac{v^2_{\phi}}{\Lambda^2}\\
\label{32}&&\gamma=4w\frac{v_{\eta}v_{\phi}}{\Lambda^2}\end{aligned}$$ Obviously $m^{eff}_{\nu}$ has the same texture as that in Eq.(\[29\]), and it is remarkable that this mass matrix is diagonalized by the TB matrix, $$\label{33}U^{T}_{TB}m^{eff}_{\nu}U_{TB}={\rm
diag}(m^{eff}_1,m^{eff}_2,m^{eff}_3)$$ where $m^{eff}_1$, $m^{eff}_2$ and $m^{eff}_3$ are the effective light neutrino masses coming from the above high dimension Weinberg operators, they are given by $$\begin{aligned}
\nonumber&&m^{eff}_1=(\alpha-\beta+3\gamma)\frac{v^2_u}{\Lambda}\\
\nonumber&&m^{eff}_2=(\alpha+2\beta)\frac{v^2_u}{\Lambda}\\
\label{34}&&m^{eff}_3=(-\alpha+\beta+3\gamma)\frac{v^2_u}{\Lambda}\end{aligned}$$ If we consider the parameters $y_{\nu1,\nu2}\sim{\cal O}(1)$, $y_{1,2}\sim{\cal O}(1)$, $z_{1,2,3}\sim{\cal O}(1)$ and $x\sim
w\sim{\cal O}(1)$, we get the ratio $$\label{35}\frac{m^{eff}_i}{m_i}\sim\frac{M}{\Lambda}$$ We see that the importance of Weinberg operators depends on the relative size of $M$ and $\Lambda$. Since we have assumed that the light neutrino masses mainly come from the See-Saw mechanism, the right handed neutrino mass $M$ should be much smaller than the cutoff scale $\Lambda$. In the context of a grand unified theory, this corresponds to the requirement that $M$ is of order ${\cal
O}(M_{GUT})$ rather than of ${\cal O}(M_{Planck})$. In some flavor models, right handed neutrino masses are required to be below the cutoff $\Lambda$ as well, in order to reproduce the experimental value of the small parameter $\Delta m^2_{sol}/\Delta m^2_{atm}$ [@Altarelli:2009kr; @Altarelli:2009gn]. Another convenient way of suppressing the contributions of the effective operators is to introduce a further auxiliary symmetry, so that the Weinberg operators arise at much higher order and their contributions can be neglected.
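The statement that $m^{eff}_{\nu}$ of Eq.(\[31\]) is diagonalized by $U_{TB}$ with the eigenvalues of Eq.(\[34\]) can also be checked symbolically; a minimal sketch (with the overall factor $v^2_u/\Lambda$ omitted) is:

```python
import sympy as sp

# Symbolic check of Eq.(33)-(34) for the texture of Eq.(31); the global factor
# v_u^2/Lambda is dropped.
al, be, ga = sp.symbols('alpha beta gamma')

m_eff = sp.Matrix([
    [al + 2*ga, be - ga,   be - ga],
    [be - ga,   be + 2*ga, al - ga],
    [be - ga,   al - ga,   be + 2*ga],
])
U_TB = sp.Matrix([
    [ sp.sqrt(sp.Rational(2, 3)), 1/sp.sqrt(3),  0],
    [-1/sp.sqrt(6),               1/sp.sqrt(3), -1/sp.sqrt(2)],
    [-1/sp.sqrt(6),               1/sp.sqrt(3),  1/sp.sqrt(2)],
])

print((U_TB.T*m_eff*U_TB).applyfunc(sp.simplify))
# -> diag(alpha - beta + 3*gamma, alpha + 2*beta, -alpha + beta + 3*gamma)
```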
Extension to the quark sector
-----------------------------
The Yukawa superpotentials in the quark sector are $$\label{36}w_{q}=w_u+w_d$$ In the up quark sector, we have $$\begin{aligned}
\nonumber&&w_u=y_tt^{c}Q_3h_u+\sum^{3}_{i=1}\frac{y_{ti}}{\Lambda^2}t^{c}(Q_L{\cal
O}^{(1)}_i)_{1_1}h_u+\sum^{2}_{i=1}\frac{y'_{ti}}{\Lambda^3}t^{c}(Q_L{\cal
O}^{(2)}_i)_{1_1}h_u+\sum^{3}_{i=1}\frac{y_{ci}}{\Lambda^2}c^{c}(Q_L{\cal
O}^{(1)}_i)_{1_2}h_u\\
\nonumber&&~~~~+\sum^{2}_{i=1}\frac{y'_{ci}}{\Lambda^3}c^{c}(Q_L{\cal
O}^{(2)}_i)_{1_2}h_u+\frac{y_{ct}}{\Lambda}c^cQ_3\theta
h_u+\sum^{8}_{i=1}\frac{y_{ui}}{\Lambda^4}u^{c}(Q_L{\cal
O}^{(3)}_i)_{1_1}h_u\\
\label{37}&&~~~~+\sum^{2}_{i=1}\frac{y'_{ui}}{\Lambda^3}u^{c}Q_3({\cal
O}^{(4)})_{1_1}h_u+...\end{aligned}$$ where $$\begin{aligned}
\nonumber&&{\cal O}^{(1)}=\{\varphi\varphi, \varphi\chi, \chi\chi\}\\
\nonumber&&{\cal O}^{(2)}=\{\eta^2\Delta, \phi^2\Delta\}\\
\nonumber&&{\cal
O}^{(3)}=\{\varphi\phi^3,\chi\phi^3,\varphi\eta\phi^2,\chi\eta\phi^2,\varphi\eta^2\phi,\chi\eta^2\phi,\varphi\phi\Delta^2,\chi\phi\Delta^2\}\\
\label{38}&&{\cal O}^{(4)}=\{\varphi^3,\varphi\chi^2\}\end{aligned}$$
The superpotentials contributing to the down quark masses are as follows $$\begin{aligned}
\nonumber&&w_d=\frac{y_b}{\Lambda}b^cQ_3\theta
h_d+\sum^{3}_{i=1}\frac{y_{bi}}{\Lambda^2}b^{c}(Q_L{\cal
O}^{(1)}_i)_{1_2}h_d+\sum^{2}_{i=1}\frac{y'_{bi}}{\Lambda^3}b^c(Q_L{\cal
O}^{(2)}_i)_{1_2}h_d+\sum^{3}_{i=1}\frac{y_{si}}{\Lambda^2}s^{c}(Q_L{\cal
O}^{(1)}_i)_{1_2}h_d\\
\nonumber&&~~~~+\sum^{2}_{i=1}\frac{y'_{si}}{\Lambda^3}s^{c}(Q_L{\cal
O}^{(2)}_i)_{1_2}h_d+\sum^{3}_{i=1}\frac{y''_{si}}{\Lambda^3}s^{c}Q_3({\cal
O}^{(5)}_i)_{1_2}h_d+\sum^{6}_{i=1}\frac{y_{di}}{\Lambda^3}d^{c}(Q_L{\cal
O}^{(6)}_i)_{1_1}h_d\\
\label{39}&&~~~~+\frac{y'_{d1}}{\Lambda^3}d^{c}Q_3(\varphi\chi)_{1_2}\Delta
h_d+\sum^{9}_{i=1}\frac{y''_{di}}{\Lambda^4}d^{c}Q_3({\cal
O}^{(7)}_i)_{1_1}h_d+...\end{aligned}$$ where $$\begin{aligned}
\nonumber&&{\cal O}^{(5)}=\{\eta^3,\eta\phi^2,\theta^3\}\\
\nonumber&&{\cal
O}^{(6)}=\{\varphi^2\eta,\varphi^2\phi,\chi^2\eta,\chi^2\phi,\varphi\chi\eta,\varphi\chi\phi\}\\
\label{40}&&{\cal
O}^{(7)}=\{\varphi^2\theta\Delta,\chi^2\theta\Delta,\eta^4,\eta^2\Delta^2,\eta^2\phi^2,\eta\phi^3,\phi^4,\phi^2\Delta^2,\Delta^4\}\end{aligned}$$ Since the quantum numbers of $b^{c}$ and $s^{c}$ are exactly the same, as is obvious from Table \[tab:trans\], there is no fundamental distinction between $b^{c}$ and $s^{c}$; we have defined $b^{c}$ as the one which couples to $Q_3\theta h_d$ in the superpotential $w_d$. We notice that both the supermultiplets $\varphi$, $\chi$ and $\eta$, $\phi$, which control the flavor symmetry breaking in the charged lepton and neutrino sectors respectively, couple to the quarks. Consequently the $S_4$ flavor symmetry is completely broken in the quark sector. By recalling the vacuum configuration in Eq.(\[9\]) and Eq.(\[12\]), we can write down the mass matrices for the up and down quarks $$\begin{aligned}
\label{41}&&m_{u}=\left(\begin{array}{ccc}
y^{(u)}_{11}\frac{v_{\varphi}v^3_{\phi}}{\Lambda^4}&y^{(u)}_{12}\frac{v_{\varphi}v^3_{\phi}}{\Lambda^4}&y^{(u)}_{13}\frac{v^3_{\varphi}}{\Lambda^3}\\
y^{(u)}_{21}\frac{v^2_{\phi}v_{\Delta}}{\Lambda^3}&y^{(u)}_{22}\frac{v^2_{\varphi}}{\Lambda^2}&y^{(u)}_{23}\frac{v_{\theta}}{\Lambda}\\
y^{(u)}_{31}\frac{v^2_{\phi}v_{\Delta}}{\Lambda^3}&y^{(u)}_{32}\frac{v^2_{\varphi}}{\Lambda^2}&y^{(u)}_{33}
\end{array}\right)v_{u}\\
\nonumber&&\\
\label{42}&&m_{d}=\left(\begin{array}{ccc}
y^{(d)}_{11}\frac{v^2_{\varphi}v_{\phi}}{\Lambda^3}&y^{(d)}_{12}\frac{v^2_{\varphi}v_{\phi}}{\Lambda^3}&y^{(d)}_{13}\frac{v^4_{\phi}}{\Lambda^4}\\
y^{(d)}_{21}\frac{v^2_{\phi}v_{\Delta}}{\Lambda^3}&y^{(d)}_{22}\frac{v^2_{\varphi}}{\Lambda^2}&y^{(d)}_{23}\frac{v^3_{\theta}}{\Lambda^3}\\
y^{(d)}_{31}\frac{v^2_{\phi}v_{\Delta}}{\Lambda^3}&y^{(d)}_{32}\frac{v^2_{\varphi}}{\Lambda^2}&y^{(d)}_{33}\frac{v_{\theta}}{\Lambda}
\end{array}\right)v_d\end{aligned}$$ where $y^{(u)}_{ij}$ and $y^{(d)}_{ij}$ ($i,j=1,2,3$) are the sums of all the different terms appearing in the superpotential, all of which are expected to be of order one. We note that the contribution of $d^{c}Q_3(\varphi\chi)_{1_2}\Delta h_d$ vanishes with the LO vacuum alignment; accordingly the (13) element of the down quark mass matrix $m_d$ arises at order $1/\Lambda^4$. Diagonalizing the above quark mass matrices in Eq.(\[41\]) and Eq.(\[42\]) with the standard perturbation technique, we obtain the quark masses as follows $$\begin{aligned}
\nonumber&&m_{u}\simeq\Big|y^{(u)}_{11}\frac{v_{\varphi}v^3_{\phi}}{\Lambda^4}v_u\Big|\\
\nonumber&&m_c\simeq\Big|y^{(u)}_{22}\frac{v^2_{\varphi}}{\Lambda^2}v_u\Big|\\
\nonumber&&m_{t}\simeq\Big|y^{(u)}_{33}v_u\Big|\\
\nonumber&&m_{d}\simeq\Big|y^{(d)}_{11}\frac{v^2_{\varphi}v_{\phi}}{\Lambda^3}v_d\Big|\\
\nonumber&&m_{s}\simeq\Big|y^{(d)}_{22}\frac{v^2_{\varphi}}{\Lambda^2}v_d\Big|\\
\label{43}&&m_b\simeq\Big|y^{(d)}_{33}\frac{v_{\theta}}{\Lambda}v_d\Big|\end{aligned}$$ We see that the quark mass hierarchies are correctly produced if the VEVs $v_{\eta}$, $v_{\phi}$, $v_{\theta}$ and $v_{\Delta}$ are of order ${\cal O}(\lambda^2_c\Lambda)$ as well. This is consistent with our naive expectation that all the VEVs should be of the same order of magnitude. Note that the quark mass hierarchies are generated through the spontaneous breaking of the flavor symmetry instead of the FN mechanism. It is obvious that the mass hierarchy between the top and bottom quarks mainly comes from the symmetry breaking parameter $v_{\theta}/\Lambda$, and $\tan\beta\equiv\frac{v_u}{v_d}$ should be of order one in our model. Comparing with the tau lepton mass $m_{\tau}$ predicted in Eq.(\[18\]), we see that $m_{\tau}$ and $m_b$ are of the same order, which is consistent with the $b-\tau$ unification predicted in many grand unification models.
For the quark mixing, the CKM matrix elements are estimated as $$\begin{aligned}
\nonumber&&V_{ud}\simeq V_{cs}\simeq V_{tb}\simeq1\\
\nonumber&&V^{*}_{us}\simeq-V_{cd}\simeq\Big(\frac{y^{(d)}_{21}}{y^{(d)}_{22}}-\frac{y^{(u)}_{21}}{y^{(u)}_{22}}\Big)\frac{v^3_{\phi}}{\Lambda
v^2_{\varphi}}\\
\nonumber&&V^{*}_{ub}\simeq\frac{y^{(u)}_{22}y^{(d)}_{31}-y^{(u)}_{21}y^{(d)}_{32}}{y^{(u)}_{22}y^{(d)}_{33}}\frac{v^3_{\phi}}{\Lambda^2v_{\theta}}\\
\nonumber&&V^{*}_{cb}\simeq-V_{ts}\simeq\frac{y^{(d)}_{32}}{y^{(d)}_{33}}\frac{v^2_{\varphi}}{\Lambda
v_{\theta}}\\
\label{44}&&V_{td}\simeq\frac{y^{(d)}_{21}y^{(d)}_{32}-y^{(d)}_{22}y^{(d)}_{31}}{y^{(d)}_{22}y^{(d)}_{33}}\frac{v^3_{\phi}}{\Lambda^2v_{\theta}}\end{aligned}$$ We see that the correct orders of the CKM matrix elements are reproduced with the exception of the Cabibbo angle. $V_{us}$ (or $V_{cd}$) is the combination of two independent contributions of order $\lambda^2_c$, so we need an accidental enhancement of the combination $\Big(\frac{y^{(d)}_{21}}{y^{(d)}_{22}}-\frac{y^{(u)}_{21}}{y^{(u)}_{22}}\Big)$, of order $1/\lambda_c$, in order to obtain the correct Cabibbo angle.
Phenomenological implications
=============================
In the following we shall study the constraints on the model imposed by the observed values of $\Delta m^2_{sol}\equiv m^2_2-m^2_1$ and $\Delta
m^2_{atm}\equiv|m^2_3-m^2_1(m^2_2)|$. The important physical consequences of our model are investigated in detail, and the corresponding predictions are presented. In this section we mainly concentrate on the neutrino sector. We assume that the right handed neutrino mass $M$ is much smaller than the cutoff scale $\Lambda$ of the theory, so that the light neutrino masses are dominantly generated via the See-Saw mechanism.
The neutrino mass spectrum
--------------------------
According to Eq.(\[28\]), the light neutrino mass spectrum is controlled by two parameters $a$ and $b$, which are in general both complex numbers. For convenience, we define $$\label{45}\frac{b}{a}=R\,e^{i\Phi}$$ with $R=|\frac{b}{a}|$. As we will see in the following, all the low energy observables can be expressed in terms of only three independent quantities: the ratio $R$, the relative phase $\Phi$ between $a$ and $b$, and the lightest neutrino mass. Experimentally, only two spectrum observables $\Delta m^2_{sol}$ and $\Delta
m^2_{atm}$ have been measured; therefore the light neutrino mass spectrum can have either normal hierarchy (NH) or inverted hierarchy (IH). The ratio between $\Delta m^2_{sol}$ and $\Delta m^2_{atm}$ is given by $$\label{add1}\frac{\Delta m^2_{sol}}{\Delta
m^2_{atm}}=\frac{15-81R^4-18R^2-36R^2\cos^2\Phi+12R(1+9R^2)\cos\Phi}{24R(1+9R^2)|\cos\Phi|}$$ where we have taken $\Delta m^2_{atm}=|m_3^2-m_1^2|$ for both the NH and IH neutrino spectra for convenience. Moreover, we have the following relationships for the neutrino masses $$\begin{aligned}
\nonumber&&\frac{16m^2_1}{m^2_2}=1+81R^4+18R^2+36R^2\cos^2\Phi-12R(1+9R^2)\cos\Phi\\
\label{46}&&\frac{16m^2_3}{m^2_2}=1+81R^4+18R^2+36R^2\cos^2\Phi+12R(1+9R^2)\cos\Phi\end{aligned}$$ Then the parameters $R$ and $\cos\Phi$ can be expressed in terms of light neutrino mass as follows $$\label{47}\left\{\begin{array}{l}R=\frac{1}{3}\sqrt{\frac{2(m_3+m_1)}{m_2}-1}\\
\\
\cos\Phi=\frac{m_3-m_1}{m_2}\frac{1}{\sqrt{\frac{2(m_3+m_1)}{m_2}-1}}
\end{array}\right.$$ or $$\label{48}\left\{\begin{array}{l}
R=\frac{1}{3}\sqrt{\frac{2|m_3-m_1|}{m_2}-1}\\
\\
\cos\Phi=\frac{m^2_3-m^2_1}{m_2|m_3-m_1|}\frac{1}{\sqrt{\frac{2|m_3-m_1|}{m_2}-1}}
\end{array}\right.$$
![\[fig:cosphi\_No\] The variation of $\cos\Phi$ with respect to the lightest neutrino mass $m_1$ for the normal hierarchy spectrum: panel (a) uses the expression for $\cos\Phi$ in Eq.(\[47\]), and panel (b) the value of $\cos\Phi$ in Eq.(\[48\]).](cosphi1_No.eps "fig:") ![](cosphi2_No.eps "fig:")
These results hold for both the normal hierarchy and inverted hierarchy spectra. In the case of normal hierarchy, $m_2$ and $m_3$ can be expressed as functions of the lightest neutrino mass: $m_2=\sqrt{m^2_1+\Delta m^2_{sol}}$ and $m_3=\sqrt{m^2_1+\Delta
m^2_{atm}}$. For the inverted hierarchy, $m_3$ is the lightest neutrino mass, the remaining two masses are $m_1=\sqrt{m^2_3+\Delta
m^2_{atm}}$ and $m_2=\sqrt{m^2_3+\Delta m^2_{sol}+\Delta
m^2_{atm}}$. As a result, taking into account the experimental information on $\Delta m^2_{sol}$ and $\Delta m^2_{atm}$, there is only one real parameter undetermined, and it is chosen to be the lightest neutrino mass $m_l$ ($m_1$ or $m_3$) in the present work. Hence our model is quite predictive. We display $\cos\Phi$ as a function of the lightest neutrino mass in Fig.\[fig:cosphi\_No\] and Fig.\[fig:cosphi\_Io\] for the normal hierarchy and inverted hierarchy respectively, where the best fit values of $\Delta
m^2_{sol}=7.65\times10^{-5}\;{\rm eV^2}$ and $\Delta
m^2_{atm}=2.40\times10^{-3}\;{\rm eV^2}$ have been used. For the solution of $R$ and $\cos\Phi$ shown in Eq.(\[48\]), we can clearly see that the corresponding value of $|\cos\Phi|$ would be larger than 1 for both the normal hierarchy and inverted hierarchy spectra. Furthermore, we have verified that $|\cos\Phi|$ is always larger than 1 for the $3\sigma$ range of $\Delta
m^2_{sol}$ and $\Delta m^2_{atm}$; therefore the solution in Eq.(\[48\]) is disregarded in the following. From the condition $|\cos\Phi|\leq1$, we obtain the following constraints on the lightest neutrino mass, $$\begin{aligned}
\nonumber&&m_1\geq0.011\;{\rm eV},~~~~{\rm Normal~~~hierarchy}\\
\label{49}&&m_3>0.0\;{\rm eV},~~~~~~~{\rm Inverted~~~hierarchy}\end{aligned}$$
![\[fig:cosphi\_Io\] $\cos\Phi$ as a function of the lightest neutrino mass $m_3$ for the inverted hierarchy spectrum: panel (a) and panel (b) are for the $\cos\Phi$ values in Eq.(\[47\]) and Eq.(\[48\]), respectively.](cosphi1_Io.eps "fig:") ![](cosphi2_Io.eps "fig:")
For the NH spectrum, we have a lower bound on $m_1$, which is saturated for $\Phi=0$. The corresponding values of $\cos\Phi$ are positive, so $\Phi$ lies in the range $0\sim\pi/2$ or $3\pi/2\sim2\pi$. In the case of IH, $\cos\Phi$ is negative, so that $\Phi$ varies between $\pi/2$ and $3\pi/2$. From Fig. \[fig:cosphi\_Io\] we can see that $\cos\Phi$ is very close to $-1$ as $m_3$ tends to zero, so the lightest neutrino mass $m_3$ is less constrained.
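The NH lower bound quoted in Eq.(\[49\]) can be reproduced numerically; a rough sketch (with the assumed best-fit values quoted above) is:

```python
import numpy as np

# Scan the NH solution of Eq.(47) and find the smallest m1 with |cos(Phi)| <= 1,
# using the best-fit dm^2 values quoted in the text.
dm2_sol, dm2_atm = 7.65e-5, 2.40e-3            # eV^2

m1 = np.linspace(1e-4, 0.05, 200000)           # lightest mass in eV
m2 = np.sqrt(m1**2 + dm2_sol)
m3 = np.sqrt(m1**2 + dm2_atm)
cos_phi = (m3 - m1)/m2/np.sqrt(2*(m3 + m1)/m2 - 1)   # Eq.(47)

print(m1[np.abs(cos_phi) <= 1].min())          # ~0.011 eV, cf. Eq.(49)
```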
Neutrinoless double beta decay
------------------------------
Neutrinoless double beta decay (0$\nu2\beta$) is a sensitive probe of the scale of the neutrino masses; it is a very slow lepton-number-violating nuclear transition that occurs if neutrinos have mass and are Majorana particles. The rate of $0\nu2\beta$ decay is determined by the nuclear matrix elements and the effective 0$\nu2\beta$-decay mass $|m_{\beta\beta}|$, which is defined as $|m_{\beta\beta}|=|\sum_{k}(U_{PMNS})^2_{ek}m_k|$. In the present model it is given by $$\begin{aligned}
\nonumber&&|m_{\beta\beta}|=|2a^2+6b^2-4ab|\frac{v^2_{u}}{|M|}\\
\label{50}&&~~~=\frac{m_2}{2}\Big[1+9R^4+4R^2+6R^2\cos(2\Phi)-12R^3\cos\Phi-4R\cos\Phi\Big]^{1/2}\end{aligned}$$ By using Eq.(\[47\]), we can express $|m_{\beta\beta}|$ in terms of the lightest neutrino mass $m_l$; the corresponding results are shown in Fig. \[fig:0nubeta\]. The vertical line represents the future sensitivity of the KATRIN experiment [@katrin], while the horizontal ones denote the present bound from the Heidelberg-Moscow experiment [@HM] and the future sensitivity of some $0\nu2\beta$ decay experiments, namely 15 meV, 20 meV and 90 meV for the CUORE [@cuore], Majorana [@majorana]/GERDA III [@gerda] and GERDA II experiments, respectively. From Fig. \[fig:0nubeta\] we conclude that for the allowed values of $m_l$, the predictions for $m_{\beta\beta}$ approach the future experimental sensitivity. For the NH spectrum, the effective mass $m_{\beta\beta}$ can reach a very low value of about 7.8 meV, whereas the lower bound on $m_{\beta\beta}$ is approximately 44.3 meV in the IH case. A combined measurement of the effective mass $m_{\beta\beta}$ and the lightest neutrino mass can determine whether the neutrino spectrum is NH or IH in our model.
![\[fig:0nubeta\] $m_{\beta\beta}$ as a function of the lightest neutrino mass $m_l$; the solid and dashed lines represent the NH and IH cases, respectively.](Neutrinoless_double_beta_decay.eps)
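The quoted low value of $m_{\beta\beta}$ for the NH spectrum can be estimated with a few lines of code; the sketch below (same assumed best-fit $\Delta m^2$ values as above) combines Eq.(\[47\]) and Eq.(\[50\]):

```python
import numpy as np

# m_betabeta for the NH spectrum as a function of the lightest mass m1,
# combining Eq.(47) and Eq.(50); best-fit dm^2 values as in the text.
dm2_sol, dm2_atm = 7.65e-5, 2.40e-3

m1 = np.linspace(1e-3, 0.5, 20000)
m2 = np.sqrt(m1**2 + dm2_sol)
m3 = np.sqrt(m1**2 + dm2_atm)

R    = np.sqrt(2*(m3 + m1)/m2 - 1)/3                       # Eq.(47)
cphi = (m3 - m1)/(3*R*m2)                                  # Eq.(47)
mbb  = 0.5*m2*np.sqrt(1 + 9*R**4 + 4*R**2 + 6*R**2*(2*cphi**2 - 1)
                      - 12*R**3*cphi - 4*R*cphi)           # Eq.(50)

physical = np.abs(cphi) <= 1                               # cf. Eq.(49)
print(mbb[physical].min())                                 # ~7.8e-3 eV (7.8 meV)
```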
Beta decay
----------
One can directly search for the kinematic effect of nonzero neutrino masses in beta decay by modification of the Kurie plot. This search is sensitive to neutrino masses regardless of whether the neutrinos are Dirac or Majorana particles. For small neutrino masses, this effect will occur near the end point of the electron energy spectrum and will be sensitive to the quantity $m_{\beta}=\big[\sum_k|(U_{PMNS})_{ek}|^2m^2_k\big]^{1/2}$. For the present model, we have $$\label{51}m_{\beta}=\frac{1}{\sqrt{3}}(2m^2_1+m^2_2)^{1/2}$$ This result holds for both the NH and IH spectra. In Fig.\[fig:beta\] we plot $m_{\beta}$ versus the lightest neutrino mass $m_{l}$; the horizontal line represents the future sensitivity of 0.2 eV from the KATRIN experiment.
![\[fig:beta\] Variation of $m_{\beta}$ with respect to the lightest neutrino mass $m_l$; the solid and dashed lines represent the NH and IH spectra, respectively.](Neutrino_beta_decay.eps)
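For orientation, a short numerical sketch of Eq.(\[51\]) (with the same assumed best-fit $\Delta m^2$ values as above, and illustrative choices for the lightest mass) gives the typical size of $m_{\beta}$ in this model:

```python
import numpy as np

# Rough size of m_beta from Eq.(51) for a light lightest neutrino.
dm2_sol, dm2_atm = 7.65e-5, 2.40e-3

# NH with m1 ~ 0.011 eV (the lower bound of Eq.(49)):
m1 = 0.011
m2_NH = np.sqrt(m1**2 + dm2_sol)
print(np.sqrt((2*m1**2 + m2_NH**2)/3))        # ~0.012 eV

# IH with m3 -> 0:
m1_IH = np.sqrt(dm2_atm)
m2_IH = np.sqrt(dm2_atm + dm2_sol)
print(np.sqrt((2*m1_IH**2 + m2_IH**2)/3))     # ~0.049 eV, below KATRIN's 0.2 eV
```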
Sum of the neutrino masses
--------------------------
The sum of the neutrino masses $\sum_km_k$ is constrained by cosmological observations. In Fig. \[fig:sum\] we display the sum of the neutrino masses as a function of the lightest neutrino mass $m_l$. The vertical line denotes the future sensitivity of the KATRIN experiment, and the horizontal lines are the cosmological bounds [@Fogli:2008cx]. There are typically five representative combinations of the cosmological data, which lead to increasingly stronger upper bounds on the sum of the neutrino masses. We show the two strongest ones in Fig. \[fig:sum\]. The first one at $0.60$ eV corresponds to the combination of the Cosmic Microwave Background (CMB) anisotropy data (from WMAP 5y [@WMAP2], Arcminute Cosmology Bolometer Array Receiver (ACBAR) [@acbar07], Very Small Array (VSA) [@vsa], Cosmic Background Imager (CBI) [@cbi] and BOOMERANG [@boom03] experiments) plus the large-scale structure (LSS) information on galaxy clustering (from the Luminous Red Galaxies Sloan Digital Sky Survey (SDSS) [@Tegmark]) plus the Hubble Space Telescope (HST) plus the luminosity distance SN-Ia data of [@astier] and finally the BAO data from [@bao]. The second one at $0.19$ eV corresponds to all the previous data combined with the small scale primordial spectrum from Lyman-alpha (Ly$\alpha$) forest clouds [@Ly1]. We see that the current cosmological information on the sum of the neutrino masses can hardly distinguish the NH spectrum from the IH spectrum.
![\[fig:sum\] The sum of the neutrino masses $\sum_km_k$ versus the lightest neutrino mass $m_l$; the solid and dashed lines represent the NH and IH spectra, respectively.](Neutrino_mass_sum.eps)
The Majorana CP violating phases
--------------------------------
In the standard parametrization [@pdg], the lepton PMNS mixing matrix is defined by $$\label{52}U_{PMNS}=\left(\begin{array}{ccc}c_{12}c_{13}&s_{12}c_{13}&s_{13}e^{-i\delta}\\
-s_{12}c_{23}-c_{12}s_{23}s_{13}e^{i\delta}&c_{12}c_{23}-s_{12}s_{23}s_{13}e^{i\delta}&s_{23}c_{13}\\
s_{12}s_{23}-c_{12}c_{23}s_{13}e^{i\delta}&-c_{12}s_{23}-s_{12}c_{23}s_{13}e^{i\delta}&c_{23}c_{13}\end{array}\right)\,{\rm diag}(1,e^{i\alpha_{21}/2},e^{i\alpha_{31}/2})$$ where $c_{ij}=\cos\theta_{ij}$, $s_{ij}=\sin\theta_{ij}$ with $\theta_{ij}\in[0,\pi/2]$, $\delta$ is the Dirac CP violating phase, and $\alpha_{21}$ and $\alpha_{31}$ are the two Majorana CP violating phases; all three CP violating phases $\delta$, $\alpha_{21}$ and $\alpha_{31}$ are allowed to vary in the range $0\sim2\pi$. Recalling that the leptonic mixing matrix is given by Eq.(\[26\]) at LO, in the standard parametrization it reads $$\label{53}U_{PMNS}=e^{-i\alpha_1/2}\;{\rm diag}(1,1,-1)U_{TB}\;{\rm diag}(1,e^{i(\alpha_1-\alpha_2)/2},e^{i(\alpha_1-\alpha_3)/2})$$ where the overall phase $e^{-i\alpha_1/2}$ can be absorbed into the charged lepton fields. Comparing Eq.(\[53\]) with Eq.(\[52\]), we can identify the two CP violating phases as $$\label{54}\alpha_{21}=\alpha_1-\alpha_2,~~~~~~~\alpha_{31}=\alpha_{1}-\alpha_3$$ Similarly to other low energy observables, $\alpha_{21}$ and $\alpha_{31}$ can be written as functions of the parameters $R$ and $\Phi$, $$\begin{aligned}
\label{55}&&\left\{\begin{array}{l}\sin\alpha_{21}=\frac{9R^2\sin(2\Phi)-6R\sin\Phi}{1+9R^2-6R\cos\Phi},\\
\\
\cos\alpha_{21}=\frac{1+9R^2\cos(2\Phi)-6R\cos\Phi}{1+9R^2-6R\cos\Phi}\end{array}\right.\\
\nonumber&&\\
\label{56}&&\left\{\begin{array}{l}\sin\alpha_{31}=\frac{12R(1-9R^2)\sin\Phi}{1+81R^4+18R^2-36R^2\cos^2\Phi}\\
\\
\cos\alpha_{31}=-\frac{1+81R^4-36R^2+18R^2\cos(2\Phi)}{1+81R^4+18R^2-36R^2\cos^2\Phi}
\end{array}\right.\end{aligned}$$ Note that the relations between $R$, $\Phi$ and the light neutrino masses are displayed in Eq.(\[47\]). In contrast to other low energy observables such as the light neutrino masses, $m_{\beta\beta}$ and $m_{\beta}$ etc., the Majorana phases $\alpha_{21}$ and $\alpha_{31}$ depend on both $\cos\Phi$ and $\sin\Phi$, and not only on $\cos\Phi$. In Fig.\[fig:majorana\_phase\] we show the behavior of the Majorana phases $\alpha_{21}$ and $\alpha_{31}$ with respect to the lightest neutrino mass $m_{l}$, where we choose $\sin\Phi>0$ for illustration. It is well-known that the See-Saw mechanism provides an elegant explanation for the smallness of the neutrino mass; meanwhile, the baryon asymmetry may be produced through the out-of-equilibrium CP violating decays of the right handed neutrinos $\nu^{c}$. As is shown in Eq.(\[23\]), at LO the heavy neutrino masses are exactly degenerate in our model, so leptogenesis can be naturally implemented via the so-called resonant leptogenesis mechanism [@Pilaftsis:2005rv]. Since several other subtle issues are involved in resonant leptogenesis, the analysis of whether the observed baryon asymmetry can be naturally generated in our model is beyond the scope of the present paper and will be discussed in future work [@ding].
![\[fig:majorana\_phase\] The dependence of the Majorana CP violating phases $\alpha_{21}$ and $\alpha_{31}$ on the lightest neutrino mass $m_l$. Solid and dashed lines refer to $\alpha_{21}$ and $\alpha_{31}$, respectively; panel (a) corresponds to the NH mass spectrum and panel (b) to the IH case, with $\sin\Phi$ taken to be positive.](Majorana_phase_No.eps "fig:") ![](Majorana_phase_Io.eps "fig:")
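As a consistency check of Eq.(\[55\]) and Eq.(\[56\]), one can compare them numerically with the phase definitions of Eq.(\[27\]) and Eq.(\[54\]); a small sketch with illustrative values of $R$ and $\Phi$ (and $M$ taken real and positive) is:

```python
import numpy as np

# Compare exp(i*alpha_21) and exp(i*alpha_31) obtained from Eq.(27)/Eq.(54)
# with the closed expressions of Eq.(55)-(56); R, Phi are illustrative values.
R, Phi = 0.8, 1.1
a, b = 1.0, R*np.exp(1j*Phi)          # b/a = R*exp(i*Phi)

alpha1 = np.angle(-(a - 3*b)**2)      # Eq.(27) with M > 0
alpha2 = np.angle(-4*a**2)
alpha3 = np.angle((a + 3*b)**2)

den21 = 1 + 9*R**2 - 6*R*np.cos(Phi)
den31 = 1 + 81*R**4 + 18*R**2 - 36*R**2*np.cos(Phi)**2

print(np.exp(1j*(alpha1 - alpha2)),
      ((1 + 9*R**2*np.cos(2*Phi) - 6*R*np.cos(Phi))
       + 1j*(9*R**2*np.sin(2*Phi) - 6*R*np.sin(Phi)))/den21)      # Eq.(55)
print(np.exp(1j*(alpha1 - alpha3)),
      (-(1 + 81*R**4 - 36*R**2 + 18*R**2*np.cos(2*Phi))
       + 1j*(12*R*(1 - 9*R**2)*np.sin(Phi)))/den31)               # Eq.(56)
```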
The phenomenological consequences of the LO predictions of our model have been analyzed above. As we shall show in the next section, our model receives corrections when higher dimensional operators are included in the Lagrangian. These corrections modify the leading order predictions by terms of relative order $\lambda^2_c$; hence the results presented so far remain approximately correct. However, we would like to note that, due to the next to leading order contributions, a cancellation could be present for the effective $0\nu2\beta$-decay mass; consequently $m_{\beta\beta}$ could reach zero in the NH case.
Next to the leading order corrections
=====================================
The results of the previous section hold to first approximation. At the next to leading order (NLO), the superpotentials $w_{v}$, $w_{\ell}$, $w_{\nu}$, $w_{u}$ and $w_{d}$ are corrected by higher dimensional operators compatible with the symmetry of the model, whose contributions are suppressed by at least one additional power of $\Lambda$. The residual Klein four and $Z_3$ symmetries in the neutrino and the charged lepton sectors at LO are broken completely by the NLO contributions. The NLO terms in the driving superpotential lead to small deviations from the LO vacuum alignment. The masses and mixing matrices are corrected by both the shift of the vacuum configuration and the NLO operators in the Yukawa superpotentials $w_{\ell}$, $w_{\nu}$, $w_{u}$ and $w_{d}$. In the following, the NLO corrections to the vacuum alignment and the mass matrices will be discussed one by one, and the resulting physical effects will be studied.
Corrections to the vacuum alignment
-----------------------------------
The NLO operators of the driving superpotential $w_v$ and the corresponding corrections to the LO vacuum alignment in Eq.(\[9\]) and Eq.(\[12\]) are discussed in detail in Appendix B. The inclusion of the higher dimensional operators results in a shift of the VEVs of the flavon fields; the vacuum configuration is modified into $$\begin{aligned}
\nonumber&&\langle\varphi\rangle=(\delta v_{\varphi_1},v_{\varphi}+\delta v_{\varphi_2},\delta v_{\varphi_3}),~~~~\langle\chi\rangle=(\delta v_{\chi_1},v_{\chi},\delta v_{\chi_3}),\\
\nonumber&&\langle\eta\rangle=(v_{\eta},v_{\eta}+\delta v_{\eta_2}),~~~~~~\langle\phi\rangle=(v_{\phi}+\delta v_{\phi},v_{\phi}+\delta v_{\phi},v_{\phi}+\delta v_{\phi}),\\
\label{57}&&\langle\theta\rangle=v_{\theta}+\delta v_{\theta}\end{aligned}$$ where $v_{\chi}$ and $v_{\eta}$ are still undetermined, and the VEV $v_{\Delta}$ is not corrected by the NLO terms. All the corrections are suppressed by $1/\Lambda$, and the shift of $\langle\phi\rangle$ turns out to be proportional to its LO VEV. Since all the VEVs are required to be of order ${\cal O}(\lambda^2_c\Lambda)$, we expect these corrections to modify the LO VEVs by terms of relative order $\lambda^2_c$.
Corrections to the mass matrices
--------------------------------
The corrections to the fermion mass matrices originate from two sources: the first is the higher dimensional operators in the Yukawa superpotentials $w_{\ell}$, $w_{\nu}$, $w_{u}$ and $w_{d}$, and the second is the deviation from the LO vacuum alignment, which is induced by the NLO terms in the driving potential. As a result, at NLO the mass matrices are the sum of the contributions of higher dimensional operators evaluated with the insertion of the LO VEVs, and those from the LO superpotentials evaluated with the NLO VEVs.
For the charged leptons, the superpotential $w_{\ell}$ is corrected by the following sixteen NLO operators $$\begin{aligned}
\nonumber&&e^{c}(\ell\varphi^3)_{1_2}\theta
h_d,~~~~e^{c}(\ell\varphi^2\chi)_{1_2}\theta
h_d,~~~~e^{c}(\ell\varphi\chi^2)_{1_2}\theta
h_d,~~~e^{c}(\ell\chi^3)_{1_2}\theta h_d,\\
\nonumber&&e^{c}(\ell\varphi\eta^2)_{1_2}\Delta h_d,~~e^{c}(\ell\chi\eta^2)_{1_2}\Delta h_d,~~e^{c}(\ell\varphi\eta\phi)_{1_2}\Delta h_d,~~e^{c}(\ell\chi\eta\phi)_{1_2}\Delta h_d,\\
\nonumber&&e^{c}(\ell\varphi\phi^2)_{1_2}\Delta h_d,~~e^{c}(\ell\chi\phi^2)_{1_2}\Delta h_d,~~\mu^{c}(\ell\varphi^2)_{1_1}\theta h_d,~~~~~\mu^{c}(\ell\chi^2)_{1_1}\theta h_d,\\
\label{58}&&\mu^{c}(\ell\varphi\chi)_{1_1}\theta h_d,~~~~\mu^{c}(\ell\phi^2)_{1_1}\Delta h_d,~~~\mu^{c}(\ell\eta\phi)_{1_1}\Delta h_d,~~~~\tau^{c}(\ell\chi)_{1_2}\theta h_d\end{aligned}$$ Taking into account the contributions of the modified vacuum alignment at NLO, each diagonal entry of the charged lepton mass matrix receives a small correction factor, while the off-diagonal entries become non-zero and of the order of the diagonal term in each row multiplied by $\varepsilon$, which parameterizes the ratio ${\rm VEV}/\Lambda$ and is of order ${\cal O}(\lambda^2_c)$. Then we have $$\label{59}m_{\ell}=\left(\begin{array}{ccc}
m^{\ell}_{11}\varepsilon^2&m^{\ell}_{12}\varepsilon^3&m^{\ell}_{13}\varepsilon^3\\
m^{\ell}_{21}\varepsilon^2&m^{\ell}_{22}\varepsilon&m^{\ell}_{23}\varepsilon^2\\
m^{\ell}_{31}\varepsilon&m^{\ell}_{32}\varepsilon&m^{\ell}_{33}
\end{array}\right)\varepsilon v_d$$ where the coefficients $m^{\ell}_{ij}$ ($i,j=1,2,3$) are order one unspecified constants. The hermitian matrix $m^{\dagger}_{\ell}m_{\ell}$ is diagonalized by the unitary matrix $U_{\ell}$, which exactly corresponds to the transformation of the charged leptons used to diagonalize $m_{\ell}$, $$\label{60}U^{\dagger}_{\ell}m^{\dagger}_{\ell}m_{\ell}U_{\ell}\simeq{\rm diag}(|m^{\ell}_{11}\varepsilon^3|^2,|m^{\ell}_{22}\varepsilon^2|^2,|m^{\ell}_{33}\varepsilon|^2)v^2_d$$ The charged lepton masses are modified by terms of relative order $\varepsilon$ with respect to the LO results; consequently, the NLO corrections do not spoil the charged lepton mass hierarchies predicted at LO. The unitary matrix $U_{\ell}$ is approximately given by $$\label{61}U_{\ell}\simeq\left(\begin{array}{ccc}
1&(\frac{m^{\ell}_{21}}{m^{\ell}_{22}}\varepsilon)^{*}&(\frac{m^{\ell}_{31}}{m^{\ell}_{33}}\varepsilon)^{*}\\
-\frac{m^{\ell}_{21}}{m^{\ell}_{22}}\varepsilon&1&(\frac{m^{\ell}_{32}}{m^{\ell}_{33}}\varepsilon)^{*}\\
-\frac{m^{\ell}_{31}}{m^{\ell}_{33}}\varepsilon&-\frac{m^{\ell}_{32}}{m^{\ell}_{33}}\varepsilon&1
\end{array}\right)$$ Then we turn to the neutrino sector. The NLO correction to the Majorana masses of the right handed neutrinos arises at order $1/\Lambda$; the corresponding higher dimensional operator is $(\nu^{c}\nu^{c})_{1_1}\theta^2$, whose contribution can be completely absorbed into the redefinition of the mass parameter $M$. The NLO corrections to the neutrino Dirac couplings are $$\begin{aligned}
\label{62}\frac{y_{\nu1}}{\Lambda}(\nu^{c}\ell\delta\eta)_{1_1}h_u+\frac{y_{\nu2}}{\Lambda}(\nu^{c}\ell\delta\phi)_{1_1}h_u+\frac{x_{\nu1}}{\Lambda^2}(\nu^{c}\ell\eta)_{1_2}\theta h_u+\frac{x_{\nu2}}{\Lambda^2}(\nu^{c}\ell\phi)_{1_2}\theta h_u\end{aligned}$$ where $\delta\eta$ and $\delta\phi$ represent the shifted VEVs of the flavons $\eta$ and $\phi$ respectively. Through redefining the LO parameters $a\rightarrow
a-x_{\nu1}\frac{v_{\eta}v_{\theta}}{\Lambda^2}$ and $b\rightarrow
b+y_{\nu2}\frac{\delta v_{\phi}}{\Lambda}$, the NLO corrections to $m^D_{\nu}$ are $$\label{63}\delta m^D_{\nu}=\left(\begin{array}{ccc}
0&\delta_2&\delta_1-\delta_2\\
-\delta_2&\delta_1&\delta_2\\
\delta_1+\delta_2&-\delta_2&0
\end{array}\right)v_u$$ where $\delta_1=y_{\nu1}\frac{\delta
v_{\eta_2}}{\Lambda}+2x_{\nu1}\frac{v_{\eta}v_{\theta}}{\Lambda^2}$ and $\delta_2=x_{\nu2}\frac{v_{\phi}v_{\theta}}{\Lambda^2}$. We notice that both $\delta_1$ and $\delta_2$ are of order $\varepsilon
a$ (or $\varepsilon b$). Therefore, the NLO corrections to the light neutrino mass matrix are given by $$\begin{aligned}
\nonumber&&\delta m_{\nu}=-(m^{D}_{\nu})^{T}M^{-1}_{N}\delta m^{D}_{\nu}-(\delta m^{D}_{\nu})^{T}M^{-1}_{N}m^{D}_{\nu}\\
\label{64}&&~~~=\frac{v^2_u}{M}\left(\begin{array}{ccc}-2(a-b)\delta_1&-(2a+b)\delta_1-6b\delta_2&-b(\delta_1-6\delta_2)\\
-(2a+b)\delta_1-6b\delta_2&2b(\delta_1+3\delta_2)&-(2a+b)\delta_1\\
-b(\delta_1-6\delta_2)&-(2a+b)\delta_1&-2(a-b)\delta_1-6b\delta_2
\end{array}\right)\end{aligned}$$ Diagonalizing the modified light neutrino mass matrix, we obtain the neutrino masses to LO in $\delta_{1,2}$ as follows, $$\begin{aligned}
\nonumber&&m_1=\Big|(a-3b)^2+(a-3b)\delta_1\Big|\frac{v^2_{u}}{|M|}\\
\nonumber&&m_{2}=4\Big|a^2+a\delta_1\Big|\frac{v^2_u}{|M|}\\
\label{65}&&m_{3}=\Big|(a+3b)^2+(a+3b)\delta_1\Big|\frac{v^2_u}{|M|}\end{aligned}$$ The PMNS matrix becomes $U_{PMNS}=U^{\dagger}_{\ell}U_{\nu}$, where $U_{\ell}$ associated with the diagonalization of the charged lepton mass matrix is given by Eq.(\[61\]), and the unitary matrix $U_{\nu}$ diagonalizes the neutrino mass matrix $m_{\nu}+\delta
m_{\nu}$ including the NLO contributions. The parameters of the lepton mixing matrix are modified as $$\begin{aligned}
\nonumber&&|U_{e3}|=\frac{1}{\sqrt{2}}\Big|\frac{1}{6(|a|^2+9|b|^2)(ab^{*}+a^{*}b)}[(a+3b)^2(a^{*}\delta^{*}_1
+6b^{*}\delta^{*}_2)-(a^{*}-3b^{*})^2(a\delta_1+6b\delta_2)]\\
\nonumber&&~~~~+\Big(\frac{m^{\ell}_{21}}{m^{\ell}_{22}}\varepsilon\Big)^{*}-\Big(\frac{m^{\ell}_{31}}{m^{\ell}_{33}}\varepsilon\Big)^{*}\Big|\\
\nonumber&&\sin^2\theta_{12}=\frac{1}{3}\Big[1-\frac{m^{\ell}_{21}}{m^{\ell}_{22}}\varepsilon-\frac{m^{\ell}_{31}}{m^{\ell}_{33}}\varepsilon-\Big(\frac{m^{\ell}_{21}}{m^{\ell}_{22}}\varepsilon\Big)^{*}-\Big(\frac{m^{\ell}_{31}}{m^{\ell}_{33}}\varepsilon\Big)^{*}\Big]\\
\nonumber&&\sin^2\theta_{23}=\frac{1}{2}+\frac{1}{2(|a|^{2}+9|b|^2)(ab^{*}+a^{*}b)}\Big[ab(a^{*}\delta^{*}_1+6b^{*}\delta^{*}_2)+a^{*}b^{*}(a\delta_1+6b\delta_2)\Big]\\
\label{66}&&~~~~+\frac{1}{2}\Big[\frac{m^{\ell}_{32}}{m^{\ell}_{33}}\varepsilon+\Big(\frac{m^{\ell}_{32}}{m^{\ell}_{33}}\varepsilon\Big)^{*}\Big]\end{aligned}$$
We see that the neutrino masses and mixing angles receive corrections of order $\lambda^2_c$ with respect to the LO results. The value of $\sin^2\theta_{12}$ is still within the $3\sigma$ range of the global data fit, and the corrections to both $\theta_{23}$ and $\theta_{13}$ are within the current data uncertainties as well. In particular, a non-vanishing $\theta_{13}$ of order $\lambda^2_c$ is close to the reach of the next generation neutrino oscillation experiments and will provide a valuable test of the model.
The NLO corrections to the mass matrices in the quark sector have been analyzed following the same method as that for the lepton sector. Since every entry of the mass matrices $m_u$ and $m_d$ in Eq.(\[41\]) and Eq.(\[42\]) is nonvanishing, the NLO contributions lead to small corrections of relative order $\lambda^2_c$ in each entry. Consequently the quark masses and mixing angles are corrected by terms of relative order $\lambda^2_c$ with respect to the LO results, so the successful LO predictions are not spoiled.
Conclusion
==========
We have constructed a SUSY model for fermion masses and flavor mixings based on the flavor symmetry $S_4\times Z_3\times Z_4$, in which the neutrino masses are assumed to be generated through the See-Saw mechanism. At LO the $S_4$ symmetry is broken down to the Klein four and $Z_3$ symmetries in the neutrino and charged lepton sectors, respectively; this breaking chain exactly leads to the TB mixing. It is remarkable that the mass hierarchies among the charged leptons are controlled by the spontaneous breaking of the flavor symmetry. We further extend the flavor symmetry to the quark sector, where the $S_4$ symmetry is completely broken. The correct orders of quark masses and CKM matrix elements are generated with the exception of the mixing angle between the first two generations, which requires a small accidental enhancement.
We have carefully analyzed the NLO contributions due to higher dimensional operators which modify both the Yukawa couplings and the LO vacuum alignment, and we have verified that all the fermion masses and mixing angles are corrected by terms of relative order $\lambda^2_c$ with respect to the LO results. As a result, the successful LO predictions are not spoiled. In particular, we expect the mixing angle $\theta_{13}$ to be of order $\lambda^2_c$, which is within the sensitivity of the experiments that are now in preparation and will take data in the near future [@Ardellier:2006mn; @Wang:2006ca]. Precise measurement of $\theta_{13}$ is an important test of our model.
The phenomenological consequences of our model are analyzed in detail. The low energy observables including the neutrino mass squared differences, neutrinoless double beta decay, beta decay, the sum of the neutrino masses and the Majorana CP violating phases are considered. All the low energy observables can be expressed in terms of three independent parameters: the ratio $R=|b/a|$, the relative phase $\Phi$ between $a$ and $b$ and the lightest neutrino mass $m_l$. Once the parameters are fixed to match $\Delta m^2_{sol}$ and $\Delta m^2_{atm}$, there is only one parameter left, which is chosen to be $m_l$ in the present work. Both the normal and inverted hierarchy neutrino spectra are allowed in our model. For normal hierarchy there is a lower bound on $m_1$ of approximately 0.011 eV. In the case of inverted hierarchy, $m_3$ is less constrained and we only obtain the trivial constraint that $m_3$ should be positive. The lower bounds of the effective mass $m_{\beta\beta}$ are approximately 7.8 meV and 44.3 meV for the NH and IH spectra, respectively. A combined measurement of $m_{\beta\beta}$ and the lightest neutrino mass can distinguish the NH from the IH spectrum. The Majorana CP violating phases depend both on $\cos\Phi$ and $\sin\Phi$, whereas only $\cos\Phi$ is involved in other low energy observables. It is remarkable that the right handed neutrino masses are exactly degenerate at LO, so the baryon asymmetry may be generated via resonant leptogenesis.
Acknowledgements {#acknowledgements .unnumbered}
================
We are grateful to Prof. Mu-Lin Yan for stimulating discussions. This work is supported by the Chinese Academy KJCX2-YW-N29 and the 973 project with Grant No. 2009CB825200.
Appendix A: Representation matrices of the $S_4$ group {#appendix-a-representation-matrices-of-the-s_4-group .unnumbered}
======================================================
In this appendix, we explicitly show the representation matrices of the $S_4$ group for the five irreducible representations. The matrices for the generators $S$ and $T$ depend on the representations as follows $$\begin{aligned}
\begin{array}{lcc}
1_1,&S=1,&T=1\\
1_2,&S=-1,&T=1\\
2,&~~~S=\left(\begin{array}{cc} 0&1\\
1&0
\end{array}
\right),&~~~T=\left(\begin{array}{cc}
\omega&0\\
0&\omega^2
\end{array}
\right)
\\
3_1,& ~~~~~~~~~S=\frac{1}{3}\left(\begin{array}{ccc} -1&2\omega&2\omega^2\\
2\omega&2\omega^2&-1\\
2\omega^2&-1&2\omega
\end{array}
\right),&~~~~~~~~~ T=\left(\begin{array}{ccc}
1&0&0\\
0&\omega^2&0\\
0&0&\omega
\end{array}
\right)\\
3_2,&~~~~~~~~S=\frac{-1}{3}\left(\begin{array}{ccc} -1&2\omega&2\omega^2\\
2\omega&2\omega^2&-1\\
2\omega^2&-1&2\omega
\end{array}
\right), &~~~~~~~~~ T=\left(\begin{array}{ccc}
1&0&0\\
0&\omega^2&0\\
0&0&\omega
\end{array}
\right)
\end{array}\end{aligned}$$ where $\omega=e^{2\pi i/3}=(-1+i\sqrt{3})/2$. In the identity representation $1_1$, all the elements are mapped onto the number 1. In the antisymmetric representation $1_2$, the group elements correspond to $+1$ or $-1$ for even and odd permutations, respectively. For the 2 representation, the representation matrices are as follows $$\begin{aligned}
\nonumber&&{\cal C}_1:\;\left(\begin{array}{cc} 1&0\\
0&1
\end{array}\right)\\
\nonumber&&{\cal C}_2:\; STS^2=T^2S=\left(\begin{array}{cc}
0&\omega^2\\
\omega&0
\end{array}\right),~~TSTS^2=TST=\left(\begin{array}{cc}
0&1\\
1&0
\end{array}\right),~~
ST^2=S^2TS=\left(\begin{array}{cc}
0&\omega\\
\omega^2&0
\end{array}\right)\\
\nonumber&&{\cal C}_3:\; TS^2T^2=S^2=T^2S^2T=\left(
\begin{array}{cc}
1&0\\
0&1
\end{array}
\right)\\
\nonumber&&{\cal C}_4:\;T=S^2T=TS^2=S^2TS^2=\left(\begin{array}{cc}\omega&0\\
0&\omega^2
\end{array}\right),~~T^2=S^2T^2=STS=T^2S^2=\left(\begin{array}{cc}\omega^2&0\\
0&\omega
\end{array}\right)\\
\nonumber&&{\cal C}_5:\; S=S^3=\left(\begin{array}{cc} 0&1\\ 1&0
\end{array}\right),~~T^2ST=TS=\left(\begin{array}{cc} 0&\omega\\ \omega^2&0
\end{array}\right),~~ST=TST^2=\left(\begin{array}{cc} 0&\omega^2\\ \omega&0
\end{array}\right)\end{aligned}$$ For the $3_1$ representation, the representation matrices are $$\begin{aligned}
\nonumber&&{\cal C}_1:\;\left(\begin{array}{ccc} 1&0&0\\
0&1&0\\
0&0&1
\end{array}\right)\\
\nonumber&&{\cal C}_2:\; STS^2=\left(\begin{array}{ccc}
1&0&0\\
0&0&\omega\\
0&\omega^2&0
\end{array}\right),~~TSTS^2=\left(\begin{array}{ccc}
1&0&0\\
0&0&1\\
0&1&0
\end{array}\right),~~ST^2=\frac{1}{3}\left(\begin{array}{ccc}
-1&2\omega^2&2\omega\\
2\omega&2&-\omega^2\\
2\omega^2&-\omega&2
\end{array}\right),\\
\nonumber&&~~~~~~~S^2TS=\left(\begin{array}{ccc}
1&0&0\\
0&0&\omega^2\\
0&\omega&0
\end{array}\right),~~TST=\frac{1}{3}\left(\begin{array}{ccc}
-1&2&2\\
2&2&-1\\
2&-1&2
\end{array}\right),~~T^2S=\frac{1}{3}\left(\begin{array}{ccc}
-1&2\omega&2\omega^2\\
2\omega^2&2&-\omega\\
2\omega&-\omega^2&2
\end{array}\right)\\
\nonumber&&{\cal C}_3:\;TS^2T^2=\frac{1}{3}\left(
\begin{array}{ccc}
-1&2\omega&2\omega^2\\
2\omega^2&-1&2\omega\\
2\omega&2\omega^2&-1
\end{array}
\right),S^2=\frac{1}{3}\left(
\begin{array}{ccc}
-1&2&2\\
2&-1&2\\
2&2&-1
\end{array}
\right),T^2S^2T=\frac{1}{3}\left(
\begin{array}{ccc}
-1&2\omega^2&2\omega\\
2\omega&-1&2\omega^2\\
2\omega^2&2\omega&-1
\end{array}
\right)\\
\nonumber&&{\cal C}_4:\;T=\left(\begin{array}{ccc} 1&0&0\\
0&\omega^2&0\\
0&0&\omega
\end{array}\right),~~T^2=\left(\begin{array}{ccc} 1&0&0\\
0&\omega&0\\
0&0&\omega^2
\end{array}\right),~~T^2S^2=\frac{1}{3}\left(\begin{array}{ccc}-1&2&2\\
2\omega&-\omega&2\omega\\
2\omega^2&2\omega^2&-\omega^2\\
\end{array}\right),\\
\nonumber&&~S^2T=\frac{1}{3}\left(\begin{array}{ccc}-1&2\omega^2&2\omega\\
2&-\omega^2&2\omega\\
2&2\omega^2&-\omega
\end{array}\right),S^2TS^2=\frac{1}{3}\left(\begin{array}{ccc}-1&2\omega&2\omega^2\\
2\omega&-\omega^2&2\\
2\omega^2&2&-\omega
\end{array}\right),STS=\frac{1}{3}\left(\begin{array}{ccc}-1&2\omega^2&2\omega\\
2\omega^2&-\omega&2\\
2\omega&2&-\omega^2
\end{array}\right),\\
\nonumber&&~~S^2T^2=\frac{1}{3}\left(\begin{array}{ccc}-1&2\omega&2\omega^2\\
2&-\omega&2\omega^2\\
2&2\omega&-\omega^2
\end{array}\right),~~TS^2=\frac{1}{3}\left(\begin{array}{ccc}-1&2&2\\
2\omega^2&-\omega^2&2\omega^2\\
2\omega&2\omega&-\omega
\end{array}\right)\\
\nonumber&&{\cal C}_5:\;
S=\frac{1}{3}\left(\begin{array}{ccc} -1&2\omega&2\omega^2\\
2\omega&2\omega^2&-1\\
2\omega^2&-1&2\omega
\end{array}\right),T^2ST=\frac{1}{3}\left(\begin{array}{ccc} -1&2&2\\
2\omega^2&2\omega^2&-\omega^2\\
2\omega&-\omega&2\omega
\end{array}\right),ST=\frac{1}{3}\left(\begin{array}{ccc}-1&2&2\\
2\omega&2\omega&-\omega\\
2\omega^2&-\omega^2&2\omega^2
\end{array}\right),\\
\nonumber&&~~~~TS=\frac{1}{3}\left(\begin{array}{ccc} -1&2\omega&2\omega^2\\
2&2\omega&-\omega^2\\
2&-\omega&2\omega^2
\end{array}\right),~TST^2=\frac{1}{3}\left(\begin{array}{ccc}-1&2\omega^2&2\omega\\
2&2\omega^2&-\omega\\
2&-\omega^2&2\omega
\end{array}\right),~S^3=\frac{1}{3}\left(\begin{array}{ccc} -1&2\omega^2&2\omega\\
2\omega^2&2\omega&-1\\
2\omega&-1&2\omega^2
\end{array}\right)\end{aligned}$$ Since the sign of the generator $S$ is opposite in the $3_1$ and $3_2$ representations, the representation matrices for the $3_2$ representation can be read off from those of the $3_1$ representation: the matrices are exactly the same for the classes ${\cal C}_1$, ${\cal C}_3$ and ${\cal C}_4$, whereas they carry an overall minus sign for ${\cal C}_2$ and ${\cal C}_5$.
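As a quick cross-check of the listing above, the following snippet (ours, not part of the original text; the helper name `equal` is purely illustrative) rebuilds a few of the $2$-dimensional class representatives from the generator matrices $S$ and $T$ given there.

```python
# A minimal cross-check (ours, not from the paper): rebuild a few of the listed
# 2-dimensional class representatives from the generators S and T with sympy.
import sympy as sp

w = sp.Rational(-1, 2) + sp.sqrt(3) * sp.I / 2   # omega = exp(2*pi*i/3)
S = sp.Matrix([[0, 1], [1, 0]])
T = sp.Matrix([[w, 0], [0, w**2]])

def equal(A, B):
    # True if the two matrices agree after elementwise simplification
    return sp.simplify(A - B) == sp.zeros(*A.shape)

assert equal(T * S * T, sp.Matrix([[0, 1], [1, 0]]))      # C2 entry TST as listed
assert equal(S * T**2, sp.Matrix([[0, w], [w**2, 0]]))    # C2 entry ST^2 as listed
assert equal(S**2, sp.eye(2))                             # C3 entry S^2 as listed
assert equal(T**2 * S * T, T * S)                         # C5: T^2 S T = TS as listed
```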
Appendix B: NLO corrections to the vacuum alignment {#appendix-b-nlo-corrections-to-the-vacuum-alignment .unnumbered}
===================================================
In this appendix, we analyze the NLO corrections to the vacuum alignment induced by the higher dimensional operators. At NLO the driving superpotential depending on the driving fields is modified into $$\label{b1}w_{v}+\Delta w_{v}$$ where $w_v$ denotes the LO contribution shown in Eq.(\[7\]), which is of dimension three. $\Delta w_v$ is the most general set of terms suppressed by one power of the cutoff, which are linear in the driving fields and invariant under the symmetry of the model. Concretely, $\Delta w_v$ is given by $$\begin{aligned}
\label{b2}\Delta w_{v}=\frac{1}{\Lambda}\Big[\sum^{2}_{i=1}p_i{\cal O}^{\varphi}_i+\sum^5_{i=1}x_i{\cal O}^{\xi'}_i+\sum^3_{i=1}t_i{\cal O}^{\theta}_i+\sum^{2}_{i=1}e_i{\cal O}^{\eta}_i+\sum^{2}_{i=1}s_i{\cal O}^{\phi}_i\Big]\end{aligned}$$ where $p_i$, $x_i$, $t_i$, $e_i$ and $s_i$ are order one coefficients, $\{{\cal O}^{\varphi}_i,{\cal O}^{\xi'}_i,{\cal O}^{\theta}_i,{\cal O}^{\eta}_i,{\cal O}^{\phi}_i\}$ are the complete set of invariant operators of dimension four, $$\begin{aligned}
\label{b3}{\cal O}^{\varphi}_1=(\varphi^{0}(\varphi\chi)_{3_2})_{1_2}\,\theta,~~~~{\cal O}^{\varphi}_2=(\varphi^{0}(\eta\phi)_{3_2})_{1_2}\,\Delta\end{aligned}$$ $$\begin{aligned}
\label{b4}{\cal O}^{\xi'}_1=\xi'^{0}\theta(\varphi^2)_{1_1},~~~{\cal
O}^{\xi'}_2=\xi'^{0}\theta(\chi^2)_{1_1},~~~{\cal
O}^{\xi'}_3=\xi'^{0}\Delta(\eta^2)_{1_1}, ~~~{\cal
O}^{\xi'}_4=\xi'^{0}\Delta(\phi^2)_{1_1},~~~{\cal
O}^{\xi'}_5=\xi'^{0}\Delta^3\end{aligned}$$ $$\begin{aligned}
\label{b5}{\cal O}^{\theta}_1=\theta^{0}(\eta^{3})_{1_1},~~~~{\cal O}^{\theta}_2=\theta^{0}(\phi^3)_{1_1},~~~~{\cal O}^{\theta}_3=\theta^{0}(\eta\phi^2)_{1_1}\end{aligned}$$ $$\begin{aligned}
\label{b6}{\cal O}^{\eta}_1=(\eta^{0}(\eta^2)_2)_{1_2}\theta,~~~{\cal O}^{\eta}_2=(\eta^{0}(\phi^2)_{2})_{1_2}\theta\end{aligned}$$ $$\begin{aligned}
\label{b7}{\cal O}^{\phi}_1=(\phi^{0}(\phi^2)_{3_1})_{1_2}\theta,~~~~{\cal O}^{\phi}_2=(\phi^{0}(\eta\phi)_{3_1})_{1_2}\theta\end{aligned}$$ The NLO superpotential $\Delta w_{v}$ induces a shift of the LO VEVs, so that the vacuum configuration is modified into $$\begin{aligned}
\nonumber&&\langle\varphi\rangle=(\delta v_{\varphi_1},v_{\varphi}+\delta v_{\varphi_2},\delta v_{\varphi_3}),~~~~\langle\chi\rangle=(\delta v_{\chi_1},v_{\chi},\delta v_{\chi_3}),\\
\nonumber&&\langle\eta\rangle=(v_{\eta},v_{\eta}+\delta v_{\eta_2}),~~~~~~\langle\phi\rangle=(v_{\phi}+\delta v_{\phi_1},v_{\phi}+\delta v_{\phi_2},v_{\phi}+\delta v_{\phi_3}),\\
\label{b8}&&\langle\theta\rangle=v_{\theta}+\delta v_{\theta}\end{aligned}$$ Note that the VEV $v_{\Delta}$ does not receive a correction at NLO. As in section \[sec:alignment\], the new vacuum configuration is obtained by imposing the vanishing of the first derivatives of $w_v+\Delta w_v$ with respect to the driving fields $\varphi^{0}$, $\xi'^{0}$, $\theta^{0}$, $\eta^{0}$ and $\phi^{0}$. Keeping only terms linear in the shifts $\delta v$ and neglecting terms of order $\delta v/\Lambda$, the minimization equations become $$\begin{aligned}
\nonumber&&(2g_1v_{\varphi}+g_3v_{\chi})\delta v_{\varphi_3}+(2g_2v_{\chi}-g_3v_{\varphi})\delta v_{\chi_3}=0\\
\nonumber&&2g_1\delta v_{\varphi_2}+p_1\frac{v_{\chi}v_{\theta}}{\Lambda}=0\\
\nonumber&&(2g_1v_{\varphi}-g_3v_{\chi})\delta v_{\varphi_1}+(2g_2v_{\chi}+g_3v_{\varphi})\delta v_{\chi_1}=0\\
\nonumber&&g_4(v_{\varphi}\delta v_{\chi_3}+v_{\chi}\delta v_{\varphi_3})+\frac{1}{\Lambda}(2x_3v_{\Delta}v^2_{\eta}+3x_4v_{\Delta}v^2_{\phi}+x_5v^3_{\Delta})=0\\
\label{b9}&&\kappa v_{\theta}\delta v_{\theta}+\frac{1}{\Lambda}(t_1v^3_{\eta}+3t_3v_{\eta}v^2_{\phi})=0\end{aligned}$$ Solving the above linear equations, we obtain $$\begin{aligned}
\nonumber&&\delta v_{\varphi_1}=\frac{2g_2v_{\chi}+g_3v_{\varphi}}{
g_3v_{\chi}-2g_1v_{\varphi}}\delta v_{\chi_1}\\
\nonumber&&\delta v_{\varphi_2}=-\frac{p_1}{2g_1}\frac{v_{\chi}v_{\theta}}{\Lambda}\\
\nonumber&&\delta v_{\varphi_3}=\frac{2g_2v_{\chi}-g_{3}v_{\varphi}}{2g_4\Lambda(2g_1v^2_{\varphi}+g_3v_{\chi}v_{\varphi})}[2x_3v_{\Delta}v^2_{\eta}+3x_4v_{\Delta}v^2_{\phi}+x_5v^3_{\Delta}]\\
\nonumber&&\delta v_{\chi_3}=-\frac{2g_1v_{\varphi}+g_3v_{\chi}}{2g_4\Lambda(2g_1v^2_{\varphi}+g_3v_{\chi}v_{\varphi})}[2x_3v_{\Delta}v^2_{\eta}+3x_4v_{\Delta}v^2_{\phi}+x_5v^3_{\Delta}]\\
\label{b10}&&\delta v_{\theta}=-\frac{t_1}{\kappa}\frac{v^3_{\eta}}{\Lambda v_{\theta}}-\frac{3t_3}{\kappa}\frac{v_{\eta}v^2_{\phi}}{\Lambda v_{\theta}}\end{aligned}$$ where $\delta v_{\varphi_{2,3}}$, $\delta v_{\chi_3}$ and $\delta v_{\theta}$ are of order $1/\Lambda$, while $\delta v_{\chi_{1}}$ is left undetermined; it is expected to be suppressed by $1/\Lambda$ as well. The minimization conditions for the shifts $\delta v_{\eta_2}$ and $\delta v_{\phi_{1,2,3}}$ are $$\begin{aligned}
\nonumber&&2f_2v_{\phi}(\delta v_{\phi_1}+\delta v_{\phi_2}+\delta v_{\phi_3})+e_1\frac{v^2_{\eta}v_{\theta}}{\Lambda}+3e_2\frac{v^2_{\phi}v_{\theta}}{\Lambda}=0\\
\nonumber&&2f_1v_{\eta}\delta v_{\eta_2}+2f_2v_{\phi}(\delta v_{\phi_1}+\delta v_{\phi_2}+\delta v_{\phi_3})-e_1\frac{v^2_{\eta}v_{\theta}}{\Lambda}-3e_2\frac{v^2_{\phi}v_{\theta}}{\Lambda}=0\\
\nonumber&&f_{3}(v_{\eta}\delta v_{\phi_2}-v_{\eta}\delta v_{\phi_3}-v_{\phi}\delta v_{\eta_2})+2s_2\frac{v_{\eta}v_{\phi}v_{\theta}}{\Lambda}=0\\
\nonumber&&f_{3}(v_{\eta}\delta v_{\phi_1}-v_{\eta}\delta v_{\phi_2}-v_{\phi}\delta v_{\eta_2})+2s_2\frac{v_{\eta}v_{\phi}v_{\theta}}{\Lambda}=0\\
\label{b11}&&f_{3}(v_{\eta}\delta v_{\phi_3}-v_{\eta}\delta
v_{\phi_1}-v_{\phi}\delta
v_{\eta_2})+2s_2\frac{v_{\eta}v_{\phi}v_{\theta}}{\Lambda}=0\end{aligned}$$ The solutions to the above equations are $$\begin{aligned}
\nonumber&&\delta v_{\eta_2}=\frac{2s_2}{f_3}\frac{v_{\eta}v_{\theta}}{\Lambda}\\
\label{b12}&&\delta v_{\phi}\equiv\delta v_{\phi_1}=\delta v_{\phi_2}=\delta v_{\phi_3}=-\frac{e_1}{6f_2}\frac{v^2_{\eta}v_{\theta}}{\Lambda v_{\phi}}-\frac{e_2}{2f_2}\frac{v_{\phi}v_{\theta}}{\Lambda}\end{aligned}$$ We notice that $\langle\phi\rangle$ acquires equal ${\cal O}(1/\Lambda)$ corrections in all three components, i.e., along the LO direction, and that the shift $\delta v_{\eta_2}$ is in general non-zero. Since all the VEVs are approximately of the same order ${\cal O}(\lambda^2_c\Lambda)$ at LO, we expect $\frac{\delta \mathrm{VEV}}{\mathrm{VEV}}\sim\frac{\mathrm{VEV}}{\Lambda}\sim\lambda^2_c$. Therefore the LO vacuum alignment in Eq.(\[9\]) and Eq.(\[12\]) is stable under the NLO corrections.
[99]{}
T. Schwetz, M. A. Tortola and J. W. F. Valle, New J. Phys. [**10**]{}, 113011 (2008) \[arXiv:0808.2016 \[hep-ph\]\]; M. Maltoni and T. Schwetz, arXiv:0812.3161 \[hep-ph\].
M. C. Gonzalez-Garcia and M. Maltoni, Phys. Rept. [**460**]{}, 1 (2008) \[arXiv:0704.1800 \[hep-ph\]\].
P. F. Harrison, D. H. Perkins and W. G. Scott, Phys. Lett. B [**530**]{}, 167 (2002), hep-ph/0202074; P. F. Harrison and W. G. Scott, Phys.Lett. B [**535**]{}, 163 (2002), hep-ph/0203209; Z. Z. Xing, Phys.Lett. B [**533**]{}, 85 (2002), hep-ph/0204049; X. G. He and A. Zee, Phys. Lett. B [**560**]{}, 87 (2003), hep-ph/0301092.
N. Cabibbo, Phys. Rev. Lett. [**10**]{}, 531 (1963); M. Kobayashi and T. Maskawa, Prog. Theor. Phys. [**49**]{}, 652 (1973).
C. Amsler et al. (Particle Data Group), Phys. Lett. B667, 1 (2008).
S. F. King, JHEP [**0508**]{} (2005) 105 \[arXiv:hep-ph/0506297\]; I. de Medeiros Varzielas and G. G. Ross, arXiv:hep-ph/0507176; I. de Medeiros Varzielas, S. F. King and G. G. Ross, Phys. Lett. B [**644**]{} (2007) 153 \[arXiv:hep-ph/0512313\]; I. de Medeiros Varzielas, S. F. King and G. G. Ross, Phys. Lett. B [**648**]{} (2007) 201 \[arXiv:hep-ph/0607045\]; S. F. King and M. Malinsky, JHEP [**0611**]{} (2006) 071 \[arXiv:hep-ph/0608021\]; F. Feruglio and Y. Lin, Nucl. Phys. B [**800**]{}, 77 (2008) \[arXiv:0712.1528 \[hep-ph\]\]. . M. Leurer, Y. Nir and N. Seiberg, Nucl. Phys. B [**420**]{}, 468 (1994) \[arXiv:hep-ph/9310320\]; M. Leurer, Y. Nir and N. Seiberg, Nucl. Phys. B [**398**]{}, 319 (1993) \[arXiv:hep-ph/9212278\]; L. E. Ibanez and G. G. Ross, Phys. Lett. B [**332**]{}, 100 (1994), arXiv:hep-ph/9403338; P. Binetruy and P. Ramond, Phys. Lett. B [**350**]{}, 49 (1995), arXiv:hep-ph/9412385; E. Dudas, S. Pokorski and C. A. Savoy, Phys. Lett. B [**356**]{}, 45 (1995), arXiv:hep-ph/9504292.
G. G. Ross, “Models of fermions masses”, [*[Prepared for Theoretical Advanced Study Institute in Elementary Particle Physics (TASI 2000): Flavor Physics for the Millennium, Boulder, Colorado, 4-30 Jun 2000]{}*]{}; G. Altarelli, arXiv:0711.0161 \[hep-ph\].
E. Ma and G. Rajasekaran, Phys. Rev. D [**64**]{} (2001) 113012, arXiv:hep-ph/0106291; E. Ma, Mod. Phys. Lett. A [**17**]{} (2002) 627 \[arXiv:hep-ph/0203238\]; K. S. Babu, E. Ma and J. W. F. Valle, Phys. Lett. B [**552**]{} (2003) 207, arXiv:hep-ph/0206292; M. Hirsch, J. C. Romao, S. Skadhauge, J. W. F. Valle and A. Villanova del Moral, arXiv:hep-ph/0312244; Phys. Rev. D [**69**]{} (2004) 093006 \[arXiv:hep-ph/0312265\]; E. Ma, Phys. Rev. D [**70**]{} (2004) 031901; Phys. Rev. D [**70**]{} (2004) 031901 \[arXiv:hep-ph/0404199\]; New J. Phys. [**6**]{} (2004) 104 \[arXiv:hep-ph/0405152\]; arXiv:hep-ph/0409075; S. L. Chen, M. Frigerio and E. Ma, Nucl. Phys. B [**724**]{} (2005) 423 \[arXiv:hep-ph/0504181\]; E. Ma, Phys. Rev. D [**72**]{} (2005) 037301 \[arXiv:hep-ph/0505209\]; M. Hirsch, A. Villanova del Moral, J. W. F. Valle and E. Ma, Phys. Rev. D [**72**]{} (2005) 091301 \[Erratum-ibid. D [**72**]{} (2005) 119904\] \[arXiv:hep-ph/0507148\]. K. S. Babu and X. G. He, arXiv:hep-ph/0507217; E. Ma, Mod. Phys. Lett. A [**20**]{} (2005) 2601 \[arXiv:hep-ph/0508099\]; A. Zee, Phys. Lett. B [**630**]{} (2005) 58 \[arXiv:hep-ph/0508278\]; E. Ma, Phys. Rev. D [**73**]{} (2006) 057304 \[arXiv:hep-ph/0511133\]; X. G. He, Y. Y. Keum and R. R. Volkas, JHEP [**0604**]{} (2006) 039 \[arXiv:hep-ph/0601001\]; B. Adhikary, B. Brahmachari, A. Ghosal, E. Ma and M. K. Parida, Phys. Lett. B [**638**]{} (2006) 345 \[arXiv:hep-ph/0603059\]; E. Ma, Mod. Phys. Lett. A [**21**]{} (2006) 2931 \[arXiv:hep-ph/0607190\]; Mod. Phys. Lett. A [**22**]{} (2007) 101 \[arXiv:hep-ph/0610342\]; L. Lavoura and H. Kuhbock, Mod. Phys. Lett. A [**22**]{} (2007) 181 \[arXiv:hep-ph/0610050\]; E. Ma, H. Sawanaka and M. Tanimoto, Phys. Lett. B [**641**]{}, 301 (2006), hep-ph/0606103; S. F. King and M. Malinsky, Phys. Lett. B [**645**]{} (2007) 351 \[arXiv:hep-ph/0610250\]; S. Morisi, M. Picariello and E. Torrente-Lujan, Phys. Rev. D [**75**]{} (2007) 075015 \[arXiv:hep-ph/0702034\]; M. Hirsch, A. S. Joshipura, S. Kaneko and J. W. F. Valle, Phys. Rev. Lett. [**99**]{}, 151802 (2007) \[arXiv:hep-ph/0703046\]. F. Yin, Phys. Rev. D [**75**]{} (2007) 073010 \[arXiv:0704.3827 \[hep-ph\]\]; F. Bazzocchi, S. Kaneko and S. Morisi, JHEP [**0803**]{} (2008) 063 \[arXiv:0707.3032 \[hep-ph\]\]. F. Bazzocchi, S. Morisi and M. Picariello, Phys. Lett. B [**659**]{} (2008) 628 \[arXiv:0710.2928 \[hep-ph\]\]; M. Honda and M. Tanimoto, Prog. Theor. Phys. [**119**]{} (2008) 583 \[arXiv:0801.0181 \[hep-ph\]\]; B. Brahmachari, S. Choubey and M. Mitra, Phys. Rev. D [**77**]{} (2008) 073008 \[Erratum-ibid. D [**77**]{} (2008) 119901\] \[arXiv:0801.3554 \[hep-ph\]\]; F. Bazzocchi, S. Morisi, M. Picariello and E. Torrente-Lujan, J. Phys. G [**36**]{} (2009) 015002 \[arXiv:0802.1693 \[hep-ph\]\]; B. Adhikary and A. Ghosal, Phys. Rev. D [**78**]{} (2008) 073007 \[arXiv:0803.3582 \[hep-ph\]\]; M. Hirsch, S. Morisi and J. W. F. Valle, Phys. Rev. D [**78**]{} (2008) 093007 \[arXiv:0804.1521 \[hep-ph\]\]; Y. Lin, Nucl. Phys. B [**813**]{}, 91 (2009) \[arXiv:0804.2867 \[hep-ph\]\]; P. H. Frampton and S. Matsuzaki, arXiv:0806.4592 \[hep-ph\]; F. Feruglio, C. Hagedorn, Y. Lin and L. Merlo, Nucl. Phys. B [**809**]{}, 218 (2009) \[arXiv:0807.3160 \[hep-ph\]\]; F. Bazzocchi, M. Frigerio and S. Morisi, Phys. Rev. D [**78**]{}, 116018 (2008) \[arXiv:0809.3573 \[hep-ph\]\]; S. Morisi, Phys. Rev. D [**79**]{}, 033008 (2009) \[arXiv:0901.1080 \[hep-ph\]\]; P. Ciafaloni, M. Picariello, E. Torrente-Lujan and A. Urbano, Phys. Rev. 
D [**79**]{}, 116010 (2009) \[arXiv:0901.2236 \[hep-ph\]\]; M. C. Chen and S. F. King, JHEP [**0906**]{}, 072 (2009) \[arXiv:0903.0125 \[hep-ph\]\]; T. J. Burrows and S. F. King, arXiv:0909.1433 \[hep-ph\]. . G. Altarelli and F. Feruglio, Nucl. Phys. B [**720**]{}, 64 (2005), hep-ph/0504165. G. Altarelli and F. Feruglio, Nucl. Phys. B [**741**]{}, 215 (2006), hep-ph/0512103. G. Altarelli, F. Feruglio and C. Hagedorn, JHEP [**0803**]{}, 052 (2008) \[arXiv:0802.0090 \[hep-ph\]\].
G. C. Branco, R. Gonzalez Felipe, M. N. Rebelo and H. Serodio, Phys. Rev. D [**79**]{}, 093008 (2009) \[arXiv:0904.3076 \[hep-ph\]\].
G. Altarelli and D. Meloni, J. Phys. G [**36**]{}, 085005 (2009) \[arXiv:0905.0620 \[hep-ph\]\].
E. Bertuzzo, P. Di Bari, F. Feruglio and E. Nardi, arXiv:0908.0161 \[hep-ph\].
C. Hagedorn, E. Molinaro and S. T. Petcov, JHEP [**0909**]{}, 115 (2009) \[arXiv:0908.0240 \[hep-ph\]\].
D. Aristizabal Sierra, F. Bazzocchi, I. de Medeiros Varzielas, L. Merlo and S. Morisi, arXiv:0908.0907 \[hep-ph\].
R. G. Felipe and H. Serodio, arXiv:0908.2947 \[hep-ph\].
G. J. Ding, Phys. Rev. D [**78**]{}, 036011 (2008) \[arXiv:0803.2278 \[hep-ph\]\].
P. D. Carr and P. H. Frampton, arXiv:hep-ph/0701034. F. Feruglio, C. Hagedorn, Y. Lin and L. Merlo, Nucl. Phys. B [**775**]{}, 120 (2007), hep-ph/0702194. M. C. Chen and K. T. Mahanthappa, Phys. Lett. B [**652**]{}, 34 (2007), arXiv:0705.0714 \[hep-ph\]. P. H. Frampton and T. W. Kephart, JHEP [**0709**]{}, 110 (2007), arXiv:0706.1186 \[hep-ph\]. A. Aranda, Phys. Rev. D [**76**]{}, 111301 (2007), arXiv:0707.3661 \[hep-ph\].
P. H. Frampton, T. W. Kephart and S. Matsuzaki, Phys. Rev. D [**78**]{}, 073004 (2008) \[arXiv:0807.4713 \[hep-ph\]\].
D. A. Eby, P. H. Frampton and S. Matsuzaki, Phys. Lett. B [**671**]{}, 386 (2009) \[arXiv:0810.4899 \[hep-ph\]\].
E. Ma, Phys. Lett. B [**632**]{}, 352 (2006) \[arXiv:hep-ph/0508231\].
F. Bazzocchi and S. Morisi, arXiv:0811.0345 \[hep-ph\].
F. Bazzocchi, L. Merlo and S. Morisi, arXiv:0901.2086 \[hep-ph\].
F. Bazzocchi, L. Merlo and S. Morisi, arXiv:0902.2849 \[hep-ph\].
H. Ishimori, Y. Shimizu and M. Tanimoto, Prog. Theor. Phys. [**121**]{}, 769 (2009) \[arXiv:0812.5031 \[hep-ph\]\].
G. Altarelli, F. Feruglio and L. Merlo, JHEP [**0905**]{}, 020 (2009) \[arXiv:0903.1940 \[hep-ph\]\].
S. Pakvasa and H. Sugawara, Phys. Lett. B [**82**]{}, 105 (1979); T. Brown, N. Deshpande, S. Pakvasa and H. Sugawara, Phys. Lett. B [**141**]{}, 95 (1984); Y. Yamanaka, H. Sugawara and S. Pakvasa, Phys. Rev. D [**25**]{}, 1895 (1982) \[Erratum-ibid. D [**29**]{}, 2135 (1984)\]; T. Brown, S. Pakvasa, H. Sugawara and Y. Yamanaka, Phys. Rev. D [**30**]{}, 255 (1984).
D. G. Lee and R. N. Mohapatra, Phys. Lett. B [**329**]{}, 463 (1994) \[arXiv:hep-ph/9403201\]; C. Hagedorn, M. Lindner and R. N. Mohapatra, JHEP [**0606**]{}, 042 (2006) \[arXiv:hep-ph/0602244\]; Y. Cai and H. B. Yu, Phys. Rev. D [**74**]{}, 115005 (2006) \[arXiv:hep-ph/0608022\]; H. Zhang, Phys. Lett. B [**655**]{}, 132 (2007) \[arXiv:hep-ph/0612214\]; Y. Koide, JHEP [**0708**]{}, 086 (2007) \[arXiv:0705.2275 \[hep-ph\]\]; M. K. Parida, Phys. Rev. D [**78**]{}, 053004 (2008) \[arXiv:0804.4571 \[hep-ph\]\]. C. S. Lam, Phys. Rev. Lett. [**101**]{}, 121602 (2008) \[arXiv:0804.2622 \[hep-ph\]\].
C. S. Lam, Phys. Rev. D [**78**]{}, 073015 (2008) \[arXiv:0809.1185 \[hep-ph\]\].
C. S. Lam, arXiv:0907.2206 \[hep-ph\].
W. Grimus, L. Lavoura and P. O. Ludl, arXiv:0906.2689 \[hep-ph\].
P. Minkowski, [*Phys. Lett.*]{} [**B67**]{} (1977) 421; T. Yanagida, in [*Proceedings of the Workshop on Unified Theory and the Baryon Number of the Universe*]{}, edited by O. Sawada and A. Sugamoto (KEK, Tsukuba, 1979); M. Gell-Mann, P. Ramond and R. Slansky, in [*Supergravity*]{}, edited by P. van Nieuwenhuizen and D. Freedman (North Holland, Amsterdam, 1979); S. L. Glashow, in [*Quarks and Leptons*]{}, edited by M. L$\acute{\rm e}$vy [*et al.*]{} (Plenum, New York, 1980); R. N. Mohapatra and G. Senjanovic, [*Phys. Rev. Lett.*]{} [**44**]{} (1980) 912.
C. D. Froggatt and H. B. Nielsen, Nucl. Phys. B [**147**]{}, 277 (1979).
A. Osipowicz [*et al.*]{} \[KATRIN Collaboration\], arXiv:hep-ex/0109033; see also: http://www-ik.fzk.de/ katrin/index.html
L. Baudis [*et al.*]{}, Phys. Rev. Lett. [**83**]{}, 41 (1999) \[arXiv:hep-ex/9902014\].
A. Giuliani \[CUORE Collaboration\], J. Phys. Conf. Ser. [**120**]{} (2008) 052051.
Majorana Collaboration, arXiv:0811.2446 \[nucl-ex\].
A. A. Smolnikov and f. t. G. Collaboration, arXiv:0812.4194 \[nucl-ex\]. G. L. Fogli, E. Lisi, A. Marrone, A. Palazzo and A. M. Rotunno, arXiv:0809.2936 \[hep-ph\].
WMAP Collaboration, E. Komatsu [*et al.*]{}, arXiv:0803.0547 \[astro-ph\].
ACBAR Collaboration, C. L. Reichardt [*et al.*]{}, arXiv:0801.1491 \[astro-ph\].
VSA Collaboration, C. Dickinson [*et al.*]{}, Mon. Not. Roy. Astron. Soc. [**353**]{}, 732 (2004) \[arXiv:astro-ph/0402498\].
CBI Collaboration, A. C. S. Readhead [*et al.*]{}, Astrophys. J. [**609**]{}, 498 (2004) \[arXiv:astro-ph/0402359\].
BOOMERANG Collaboration, C. J. MacTavish [*et al.*]{}, Astrophys. J. [**647**]{}, 799 (2006) \[arXiv:astro-ph/0507503\].
SDSS Collaboration, M. Tegmark [*et al.*]{}, Phys. Rev. D [**74**]{} (2006) 123507 \[arXiv:astro-ph/0608632\].
SNLS Collaboration, P. Astier [*et al.*]{} Astron. Astrophys. [**447**]{}, 31 (2006) \[arXiv:astro-ph/0510447\].
SDSS Collaboration, D. J. Eisenstein [*et al.*]{}, Astrophys. J. [**633**]{}, 560 (2005) \[arXiv:astro-ph/0501171\].
P. McDonald [*et al.*]{}, Astrophys. J. Suppl. [**163**]{}, 80 (2006); P. McDonald [*et al.*]{}, Astrophys. J. [**635**]{}, 761 (2005).
A. Pilaftsis and T. E. J. Underwood, Phys. Rev. D [**72**]{}, 113001 (2005) \[arXiv:hep-ph/0506107\].
Gui-Jun Ding, work in progress.
F. Ardellier [*et al.*]{} \[Double Chooz Collaboration\], arXiv:hep-ex/0606025.
Y. f. Wang, arXiv:hep-ex/0610024.
[^1]: e-mail address: dinggj@ustc.edu.cn
[^2]: $\sin^2\theta_{12,TB}$ is exactly within the $1\sigma$ range of the second global data fit, whereas it is slightly above the $1\sigma$ upper limit of the first fit.
|
---
abstract: 'We consider the problem of hedging a European contingent claim in a Bachelier model with temporary price impact as proposed by @AlmgChr:01. Following the approach of @RogerSin:10 and @NaujWes:11, the hedging problem can be regarded as a cost optimal tracking problem of the frictionless hedging strategy. We solve this problem explicitly for general predictable target hedging strategies. It turns out that, rather than towards the current target position, the optimal policy trades towards a weighted average of expected future target positions. This generalizes an observation of @GarlPeder:13.2 from their homogenous Markovian optimal investment problem to a general hedging problem. Our findings complement a number of previous studies in the literature on optimal strategies in illiquid markets as, e.g., [@GarlPeder:13.2], [@NaujWes:11], [@RogerSin:10], [@AlmgLi:15], [@MorMKSoner:15], [@KallMK:14], [@GuasoniWeb:15_1], [@GuasoniWeb:15_2], where the frictionless hedging strategy is confined to diffusions. The consideration of general predictable reference strategies is made possible by the use of a convex analysis approach instead of the more common dynamic programming methods.'
author:
- 'Peter Bank[^1] H. Mete Soner[^2] Moritz Vo[ß]{}[^3]'
bibliography:
- 'finance.bib'
title: Hedging with Temporary Price Impact
---
Mathematics Subject Classification (2010):
: 91G10, 91G80,\
91B06, 60H30
JEL Classification:
: G11, C61
Keywords:
: Hedging, illiquid markets, portfolio tracking
Introduction
============
The construction of effective hedging strategies against financial risk is one of the key problems in Mathematical Finance. The seminal work of @BlacSch:73 and @Mert:73 showed how this task can be carried out in an idealized frictionless market by dynamically trading perfectly liquid assets. However, in recent years there has been a growing awareness that these idealizations can lead to misguided hedging strategies with non-negligible costs, particularly when they prescribe a fast reallocation of assets in short periods of time in the presence of liquidity frictions like price impact. This has spurred the development of new financial models which take into account the impact of transactions on execution prices; see, e.g., the survey by @GokayRochSoner:11.
The two most widely used models go back to @AlmgChr:01 as well as @ObiWang:13, respectively: Loosely speaking, the model of Almgren and Chriss is characterized by directly specifying functions describing the temporary and permanent impacts of a given order on the price. The model of Obizhaeva and Wang assumes that trading takes place in a block-shaped limit order book with persistent price impact which is vanishing at a finite resilience rate. As recently discussed in @KallMK:14, the former can be regarded as the high-resilience limit of the latter.
Within these two models, most of the existing literature studies the problem of optimally liquidating an exogenously given position by some fixed time horizon (cf., e.g., @AlmgChr:01, @Almg:03, @SchSchon:09, @ObiWang:13, @AlfFruSch:10 and @PredShaShr:11). Further work is also devoted to the more involved problem of optimal portfolio choice, cf., e.g., @GarlPeder:13.1, [@GarlPeder:13.2], @MorMKSoner:15, @GuasoniWeb:15_1, [@GuasoniWeb:15_2] and @KallMK:14. However, only a few papers directly address the problem of hedging a contingent claim in the presence of price impact as modeled above, cf. @RogerSin:10, @AlmgLi:15, @GueantPu:15, and also @NaujWes:11.
The papers most closely related to ours mathematically are @RogerSin:10 and @NaujWes:11. @RogerSin:10 analyse the problem of hedging a European contingent claim in a Black-Scholes model in the presence of purely temporary price impact as in @AlmgChr:01. They relate the hedging problem to a cost optimal tracking problem of the frictionless Black-Scholes delta hedge. @NaujWes:11 directly study the problem of optimally following a given target strategy in an illiquid financial market under the same type of liquidity costs; see also @CarJai:15 for a Markovian order flow tracking problem. By contrast to these papers, we will focus on a non-Markovian setup with general predictable target strategies.
Instead of the more common dynamic programming methods used in the papers cited above, our approach is convex analytic along the lines of Pontryagin’s maximum principle. This allows us to consider general predictable target strategies and not only continuous diffusion-type processes. This is particularly important for hedging in illiquid markets when the frictionless reference hedge portfolio prescribes sizable instantaneous reallocations as, e.g., already in the case of discrete Asian options, which was not covered by the literature so far. We derive first order conditions of the considered quadratic optimization problems which take the form of a linear forward backward stochastic differential equation (FBSDE). Solutions to these are explicitly available and give us the optimal frictional hedges. In fact, when considered in a Brownian setting, our approach can be viewed as a special case of the stochastic linear-quadratic control problem studied by @KohlmannTang:02. Mathematically, the novelty of our contribution is the interpretation of the optimal tracking strategy. Indeed, it turns out that the optimal policy does *not* instantaneously trade from its current position towards the current target position but towards a weighted average of *expected future target positions* which does not occur in the work of @KohlmannTang:02. An interesting consequence from a financial point of view is that this averaging allows us to understand how singularities in the frictionless reference strategy have to be addressed in a model with frictions: A singularity in the frictionless target hedge is smoothed out when averaging the weighted future target positions, which yields sensible hedging strategies for illiquid markets. Additionally, we study a constrained version of the problem where the terminal hedging position is restricted to a certain exogenously prescribed level. This can be viewed as a way to deal with physical delivery in derivative contracts. Our explicit solution reveals how the hedger’s focus shifts systematically from tracking the frictionless target position to attaining the prescribed terminal position. Here, our convex analytic approach allows us to avoid the consideration of nonlinear Hamilton-Jacobi-Bellman equations with singular terminal conditions and the challenges that these entail. We also give a sharp characterization of those terminal positions which can be reached with finite expected trading costs in terms of the speed at which the size of these positions is revealed towards the end.
Conceptually, our result generalizes an observation by @GarlPeder:13.2 who consider quadratic utility maximization in homogeneous Markovian models on an infinite time horizon and interpret their solution as trading towards an exponentially weighted average of future expected Markowitz portfolios. A similar interpretation is given by @NaujWes:11 in their equally Markovian Example 7.1; see @CarJai:15 for a similarly Markovian study of tracking of order flow in high-frequency trading. These strategies as well as ours contrast with strategies targeting the present frictionless optimum directly, which are considered in many papers on asymptotically optimal portfolios with small transaction costs, including @RogerSin:10, @MorMKSoner:15, @GuasoniWeb:15_1, [@GuasoniWeb:15_2], and @KallMK:14. In all the literature cited above, the authors confine consideration to diffusion-type target strategies which, at least asymptotically, are equivalent to our averaged targets. Our approach, by contrast, allows one to deal with general predictable frictionless target strategies and so the examples considered in this paper include strategies with jumps or even singularities where the differences between these hedges become apparent.
@AlmgLi:15 study a quite similar hedging problem but they consider a model with permanent price impact which feeds into their target strategies via the well-known functions for Black-Scholes deltas and gammas. Hence, they consider a model where the target strategy is also affected by the targeting strategy which leads to a feedback effect that we are disregarding in our problem formulation. We refer to the introduction in @RogerSin:10 for further discussion of this idealization.
The rest of the paper is organized as follows. In Section \[sec:problem\] we introduce the setup and motivate our problem formulation by following the approach of Rogers and Singh [@RogerSin:10] (cf. also @NaujWes:11). Our main result is presented in Section \[sec:main\] and provides the explicit solution for a general hedging problem of a European style option in a Bachelier market model with temporary price impact. Section \[sec:illustrations\] contains some illustrations of optimal solutions in three examples. The technical proofs are deferred to Section \[sec:proofs\].
Problem setup and motivation {#sec:problem}
============================
We fix a finite deterministic time horizon $T > 0$, a filtered probability space $(\Omega,\mathcal{F},(\cF_t)_{0 \leq t \leq T},\PP)$ satisfying the usual conditions of right continuity and completeness and consider an agent who is trading in a financial market consisting of a risky asset, e.g., stock. The number of shares the agent holds at time $t \in [0,T]$ of the risky stock is defined as $$\label{eq:portfolio}
X^u_t \set x + \int_0^t u_s ds , \quad 0 \leq t \leq T,$$ where $x \in \RR$ denotes her given initial holdings. The real-valued stochastic process $(u_t)_{0 \leq t \leq T}$ represents the agent’s turnover rate, that is, the speed at which the agent trades in the risky asset. It is assumed to be chosen in the set $$\cU \set \left\{ u : u \text{ progressively measurable s.t. }
\mathbb{E} \int_0^T u^2_t dt < \infty \right\}.$$ The square-integrability requirement ensures that the quadratic transaction costs levied on the agent’s turnover rate due to temporary price impact as in @AlmgChr:01 are finite.
In such a frictional market, our agent seeks to track a target strategy which can be thought of, for instance, as a hedging strategy adopted from a frictionless setting. Mathematically, this problem can be formalized as follows: Given a real-valued predictable process $(\xi_t)_{0 \leq t \leq T}$ in $L^2(\PP \otimes dt)$ and a fixed constant $\kappa > 0$, the agent’s objective is to minimize the performance functional $$\label{eq:objFun}
J(u) \set \EE \left[ \frac{1}{2} \int_0^T (X^u_t - \xi_t)^2 dt
+ \frac{1}{2} \kappa \int_0^T u^2_t dt \right].$$ This leads to the optimal stochastic control problem $$\label{eq:optProb1}
J(u) \rightarrow \min_{u \in \cU}.$$ Since the agent’s terminal position $X^u_T$ may be important (for her future plans or physical delivery), we also consider the optimal stochastic control problem $$\label{eq:optProb2}
J(u) \rightarrow \min_{u \in \cU^{\Xi}_x}$$ where $\cU^{\Xi}_x$ is the set of constrained policies defined as $$\cU^{\Xi}_x \set \left\{ u : u \in \cU \text{ satisfying }
X^u_T \equiv x + \int_0^T u_s ds = \Xi_T \,\, \PP\text{-a.s.} \right\}$$ with predetermined terminal position $\Xi_T \in L^2(\PP,\cF_T)$ such that $$\label{eq:ccp}
\int_0^T \frac{d\EE[\Xi_t^2]}{T-t} < \infty$$ where $\Xi_t \set \EE[\Xi_T \vert \cF_t]$ for $0 \leq t \leq T$.
- Lemma \[lem:ccp\] below shows that a target $\Xi_T$ can be reached with finite expected costs in the sense that $\cU^{\Xi}_x \neq \varnothing$ if and only if (\[eq:ccp\]) is satisfied. Observe that this condition implies, in particular, that $\Xi_T \in \cF_{T-}$. In fact, (\[eq:ccp\]) can be interpreted as a condition on the speed at which one learns about the ultimate target position $\Xi_T$ as $t \uparrow T$.
- Concerning physical delivery at maturity $T$, it would be sufficient to impose the constraint $X_T^u \geq \Xi_T$. However, this would lead to an interesting, yet technically rather different optimization problem which is left for future research.
One motivation of the objective functional in (\[eq:objFun\]) and its connection to the problem of hedging a European contingent claim in the presence of temporary price impact is the following (cf. also @RogerSin:10 and @AlmgLi:15): Assume the agent wants to hedge a European-type option with payoff $H$ at maturity $T$ in a market where, for simplicity, interest rates are zero and the price process $S$ of the underlying risky asset follows a Brownian motion with volatility $\sigma > 0$: $$S_t = S_0 + \sigma W_t , \quad 0 \leq t \leq T.$$ In a frictionless setting, the payoff $H$ can be perfectly replicated by a predictable hedging strategy $\xi^H$. In a market with frictions where the agent faces liquidity costs as, e.g., in @AlmgChr:01, she may be confined to follow strategies $X^u$ as in (\[eq:portfolio\]). As a consequence, starting from some initial wealth $v_0 \in \RR$, her profits and losses from market fluctuations will incur a risk of deviating from $H$ at maturity $T$ that can be measured, e.g., by $$\EE\left[ \left( H - (v_0+\int_0^T X^u_t dS_t) \right)^2 \right] =
(\EE[H]-v_0)^2 + \EE\left[ \int_0^T (X_t^u - \xi^H_t )^2 \sigma^2 dt \right],$$ see @FollSond:86. This deviation can be made arbitrarily small if the agent is willing to incur arbitrarily high transaction costs. If, however, she puts a cap $c > 0$ on these she may want to solve the optimization problem $$\label{eq:motiv}
\EE\left[ \int_0^T (X_t^u - \xi^H_t )^2 \sigma^2 dt \right] \rightarrow \min_{u \in \cU}
\quad \text{subject to } \mathbb{E} \left[ \int_0^T u^2_t dt \right] \leq c,$$ which in its Lagrangian formulation amounts to an objective functional of the form (\[eq:objFun\]).
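As a sanity check of this decomposition, the following Monte Carlo sketch (ours; all parameter values, names, and the seed are illustrative) verifies the identity in the simple case $H = S_T$, where the frictionless hedge is $\xi^H \equiv 1$, for the deterministic strategy $X_t = t$.

```python
# Monte Carlo sanity check (ours; illustrative parameters) of the risk
# decomposition above for H = S_T (so xi^H = 1) and the strategy X_t = t.
import numpy as np

T, sigma, S0, v0 = 1.0, 0.2, 1.0, 0.9
n_paths, n_steps = 50_000, 200
dt = T / n_steps
rng = np.random.default_rng(1)

dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
t = np.arange(n_steps) * dt                  # left grid points
S_T = S0 + sigma * dW.sum(axis=1)            # terminal price, H = S_T
gains = sigma * (t * dW).sum(axis=1)         # \int_0^T X_t dS_t with X_t = t

lhs = np.mean((S_T - (v0 + gains)) ** 2)
rhs = (S0 - v0) ** 2 + sigma ** 2 * np.sum((t - 1.0) ** 2) * dt   # xi^H = 1
print(lhs, rhs)   # the two estimates agree up to Monte Carlo error
```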
\[rem:prob\]
1. A similar hedging problem as formulated in (\[eq:objFun\]) is also studied in @RogerSin:10 and @AlmgLi:15. In contrast to our setting, @RogerSin:10 consider a Black-Scholes framework. @AlmgLi:15 also include permanent price impact.
2. Apart from hedging, the minimization problem of the objective in (\[eq:objFun\]) is also related to the problem of optimally executing a VWAP order as studied using dynamic programming methods in a Markovian setup in @FreiWes:13 and @CarJai:15, or, more generally, to the optimal curve following problem as discussed in @NaujWes:11 as well as @CaiRosenbaumTankov:15.
3. In a Brownian setting, our problem is a special case of a stochastic linear-quadratic control problem as studied, e.g., by @KohlmannTang:02.
Main result {#sec:main}
===========
Our main results are the following explicit descriptions of the optimal controls for problems (\[eq:optProb1\]) and (\[eq:optProb2\]) and their corresponding minimal costs for which it is convenient to introduce $$\tau^\kappa(t) \set \frac{T-t}{\sqrt{\kappa}} , \quad 0 \leq t \leq T.$$
\[thm:main1\] The optimal stock holdings $\hat{X}$ of problem (\[eq:optProb1\]) with unconstrained terminal position satisfy the linear ODE $$\label{eq:SDEX1}
d\hat{X}_t = \frac{\tanh (\tau^\kappa(t))}{\sqrt{\kappa}}
\left(\hat{\xi}_t - \hat{X}_t \right) dt, \quad \hat{X}_0=x,$$ where, for $0 \leq t < T$, we let $$\hat{\xi}_t \set
\EE \left[ \int_t^T \xi_u K(t,u) du \bigg\vert \mathcal{F}_t \right] \quad$$ with the kernel $$K(t,u) \set \frac{\cosh(\tau^\kappa(u))}
{\sqrt{\kappa}\sinh(\tau^\kappa(t))}, \quad 0 \leq t \leq u < T.$$ The minimal costs are given by $$\begin{aligned}
\inf_{u \in \cU} J(u) =
& \frac{1}{2} \sqrt{\kappa} \tanh(\tau^\kappa(0)) \left( x - \hat{\xi}_0 \right)^2 +
\frac{1}{2} \EE \left[ \int_0^T (\xi_t - \hat{\xi}_t)^2 dt \right] \nonumber \\
& + \frac{1}{2} \EE \left[ \int_0^T \sqrt{\kappa}
\tanh(\tau^\kappa(t)) d\langle \hat{\xi} \rangle_t \right]
< \infty.
\label{eq:cost1}
\end{aligned}$$
For the constrained problem we have similarly:
\[thm:main2\] The optimal stock holdings $\hat{X}^{\Xi}$ of problem (\[eq:optProb2\]) with constrained terminal position $\Xi_T \in L^2(\PP,\cF_{T})$ such that holds satisfy the linear ODE $$\label{eq:SDEX2}
d\hat{X}^{\Xi}_t = \frac{\coth (\tau^\kappa(t))}{\sqrt{\kappa}}
\left(\hat{\xi}^{\Xi}_t - \hat{X}^{\Xi}_t \right) dt,
\quad \hat{X}^{\Xi}_0 = x,$$ where, for $0 \leq t \leq T$, we let $$\begin{aligned}
\hat{\xi}^{\Xi}_t \set &
\EE \left[ \frac{1}{\cosh (\tau^\kappa(t))} \Xi_T
+ \left(1-\frac{1}{\cosh (\tau^\kappa(t))}\right)
\int_t^T \xi_u K^{\Xi}(t,u) du \bigg\vert \cF_t \right],
\end{aligned}$$ with the kernel $$K^{\Xi}(t,u) \set
\frac{\sinh(\tau^\kappa(u))}{\sqrt{\kappa}(\cosh(\tau^\kappa(t))-1)} ,
\quad 0 \leq t \leq u < T.$$ The solution $\hat{X}^\Xi$ of (\[eq:SDEX2\]) satisfies the terminal constraint in the sense that $$\lim_{t \uparrow T} \hat{X}^{\Xi}_t = \Xi_T \quad \mathbb{P}\text{-a.s.}$$ The minimal costs are given by $$\begin{aligned}
\inf_{u \in \cU^{\Xi}} J(u) =
& \frac{1}{2} \sqrt{\kappa} \coth(\tau^\kappa(0)) \left( x - \hat{\xi}^\Xi_0 \right)^2 +
\frac{1}{2} \EE \left[ \int_0^T (\xi_t - \hat{\xi}^\Xi_t)^2 dt \right] \nonumber \\
& + \frac{1}{2} \EE \left[ \int_0^T \sqrt{\kappa}
\coth(\tau^\kappa(t)) d\langle \hat{\xi}^\Xi \rangle_t \right]
< \infty.
\label{eq:cost2}
\end{aligned}$$
The convex-analytic proofs of Theorems \[thm:main1\] and \[thm:main2\] are deferred to Section \[sec:proofs\].
Note that, rather than towards the *current* target position $\xi_t$, the optimal frictional hedging rules in (\[eq:SDEX1\]) and (\[eq:SDEX2\]) prescribe to trade towards weighted averages $\hat{\xi}_t$ and $\hat{\xi}_t^{\Xi}$, respectively, of *expected future* target positions of $\xi$. Indeed, for each $0 \leq t \leq T$, $K(t,.)$ and $K^{\Xi}(t,.)$ specify nonnegative kernels which integrate to one over $[t,T]$, and so $\hat{\xi}$ and $\hat{\xi}^{\Xi}$ average out the expected future positions of $\xi$. For $\hat{\xi}^{\Xi}$ one chooses a convex combination of this average of $\xi$ with the expected terminal position $\Xi_T$, where the weight shifts gradually to $\Xi_T$ as $t \uparrow T$ since $1/\cosh(\tau^\kappa(t)) \uparrow 1$ in that case.
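These normalization properties are easy to verify numerically; the following sketch (ours, with arbitrary illustrative parameters) checks that both kernels integrate to one over $[t,T]$ and that the weight $1/\cosh(\tau^\kappa(t))$ on $\Xi_T$ tends to one at maturity.

```python
# Numerical check (ours; parameters arbitrary) of the kernel normalizations
# and of the weight on Xi_T in the constrained signal.
import numpy as np

T, kappa, t = 1.0, 0.1, 0.3
tau = lambda s: (T - s) / np.sqrt(kappa)

def K(t, u):          # unconstrained kernel of Theorem 1
    return np.cosh(tau(u)) / (np.sqrt(kappa) * np.sinh(tau(t)))

def K_Xi(t, u):       # constrained kernel of Theorem 2
    return np.sinh(tau(u)) / (np.sqrt(kappa) * (np.cosh(tau(t)) - 1))

n = 200_000
u = t + (np.arange(n) + 0.5) * (T - t) / n   # midpoint rule on [t, T]
print(K(t, u).sum() * (T - t) / n)           # ~ 1.0
print(K_Xi(t, u).sum() * (T - t) / n)        # ~ 1.0
print([float(1 / np.cosh(tau(s))) for s in (0.5, 0.9, 0.99, 0.999)])  # -> 1 as s -> T
```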
According to (\[eq:SDEX1\]) and (\[eq:SDEX2\]), the optimal tracking rate trades towards these targets at a speed proportional to their distance to the investor’s position at any time. The coefficient of proportionality is controlled by both the cost parameter $\kappa$ and the remaining time-to-maturity $T-t$. For the unconstrained solution in (\[eq:SDEX1\]), since $\lim_{t \uparrow T} \tanh (\tau^{\kappa}(t)) = 0$, trading slows down when approaching the final time $T$; in other words, towards the end, the investor does not worry about tracking $\xi$ anymore, but seeks to minimize trading costs. This becomes intuitive when comparing the effect of early interventions to later ones: with early interventions the investor ensures that she stays reasonably close to the target for the foreseeable future, but late interventions can only impact the investor’s performance for very short periods and therefore do not warrant, at least asymptotically, the associated costs. For the constrained solution in (\[eq:SDEX2\]) by contrast, we have $\lim_{t \uparrow T} \coth (\tau^{\kappa}(t)) = +\infty$ and so the optimal strategy trades with increased urgency towards $\hat{\xi}^{\Xi}$, which itself is easily seen to converge to the ultimate target position $\Xi_T = \lim_{t \uparrow T} \hat{\xi}^{\Xi}_{t}$ $\PP$-a.s. (cf. Proof of Theorem \[thm:main2\] below in Section \[sec:proofs\]).
Our tracking result generalizes an observation of @GarlPeder:13.2 from their homogeneous Markovian optimal investment problem to a general hedging problem with a general predictable target strategy $\xi$, also allowing for a random terminal portfolio position $\Xi_T$. It also sheds further light on the general structure of optimal portfolio strategies in markets with frictions. Indeed, the descriptions of (asymptotically) optimal trading strategies obtained in @MorMKSoner:15, @KallMK:14, or @GuasoniWeb:15_1, [@GuasoniWeb:15_2] prescribe a reversion towards the frictionless strategy $\xi$ itself, *not* towards an average such as $\hat{\xi}$ or $\hat{\xi}^{\Xi}$. For sufficiently smooth $\xi$, e.g., of diffusion type, this is still optimal asymptotically for small liquidity costs as then these averages do not differ significantly from $\xi$. The next section, however, shows that this is no longer the case when we allow for singularities in the reference strategy.
Finally, our representations (\[eq:cost1\]) and (\[eq:cost2\]) for the values of the tracking problems (\[eq:optProb1\]) and (\[eq:optProb2\]), respectively, show how these depend on the initial position $x$ and the $L^2$-distance between the target $\xi$ and the respective signal processes $\hat{\xi}$ and $\hat{\xi}^\Xi$. They also reveal the importance of the signals’ quadratic variation $\langle \hat{\xi} \rangle$, $\langle \hat{\xi}^\Xi \rangle$ which can be viewed as a measure for how effectively one can predict the target positions $\xi$ and $\Xi_T$. To the best of our knowledge, the key role played by the signals $\hat{\xi}$, $\hat{\xi}^\Xi$ was not observed in the general theory of stochastic linear-quadratic control problems as discussed, e.g., by @KohlmannTang:02.
As mentioned in the description of our problem setup in Section \[sec:problem\], the quadratic cost term in our objective function in (\[eq:objFun\]) is due to linear temporary price impact as in the model proposed by @AlmgChr:01. In this regard, one might likewise extend the objective functional in order to account for expected costs resulting from linear *permanent price impact* (cf. [@AlmgChr:01]). This would lead to the inclusion of the additional term $$\label{eq:permanent} \EE \left[ \theta \left(
\int_0^T u_t dt \right)^2 \right] = \theta \EE\left[ (X^u_T -
x)^2 \right]$$ for some constant $\theta > 0$. For the constrained problem in (\[eq:optProb2\]), this extra cost term obviously does not depend on the strategy and is thus irrelevant. For the unconstrained problem in (\[eq:optProb1\]), these extra costs can be regarded as a penalization term forcing the final position $X^u_T$ to be close to the initial position $x$. For ease of exposition, we refrain in the present paper from including this additional term, since our main intention here is to outline the key role played by the optimal tracking signals $\hat{\xi}$, $\hat{\xi}^{\Xi}$ in the description of the optimal control as well as the corresponding minimal costs. A more general setup allowing for stochastic price impact, stochastic volatility and a penalization on the terminal position as in (\[eq:permanent\]) is left for future research.
Illustrations {#sec:illustrations}
=============
In this section we present a few case studies illustrating the structure of the optimal hedging strategies we found in Theorems \[thm:main1\] and \[thm:main2\]. The first two case studies are simple deterministic toy examples which allow us to understand the effect of jumps as well as of initial and terminal positions. The final case study considers a discretely monitored Asian option where random jumps in the reference hedge occur naturally.
In the first two cases we assume the initial position to be $x=0$ and consider a time horizon of $T=1$ when, in the constrained case, the position has to be liquidated, i.e., $\Xi_T=0$. We depict $\xi$ along with its averages $\hat{\xi}$ and $\hat{\xi}^{\Xi}$, respectively, as well as the corresponding optimal frictional hedges $\hat{X}$ and $\hat{X}^{\Xi}$. We also include a “myopic” benchmark strategy $\tilde{X}$ which directly targets $\xi$ (without final constraint) given by $$d\tilde{X}_t = \frac{1}{\sqrt{\kappa}} (\xi_t - \tilde{X}_t) dt , \quad 0 \leq t \leq T,$$ in order to compare with analogous strategies considered in @RogerSin:10, @MorMKSoner:15, @GuasoniWeb:15_1, [@GuasoniWeb:15_2], and @KallMK:14.
Frictionless deterministic hedge with a jump
--------------------------------------------
In our first case study we consider a deterministic target strategy $\xi$ (solid blue line in Figure \[fig:ex1\]) which can be viewed as a stock-buying schedule that prescribes to hold one stock until time $T/2$ when the position is doubled by a jump.
![Frictionless hedge $\xi$ with a jump at $t=T/2$ (blue), corresponding unconstrained (orange, dashed) and constrained (green, dashed) targets $\hat{\xi}$ and $\hat{\xi}^{\Xi}$, respectively, as well as the corresponding frictional hedges $\hat{X}$ (orange line) and $\hat{X}^{\Xi}$ (green line). The myopic benchmark hedge $\tilde{X}$ is plotted in red.[]{data-label="fig:ex1"}](example1.pdf)
One can observe that the effective target strategies $\hat{\xi}$ and $\hat{\xi}^{\Xi}$ of the optimal controls $\hat{u}$ and $\hat{u}^{\Xi}$, respectively, are smoothing out the jump of $\xi$. The target $\hat{\xi}^{\Xi}$ additionally takes into account the liquidation constraint $\Xi_T = 0$ of the agent’s position until maturity $T$. As expected, the optimal frictional hedges $\hat{X}$ and $\hat{X}^{\Xi}$ are indeed anticipating the upward jump of the target strategy $\xi$ at $t = T/2$ by building up their positions beyond the actual current position of $\xi$ even before the occurrence of the jump. This is not the case for the myopic benchmark strategy $\tilde{X}$ which increases its position much more slowly and exhibits a kink when the jump occurs after which trading speed picks up significantly. Finally, the optimal holdings $\hat{X}^{\Xi}$ in the constrained setting, where the position has to be unwound ultimately, are decreasing when time approaches maturity and end up in the final desired position $\hat{X}_T^{\Xi} = 0$.
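For reproducibility, the following Euler-type sketch (ours; the parameter choices are illustrative and need not match those used for Figure \[fig:ex1\]) computes the averaged targets by numerical quadrature of the kernels and integrates the three hedging ODEs for this jump example.

```python
# Euler sketch (ours; illustrative parameters) of the first case study:
# deterministic target xi jumping from 1 to 2 at t = T/2, x = 0, Xi_T = 0.
import numpy as np

T, kappa, n = 1.0, 0.01, 4_000
dt = T / n
t = np.arange(n) * dt
xi = np.where(t < T / 2, 1.0, 2.0)
tau = (T - t) / np.sqrt(kappa)

def averaged(weights):
    # normalized weighted average of the future targets xi_u, u in [t_i, T]
    return np.array([np.sum(weights[i:] * xi[i:]) / np.sum(weights[i:])
                     for i in range(n)])

hat_xi = averaged(np.cosh(tau))                               # kernel K ~ cosh(tau(.))
hat_xi_Xi = (1 - 1 / np.cosh(tau)) * averaged(np.sinh(tau))   # Xi_T = 0 term drops

X_hat = X_Xi = X_myopic = 0.0
for i in range(n - 1):
    X_hat    += np.tanh(tau[i]) / np.sqrt(kappa) * (hat_xi[i] - X_hat) * dt
    X_Xi     += 1.0 / (np.tanh(tau[i]) * np.sqrt(kappa)) * (hat_xi_Xi[i] - X_Xi) * dt
    X_myopic += 1.0 / np.sqrt(kappa) * (xi[i] - X_myopic) * dt

print(round(X_hat, 3), round(X_Xi, 3), round(X_myopic, 3))   # X_Xi ends close to 0
```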
Frictionless deterministic hedge with a singularity
---------------------------------------------------
The second target strategy $\xi$ (solid blue line in Figure \[fig:ex2\]) is again deterministic and also exhibits a singularity midway at $t=T/2$, this time, however, it is a jump from $-\infty$ to $+\infty$.
![Frictionless hedge $\xi$ with a singularity at $t=T/2$ (blue), corresponding unconstrained (orange, dashed) and constrained (green, dashed) targets $\hat{\xi}$ and $\hat{\xi}^{\Xi}$, respectively, as well as the corresponding frictional hedges $\hat{X}$ (orange line) and $\hat{X}^{\Xi}$ (green line). The myopic benchmark hedge $\tilde{X}$ is plotted in red.[]{data-label="fig:ex2"}](example2.pdf)
Once more, one can observe that the effective target strategies $\hat{\xi}$ and $\hat{\xi}^{\Xi}$ of the optimal controls $\hat{u}$ and $\hat{u}^{\Xi}$, respectively, are smoothing out the singularity of $\xi$. Again, the target $\hat{\xi}^{\Xi}$ additionally takes into account the liquidation constraint $\Xi_T = 0$ of the agent’s position until maturity $T$. In contrast to the benchmark strategy $\tilde{X}$, the optimal frictional hedges $\hat{X}$ and $\hat{X}^{\Xi}$ are anticipating the singularity of the target strategy $\xi$ at $t = T/2$ by gradually building up their positions before the singularity occurs. Actually, they are trading *away* from the current target positions of $\xi$ for some time prior to $T/2$. This is in stark contrast with the myopic benchmark strategy which keeps selling short more and more intensely even milliseconds before the reference strategy jumps to $+\infty$.
Discrete Asian option
---------------------
In this final example we investigate a situation where the target strategy $\xi$ is stochastic and exhibits a random jump. Specifically, we consider hedging a discrete Asian call with maturity $T > 0$ in the Bachelier model where the underlying risky asset $S$ is modeled by a Brownian motion with volatility $\sigma > 0$: $$S_t = S_0 + \sigma W_t , \quad 0 \leq t \leq T.$$ For simplicity, we assume that the average is discretely monitored over two fixing dates $T/2$ and $T$. That is, the payoff at maturity $T$ is given by $$H\set\left(\frac{1}{2}(S_{T/2} + S_T)-K\right)^+$$ for some strike $K \in \RR$. The Bachelier price of the discrete Asian option at time $t \in [0,T)$ can be computed as $$\begin{aligned}
& \pi_t\set \\
& \begin{cases}
\sigma \sqrt{5 T/8 - t} \, \varphi\Big( \frac{S_t-K}{\sigma
\sqrt{5T/8 - t}} \Big)+ (S_t - K) \Phi \Big(
\frac{S_t-K}{\sigma\sqrt{5T/8 - t}} \Big) , & 0 \leq t < T/2 \\
\frac{1}{2} \sigma \sqrt{T-t} \, \varphi\Big( \frac{S_{T/2} +
S_t-2K}{\sigma\sqrt{T-t}} \Big)\\
\qquad + \left(\frac{1}{2}
(S_{T/2} + S_t)-K\right) \Phi \Big( \frac{S_{T/2} +
S_t-2K}{\sigma\sqrt{T - t}} \Big) , & T/2 \leq t < T \\
\end{cases}\end{aligned}$$ where $\varphi$ and $\Phi$ denote the density and the cumulative distribution function of the standard normal distribution, respectively. Accordingly, the frictionless delta-hedging strategy is $$\xi_t =
\begin{cases}
\Phi \left( \frac{S_t-K}{\sigma\sqrt{5T/8 - t}} \right) , & 0 \leq t
\leq T/2 \\
\frac{1}{2} \Phi\left( \frac{S_{T/2} + S_t-2K}{\sigma \sqrt{T - t}}
\right) , & T/2 < t < T.
\end{cases}$$ Note that the delta-hedge exhibits a negative random jump at time $T/2$ since $$\xi_{\frac{T}{2}+} - \xi_{\frac{T}{2}-} \set \lim_{t \downarrow
\frac{T}{2}} \xi_t - \lim_{t \uparrow \frac{T}{2}} \xi_{t} =
-\frac{1}{2} \Phi\left( \frac{S_{T/2}-K}{\sigma \sqrt{T/8}} \right).$$ We assume that the initial position $x$ coincides with the initial frictionless delta, e.g., $x=1/2$ in the case of an at-the-money option with $K=S_0$. This allows us to focus on the hedging performance itself and avoids distortions from the initial build-up of a sensible hedging position. As before, the terminal position will be allowed to be either unconstrained or mandating liquidation, i.e., $\Xi_T=0$.
The effective targets $\hat{\xi}$ and $\hat{\xi}^{\Xi}$ of the optimal frictional hedging strategy in (\[eq:SDEX1\]) and (\[eq:SDEX2\]), respectively, can be explicitly computed: $$\hat{\xi}_{t} =
\begin{cases}
\Phi\Big( \frac{2(S_t-K)}{\sigma\sqrt{5T/2 - 4t}} \Big)
\left( 1 - \frac{1}{2}
\frac{\sinh(\tau^{\kappa}(T/2))}{\sinh(\tau^{\kappa}(t))}
\right), & 0 \leq t < T/2, \\
\xi_t, & T/2 \leq t < T, \\
\end{cases}$$ and $$\hat{\xi}^{\Xi}_{t} =
\begin{cases}
\Phi\Big( \frac{2(S_t-K)}{\sigma\sqrt{5T/2 - 4t}} \Big) \left( 1 -
\frac{1}{2}
\frac{\cosh(\tau^{\kappa}(T/2))+1}{\cosh(\tau^{\kappa}(t))} \right),
& 0 \leq t < T/2 \\
\left(1-\frac{1}{\cosh(\tau^{\kappa}(t))}\right)\xi_t, & T/2 \leq t <
T. \\
\end{cases}$$
Observe that the Bachelier delta-hedge $\xi$ is a martingale on $[T/2,T]$ and thus the signal $\hat{\xi}$ coincides with it in this period. However, the optimal target $\hat{\xi}$ differs from the frictionless hedge $\xi$ on $[0,T/2]$ since it is anticipating and systematically smoothing out the random jump at $T/2$ whose size is determined by the option’s moneyness at this point. The constrained target $\hat{\xi}^{\Xi}$ anticipates the liquidation requirement at maturity which plays a more and more dominating role after time $T/2$.
Again, the myopic benchmark strategy $$d\tilde{X}_t = \frac{\sigma}{\sqrt{\kappa}} (\xi_t - \tilde{X}_t) dt,
\quad 0 \leq t <T$$ is not taking into account the random jump at time $T/2$ and keeps on tracking the frictionless delta-hedge even milliseconds before $T/2$ (see Figure \[fig:ex3\]).
![ Frictionless hedge $\xi$ with a jump at $t=T/2$ (blue), corresponding unconstrained (orange, dashed) and constrained (green, dashed) targets $\hat{\xi}$ and $\hat{\xi}^{\Xi}$, respectively, as well as the corresponding frictional hedges $\hat{X}$ (orange line) and $\hat{X}^{\Xi}$ (green line). The myopic benchmark hedge $\tilde{X}$ is plotted in red. The moneyness is indicated by the light gray line.[]{data-label="fig:ex3"}](example3.pdf)
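A compact numerical sketch of this example is given below (ours; volatility, strike, cost parameter and the seed are illustrative, and scipy is assumed available for the normal cdf). It evaluates the frictionless delta $\xi$ and the unconstrained signal $\hat{\xi}$ along one simulated Bachelier path using the closed-form expressions above and exhibits the jump of $\xi$ at $T/2$ together with its smoothed counterpart.

```python
# Sketch (ours; illustrative parameters) of the discrete Asian example:
# frictionless delta xi and unconstrained signal hat_xi along one path.
import numpy as np
from scipy.stats import norm

T, sigma, S0, K, kappa, n = 1.0, 0.3, 1.0, 1.0, 0.01, 10_000
dt = T / n
t = np.arange(n) * dt
rng = np.random.default_rng(0)
S = S0 + sigma * np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n - 1))))
tau = (T - t) / np.sqrt(kappa)
i_half = n // 2                              # first grid index with t >= T/2

var1 = np.maximum(5 * T / 8 - t, 1e-12)      # guards only the unused branch entries
var2 = np.maximum(T - t, 1e-12)
xi = np.where(
    t <= T / 2,
    norm.cdf((S - K) / (sigma * np.sqrt(var1))),
    0.5 * norm.cdf((S[i_half] + S - 2 * K) / (sigma * np.sqrt(var2))),
)
# same argument as 2(S-K)/(sigma*sqrt(5T/2-4t)) in the displayed formula:
hat_xi = np.where(
    t < T / 2,
    norm.cdf((S - K) / (sigma * np.sqrt(var1)))
    * (1 - 0.5 * np.sinh(tau[i_half]) / np.sinh(tau)),
    xi,
)
# xi drops to roughly half its left limit at T/2, while hat_xi anticipated the drop:
print(xi[i_half], xi[i_half + 1], hat_xi[i_half - 1], hat_xi[i_half + 1])
```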
Proofs {#sec:proofs}
======
In order to prove our main Theorems \[thm:main1\] and \[thm:main2\] we use tools from convex analysis. Note that the performance functional $u \mapsto J(u)$ in (\[eq:objFun\]) is strictly convex. Given a control $u \in \cU$ recall the definition of the Gâteaux derivative of $J$ at $u$ in the direction of $w \in L^2(\PP \otimes dt)$: $$\langle J'(u), w \rangle \set \lim_{\rho \rightarrow 0} \frac{J(u + \rho w)-J(u)}{\rho}.$$ The following lemma provides an explicit expression for the Gâteaux derivative of our performance functional $J$:
\[lem:gateaux\] For $u \in \cU$ we have $$\langle J'(u), w \rangle =
\EE \left[ \int_0^T w_s \left( \kappa u_s +
\int_s^T (X^u_t - \xi_t) dt \right) ds \right]$$ for any $w \in L^2(\PP \otimes dt)$.
Let $\rho > 0$, $u \in \cU$ and $w \in L^2(\PP \otimes dt)$. Note that $X_t^{u+\rho w} = X^u_t + \rho \int_0^t w_s ds$. Then, we have $$\begin{aligned}
J(u + \rho w)-J(u) = & \, \rho \EE \left[ \int_0^T \kappa u_t w_t +
\left(\int_0^t w_s ds \right) (X^u_t - \xi_t) dt \right] \\
& + \rho^2 \EE \left[ \frac{\kappa}{2} \int_0^T w^2_t dt +
\frac{1}{2} \int_0^T \left(\int_0^t w_s ds \right)^2 dt \right].\end{aligned}$$ Hence, $$\langle J'(u), w \rangle =
\EE \left[ \int_0^T \kappa u_t w_t +
\left(\int_0^t w_s ds \right) (X^u_t - \xi_t) dt \right].$$ Note that due to Fubini’s Theorem we can write the second part of the above integral as $$\int_0^T \left(\int_0^t w_s ds \right) (X^u_t - \xi_t) dt =
\int_0^T \left( \int_s^T (X^u_t - \xi_t) dt \right) w_s ds$$ which finally yields the assertion.
Let us next derive necessary and sufficient first order conditions for problems (\[eq:optProb1\]) and (\[eq:optProb2\]).
\[lem:FOC\]
1. In the unconstrained problem (\[eq:optProb1\]), a control $\hat{u} \in \cU$ with $X\set X^{\hat{u}}$ minimizes the functional $J$ if and only if $X$ satisfies $$\label{eq:FBSDE1}
X_0 = x, \quad d\dot{X}_t = \frac{1}{\kappa} (X_t - \xi_t) dt + dM_t
\text{ for } 0 \leq t \leq T, \quad \dot{X}_T = 0,$$ for a suitable square integrable martingale $(M_t)_{0 \leq t \leq T}$.
2. In the constrained problem (\[eq:optProb2\]), a control $\hat{u} \in \cU^{\Xi}_x$ with $X\set X^{\hat{u}}$ minimizes the functional $J$ if and only if $X$ satisfies $$\label{eq:FBSDE2}
X_0 = x, \quad d\dot{X}_t = \frac{1}{\kappa} (X_t - \xi_t) dt + dM_t
\text{ for } 0 \leq t < T, \quad X_T = \Xi_T,$$ for a suitable square integrable martingale $(M_t)_{0 \leq t < T}$.
In other words, the first order conditions in (\[eq:FBSDE1\]) and (\[eq:FBSDE2\]) are taking the form of a coupled linear forward backward stochastic differential equation (FBSDE) for the pair $(X,u)$: $$\begin{aligned}
dX_t & = u_t dt, \\
du_t & = \frac{1}{\kappa} (X_t - \xi_t) dt + dM_t,\end{aligned}$$ with some square integrable martingale $M$ subject to $$X_0=x \text{ and }
\begin{cases}
u_T = 0 & \text{unconstrained case,}\\
X_T=\Xi_T &\text{constrained case.}
\end{cases}$$
*1.)* We start with the unconstrained problem (\[eq:optProb1\]). Since we are minimizing the strictly convex functional $u \mapsto J(u)$ over $\cU$, a necessary and sufficient condition for the optimality of $\hat{u} \in \cU$ with corresponding $X^{\hat{u}} = x + \int_0^{\cdot}
\hat{u}_s ds$ is given by $$\langle J'(\hat{u}), w \rangle = 0
\text{ for all } w \in \cU$$ (cf., e.g., @EkelTem:99). In view of Lemma \[lem:gateaux\] this means that $\hat{u} \in \cU$ is optimal if and only if $$\label{eq:FOC1}
\EE \left[ \int_0^T w_s \left( \kappa \hat{u}_s +
\int_s^T (X^{\hat{u}}_t - \xi_t) dt \right) ds \right] = 0$$ for all $w \in \cU$. We will now show that the first order condition in (\[eq:FOC1\]) is satisfied (i.e., $\hat{u} \in \cU$ is optimal) if and only if $X^{\hat{u}}$ satisfies the dynamics in (\[eq:FBSDE1\]).
*Necessity:* Assume that $\hat{u} \in \cU$ with $X^{\hat{u}} = x + \int_0^{\cdot} \hat{u}_s ds$ minimizes $J$, i.e., condition (\[eq:FOC1\]) is satisfied by $\hat{u}$. Then, by Fubini’s Theorem and optional projection, we also get that $$\EE \left[ \int_0^T w_s \left( \kappa \hat{u}_s +
\EE \left[\int_s^T (X^{\hat{u}}_t - \xi_t) dt \bigg\vert \cF_s\right]
\right) ds \right] = 0$$ for all $w \in \cU$. However, this is only possible if $$\label{eq:sol1FBSDE1}
\hat{u}_s = - \frac{1}{\kappa} \EE \left[ \int_s^T (X^{\hat{u}}_t -
\xi_t) dt \bigg\vert \cF_s \right]
\quad d\PP \otimes ds \text{-a.e. on } \Omega \times [0,T].$$ Hence, by defining the square integrable martingale $$\label{eq:martFBSDE1}
M_s \set \EE \left[ \int_0^T (X^{\hat{u}}_t - \xi_t) dt \bigg\vert
\cF_s \right], \quad 0 \leq s \leq T,$$ we obtain the representation $$\label{eq:sol2FBSDE1}
\hat{u}_s = -\frac{1}{\kappa} \left( M_s - \int_0^s (X^{\hat{u}}_t
- \xi_t) dt \right) \quad d\PP \otimes ds \text{-a.e. on } \Omega
\times [0,T],$$ in other words, $X^{\hat{u}}$ satisfies the dynamics in (\[eq:FBSDE1\]). In particular, $X^{\hat{u}}_0 = x$ and $\dot{X}_T^{\hat{u}} =\hat{u}_T = 0$ $\PP$-a.s.
*Sufficiency:* Assume now that $\hat{u} \in \cU$ with corresponding $X^{\hat{u}}$ satisfies the dynamics in (\[eq:FBSDE1\]) with $X^{\hat{u}}_0 = x$ and $\dot{X}_T^{\hat{u}} =
0$ $\PP$-a.s. Note that the unique strong solution to this linear FBSDE in (\[eq:FBSDE1\]) is indeed given by (\[eq:sol1FBSDE1\]) or, equivalently, by (\[eq:sol2FBSDE1\]). However, using this representation of $\hat{u}$ and applying Fubini’s Theorem yields $$\begin{aligned}
&\EE \left[ \int_0^T w_s \left( \kappa \hat{u}_s +
\int_s^T (X^{\hat{u}}_t - \xi_t) dt \right) ds \right]
= \EE \left[ \int_0^T w_s \left( M_T - M_s \right) ds \right] \\
&= \EE \left[ \int_0^T w_s \EE[ M_T - M_s \vert \cF_s] ds \right]
= \int_0^T \EE \left[ w_s \left( \EE[M_T \vert \cF_s] - M_s
\right) \right] ds
= 0 \end{aligned}$$ for all $w \in \cU$, since $M$ is a martingale. Consequently, the first order condition in (\[eq:FOC1\]) is satisfied and $\hat{u} \in
\cU$ is optimal.
*2.)* Similarly to the above, a necessary and sufficient condition for the optimality of $\hat{u}^{\Xi} \in \cU^{\Xi}_x$ with corresponding $X^{\hat{u}^{\Xi}} = x + \int_0^{\cdot} \hat{u}^{\Xi}_s ds$ satisfying $X^{\hat{u}^{\Xi}}_T = \Xi_T$ $\PP$-a.s. for the constrained problem (\[eq:optProb2\]) is given by $$\langle J'(\hat{u}^{\Xi}), w \rangle = 0
\text{ for all } w \in \cU^0_0.$$ In contrast to the unconstrained case, observe now that we have an additional constraint on $w$. Again, in view of Lemma \[lem:gateaux\], we get that $\hat{u}^{\Xi}\in \cU^{\Xi}_x$ is optimal if and only if $$\label{eq:FOC2}
\EE \left[ \int_0^T w_s \left( \kappa \hat{u}^{\Xi}_s +
\int_s^T (X^{\hat{u}^{\Xi}}_t - \xi_t) dt \right) ds \right] = 0
\text{ for all } w \in \cU^0_0.$$
We will now show that the first order condition in (\[eq:FOC2\]) is fulfilled (i.e., $\hat{u}^{\Xi} \in \cU^{\Xi}_x$ is optimal) if and only if $X^{\hat{u}^{\Xi}}$ satisfies the dynamics in (\[eq:FBSDE2\]).
*Sufficiency:* Assume that $\hat{u}^{\Xi} \in \cU^{\Xi}_x$ with corresponding $X^{\hat{u}^{\Xi}}$ satisfies the dynamics in (\[eq:FBSDE2\]) with $X_0^{\hat{u}^{\Xi}} = x$ and $X_T^{\hat{u}^{\Xi}} = \Xi_T$ $\PP$-a.s. That is, we have the representation $$\hat{u}^{\Xi}_t = \hat{u}^{\Xi}_0 + \frac{1}{\kappa} \int_0^t
(X^{\hat{u}^{\Xi}}_s - \xi_s) ds + M_t \quad d\PP \otimes dt
\text{-a.e. on } \Omega \times [0,T)$$ for some square integrable martingale $(M_t)_{0 \leq t < T}$. From $\hat{u}^{\Xi},\xi \in L^2(\PP \otimes dt)$, it follows that $\EE[\int_0^T M_s^2 ds] < \infty$. Defining the square integrable martingale $$N^{\Xi}_s \set \EE \left[ \int_0^T (X^{ \hat{u}^{\Xi}}_t - \xi_t)
dt \bigg\vert \cF_s \right], \quad 0 \leq s \leq T,$$ the above representation of $\hat{u}^{\Xi}$ yields $$\begin{aligned}
&\EE \left[ \int_0^T w_s \left( \kappa \hat{u}^{\Xi}_s +
\int_s^T (X^{\hat{u}^{\Xi}}_t - \xi_t) dt \right) ds \right] \\
& = \EE \left[ \int_0^T w_s \left( \kappa \hat{u}^{\Xi}_0 +
N^{\Xi}_T + \kappa M_s \right) ds \right] \\
& = \EE \left[ ( \kappa \hat{u}^{\Xi}_0 +
N^{\Xi}_T) \int_0^T w_s ds \right] + \kappa \EE \left[ \int_0^T
w_s M_s ds \right] \\
& = 0 \text{ for all } w \in \cU^0_0\end{aligned}$$ by virtue of Lemma \[lem:aux\] below. Consequently, the first order condition in (\[eq:FOC2\]) is satisfied and $\hat{u}^{\Xi} \in
\cU^{\Xi}_x$ is optimal.
*Necessity:* As shown in the proof of Theorem \[thm:main2\] below (which does *not* use the necessity assertion of the present Lemma), the optimal control $\hat{u}^\Xi$ in satisfies the dynamics in . Moreover, by strict convexity of the objective functional in , the solution to problem is unique. Therefore, the assertion is indeed necessary.
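Before turning to the auxiliary lemma, the first order condition of part *1.)* can be checked numerically in a simple time-discretized setting. The following sketch is purely illustrative: the target is taken deterministic, and the values of `N`, `T`, `kappa` and the choice of $\xi$ are assumptions rather than quantities from the model above. It minimizes the discretized cost directly and verifies the discrete analogue of the representation in (\[eq:sol1FBSDE1\]) together with the terminal condition $\hat{u}_T = 0$.

```python
import numpy as np

# Discretized, deterministic toy check of the first order condition; all parameter
# values and the target xi are illustrative assumptions.
N, T, kappa, x0 = 400, 1.0, 0.5, 0.0
dt = T / N
t = np.arange(N) * dt
xi = np.sin(2 * np.pi * t)

# X_k = x0 + dt * sum_{j<k} u_j, i.e. X = x0 + dt * L @ u with L strictly lower triangular
L = np.tril(np.ones((N, N)), k=-1)

# J(u) = 0.5*dt*||x0 + dt*L@u - xi||^2 + 0.5*kappa*dt*||u||^2 is quadratic in u;
# its stationarity condition reads (kappa*I + dt^2 * L.T@L) u = dt * L.T @ (xi - x0).
u_hat = np.linalg.solve(kappa * np.eye(N) + dt**2 * L.T @ L, dt * L.T @ (xi - x0))
X_hat = x0 + dt * L @ u_hat

# Discrete analogue of kappa*u_s + int_s^T (X_t - xi_t) dt = 0 (cf. eq:sol1FBSDE1)
residual = kappa * u_hat + dt * L.T @ (X_hat - xi)
print("max |first order residual|:", np.abs(residual).max())   # ~ 0 up to round-off
print("terminal rate u_T         :", u_hat[-1])                # = 0, as required
```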
The following technical Lemma is needed in the proof of Lemma \[lem:FOC\] for the constrained problem .
\[lem:aux\] Let $M$ be an adapted càdlàg process on $[0,T)$ with $\EE[\int_0^T
M_s^2 ds] < \infty$. Then, $$\label{eq:lemaux}
\EE\left[\int_0^T w_s M_s ds\right] = 0 \text{ for all }
w \in \cU^0_0$$ if and only if $M$ is a square integrable martingale on $[0,T)$.
First, assume that $M$ is a square integrable martingale on $[0,T)$ with $\EE[\int_0^T M_s^2 ds] < \infty$. Consider a $w \in \cU^0_0$ such that $w = 0$ on $\Omega \times [T-\epsilon,T]$ for some $\epsilon > 0$. Then, by applying Fubini’s Theorem we have $$\EE \left[ \int_0^T w_s M_s ds \right] = \EE \left[
\int_0^{T-\epsilon} w_s \EE[M_{T-\epsilon} \vert \cF_s] ds \right]
= \EE \left[ M_{T-\epsilon} \int_0^T w_s ds \right] = 0.$$ Now, let $w \in \cU^0_0$ be arbitrary and consider an approximating sequence $(w^{(n)})_{n \geq 1} \subset \cU^0_0$ with $w^{(n)} = 0$ on $\Omega \times [T-\epsilon_n,T]$ for some $\epsilon_n \downarrow 0$ such that $w^{(n)} \rightarrow w$ in $L^2(\Omega \times [0,T],\PP
\otimes dt)$ for $n \rightarrow \infty$. Then, by the Cauchy-Schwarz inequality we obtain $$\lim_{n \rightarrow \infty} \EE \left[ \int_0^T \vert (w^{(n)}_s - w_s) M_s \vert ds\right] = 0.$$ Consequently, $$\EE \left[ \int_0^T w_s M_s ds \right]
= \lim_{n \rightarrow \infty} \EE \left[ \int_0^T w^{(n)}_s M_s ds \right] = 0,$$ where the last identity follows from our initial consideration for $w$s vanishing on $[T-\epsilon,T]$, $\epsilon>0$. Hence, the condition in (\[eq:lemaux\]) is satisfied.
Conversely, assume now that the condition in (\[eq:lemaux\]) is satisfied. We have to show that $M$ is a square integrable martingale on $[0,T)$. Let $0 \leq t < u < T$, $A \in \cF_t$, be arbitrary. For any $\epsilon > 0$ such that $t+\epsilon,u+\epsilon < T$ we define $$w^{\epsilon}_s(\omega) \set 1_A(\omega) \frac{1}{\epsilon} \left( 1_{[t,t+\epsilon]}(s) -
1_{[u,u+\epsilon]}(s) \right) \text{ on } \Omega \times [0,T].$$ Obviously, $w^{\epsilon}$ is progressively measurable, in $L^2(\PP \otimes ds)$ and satisfies $\int_0^T w^{\epsilon}_s ds = 0$ $\PP$-a.s. Hence, by assumption (\[eq:lemaux\]) we have $$\begin{aligned}
0 & = \EE\left [\int_0^T w^\epsilon_s M_s ds \right] = \EE \left[
1_A \frac{1}{\epsilon} \int_t^{t+\epsilon} M_s ds \right] - \EE
\left[ 1_A \frac{1}{\epsilon} \int_{u}^{u+\epsilon} M_s ds
\right]. \end{aligned}$$ Passing to the limit $\epsilon \downarrow 0$, we obtain by right-continuity of $M$, $$\begin{aligned}
0 = \EE \left[ 1_A (M_t - M_u) \right] \text{ for all } 0 \leq t <
u < T.\end{aligned}$$ Consequently, $M$ is a martingale on $[0,T)$. By assumption, we have that $\EE[\int_0^T M_s^2 ds] < \infty$ which implies that $M$ is square integrable on $[0,T)$.
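The role of the test functions $w^{\epsilon}$ can also be illustrated by a small Monte Carlo experiment. Here $M$ is taken to be a Brownian motion, and the event $A$, the times $t,u$ and all grid parameters are illustrative assumptions.

```python
import numpy as np

# Monte Carlo illustration: for a martingale M and A in F_t, the averaged increments
# picked out by w^eps satisfy E[ 1_A ( mean of M on [t,t+eps] - mean on [u,u+eps] ) ] = 0.
rng = np.random.default_rng(1)
T, N, n_paths = 1.0, 400, 20000
dt = T / N
grid = np.arange(1, N + 1) * dt
M = np.cumsum(rng.normal(scale=np.sqrt(dt), size=(n_paths, N)), axis=1)  # Brownian paths

t, u, eps = 0.3, 0.6, 0.05
A = M[:, int(0.2 / dt) - 1] > 0.0                                        # an event in F_t
window_mean = lambda a: M[:, (grid >= a) & (grid < a + eps)].mean(axis=1)
print(np.mean(A * (window_mean(t) - window_mean(u))))    # ~ 0 within Monte Carlo error
```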
Now, we are ready to prove our main result by simple verification. We start with Theorem \[thm:main1\] for the unconstrained problem .
**Proof of Theorem \[thm:main1\].** We divide the proof into two parts. First, we prove optimality of the solution given in . Then, we compute the corresponding minimal costs given in .
*Optimality of :* In order to show that our candidate in is the optimal solution for problem , we need to check the first order condition in Lemma \[lem:FOC\] 1.). For this, define the processes $$Y_t \set \int_0^t \xi_s \cosh(\tau^\kappa(s)) ds \text{ and }
\tilde{M}_t \set \EE [Y_T \vert \cF_t], \quad 0 \leq t \leq T.$$ Since $Y_T \in L^2(\PP)$, we have that $(\tilde{M}_t)_{0 \leq t \leq
T}$ is a square integrable martingale. Moreover, note that $Y,\tilde{M} \in L^2(\PP \otimes dt)$. Hence, the process $\hat{\xi}$ in Theorem \[thm:main1\] can be written as $$\label{eq:xi1}
\hat{\xi}_t = \frac{1}{\sqrt{\kappa} \sinh(\tau^\kappa(t))} \left(
\tilde{M}_t - Y_t \right) \quad d\PP \otimes dt\text{-a.e. on }
\Omega \times [0,T)$$ with corresponding dynamics $$\begin{aligned}
\label{eq:dynxi1}
d\hat{\xi}_t = -\frac{\coth(\tau^\kappa(t))}{\sqrt{\kappa}} ( \xi_t
- \hat{\xi}_t ) dt + \frac{1}{\sqrt{\kappa} \sinh(\tau^\kappa(t))}
d\tilde{M}_t \quad \text{on } [0,T).\end{aligned}$$ Due to Lemma \[lem:est\] b), we know that $\hat{\xi} \in L^2(\PP
\otimes dt)$. Now, the density of the solution from satisfies $$\begin{aligned}
d\hat{u}_t = & - \frac{1}{\kappa} (1-\tanh(\tau^\kappa(t))^2) \left(
\hat{\xi}_t - \hat{X}_t \right) dt +
\frac{1}{\sqrt{\kappa}} \tanh(\tau^\kappa(t)) \left(
d\hat{\xi}_t - d\hat{X}_t \right) \\
= & \frac{1}{\kappa} \left( \left( \hat{X}_t -\xi_t \right) dt +
\frac{1}{\cosh(\tau^\kappa(t))} d\tilde{M}_t \right) \quad d\PP
\otimes dt\text{-a.e. on } \Omega \times [0,T],\end{aligned}$$ that is, $\hat{u}$ satisfies the BSDE-dynamics in (\[eq:FBSDE1\]). Obviously, it holds that $\hat{X}_0 = x$. Solving equation (\[eq:SDEX1\]) for $\hat{X}$ yields upon differentiation $$\begin{aligned}
\hat{u}_t = &- \frac{1}{\sqrt{\kappa}}
\frac{\sinh(\tau^\kappa(t))}{\cosh(\tau^\kappa(0))} x
\nonumber \\
& - \frac{1}{\kappa} \sinh(\tau^\kappa(t)) \int_0^t \hat{\xi}_s
\frac{\sinh(\tau^\kappa(s))}{\cosh(\tau^\kappa(s))^2} ds +
\frac{1}{\kappa} \frac{\tilde{M}_t -
Y_t}{\cosh(\tau^\kappa(t))} \label{eq:u1}\end{aligned}$$ and we observe that $\lim_{t \uparrow T} \hat{u}_t = 0$ $\PP$-a.s., i.e., the terminal condition in (\[eq:FBSDE1\]) is indeed satisfied. It remains to show that $\hat{u} \in L^2(\PP \otimes
dt)$. Since $\tilde{M}, Y \in L^2(\PP \otimes dt)$, it suffices to observe that $\sinh(\tau^\kappa(s))/\cosh(\tau^\kappa(s))^2$ is bounded and therefore $$\begin{aligned}
\EE \left[ \int_0^T \left( \int_0^t \hat{\xi}_s
\frac{\sinh(\tau^\kappa(s))}{\cosh(\tau^\kappa(s))^2} ds \right)^2
dt \right]
& \leq \text{const} \, \EE \left[ \int_0^T \left( \int_0^t \vert
\hat{\xi}_s \vert ds \right)^2 dt
\right] \\
& \leq \text{const} \, \frac{T^2}{2} \norm{\hat{\xi}}_{L^2(\PP \otimes
dt)}^2 < \infty.\end{aligned}$$
*Computation of minimal costs:* To compute the minimal costs (\[eq:cost1\]) associated with the optimal control $\hat{u}$, note first that $\hat{u} \in
L^2(\PP\otimes dt)$ implies $\hat{X} \in L^2(\PP \otimes dt)$ and thus $J(\hat{u}) < \infty$. For ease of presentation, we define $$c(t) \set \sqrt{\kappa} \tanh(\tau^\kappa(t)) , \quad 0 \leq t \leq T,$$ so that $\hat{u}_t = c(t)(\hat{\xi}_t - \hat{X}_t)/\kappa$. Hence, the minimal costs can be written as $$\begin{aligned}
\infty > J(\hat{u}) = &~\EE \left[ \frac{1}{2} \int_0^T (\hat{X}_s - \xi_s)^2 ds
+ \frac{1}{2} \kappa \int_0^T \hat{u}^2_s ds \right] \nonumber \\
= & \lim_{t \uparrow T} \left\{ \frac{1}{2} \EE\left[ \int_0^t \hat{X}^2_s ds \right]
- \EE\left[ \int_0^t \hat{X}_s \xi_s ds \right] +
\frac{1}{2} \EE\left[ \int_0^t \xi^2_s ds \right] \right. \nonumber
\\
& \hspace{30pt} + \frac{1}{2\kappa} \EE\left[ \int_0^t c(s)^2
\hat{\xi}^2_s ds \right] - \frac{1}{\kappa} \EE\left[ \int_0^t
c(s)^2 \hat{X}_s \hat{\xi}_s ds \right] \nonumber \\
& \left. \hspace{30pt} + \frac{1}{2\kappa} \EE\left[ \int_0^t
c(s)^2 \hat{X}^2_s ds \right] \right\},
\label{eq1:p:cost1}\end{aligned}$$ due to monotone convergence. Observe that, using integration by parts and the dynamics of $\hat{\xi}$ from , we have, for all $t < T$, $$\begin{aligned}
\EE[c(t) \hat{X}^2_t] = & c(0) x^2
+ \frac{2}{\kappa} \EE\left[ \int_0^t c(s)^2 \hat{X}_s \hat{\xi}_s ds \right] \nonumber \\
& - \frac{1}{\kappa} \EE\left[ \int_0^t c(s)^2 \hat{X}^2_s ds
\right] - \EE\left[ \int_0^t \hat{X}^2_s ds \right]\end{aligned}$$ as well as $$\begin{aligned}
\EE[c(t) \hat{X}_t \hat{\xi}_t] = & c(0) \hat{\xi}_0 x
+ \frac{1}{\kappa} \EE\left[ \int_0^t c(s)^2 \hat{\xi}^2_s ds \right]
- \EE\left[ \int_0^t \hat{X}_s \xi_s ds \right]\end{aligned}$$ and $$\begin{aligned}
\EE[c(t) \hat{\xi}^2_t] = & c(0) \hat{\xi}^2_0
+ \frac{1}{\kappa} \EE\left[ \int_0^t c(s)^2 \hat{\xi}^2_s ds \right]
- 2 \EE\left[ \int_0^t \hat{\xi}_s \xi_s ds \right] \nonumber \\
& + \EE\left[ \int_0^t \hat{\xi}^2_s ds \right] + \EE\left[
\int_0^t c(s) d\langle \hat{\xi} \rangle_s \right]. \end{aligned}$$ Using these identities, we can write as $$\begin{aligned}
\infty > J(\hat{u}) = & \lim_{t \uparrow T} \left\{ \frac{1}{2} c(0) (x - \hat{\xi}_0)^2
+ \frac{1}{2} \EE \left[ \int_0^t (\hat{\xi}_s - \xi_s)^2 ds
\right] \right. \nonumber \\
& \left. \hspace{30pt} + \frac{1}{2} \EE \left[ \int_0^t c(s)
d\langle \hat{\xi} \rangle_s \right]
- \frac{1}{2} c(t) \EE[ (\hat{X}_t - \hat{\xi}_t)^2]
\right\}. \label{eq5:p:cost1}\end{aligned}$$ To conclude our assertion for the minimal costs in $\eqref{eq:cost1}$, observe that $$\EE[ (\hat{X}_{t} - \hat{\xi}_{t})^2] \leq
2 \left( \EE[ \hat{X}_{t}^2] + \EE [\hat{\xi}_{t}^2] \right),$$ and let us argue why $$\label{eq6:p:cost1}
\lim_{t \uparrow T} c(t) \EE[ \hat{X}_{t}^2] = 0
\quad \text{and} \quad \lim_{t \uparrow T} c(t) \EE
[\hat{\xi}_{t}^2] = 0.$$ By Jensen’s inequality, we have $$\EE[\hat{X}^2_t] \leq t \EE \left[ \int_0^t \hat{u}^2_s ds \right]
\leq T \, \EE \left[ \int_0^T \hat{u}^2_s ds \right] < \infty.$$ Hence, due to $\lim_{t\uparrow T} c(t) = 0$, the first convergence in holds true. Concerning the second convergence in , we use the representation in for $\hat{\xi}$ to obtain, again with Jensen’s inequality as well as the Cauchy-Schwarz inequality, $$\begin{aligned}
0 \leq c(t) \EE[\hat{\xi}^2_t] & = \frac{c(t)}{\kappa
\sinh(\tau^\kappa(t))^2} \EE[(\tilde{M}_t-Y_t)^2] \\
& \leq \frac{c(t)}{\kappa
\sinh(\tau^\kappa(t))^2} \EE[(Y_T-Y_t)^2] \\
& = \frac{c(t)}{\kappa
\sinh(\tau^\kappa(t))^2} \EE\left[ \left( \int_t^T \xi_s
\cosh(\tau^\kappa(s))ds \right)^2 \right] \\
& \leq \frac{\cosh(\tau^\kappa(0))^2}{\sqrt{\kappa}
\cosh(\tau^\kappa(t))} \frac{1}{\sinh(\tau^\kappa(t))}
(T-t) \EE \left[ \int_t^T \xi^2_s ds \right] \\
& \leq \frac{\cosh(\tau^\kappa(0))^2}{\cosh(\tau^\kappa(t))} \EE
\left[ \int_t^T \xi^2_s ds \right] \underset{t \uparrow
T}{\longrightarrow} 0, \end{aligned}$$ where for the last inequality we used that $\sinh(\tau) \geq \tau$ for all $\tau \geq 0$. In other words, also the second convergence in holds true. This finishes our proof of the representation of the minimal costs in .
Next, we come to the proof of Theorem \[thm:main2\] concerning the constrained problem .
**Proof of Theorem \[thm:main2\].** Again, we will proceed in two steps. First, we prove optimality of the solution given in . Then, we compute the corresponding minimal costs given in .
*Optimality of :* The verification of the optimality of $\hat{X}^{\Xi} = x + \int_0^{\cdot} \hat{u}^{\Xi}_t dt$ in Theorem \[thm:main2\] for the constrained problem (\[eq:optProb2\]) follows along the same lines as in the unconstrained case. Again, we have to check the first order condition in Lemma \[lem:FOC\] 2.). For this, we define the processes
$$Y_t \set \frac{1}{\sqrt{\kappa}} \int_0^t \xi_s
\sinh(\tau^\kappa(s)) ds \quad \text{and} \quad \tilde{M}^{\Xi}_t
\set \EE [ Y_T + \Xi_T \vert \cF_t]$$
for all $0 \leq t \leq T$. Since $Y_T, \Xi_T \in L^2(\PP)$, we have that $(\tilde{M}^{\Xi}_t)_{0 \leq t \leq T}$ is a square integrable martingale. Moreover, note that $Y,\tilde{M}^\Xi \in L^2(\PP \otimes
dt)$. Hence, the process $\hat{\xi}^{\Xi}$ in Theorem \[thm:main2\] can be written as $$\label{eq:xicon}
\hat{\xi}^{\Xi}_t = \frac{1}{\cosh(\tau^\kappa(t))} \left(
\tilde{M}^{\Xi}_t - Y_t \right) \quad d\PP \otimes
dt\text{-a.e. on } \Omega \times [0,T]$$ with corresponding dynamics $$\begin{aligned}
\label{eq:dynxi2}
d\hat{\xi}^\Xi_t = -\frac{\tanh(\tau^\kappa(t))}{\sqrt{\kappa}} (
\xi_t - \hat{\xi}^\Xi_t ) dt + \frac{1}{\cosh(\tau^\kappa(t))}
d\tilde{M}^{\Xi}_t \quad \text{on } [0,T].\end{aligned}$$ In particular, we observe that $\hat{\xi}^\Xi \in L^2(\PP \otimes
dt)$. Similar to the unconstrained case above, one easily checks that $$d\hat{u}^{\Xi}_t = \frac{1}{\kappa} (\hat{X}^{\Xi}_t -
\xi_t) dt + \frac{1}{\sqrt{\kappa}}
\frac{1}{\sinh(\tau^\kappa(t))} d\tilde{M}^{\Xi}_t \quad d\PP
\otimes dt\text{-a.e. on } \Omega \times [0,T),$$ that is, $\hat{u}^{\Xi}$ satisfies the dynamics in (\[eq:FBSDE2\]). Obviously, it holds that $\hat{X}^{\Xi}_0 = x$.
Next, we have to check the terminal condition in (\[eq:FBSDE2\]), that is, $\lim_{t \uparrow T} \hat{X}^{\Xi}_t = \Xi_T$ $\PP$-a.s. In order to show this, first note that we can consider a càdlàg version of $(\hat{\xi}_t^{\Xi})_{0 \leq t \leq T}$ due to its representation in (\[eq:xicon\]). Hence, since $\Xi_T$ is $\cF_{T-}$-measurable by assumption we obtain the $\PP$-a.s. limit $$\lim_{t \uparrow T} \hat{\xi}^{\Xi}_t = \EE[ \Xi_T \vert \cF_{T-}] =
\Xi_T$$ in (\[eq:xicon\]). In other words, for every $\epsilon > 0$ there exists a random time $\Upsilon_{\epsilon} \in [0, T)$ such that $\PP$-a.s. $$\Xi_T - \epsilon \leq \hat{\xi}^{\Xi}_t \leq \Xi_T + \epsilon \quad
\text{for all } t \in [\Upsilon_{\epsilon},T].$$ For $\lim_{t \uparrow T} \hat{X}^{\Xi}_t = \Xi_T$ $\PP$-a.s., it clearly suffices to show that for any $\epsilon > 0$ it holds that $$\limsup_{t \uparrow T} \hat{X}^{\Xi}_t \leq \Xi_T + \epsilon \quad
\text{and} \quad
\liminf_{t \uparrow T} \hat{X}^{\Xi}_t \geq \Xi_T - \epsilon \quad
\PP\text{-a.s.}$$ Define $X_t^{\epsilon} \set \Xi_T + \epsilon - \hat{X}^{\Xi}_t$ so that $\hat{\xi}^{\Xi}_t - \hat{X}^{\Xi}_t \leq X_t^{\epsilon}$ $\PP$-a.s. for $t \in [\Upsilon_{\epsilon},T)$. This yields $$\begin{aligned}
dX^{\epsilon}_t = & -d\hat{X}^{\Xi}_t = -\frac{1}{\sqrt{\kappa}}
\coth(\tau^\kappa(t)) (\hat{\xi}^{\Xi}_t -
\hat{X}^{\Xi}_t ) dt \\
\geq & -\frac{1}{\sqrt{\kappa}} \coth(\tau^\kappa(t)) X^{\epsilon}_t
dt. \end{aligned}$$ Moreover, note that for all $\omega \in \Omega$ the linear ODE on $[\Upsilon_{\epsilon}(\omega),T)$ given by $$dZ_t = -\frac{1}{\sqrt{\kappa}} \coth(\tau^\kappa(t)) Z_t dt, \quad
Z_{\Upsilon_{\epsilon}(\omega)} =
X^{\epsilon}_{\Upsilon_{\epsilon}(\omega)}(\omega),$$ admits the solution $$Z_t =
X^{\epsilon}_{\Upsilon_{\epsilon}}\exp\left(-\frac{1}{\sqrt{\kappa}}
\int_{\Upsilon_{\epsilon}}^t \coth(\tau^\kappa(s)) ds \right) =
X^{\epsilon}_{\Upsilon_{\epsilon}}
\frac{\sinh(\tau^{\kappa}(t))}{\sinh(\tau^{\kappa}(\Upsilon_{\epsilon}))},
\quad t < T,$$ with $\lim_{t \uparrow T} Z_t = 0$. By the comparison principle for ODEs, we get $\PP$-a.s. $X^{\epsilon}_t \geq Z_t$ for all $t \in
[\Upsilon_{\epsilon},T)$. Hence, $$\liminf_{t \uparrow T} X^{\epsilon}_t \geq \lim_{t \uparrow T} Z_t =
0 \quad \PP\text{-a.s.},$$ that is, $\limsup_{t \uparrow T} \hat{X}^{\Xi}_t \leq \Xi_T +
\epsilon$ $\PP$-a.s. Similarly, define $\tilde{X}_t^{\epsilon} \set
\Xi_T - \epsilon - \hat{X}^{\Xi}_t$ and observe as above that $\PP$-a.s. on $[\Upsilon_{\epsilon},T)$ we have $$d\tilde{X}^{\epsilon}_t \leq -\frac{1}{\sqrt{\kappa}}
\coth(\tau^\kappa(t)) \tilde{X}^{\epsilon}_t dt.$$ Again, as above by comparison principle we obtain $$\limsup_{t \uparrow T} \tilde{X}^{\epsilon}_t \leq 0 \quad \PP\text{-a.s.},$$ i.e., $\liminf_{t \uparrow T} \hat{X}^{\Xi}_t \geq \Xi_T - \epsilon$ $\PP$-a.s. as remained to be shown for .
Finally, we have to argue that $\hat{u}^\Xi \in L^2(\PP \otimes
dt)$. For this, we may assume without loss of generality that $x=0$. Moreover, let us denote $\hat{u}^{\Xi,\xi} \set \hat{u}^\Xi$, $\hat{X}^{\Xi,\xi} \set \hat{X}^\Xi$ and $\hat{\xi}^{\Xi,\xi} \set
\hat{\xi}^{\Xi}$ to emphasize also the dependence on the given target process $\xi$. With this notation it holds that $$\hat{u}^{\Xi,\xi} = \hat{u}^{\Xi,0} + \hat{u}^{0,\xi}.$$ Hence, we have to show that $\hat{u}^{\Xi,0} \in L^2(\PP \otimes dt)$ and $\hat{u}^{0,\xi} \in L^2(\PP \otimes dt)$.
Concerning $\hat{u}^{\Xi,0}$, observe that, using $\hat{\xi}^{\Xi,0}_t
= \Xi_t/\cosh(\tau^\kappa(t))$ with $\Xi_t \set \EE[\Xi_T \vert
\cF_t]$, $0 \leq t \leq T$, as well as the explicit solution $\hat{X}^{\Xi,0}_t$ for the ODE in , we obtain $$\begin{aligned}
\hat{u}^{\Xi,0}_t = &
\frac{\coth(\tau^\kappa(t))}{\sqrt{\kappa}}
\left(\hat{\xi}_t^{\Xi,0}-\hat{X}^{\Xi,0}_t \right) \nonumber \\
= & \frac{\coth(\tau^\kappa(t))}{\sqrt{\kappa}} \left( e^{-\int_0^t
\frac{\coth(\tau^\kappa(u))}{\sqrt{\kappa}}du}
\hat{\xi}^{\Xi,0}_0 + \right. \nonumber \\
& \hspace{68pt} \left. e^{-\int_0^t
\frac{\coth(\tau^\kappa(u))}{\sqrt{\kappa}}du} \int_0^t e^{\int_0^s
\frac{\coth(\tau^\kappa(u))}{\sqrt{\kappa}}du}
d\hat{\xi}^{\Xi,0}_s \right) \nonumber \\
= & \frac{\cosh(\tau^\kappa(t))}{\sqrt{\kappa}\sinh(\tau^\kappa(0))}
\hat{\xi}^{\Xi,0}_0 +\frac{\cosh(\tau^\kappa(t))}{\kappa}
\int_0^t \frac{\Xi_s}{\cosh(\tau^\kappa(s))^2} ds \nonumber \\
& + \frac{\cosh(\tau^\kappa(t))}{\sqrt{\kappa}} \int_0^t
\frac{2}{\sinh(2 \tau^\kappa(s))} d\Xi_s,
\label{eq:p:ccp1}\end{aligned}$$ where we used integration by parts in the second line. Obviously, the first two terms in belong to $L^2(\PP \otimes
dt)$. The third term is in $L^2(\PP \otimes dt)$ since, using Fubini’s Theorem as well as $\sinh(\tau) \geq \tau$ for all $\tau \geq 0$, we get $$\begin{aligned}
& \EE \left[ \int_0^T \left( \int_0^t \frac{2 d\Xi_s }{\sinh(2 \tau^\kappa(s))} \right)^2 dt \right]
= \EE \left[ \int_0^T \int_0^t \left( \frac{2}{\sinh(2 \tau^\kappa(s))} \right)^2
d\langle \Xi \rangle_s dt \right] \\
& = \EE \left[ \int_0^T (T-s) \left( \frac{2}{\sinh(2 \tau^\kappa(s))} \right)^2
d\langle \Xi \rangle_s \right]
\leq \EE \left[ \int_0^T \frac{\kappa}{T-s} d\langle \Xi \rangle_s \right] \\
& = \kappa \int_0^T \frac{d\EE[\Xi^2_s]}{T-s} < \infty
\end{aligned}$$ by assumption .
Concerning $\hat{u}^{0,\xi}$, we use the explicit expressions for $\hat{\xi}^{0,\xi}_t$ and $\hat{X}^{0,\xi}_t$ to obtain in that $$\begin{aligned}
\hat{u}^{0,\xi}_t =
& \frac{\coth(\tau^\kappa(t))}{\sqrt{\kappa}}
\left(\hat{\xi}_t^{0,\xi}-\hat{X}^{0,\xi}_t
\right) \nonumber \\
= & \frac{\cosh(\tau^\kappa(t))-1}{\sqrt{\kappa}\sinh(\tau^\kappa(t))}
\EE\left[ \int_t^T \xi_u K^{\Xi}(t,u) du \Big\vert \cF_t \right] \nonumber \\
& - \frac{\cosh(\tau^\kappa(t))}{\kappa} \int_0^t \frac{\cosh(\tau^\kappa(s))-1}{\sinh(\tau^\kappa(s))^2}
\EE\left[ \int_s^T\xi_u K^{\Xi}(s,u) du \Big\vert \cF_s\right] ds. \label{eq:p:ccp2}\end{aligned}$$ Note that all the ratios in involving the functions $\cosh(\cdot)$ and $\sinh(\cdot)$ are actually bounded on $[0,T]$. Moreover, we have by Lemma \[lem:est\] c) below that $$\EE\left[ \int_t^T \xi_u K^{\Xi}(t,u) du \Big\vert \cF_t \right]
\in L^2(\PP \otimes dt),$$ as well as, using Jensen’s inequality, $$\begin{aligned}
& \EE\left[ \int_0^T \left( \int_0^t \EE\left[ \int_s^T\xi_u
K^{\Xi}(s,u) du \Big\vert \cF_s\right] ds \right)^2 dt \right] \\
& \leq \frac{T^2}{2} \EE\left[ \int_0^T \left( \EE\left[ \int_s^T\xi_u
K^{\Xi}(s,u) du \Big\vert \cF_s\right] \right)^2 ds \right]
< \infty.\end{aligned}$$ Together, this shows $\hat{u}^{\Xi} \in L^2(\PP \otimes dt)$ as desired.
*Computation of minimal costs:* Now, we compute the minimal costs associated with the optimal control $\hat{u}^\Xi$ given in . We will follow along the same lines as in the unconstrained case above. First of all, note that $\hat{u}^\Xi \in L^2(\PP\otimes dt)$ implies $\hat{X}^\Xi \in L^2(\PP\otimes dt)$ and hence $J(\hat{u}^\Xi) < \infty$. For ease of presentation, we define $$c(t) \set \sqrt{\kappa} \coth(\tau^\kappa(t)), \quad 0 \leq t < T,$$ i.e., $\hat{u}^\Xi_t=c(t)(\hat{\xi}^\Xi_t - \hat{X}^\Xi_t)/\kappa $. Analogously to the unconstrained case above, we can write $J(\hat{u}^\Xi)$ as $$\begin{aligned}
\infty > J(\hat{u}^\Xi) =
& \lim_{t \uparrow T} \left\{ \frac{1}{2} c(0) (x - \hat{\xi}^\Xi_0)^2
+ \frac{1}{2} \EE \left[ \int_0^t (\hat{\xi}^\Xi_s - \xi_s)^2 ds \right] \right. \nonumber \\
& \left. + \frac{1}{2} \EE \left[ \int_0^t c(s) d\langle \hat{\xi}^\Xi \rangle_s \right]
- \frac{1}{2} c(t) \EE[ (\hat{X}^\Xi_t - \hat{\xi}^\Xi_t)^2] \right\}. \label{eq1:p:cost2}\end{aligned}$$ To conclude our assertion for the minimal costs in $\eqref{eq:cost2}$, observe that $$\EE[ (\hat{X}^\Xi_{t} - \hat{\xi}^\Xi_{t})^2] \leq 2 \left( \EE[
(\hat{X}_{t}^\Xi - \Xi_t )^2] + \EE [(\Xi_t - \hat{\xi}^\Xi_{t})^2]
\right),$$ where $\Xi_t \set \EE[\Xi_T \vert \cF_t]$, $0 \leq t \leq T$, and let us argue why $$\label{eq2:p:cost2} \lim_{t \uparrow T} c(t) \EE[
(\hat{X}_{t}^\Xi - \Xi_t )^2] = 0 \quad \text{and} \quad \lim_{t
\uparrow T} c(t) \EE [(\Xi_t - \hat{\xi}^\Xi_{t})^2] = 0.$$
Concerning the first convergence in , Jensen’s inequality, monotonicity of the function $\cosh(\cdot)$ as well as the estimate $\sinh(\tau) \geq \tau$ for all $\tau \geq 0$ yield $$\begin{aligned}
c(t) \EE[ (\hat{X}_{t}^\Xi - \Xi_t )^2]
& \leq c(t) \EE[ (\hat{X}_{t}^\Xi - \hat{X}^\Xi_T )^2] \nonumber \\
& \leq \frac{\kappa \cosh(\tau^\kappa(0))}{T-t} \EE \left[ \left( \int_t^T \hat{u}^\Xi_s ds \right)^2 \right]
\nonumber \\
& \leq \kappa \cosh(\tau^\kappa(0)) \EE \left[ \int_t^T (\hat{u}_s^\Xi)^2 ds \right]
\underset{t \uparrow T}{\longrightarrow} 0, \label{eq3:p:cost2}\end{aligned}$$ since $\Xi_T = \hat{X}^\Xi_T$ and $\hat{u}^\Xi \in L^2(\PP \otimes dt)$.
Concerning the second convergence in , we insert the definition for $\hat{\xi}^\Xi$ to obtain that $$\begin{aligned}
& c(t) \EE[ (\Xi_t - \hat{\xi}^\Xi_{t})^2] \\
& = c(t) \EE \left[ \left( \frac{\cosh(\tau^\kappa(t)) -
1}{\cosh(\tau^\kappa(t))} \Xi_t \right. \right. \\
& \hspace{60pt} -
\left.\left. \frac{\cosh(\tau^\kappa(t)) - 1}{\cosh(\tau^\kappa(t))} \EE\left[
\int_t^T \xi_u K^\Xi(t,u) du \Big\vert \cF_t
\right] \right)^2 \right] \\
& \leq 2 c(t) \left( \frac{\cosh(\tau^\kappa(t)) -
1}{\cosh(\tau^\kappa(t))} \right)^2 \EE [\Xi^2_T] \\
& \hspace{16pt} + 2 c(t) \left( \frac{\cosh(\tau^\kappa(t)) -
1}{\cosh(\tau^\kappa(t))} \right)^2
\EE\left[
\int_t^T \xi^2_u K^\Xi(t,u) du \right] \\
& \leq \frac{2\sqrt{\kappa}}{\cosh(\tau^\kappa(t))} \frac{(\cosh(\tau^\kappa(t)) - 1 )^2}{\sinh(\tau^\kappa(t))}
\EE[\Xi^2_T] \\
& \hspace{16pt} + \frac{2\sinh(\tau^\kappa(0))}{\cosh(\tau^\kappa(t))} \frac{\cosh(\tau^\kappa(t)) - 1}{\sinh(\tau^\kappa(t))}
\EE\left[ \int_t^T \xi^2_u du \right] \underset{t \uparrow T}{\longrightarrow} 0,\end{aligned}$$ since $\Xi_T \in L^2(\PP)$, $\xi \in L^2(\PP \otimes dt)$ and $\lim_{t\uparrow T}(\cosh(\tau^\kappa(t)) - 1)/\sinh(\tau^\kappa(t)) =
0$. Consequently, also the second convergence in holds true. This finishes our proof of the representation of the minimal costs in .
The next Lemma shows that the set $\cU^{\Xi}_x$ is not empty under the assumption .
\[lem:ccp\] For $\Xi_T \in L^2(\PP,\cF_T)$ we have that $\cU^{\Xi}_x \neq \varnothing$ if and only if condition holds, i.e., if and only if $\int_0^T \frac{d\EE[\Xi_t^2]}{T-t} < \infty$ with $\Xi_t \set \EE[ \Xi_T \vert \cF_t]$ for all $0 \leq t \leq T$.
Let $\Xi_T \in L^2(\PP,\cF_T)$. We first prove necessity. Assume there exists $u \in \cU^{\Xi}_x$, i.e., $u \in L^2(\PP \otimes dt)$ such that $$X^u_T = x + \int_0^T u_s ds = \Xi_T.$$ Then, applying Fubini’s Theorem, we obtain $$\int_0^T\frac{d\EE[\Xi_t^2]}{T-t} = \frac{1}{T} (\EE[\Xi^2_T]- \EE[\Xi^2_0])
+ \int_0^T \EE[\Xi_T^2 - \Xi_s^2] d\left(\frac{1}{T-s}\right).$$ Moreover, $\EE[\Xi_T^2 - \Xi_s^2] = \EE[(\Xi_T - \Xi_s)^2] \leq \EE[(X^u_T -
X^u_s)^2]$ due to the $L^2$-projection property of conditional expectations. Hence, we get $$\begin{aligned}
\int_0^T\frac{d\EE[\Xi_t^2]}{T-t} \leq & \frac{1}{T} (\EE[\Xi^2_T]- \EE[\Xi^2_0])
+ \int_0^T \EE \left[
\left( \int_s^T u_r dr
\right)^2 \right] d\left(\frac{1}{T-s}\right) \\
= & \frac{1}{T} (\EE[\Xi^2_T]- \EE[\Xi^2_0])
+ \EE \left[ \int_0^T \left( \frac{1}{T-s} \int_s^T u_r dr \right)^2 ds \right]
< \infty
\end{aligned}$$ by $\Xi_T \in L^2(\PP)$ and Lemma \[lem:est\] a).
For sufficiency, simply consider the optimizer $\hat{u}^{\Xi}$ from Theorem \[thm:main2\] which we proved to be in $\cU^\Xi_x$ under the condition .
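As an illustration of this condition (assuming, purely for the example, that the filtration supports a standard Brownian motion $W$), consider the target $\Xi_T = W_T$. Then $\Xi_t = W_t$ and $\EE[\Xi_t^2] = t$, so that $$\int_0^T \frac{d\EE[\Xi_t^2]}{T-t} = \int_0^T \frac{dt}{T-t} = \infty,$$ and hence $\cU^{\Xi}_x = \varnothing$: the terminal value of a Brownian motion cannot be reached exactly by an absolutely continuous strategy with square integrable rate. By contrast, the delayed target $\Xi_T = W_{T-\delta}$ with $\delta \in (0,T)$ yields $\EE[\Xi_t^2] = t \wedge (T-\delta)$ and $$\int_0^T \frac{d\EE[\Xi_t^2]}{T-t} = \int_0^{T-\delta} \frac{dt}{T-t} = \log \frac{T}{\delta} < \infty,$$ so that this target is attainable.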
The final Lemma collects estimates concerning the $L^2(\PP \otimes dt)$-norm which are needed several times in the proofs above.
\[lem:est\] Let $(\zeta_t)_{0 \leq t \leq T} \in L^2(\PP \otimes dt)$ be progressively measurable. Moreover, let $K(t,u)$, $K^\Xi(t,u)$, $0 \leq t \leq u < T$, denote the kernels from Theorems \[thm:main1\] and \[thm:main2\], respectively.
- For $\bar{\zeta}_t \set \frac{1}{T-t} \int_t^T \zeta_s ds$, $t < T$, we have $$\norm{\bar\zeta}_{L^2(\PP \otimes dt)} \leq 2 \norm{\zeta}_{L^2(\PP \otimes dt)}.$$
- For $\zeta^K_t \set \EE[\int_t^T \zeta_u K(t,u) du \vert \cF_t]$, $t < T$, we have $$\norm{\zeta^K}_{L^2(\PP \otimes dt)} \leq c \norm{\zeta}_{L^2(\PP \otimes dt)}$$ for some constant $c > 0$.
- For $\zeta^{K^{\Xi}}_t \set \EE[\int_t^T \zeta_u K^{\Xi}(t,u) du \vert
\cF_t]$, $t < T$, we have $$\norm{\zeta^{K^{\Xi}}}_{L^2(\PP \otimes dt)} \leq c \norm{\zeta}_{L^2(\PP \otimes dt)}$$ for some constant $c>0$.
a\) By Fubini’s Theorem and the Cauchy-Schwarz inequality, we have $$\begin{aligned}
\norm{\bar\zeta}^2_{L^2(\PP\otimes dt)}
& = \EE \left[ \int_0^T \int_0^T
\zeta_r \zeta_s \int_0^{r \wedge s} \left(\frac{1}{T-t}\right)^2 dt
dr ds \right] \\
& = \EE \left[ \int_0^T \int_0^T
\zeta_r \zeta_s \frac{1}{T-r \wedge s} dr ds \right] - \frac{1}{T} \EE\left[ \left(
\int_0^T \zeta_s ds \right)^2 \right]\\
& \leq \EE \left[ 2 \int_0^T \zeta_r \int_0^r
\zeta_s \frac{1}{T-s} ds dr\right] \\
& = 2 \EE \left[ \int_0^T \zeta_s \left( \frac{1}{T-s} \int_s^T
\zeta_r dr \right) ds \right] \\
& \leq 2 \norm{\zeta}_{L^2(\PP \otimes dt)}
\norm{\bar\zeta}_{L^2(\PP \otimes dt)}\end{aligned}$$ and hence the assertion.
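A quick numerical illustration of part a): for the near-extremal deterministic choice $\zeta_t = (T-t)^{-1/2+\epsilon}$ the ratio $\norm{\bar\zeta}_{L^2(dt)}/\norm{\zeta}_{L^2(dt)}$ equals $1/(1/2+\epsilon)$ and thus approaches the constant $2$ as $\epsilon \downarrow 0$. The grid and the value of $\epsilon$ in the following sketch are illustrative assumptions.

```python
import numpy as np

# Near-extremal example for the averaging estimate of part a); grid and eps are
# illustrative. The ratio should be close to 1/(0.5 + eps) and below the bound 2.
T, N, eps = 1.0, 20000, 0.01
t = (np.arange(N) + 0.5) * (T / N)                    # midpoint grid avoids the endpoint
zeta = (T - t) ** (-0.5 + eps)
tail_means = np.cumsum(zeta[::-1])[::-1] / np.arange(N, 0, -1)  # ~ (1/(T-t)) int_t^T zeta ds
ratio = np.sqrt(np.mean(tail_means**2)) / np.sqrt(np.mean(zeta**2))
print(ratio)   # approximately 1/(0.5 + eps) ~ 1.96, consistent with the constant 2
```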
b\) First, assume that $(\zeta_t)_{0 \leq t \leq T}$ is deterministic, and so $\zeta^K_t = \int_t^T \zeta_u K(t,u) du$. By similar computations as in a) we obtain $$\begin{aligned}
\norm{\zeta^K}^2_{L^2(dt)}
& = \int_0^T \int_0^T
\zeta_r \zeta_s \int_0^{r \wedge s} K(t,r)K(t,s) dt
dr ds \\
& \leq \int_0^T \int_0^T
\zeta_r \zeta_s \frac{1}{\sqrt{\kappa}} \cosh(\tau^\kappa(r)) \cosh(\tau^\kappa(s))
\coth(\tau^\kappa(r \wedge s)) dr ds \\
& = 2 \int_0^T \zeta_r \frac{\cosh(\tau^\kappa(r))}{\sqrt{\kappa}} \int_0^r
\zeta_s \cosh(\tau^\kappa(s)) \coth(\tau^\kappa(s)) ds dr \\
& = 2 \int_0^T \zeta_s \cosh(\tau^\kappa(s))^2 \zeta^{K}_s ds \\
& \leq 2 \cosh(\tau^\kappa(0))^2 \norm{\zeta}_{L^2(dt)} \norm{\zeta^K}_{L^2(dt)},\end{aligned}$$ i.e., $\norm{\zeta^K}_{L^2(dt)} \leq c \norm{\zeta}_{L^2(dt)}$ for some constant $c>0$. Now, for general $(\zeta_t)_{0 \leq t \leq T} \in L^2(\PP \otimes dt)$ progressively measurable, we get with Fubini’s Theorem $$\EE \left[ \int_0^T (\zeta^K_t)^2 dt \right] =
\int_0^T \int_t^T \int_t^T \EE\Big[ \EE[\zeta_r\vert\cF_t]
\EE[\zeta_s\vert\cF_t] \Big] K(t,r) K(t,s) dr ds dt.$$ Again, application of Cauchy-Schwarz’s and Jensen’s inequalities yields $$\EE\left[ \EE[\zeta_r\vert\cF_t] \EE[\zeta_s\vert\cF_t] \right] \leq
\norm{\zeta_r}_{L^2(\PP)} \norm{\zeta_s}_{L^2(\PP)}, \quad t \leq r,s
\leq T.$$ Consequently, $$\begin{aligned}
\norm{\zeta^K}_{L^2(\PP \otimes dt)}^2
& \leq
\int_0^T \int_t^T \int_t^T \norm{\zeta_r}_{L^2(\PP)}
\norm{\zeta_s}_{L^2(\PP)} K(t,r) K(t,s) dr ds dt \\
& =\int_0^T \left( \int_t^T \norm{\zeta_r}_{L^2(\PP)} K(t,r) dr \right)^2
dt.\end{aligned}$$ Now, put $\tilde{\zeta}_t \set \norm{\zeta_t}_{L^2(\PP)}$ and apply the estimate already proved for deterministic functions to conclude $$\begin{aligned}
\norm{\zeta^K}_{L^2(\PP \otimes dt)}^2
& =\int_0^T \left( \int_t^T \tilde{\zeta}_r K(t,r) dr \right)^2
dt \\
& \leq c \int_0^T \vert \tilde{\zeta}_t \vert^2 dt
= c \int_0^T \EE[\zeta_t^2] dt = c \norm{\zeta}^2_{L^2(\PP \otimes dt)}.\end{aligned}$$
c\) Jensen’s inequality and Fubini’s Theorem give $$\begin{aligned}
\norm{\zeta^{K^{\Xi}}}^2_{L^2(\PP \otimes dt)}
& = \EE \left[ \int_0^T (\zeta^{K^\Xi}_t)^2 dt \right]
\leq \int_0^T \int_t^T \EE[\zeta_u^2] K^\Xi(t,u) du dt \\
& = \int_0^T \EE[\zeta_u^2] \int_0^u K^\Xi(t,u) dt du.\end{aligned}$$ Now, using $\cosh(\tau)-1 \geq \tau^2/2$ for all $\tau \geq 0$, we get $$\begin{aligned}
0 & \leq \int_0^u K^\Xi(t,u) dt = \int_0^u
\frac{\sinh(\tau^\kappa(u))}{\sqrt{\kappa}(\cosh(\tau^\kappa(t))-1)}
dt \\
& \leq
\frac{\sinh(\tau^\kappa(u))}{\sqrt{\kappa}}
\int^u_0\frac{2\kappa}{(T-t)^2} dt
\leq
2 \sqrt{\kappa} \, \frac{\sinh(\tau^\kappa(u))}{T-u}
\underset{u \uparrow T}{\longrightarrow} 1.\end{aligned}$$ Thus, the above integral over $K^\Xi$ is bounded uniformly in $0 \leq u \leq T$ by some constant $c >0$, and so $$\norm{\zeta^{K^{\Xi}}}^2_{L^2(\PP \otimes dt)}
\leq c \int_0^T \EE[\zeta_u^2] du = c \, \norm{\zeta}_{L^2(\PP \otimes dt)}^2$$ yielding the assertion in c).
[^1]: Technische Universit[ä]{}t Berlin, Institut f[ü]{}r Mathematik, Stra[ß]{}e des 17. Juni 136, 10623 Berlin, Germany, email `bank@math.tu-berlin.de`. Financial support by Einstein Foundation through project “Game options and markets with frictions” is gratefully acknowledged.
[^2]: ETH Zürich, Departement für Mathematik, Rämistrasse 101, CH-8092, Zürich, Switzerland, and Swiss Finance Institute, email `mete.soner@math.ethz.ch`.
[^3]: Technische Universit[ä]{}t Berlin, Institut f[ü]{}r Mathematik, Stra[ß]{}e des 17. Juni 136, 10623 Berlin, Germany, email `voss@math.tu-berlin.de`.
|
---
author:
- 'Chiheb Ben Hammouda[^1]'
- 'Nadhir Ben Rached [^2]'
- 'Raúl Tempone[^3] [^4]'
bibliography:
- 'imp\_samp\_SRNS.bib'
title: Importance sampling for a robust and efficient multilevel Monte Carlo estimator for stochastic reaction networks
---
Introduction
============
The construction of the importance sampling algorithm {#sec:The construction of the importance sampling algorithm}
=====================================================
Numerical Experiments {#sec:num_experiments}
=====================
Conclusions and future work {#sec:Conclusions and future work}
===========================
**Acknowledgments** This work was supported by the KAUST Office of Sponsored Research (OSR) under Award No. URF/1/2584-01-01 and the Alexander von Humboldt Foundation. C. Ben Hammouda and R. Tempone are members of the KAUST SRI Center for Uncertainty Quantification in Computational Science and Engineering.
[^1]: King Abdullah University of Science and Technology (KAUST), Computer, Electrical and Mathematical Sciences & Engineering Division (CEMSE), Thuwal $23955-6900$, Saudi Arabia ([chiheb.benhammouda@kaust.edu.sa]{}).
[^2]: Chair of Mathematics for Uncertainty Quantification, RWTH Aachen University, Aachen $52072$, Germany. ([benrached@uq.rwth-aachen.de]{}).
[^3]: King Abdullah University of Science and Technology (KAUST), Computer, Electrical and Mathematical Sciences & Engineering Division (CEMSE), Thuwal $23955-6900$, Saudi Arabia ([raul.tempone@kaust.edu.sa]{}).
[^4]: Alexander von Humboldt Professor in Mathematics for Uncertainty Quantification, RWTH Aachen University, Aachen $52072$, Germany.
|
---
abstract: 'In this study, we examine the decoherence of a spin qubit system coupled independently to a spin chain with a $1/r^2$ interaction by using the influence functional. We also examine the time evolution of the density matrix numerically when the environment is Gaussian noise.'
author:
- Toshifumi Itakura
title: Decoherence of coupled spin qubit system
---
Introduction
============
Among various proposals for quantum computation, quantum bits (qubits) in solid materials, such as superconducting Josephson junctions [@Nakamura] and quantum dots [@Hayashi; @Tanamoto; @Loss], have the advantage of scalability. Such coherent two-level systems constitute qubits, and a quantum computation can be carried out as a unitary operation applied to a many-qubit system. It is essential that this quantum coherence be maintained during the computation. However, dephasing is hard to avoid, due to the interaction between the qubit system and the environment. The decay of the off-diagonal elements of the qubit density matrix signals the occurrence of dephasing. Various environments can cause dephasing. In solid-state systems, the effect of phonons is important [@Fujisawa_SC]. The effect of electromagnetic fluctuations has been extensively studied for Josephson junction charge qubits [@Schon].
For a spin qubit system, the fluctuations of the nuclear spins of impurities can also be a cause of dephasing. It has recently been shown experimentally that the coupling between the spin of an electron in a quantum dot and the environment is very weak [@Fujisawa_Nature; @Erlingsson; @Khaetskii]. The decoherence of nuclear spins has been examined experimentally, where the source of decoherence is the dipole-dipole interaction [@Yusa]. For this reason, the dephasing time of a spin qubit is conjectured to be very long. However, both donor impurities and nuclear spins in semiconductors [@Ladd] have been suggested as possible building blocks for feasible quantum dot architectures. The proposal of quantum computation using an Si$^{29}$ array, based on experimental findings, is another possibility [@Ladd]. For a spin qubit system in 2D, the coupling with a neighboring spin chain can cause dephasing. When an interaction acts between the qubits themselves, it can in principle be incorporated into the quantum computer Hamiltonian, although this would lead to more complicated gate sequences. Therefore it is instructive to analyze the error introduced by ignoring some of these interactions, as done in the case of dipolar-coupled spin qubits [@Sousa_1] and the spin-orbit interaction [@Burkard]. For a quantum spin chain with a $1/r^2$ interaction, an exact expression for the dynamical correlation function has been obtained [@Haldane]. Using this expression, we consider the case in which the qubit system is coupled to the spins of the spin chain. We examine the relaxation phenomena of a spin qubit array coupled to a spin chain with long-range interactions. Integrating over the spin chain variables, we obtain the influence functional of the qubit system [@Weiss; @Itakura]. Using this influence functional, we examine the dephasing of the qubit density matrix. In the present study, we especially concentrate on the effect of the spin-flip process. We examine the zero-dimensional qubit and the one-dimensional qubit system. The spin-flip process leads to oscillatory self-excitation. [@Tokura_s]
Hamiltonian
===========
We examine the Hamiltonian of the spin chain and the qubit system. The type of the spin chain-qubit interaction is XXZ ($A_{\perp}, A_{zz} \ne 0$). $$\begin{aligned}
H_{spin} &=& J \sum_{n,m} (-1)^{n-m} [d(n-m)]^{-2}
{\bf \vec{S}}_n \cdot \bf{ \vec{S}}_m, \\
H_{qb} &=& \sum_i \hbar \omega_o I_i^z, \\
H_{int} &=& \sum_i \gamma_N \hbar^2 [
\frac{1}{2} A_{\perp} (S_{i}^x I^x_i + S_{i}^y I^y_i )
+ A_{zz} S_{i}^z I^z_i ]. \nonumber \\\end{aligned}$$ The Hamiltonian $H_{spin}$ represents the inverse-square interacting spin chain, the so-called Haldane-Shastry model. Here, $d(n)=(N/\pi) \sin (\pi n /N)$, $S_{i}^{j}$ ($j=x,y,z$) is the spin operator at site $i$ of the spin chain, $I^{j}_i$ ($j=x,y,z$) is the qubit operator at site $i$, $J$ is the strength of the interaction of the spin chain system, and $\gamma_N$ is the gyromagnetic ratio of the qubit. In the above model, the spin chain and the zero-dimensional qubit interact through a contact interaction. We also examine the one-dimensional qubit system with a contact interaction; in this case the effect of an indirect interaction appears. We scale lengths by the lattice constant $a$.
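For orientation, a minimal exact-diagonalization sketch of $H_{spin}$ on a small ring is given below. The chain length $N=8$ and $J=1$ are illustrative choices, the alternating sign factor from the definition above is kept (dropping it gives the usual Haldane-Shastry convention), and only the ground-state energy is printed.

```python
import numpy as np
from functools import reduce

# Minimal exact-diagonalization sketch of H_spin on a small ring (illustrative N, J).
N, J = 8, 1.0
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2
eye2 = np.eye(2)

def site_op(op, i):
    """Embed a single-site spin operator at position i of the N-site chain."""
    mats = [eye2] * N
    mats[i] = op
    return reduce(np.kron, mats)

def d(n):
    """Chord distance d(n) = (N/pi) * sin(pi n / N)."""
    return (N / np.pi) * abs(np.sin(np.pi * n / N))

H = np.zeros((2**N, 2**N), dtype=complex)
for n in range(N):
    for m in range(n + 1, N):
        coupling = J * (-1) ** (n - m) / d(n - m) ** 2   # inverse-square exchange
        for op in (sx, sy, sz):
            H += coupling * site_op(op, n) @ site_op(op, m)

print("ground-state energy per site:", np.linalg.eigvalsh(H)[0] / N)
```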
single qubit system
===================
First, we consider a single qubit and a one-dimensional spin chain system. Thus, we examine the effect of the direct interaction by the influence functional method. [@Weiss; @Itakura] The interaction Hamiltonian is as below, $$H_{int} = \gamma_N \hbar^2 [
\frac{1}{2} A_{\perp} (S_{0}^x I^x + S_{0}^y I^y )
+ A_{zz} S_0^z I^z ].$$ We integrate over the spin-chain system except the 0-th site spin. This leads to the density matrix of the qubit system, $$\rho ( I_{z+}^f, I_{z-}^f )
=\int^{I_{z+}(t) =I_{z+}^f, I_{z-} (t) =
I_{z-}^f}_{ I_{z+}(0)=I_{z+}^f, I_{z+} (0) = I_{z+}^f}
[ d I_{z+} ] [ d I_{z-} ]
{\exp} ( \frac{i}{\hbar} (I_{qb} [ I_{z-} ] - I_{qb} [ I_{z+} ] ))
F[I_{z+},I_{z-}],$$ the influence functional is $$\begin{aligned}
F[ I_{z+}, I_{z-} ]
&=& \int [ d {\bf S}_{+i} ] [ d {\bf S}_{-i} ] \nonumber \\
\delta ( {\bf S}_{+i} (t) - {\bf S}_{-i}(t) )
\rho ( {\bf S}_i (0), {\bf S}_j (0) ) \nonumber \\
&& {\rm exp} \{
\frac{i}{\hbar} ( I[{\bf S}_+ ]- I [{\bf S}_- ] )
\}, \end{aligned}$$ where $I_{qb} [I] = \int_0^t \hbar \Delta I_z \, dt'$. For the following discussion, we choose the action of the system as $ I[ {\bf S} ] = I_0 [ {\bf S} ] + I_{int} [ I_{z}, S^z_0 ]$; thus the unperturbed part is given by $$\begin{aligned}
I_0 [ {\bf S} ] &=& \frac{ ( \gamma_N \hbar)^2}{4}
\int_0^t \int_0^{t_1} dt_1 dt_2
A_{zz}^2 S_i^z (t_1)
{\bf \Delta}_{00p}^{zz -1} ( t_1 , i , t_2 , i)
S_i^z ( t_2 ) \nonumber \\
&& \frac{1}{4} A_{\perp}^2 ( S_i^+ (t_1)
{\bf \Delta}_{00p}^{+- -1} ( t_1 , i , t_2 , i)
S_i^- ( t_2 )
+ S_i^- (t_1)
{\bf \Delta}_{00p}^{-+ -1} ( t_1 , i , t_2 , i)
S_i^+ ( t_2 ) )\nonumber \\
&+& i \frac{\theta}{4\pi} \int dx \int_0^{t} dt \vec{n}
\cdot (\frac{\partial \vec{\bf{S}}}{\partial x}
\times \frac{\partial \vec{\bf{S}}}{\partial t} ) \end{aligned}$$ Where $\theta=2 \pi n $ and ${\bf \Delta}_{00p} (t_1,i,t_2, j) $ is the free propagator of environmental system at zero temperature which is defined on closed time path and has four components, $$\begin{aligned}
\label{eqn:Green}
&& {\bf \Delta}_{00p} (i, t_1, j, t_2) \nonumber \\
&=& \left(
\begin{array}{cc}
{\bf \Delta}_{00}^{++} (i, t_1,j, t_2) & {\bf \Delta}_{00}^{+-}
(i, t_1, j, t_2) \\
{\bf \Delta}_{00}^{-+} (i, t_1,j, t_2) &{\bf \Delta}_{00}^{--}
(i, t_1, j, t_2)
\end{array}
\right). \nonumber \\\end{aligned}$$ For the Berry phase term, because $n$ is odd, the environment is a one-dimensional spin-1/2 system. [@Tsvelik] In the present model, the spin chain is a gapless solvable model (a special point among spin chain systems). Therefore, we can integrate over the spin chain degrees of freedom, and the Berry phase term does not disturb the system. Introducing the incoming interaction picture for the environment system, we can easily verify that Eq. (\[eqn:Green\]) turns into [@Chou] $$\begin{aligned}
F[ I_{z+}, I_{z-}] &=&
{\rm exp} [ - i \frac{(\gamma_N \hbar)^2}{4}
\int_{0}^{t} \int_{0}^{t_1} dt_1 dt_2 \nonumber \\
&& A_{zz}^2
( I_{z+} (t_1) \Delta_{00}^{++} (i,t_1,i,t_2) I_{z+} (t_2) \nonumber \\
&+& I_{z-} (t_1) \Delta_{00}^{--} (i,t_1,i,t_2) I_{z-} (t_2) \nonumber \\
&-& I_{z+} (t_1) \Delta_{00}^{+-} (i,t_1,i,t_2) I_{z-} (t_2) \nonumber \\
&-& I_{z-} (t_1) \Delta_{00}^{-+} (i,t_1,i,t_2) I_{z+} (t_2)) \nonumber \\
&& \frac{1}{4} A_{\perp}^2
( I_{++} (t_1) \Delta_{00}^{++} (i,t_1,i,t_2) I_{++} (t_2) \nonumber \\
&+& I_{+-} (t_1) \Delta_{00}^{--} (i,t_1,i,t_2) I_{+-} (t_2) \nonumber \\
&-& I_{++} (t_1) \Delta_{00}^{+-} (i,t_1,i,t_2) I_{+-} (t_2) \nonumber \\
&-& I_{+-} (t_1) \Delta_{00}^{-+} (i,t_1,i,t_2) I_{++} (t_2) )\nonumber \\
&& \frac{1}{4} A_{\perp}^2
( I_{-+} (t_1) \Delta_{00}^{++} (i,t_1,i,t_2) I_{-+} (t_2) \nonumber \\
&+& I_{--} (t_1) \Delta_{00}^{--} (i,t_1,i,t_2) I_{--} (t_2) \nonumber \\
&-& I_{-+} (t_1) \Delta_{00}^{+-} (i,t_1,i,t_2) I_{--} (t_2) \nonumber \\
&-& I_{--} (t_1) \Delta_{00}^{-+} (i,t_1,i,t_2) I_{-+} (t_2)) ]. \nonumber \\ \end{aligned}$$ For convenience, we change the coordinates: $$\eta_z = (I_{z+} + I_{z-})/2,
\xi_z = (I_{z+} - I_{z-})/2.$$ The variables $\eta$ and $\xi$ are called sojourn and blip, respectively. In terms of these variables, the density matrix is described as follows: $$\begin{aligned}
\rho (\eta_z =1) &=& |\uparrow><\uparrow|,
\rho ( \eta_z =-1) = |\downarrow><\downarrow|, \nonumber \\
\rho (\xi_z = 1 ) &=& |\uparrow><\downarrow|,
\rho ( \xi_z = - 1) = |\downarrow><\uparrow |. \nonumber \end{aligned}$$ Further variables are defined by $$\eta_+ = (I_{++} + I_{+-})/2,
\xi_+ = (I_{++} - I_{+-})/2,$$ $$\eta_- = (I_{-+} + I_{--})/2,
\xi_- = (I_{-+} - I_{--})/2.$$ In terms of the above variables, the elements of the density matrix are expressed as $$\begin{aligned}
\rho (\eta_+ =1) &=& |\uparrow><\downarrow|,
\rho ( \eta_+ =-1) = |\downarrow><\uparrow|, \nonumber \\
\rho (\xi_+ = 1 ) &=& |\uparrow><\uparrow|,
\rho ( \xi_+ = - 1) = |\downarrow><\downarrow |. \nonumber \\
\rho (\eta_- =1 ) &=& |\downarrow><\uparrow|,
\rho ( \eta_- =-1) = |\uparrow><\downarrow|, \nonumber \\
\rho (\xi_- = 1 ) &=& |\downarrow><\downarrow|,
\rho ( \xi_- = - 1) = |\uparrow><\uparrow |. \nonumber \end{aligned}$$ Therefore, the following relations hold for these new variables: $$\eta_z = \xi_+ = -\xi_- , \xi_z = \eta_+ =-\eta_-.$$
In this case, the influence functional is expressed as $$\begin{aligned}
F[ \eta, \xi ] &=&
{\rm exp} [ - i \frac{( \gamma_N \hbar)^2}{4}
\int_{0}^{t} \int_{0}^{t_1} dt_1 dt_2 \nonumber \\
&& A_{zz}^2
\{ \xi (t_1) G^R (t_1,i,t_2,i) \eta (t_2) \nonumber \\
&+& \eta (t_1) G^A (t_1,i,t_2,i) \xi (t_2) \nonumber \\
&-& \xi (t_1) G^K (t_1,i,t_2,i)
\xi (t_2) \} \nonumber \\
&+& \frac{1}{2} (
A_{\perp}^2 \{ \eta (t_1) G^R (t_1,i,t_2,i) \xi (t_2) \nonumber \\
&+& \xi (t_1) G^A (t_1,i,t_2,i) \eta (t_2) \nonumber \\
&-& \eta (t_1) G^K (t_1,i,t_2,i)
\eta (t_2) \} ) \nonumber \\
&+& \frac{1}{2} (
A_{\perp}^2 \{ \eta (t_1) G^R (t_1,i,t_2,i) \xi (t_2) \nonumber \\
&+& \xi (t_1) G^A (t_1,i,t_2,i) \eta (t_2) \nonumber \\
&-& \eta (t_1) G^K (t_1,i,t_2,i)
\eta (t_2) \} ) ] \nonumber \\
&=&
{\rm exp} [ - i \frac{( \gamma_N \hbar)^2}{4}
\int_{0}^{t} \int_{0}^{t_1} dt_1 dt_2 \nonumber \\
&& \{ \xi (t_1) ( A_{zz}^2 G^R (t_1,i,t_2,i)
+ A_{\perp}^2 G^A (t_1,i,t_2,i) )
\eta (t_2) \nonumber \\
&+& \eta (t_1) ( A_{zz}^2 G^A (t_1,i,t_2,i)
+ A_{\perp}^2 G^R (t_1,i,t_2,i) ) \xi (t_2)
\nonumber \\
&-& A_{zz}^2 \xi (t_1) G^K (t_1,i,t_2,i)
\xi (t_2) \nonumber \\
&-& 2 A_{zz}^2 \eta (t_1) G^K (t_1,i,t_2,i) \eta (t_2) \}],
\nonumber \\\end{aligned}$$ where $G^R (t_1,i,t_2,j)$, $G^A (t_1,i,t_2,j)$ and $G^K (t_1,i,t_2,j)$ are the retarded, advanced and Keldysh Green's functions, respectively. From the above integral equation, after slicing the time and taking differences, we obtain the differential equation for the density matrix, $$\begin{aligned}
&& \frac{ d \rho_{\rm b} (t)}{d t} =
- \frac{i}{\hbar} [H_{qb}, \rho_{\rm b} (t)]
-\frac{i}{4} (\gamma_N \hbar)^2
\int_0^t dt_1\nonumber \\
&& \left(
\begin{array}{cc}
-A_{zz}^2 G_K (i, t,i, t_1), & A_{zz}^2 G^A (i, t,i, t_1)
+ A_{\perp}^2 G^R (i, t,i, t_1) \\
A_{zz}^2 G^R (i, t,i, t_1) + A_{\perp}^2 G^A (i, t,i, t_1),
& - 2 A_{\perp}^2 G_K (i, t, i, t_1) \\
\end{array}
\right) \rho_{\rm b} (t_1) \nonumber \\
\nonumber \\\end{aligned}$$ where $[A,B]$ is $AB-BA$ and $\rho_{\rm b} (t)$ is $$\begin{aligned}
\rho_{\rm b} (t) = \left( \begin{array}{cc}
\eta_z (t) =1, & \eta_z (t)= -1 \\
\xi_z (t) =1, & \xi_z (t) = -1 \\
\end{array} \right)\end{aligned}$$ Next we choose the representation of density matrix for the spin diagonal case, $$\begin{aligned}
&& \frac{ d \rho_{\bf s} (t)}{d t}
= - \frac{i}{\hbar} [ H_{qb}, \rho_{\rm s}(t)]
- \frac{i}{4} ( \gamma_N \hbar)^2 \int_0^t dt_1
\nonumber \\
&& \left(
\begin{array}{c}
- \frac{A_{zz}^2 + A_{\perp}^2 }{2} G^K (i,t,i,t_1) \\
\frac{A_{zz}^2 + A_{\perp}^2}{2} ( G^R (i,t,i,t_1) + G^A (i,t,i,t_1)) \\
\frac{i(A_{zz}^2 - A_{\perp}^2 )}{2}
( G^A (i,t,i,t_1) - G^R (i,t,i,t_1)) \\
-\frac{ A_{zz}^2 - A_{\perp}^2 }{2} G^K (i,t,i,t_1) \\
\end{array}
\right)^{t} \rho_{\bf s} (t_1) \nonumber \\\end{aligned}$$ where the density matrix $\rho_{\rm s} (t)$ is represented as $$\begin{aligned}
\rho_{\rm s} (t) &=& \left( \begin{array}{c}
|\uparrow (t) >< \uparrow (t)| + |\downarrow(t)><\downarrow(t)|,\\
|\uparrow (t)><\downarrow (t)|, \\
|\downarrow (t) >< \uparrow (t) |, \\
|\uparrow (t) >< \uparrow (t)|-|\downarrow(t)><\downarrow(t)| \\
\end{array} \right). \nonumber \\\end{aligned}$$ The trace of the density matrix decreases with time. The other diagonal element shows a different behavior: through the spin-flip process it increases with time, which represents self-excitation. One of the off-diagonal elements decoheres, while the other off-diagonal element shows an oscillation in which modulation of the signal occurs.
one-dimensional qubits system
=============================
Next, we examine the one-dimensional qubit system by using the influence functional. [@Itakura] In this case, the effect of the indirect interaction appears. The interaction Hamiltonian is as below, $$H_{int} = \gamma_N \hbar^2 \sum_i [
\frac{1}{2}
A_{\perp} ( I_{i}^+ S_{i}^- + I_{i}^- S_{i}^+) +A_{zz} I_{i}^z S_{i}^z ].$$ We integrate over the spin-chain system except the $i$-th site spins. This leads to the density matrix of the qubit system, $$\begin{aligned}
\rho ( I_{zi+}^f, I_{zi-}^f )
&=& \Pi_i \int^{I_{zi+}(t) =I_{zi+}^f, I_{zi-} (t) =
I_{zi-}^f}_{ I_{zi+}(0)=I_{zi+}^f, I_{zi+} (0) = I_{zi+}^f}
[ d I_{zi+} ] [ d I_{zi-} ] \nonumber \\
&& {\exp} ( \frac{i}{\hbar} (I_{qb} [ I_{zi-} ] - I_{qb} [ I_{zi+} ] ))
F[I_{zi+},I_{zi-}], \end{aligned}$$ the influence functional is $$\begin{aligned}
F[ I_{zi+}, I_{zj-} ] &=& \int [ d {\bf S}_{+i} ] [ d {\bf S}_{-i} ]
\delta ( {\bf S}_{+i} (t) - {\bf S}_{-i}(t) ) \nonumber \\
&& \rho ( {\bf S}_i (0), {\bf S}_j (0) )
{\rm exp} \{
\frac{i}{\hbar} ( I[{\bf S}_+ ]- I [{\bf S}_- ] )
\}, \nonumber \end{aligned}$$ where $I_{qb} [I] = \int_0^t \hbar \Delta I_i^z \, dt'$. For the following discussion, we choose the action of the system as $ I[ {\bf S} ] = I_0 [ {\bf S} ] + I_{int} [ I_{zi}, S^z_i ]$; thus the unperturbed part is given by $$\begin{aligned}
I_0 [ {\bf S} ] &=& \frac{ (A_{zz} \gamma_N \hbar)^2}{4}
\int_0^t \int_0^{t_1} dt_1 dt_2 \sum_{i.j}
{\bf S}_i^z (t_1)
{\bf \Delta}_{00p}^{zz-1} ( t_1 , i , t_2 , j)
{\bf S}_j^z ( t_2 ) \nonumber \\
&+& \frac{ (A_{\perp} \gamma_N \hbar)^2}{4}
( {\bf S}_i^+ (t_1)
{\bf \Delta}_{00p}^{+--1} ( t_1 , i , t_2 , j)
{\bf S}_j^- ( t_2 )
+ {\bf S}_i^- (t_1)
{\bf \Delta}_{00p}^{-+-1} ( t_1 , i , t_2 , j)
{\bf S}_j^+ ( t_2 ) \nonumber \\
&+& i \frac{\theta}{4\pi} \int dx \int_0^{t} dt \vec{n}
\cdot (\frac{\partial \vec{\bf{S}}}{\partial x}
\times \frac{\partial \vec{\bf{S}}}{\partial t} ) \end{aligned}$$ where $\theta=2 \pi n $ and ${\bf \Delta}_{00p}^v (t_1,i,t_2, j) $ is the free propagator of the environmental system at zero temperature, which is defined on the closed time path and has four components. For the Berry phase term, because $n$ is odd, the environment is a one-dimensional spin-1/2 system. [@Tsvelik] In the present model, the spin chain is a gapless solvable model (a special point among spin chain systems). Therefore, we can completely integrate over the spin chain degrees of freedom, and the Berry phase term does not disturb the system. Introducing the incoming interaction picture for the environment system, we can easily verify that the equation turns into $$\begin{aligned}
F[ I_{zi+}, I_{zi-}] &=&
{\rm exp} [ - i \frac{(\gamma_N \hbar)^2}{4}
\int_{0}^{t} \int_{0}^{t_1} dt_1 dt_2 \nonumber \\
&& A_{zz}^2
( I_{zi+} (t_1) \Delta_{00}^{++} (i,t_1,j,t_2) I_{zj+} (t_2) \nonumber \\
&+& I_{zi-} (t_1) \Delta_{00}^{--} (i,t_1,j,t_2) I_{zj-} (t_2) \nonumber \\
&-& I_{zi+} (t_1) \Delta_{00}^{+-} (i,t_1,j,t_2) I_{zj-} (t_2) \nonumber \\
&-& I_{zi-} (t_1) \Delta_{00}^{-+} (i,t_1,j,t_2) I_{zj+} (t_2)) \nonumber \\
&& \frac{1}{4} A_{\perp}^2
( I_{+i+} (t_1) \Delta_{00}^{++} (i,t_1,j,t_2) I_{+j+} (t_2) \nonumber \\
&+& I_{+i-} (t_1) \Delta_{00}^{--} (i,t_1,j,t_2) I_{+j-} (t_2) \nonumber \\
&-& I_{+i+} (t_1) \Delta_{00}^{+-} (i,t_1,j,t_2) I_{+j-} (t_2) \nonumber \\
&-&
I_{+i-} (t_1) \Delta_{00}^{-+} (i,t_1,j,t_2) I_{+j+} (t_2) ) \nonumber \\
&& \frac{1}{4} A_{\perp}^2
( I_{-i+} (t_1) \Delta_{00}^{++} (i,t_1,j,t_2) I_{-j+} (t_2) \nonumber \\
&+& I_{-i-} (t_1) \Delta_{00}^{--} (i,t_1,j,t_2) I_{-j-} (t_2) \nonumber \\
&-& I_{-i+} (t_1) \Delta_{00}^{+-} (i,t_1,j,t_2) I_{-j-} (t_2) \nonumber \\
&-& I_{-i-} (t_1) \Delta_{00}^{-+} (i,t_1,j,t_2) I_{-j+} (t_2)) ]. \end{aligned}$$ For convenience, we change the coordinates: $$\eta_{zi} = (I_{zi+} + I_{zi-})/2,
\xi_{zi} = (I_{zi+} - I_{zi-})/2.$$ The variables $\eta$ and $\xi$ are called sojourn and blip, respectively. In terms of these variables, the density matrix is described as follows: $$\begin{aligned}
\rho (\eta_{zi} =1) &=& |\uparrow_i><\uparrow_i|,
\rho ( \eta_{zi} =-1) = |\downarrow_i><\downarrow_i|, \nonumber \\
\rho (\xi_{zi} = 1 ) &=& |\uparrow_i><\downarrow_i|,
\rho ( \xi_{zi} = - 1) = |\downarrow_i><\uparrow_i |. \nonumber \end{aligned}$$ Further variables are defined by $$\eta_{+i} = (I_{+i+} + I_{+i-})/2,
\xi_{+i} = (I_{+i+} - I_{+i-})/2,$$ $$\eta_{-i} = (I_{-i+} + I_{-i-})/2,
\xi_{-i} = (I_{-i+} - I_{-i-})/2.$$ In terms of the above variables, the elements of the density matrix are expressed as $$\begin{aligned}
\rho (\eta_{+i} =1) &=& |\uparrow_i><\downarrow_i|,
\rho ( \eta_{+i} =-1) = |\downarrow_i><\uparrow_i|, \nonumber \\
\rho (\xi_{+i} = 1 ) &=& |\uparrow_i><\uparrow_i|,
\rho ( \xi_{+i} = - 1) = |\downarrow_i><\downarrow_i |. \nonumber \\
\rho (\eta_{-i} =1 ) &=& |\downarrow_i><\uparrow_i|,
\rho ( \eta_{-i} =-1) = |\uparrow_i><\downarrow_i|, \nonumber \\
\rho (\xi_{-i} = 1 ) &=& |\downarrow_i><\downarrow_i|,
\rho ( \xi_{-i} = - 1) = |\uparrow_i><\uparrow_i |. \nonumber \end{aligned}$$ Therefore, the following relations hold for these new variables: $$\eta_{zi} = \xi_{+i} = -\xi_{-i} , \xi_{zi} = \eta_{+i} =-\eta_{-i}.$$
In this case, the influence functional is expressed as $$\begin{aligned}
F[ \eta_i, \xi_i ] &=&
{\rm exp} [ - i \frac{( \gamma_N \hbar)^2}{4}
\int_{0}^{t} \int_{0}^{t_1} dt_1 dt_2 \nonumber \\
&& \{ \xi_i (t_1) ( A_{zz}^2 G^R (t_1,i,t_2,j)
+ A_{\perp}^2 G^A (t_1,i,t_2,j) )
\eta_j (t_2) \nonumber \\
&+& \eta_i (t_1) ( A_{zz}^2 G^A (t_1,i,t_2,j)
+ A_{\perp}^2 G^R (t_1,i,t_2,j) ) \xi_j (t_2)
\nonumber \\
&-& A_{zz}^2 \xi_i (t_1) G^K (t_1,i,t_2,j)
\xi_j (t_2)
- 2 A_{zz}^2 \eta_i (t_1) G^K (t_1,i,t_2,j) \eta_j (t_2) \}],
\nonumber \\\end{aligned}$$ where $G^R (t_1,i,t_2,j)$, $G^A (t_1,i,t_2,j)$ and $G^K (t_1,i,t_2,j)$ are the retarded, advanced and Keldysh Green's functions, respectively. From the above integral equation, after slicing the time and taking differences, we obtain the differential equation for the density matrix, $$\begin{aligned}
&&\frac{ d \rho_{\rm b} (i,j,t)}{d t} =
- \frac{i}{\hbar} [H_{qb}, \rho_{\rm b}(t)]
-\frac{i}{4} (\gamma_N \hbar)^2
\int_0^t dt_1 \sum_k \nonumber \\
&& \left(
\begin{array}{cc}
-A_{zz}^2 G_K (i, t,k, t_1) & A_{zz}^2 G^A (i, t,k, t_1)
+ A_{\perp}^2 G^R (i, t,k, t_1) \\
A_{zz}^2 G^R (i, t,k, t_1) + A_{\perp}^2 G^A (i, t,k, t_1)
& - A_{\perp}^2 G_K (i, t, k, t_1) \\
\end{array}
\right) \rho_{\rm b} (t_1,k,j). \nonumber \\\end{aligned}$$ Next we choose the representation of density matrix for the spin diagonal case, $$\begin{aligned}
&& \frac{ d \rho_{\rm s} (i,j,t)}{d t} =
- \frac{i}{\hbar}
[H_{qb}, \rho_{\rm s} (t)] - \frac{i}{4} ( \gamma_N \hbar)^2 \int_0^t dt_1 \sum_k
\nonumber \\
&& \left(
\begin{array}{c}
- \frac{A_{zz}^2 + A_{\perp}^2 }{2} G^K (i,t,k,t_1) \\
\frac{A_{zz}^2 + A_{\perp}^2}{2} ( G^R (i,t,k,t_1) + G^A (i,t,k,t_1)) \\
\frac{i(A_{zz}^2 - A_{\perp}^2 )}{2}
( G^A (i,t,k,t_1) - G^R (i,t,k,t_1)) \\
-\frac{ A_{zz}^2 - A_{\perp}^2 }{2} G^K (i,t,k,t_1) \\
\end{array}
\right) \rho_{\rm s} (k,j,t_1). \nonumber \\\end{aligned}$$ The trace of the density matrix decreases with time. The other diagonal element shows a different behavior: through the spin-flip process it increases with time, which represents self-excitation. The off-diagonal element shows an oscillation in which modulation of the signal occurs. The self-excitation appears in the poor man's scaling of the Kondo effect; the strong-coupling limit corresponds to self-excitation, and the above behavior is a simple RG flow.
We examine the pure dephasing event. Because the propagator of the qubit has no time dependence, the blip state and the sojourn state do not change. Therefore, when we choose the initial condition of the qubit density matrix to be a coherent state, such as $$\begin{aligned}
\rho ( t=0) = \prod_i
\pm ( |\uparrow_i><\downarrow_i|
\pm |\downarrow_i><\uparrow_i|) ,
\nonumber \\\end{aligned}$$ the time evolution occurs only in the off-diagonal channel. In addition, even if we start from an off-diagonal state, the interaction Hamiltonian does not contain the spin-flip process, so the blip state does not change. Hence we can take $\xi_i (t)=\xi_i (=\pm 1)$ and $\eta_i (t)=0$ for all $t$. This situation leads to an exact expression for the dephasing rate as follows,
&& \rho (\xi_i,t) = \int^{I_{zi+}(t) =I_{zi+}^f, I_{zi-} (t) =
I_{zi-}^f}_{ I_{zi+}(0)=I_{zi+}^f, I_{zi+} (0) = I_{zi+}^f}
d \xi_i \nonumber \\
&& {\rm exp} [ i \Delta t -i
\frac{( A_{zz} \gamma_N \hbar )^2 }{4}
\sum_{i \ne j} \int_0^t dt \int_0^{t_1} dt_2
\xi_i G^K (t_1,i,t_1,j) \xi_j ] \nonumber \\
&=& {\rm exp} ( i \Delta t )
\times
{\rm det}_i [ - i \frac{(A_{zz} \gamma_N \hbar)^2 }{16}
\int^{t}_0 dt_1 \int^{t_1}_0 dt_2
G^K (t_1,i,t_2,j) ] \nonumber \\
&=& {\rm exp} [ i \Delta t
- {\rm Tr}
\log [ 1 - i \frac{(A_{zz} \gamma_N \hbar)^2 }{16}
\int^{t}_0 dt_1 \int^{t_1}_0 dt_2
G^K (t_1,i,t_2,i)] ] , \nonumber \\ \end{aligned}$$ where we neglect the effect of direct interaction. Using the analytic expression, the time evolution of off diagonal density matrix element is given by $$\begin{aligned}
&-& \ln \Re \rho(\xi_i = \pm 1,t ) \nonumber \\
&=& {\rm Tr}
\log [ 1 + \frac{( A_{zz}
\gamma_N \hbar)^2 }{16}
\int_0^t dt \int_0^{t_1} dt_2
\{ S^z_j (t_1), S^z_j (t_2) \} ] \nonumber \\
&=&
L \log [ 1 + \frac{( A_{zz}
\gamma_N \hbar)^2 }{16}
\int_0^t dt \int_0^{t_1} dt_2
\{ S^z_0 (t_1), S^z_0 (t_2) \} ] \nonumber \\
&=& L \log [ 1 + \frac{( A_{zz} \gamma_N \hbar)^2}{16}
\int_{-\infty}^{\infty} dw
\{ S^z_0 (w), S^z_0 (-w) \}
\left(
\frac{\sin( \omega t / 2)}{\omega/2} \right)^2 ]
\nonumber \\\end{aligned}$$ where $ \{ S^z_0,S^z_0 \} $ is the symmetrized correlation function, defined through $\{ A,B \} =AB + BA$, because the Keldysh Green's function is the symmetrized correlation function in the basis of the present Hilbert space. Then, $$\rho_{\rm off} (t)
= (1+\frac{(A_{zz} \gamma_N \hbar)^2}{16} \int_{-\infty}^{\infty} dw
\{ S^z_0 (w), S^z_0 (-w) \}
\left(
\frac{\sin( \omega t / 2)}{\omega/2} \right)^2 )^{-L}$$ The above result is exact. Thus, for an infinite number of qubits the dephasing rate becomes infinite. Next we evaluate the quality factor. The quantum error correction code rate is given by $\Delta L$, where $\Delta$ is the single-qubit coherence time and $L$ is the number of qubits. The decoherence rate is given by $\ln (1+ \frac{T_{2}^{-1}}{4} t) \frac{ L}{t} $, where $T_{2}^{-1}$ is the decoherence rate of a single qubit. Therefore the quality factor is given by $Q=\frac{\Delta t }{ \ln(1+ \frac{T_{2}^{-1} }{4} t ) }$. As $t \rightarrow \infty$, $Q$ becomes infinite; thus the spin quantum computer is scalable.
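As a numerical illustration of the exact pure-dephasing formula above, the following sketch evaluates $\rho_{\rm off}(t)$ for an assumed symmetrized correlator. The Gaussian form $C(t)=\exp(-t^2/2\tau_c^2)$, the prefactor $g$ (standing in for $(A_{zz}\gamma_N\hbar)^2/16$) and the qubit number $L$ are all illustrative assumptions; the actual spin-chain correlator is not computed here.

```python
import numpy as np

# Illustrative evaluation of rho_off(t) = (1 + g * D(t))^(-L) with
# D(t) = int_0^t dt1 int_0^t1 dt2 C(t1 - t2) and an assumed Gaussian correlator C.
tau_c, g, L_qubits = 1.0, 1.0, 10
M, t_max = 2000, 5.0
dt = t_max / M
t = np.arange(M + 1) * dt
C = np.exp(-t**2 / (2.0 * tau_c**2))

inner = np.cumsum(C) * dt            # int_0^{t1} C(t1 - t2) dt2 = int_0^{t1} C(u) du
D = np.cumsum(inner) * dt            # int_0^{t}  dt1 (inner integral)
rho_off = (1.0 + g * D) ** (-L_qubits)
print(rho_off[0], rho_off[-1])       # decays from ~1 towards 0 as t grows
```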
Numerical results
=================
For Gaussian noise, we obtain numerical results. The initial conditions are $I(0)=\sigma_x=\sigma_y=\sigma_z=1$. The coupling constants are $A_{zz}=1.0$ and $A_{\perp}=0.1$. The off-diagonal components show oscillation, the trace shows decoherence, and the diagonal component increases. The purity, defined by ${\rm Tr} (\rho^2(t))$, shows oscillation.
For the $SU(2)$-symmetric case ($A_{zz} = A_{\perp} \equiv A$), the equations for the density matrix are given as follows, $$\begin{aligned}
\frac{d I (t)}{d t} &=& - \frac{1}{4} (\gamma_N \hbar A)^2 \int^t_0 dt_1
C(t-t_1) \sigma_{z} \nonumber \\
\frac{d \sigma_x (t)}{ d t} &=&
-i \frac{\Delta}{\hbar} \sigma_y (t) + \frac{1}{4} ( \gamma_N \hbar)^2 \int_0^t
dt_1 C(t-t_1) \sigma_{x} (t_1) \nonumber \\
\frac{d \sigma_y (t)}{d t} &=& -i \frac{\Delta}{\hbar} \sigma_x (t) \nonumber \\
\frac{d \sigma_z (t)}{dt} &=& 0.\end{aligned}$$ By using the above equations, the equation for $\sigma_y (t)$ becomes $$\begin{aligned}
\frac{d^2 \sigma_y (t)}{d t^2} &=& - \frac{\Delta^2}{\hbar^2} \sigma_{y} (t)
+ i \frac{\hbar}{4 \Delta} (\gamma_N A)^2 \int_0^t dt_1 C(t-t_1) \frac{d \sigma_y (t)}{d t}\end{aligned}$$ The numerical results for $SU(2)$ symmetric case is ginven as Fig.6-10.
Conclusion
==========
In summary, we have examined the decoherence of a spin qubit system coupled to a spin chain. We studied both the single-qubit case and the one-dimensional qubit system. We obtained the integro-differential equation for a general initial condition. The trace of the density matrix decreases with time, while the other diagonal element shows a different behavior: through the spin-flip process it increases with time, which represents self-excitation. The off-diagonal element shows an oscillation in which modulation of the signal occurs. Decoherence without trace conservation occurs. In the thermodynamic limit, the quality factor becomes infinite; thus the spin quantum computer is scalable. We also performed a numerical calculation for Gaussian noise. The purity is plotted; this quantity shows oscillating behavior.
---
abstract: |
Confidence intervals based on penalized maximum likelihood estimators such as the LASSO, adaptive LASSO, and hard-thresholding are analyzed. In the known-variance case, the finite-sample coverage properties of such intervals are determined and it is shown that symmetric intervals are the shortest. The length of the shortest intervals based on the hard-thresholding estimator is larger than the length of the shortest interval based on the adaptive LASSO, which is larger than the length of the shortest interval based on the LASSO, which in turn is larger than the standard interval based on the maximum likelihood estimator. In the case where the penalized estimators are tuned to possess the ‘sparsity property’, the intervals based on these estimators are larger than the standard interval by an order of magnitude. Furthermore, a simple asymptotic confidence interval construction in the ‘sparse’ case, that also applies to the smoothly clipped absolute deviation estimator, is discussed. The results for the known-variance case are shown to carry over to the unknown-variance case in an appropriate asymptotic sense.
*MSC Subject Classifications:* Primary 62F25; secondary 62C25,
62J07.
*Keywords*: penalized maximum likelihood, penalized least squares, Lasso, adaptive Lasso, hard-thresholding, soft-thresholding, confidence set, coverage probability, sparsity, model selection.
author:
- |
Benedikt M. Pötscher[^1] and Ulrike Schneider[^2]\
Department of Statistics, University of Vienna\
and\
Institute for Mathematical Stochastics, University of Göttingen
date: |
Preliminary version: February 2008\
First version: June 2008\
First revision: May 2009\
Second revision: January 2010
title: 'Confidence Sets Based on Penalized Maximum Likelihood Estimators in Gaussian Regression[^3]'
---
Introduction
============
Recent years have seen an increased interest in penalized maximum likelihood (least squares) estimators. Prominent examples of such estimators are the LASSO estimator (Tibshirani (1996)) and its variants like the adaptive LASSO (Zou (2006)), the Bridge estimators (Frank and Friedman (1993)), or the smoothly clipped absolute deviation (SCAD) estimator (Fan and Li (2001)). In linear regression models with orthogonal regressors, the hard- and soft-thresholding estimators can also be reformulated as penalized least squares estimators, with the soft-thresholding estimator then coinciding with the LASSO estimator.
The asymptotic distributional properties of penalized maximum likelihood (least squares) estimators have been studied in the literature, mostly in the context of a finite-dimensional linear regression model; see Knight and Fu (2000), Fan and Li (2001), and Zou (2006). Knight and Fu (2000) study the asymptotic distribution of Bridge estimators and, in particular, of the LASSO estimator. Their analysis concentrates on the case where the estimators are tuned in such a way as to perform conservative model selection, and their asymptotic framework allows for dependence of parameters on sample size. In contrast, Fan and Li (2001) for the SCAD estimator and Zou (2006) for the adaptive LASSO estimator concentrate on the case where the estimators are tuned to possess the ‘sparsity’ property. They show that, with such tuning, these estimators possess what has come to be known as the ‘oracle property’. However, their results are based on a fixed-parameter asymptotic framework only. Pötscher and Leeb (2009) and Pötscher and Schneider (2009) study the finite-sample distribution of the hard-thresholding, the soft-thresholding (LASSO), the SCAD, and the adaptive LASSO estimator under normal errors; they also obtain the asymptotic distributions of these estimators in a general ‘moving parameter’ asymptotic framework. The results obtained in these two papers clearly show that the distributions of the estimators studied are often highly non-normal and that the so-called ‘oracle property’ typically paints a misleading picture of the actual performance of the estimator. \[In the wake of Fan and Li (2001) a considerable literature has sprung up establishing the so-called ‘oracle property’ for a variety of estimators. All these results are fixed-parameter asymptotic results only and can be very misleading. See Leeb and Pötscher (2008) and Pötscher (2009) for more discussion.\]
A natural question now is what all these distributional results mean for confidence intervals that are based on penalized maximum likelihood (least squares) estimators. This is the question we address in the present paper in the context of a normal linear regression model with orthogonal regressors. In the known-variance case we obtain formulae for the finite-sample infimal coverage probabilities of fixed-width confidence intervals based on the following estimators: hard-thresholding, LASSO (soft-thresholding), and adaptive LASSO. We show that among those intervals the symmetric ones are the shortest, and we show that hard-thresholding leads to longer intervals than the adaptive LASSO, which in turn leads to longer intervals than the LASSO. All these intervals are longer than the standard confidence interval based on the maximum likelihood estimator, which is in line with Joshi (1969). In case the estimators are tuned to possess the ‘sparsity’ property, explicit asymptotic formulae for the length of the confidence intervals are furthermore obtained, showing that in this case the intervals based on the penalized maximum likelihood estimators are larger by an order of magnitude than the standard maximum likelihood based interval. This refines, for the particular estimators considered, a general result for confidence sets based on ‘sparse’ estimators (Pötscher (2009)). Additionally, in the ‘sparsely’ tuned case a simple asymptotic construction of confidence intervals is provided that also applies to other penalized maximum likelihood estimators such as the SCAD estimator. Furthermore, we show how the results for the known-variance case carry over to the unknown-variance case in an asymptotic sense.
The plan of the paper is as follows: After introducing the model and estimators in Section 2, the known-variance case is treated in Section 3 whereas the unknown-variance case is dealt with in Section 4. All proofs as well as some technical lemmata are relegated to the Appendix.
The Model and Estimators
========================
For a normal linear regression model with orthogonal regressors, distributional properties of penalized maximum likelihood (least squares) estimators with a separable penalty can be reduced to the case of a Gaussian location problem; for details see, e.g., Pötscher and Schneider (2009). Since we are only interested in confidence sets for individual components of the parameter vector in the regression that are based on such estimators, we shall hence suppose that the data $y_{1},\ldots ,y_{n}$ are independent identically distributed as $N(\theta ,\sigma ^{2})$, $\theta \in \mathbb{R}$, $0<\sigma <\infty $. \[This entails no loss of generality in the known-variance case. In the unknown-variance case an explicit treatment of the orthogonal linear model would differ from the analysis in the present paper only in that the estimator $\hat{\sigma}^{2}$ defined below would be replaced by the usual residual variance estimator from the least-squares regression; this would have no substantial effect on the results.\] We shall be concerned with confidence sets for $\theta $ based on penalized maximum likelihood estimators such as the hard-thresholding estimator, the LASSO (reducing to soft-thresholding in this setting), and the adaptive LASSO estimator. The hard-thresholding estimator $\tilde{\theta}_{H}$ is given by$$\tilde{\theta}_{H}:=\tilde{\theta}_{H}(\eta _{n})=\bar{y}\boldsymbol{1}(\left\vert \bar{y}\right\vert >\hat{\sigma}\eta _{n})$$where the threshold $\eta _{n}$ is a positive real number, $\bar{y}$ denotes the maximum likelihood estimator, i.e., the arithmetic mean of the data, and $\hat{\sigma}^{2}=(n-1)^{-1}\sum_{i=1}^{n}(y_{i}-\bar{y})^{2}$. Also define the infeasible estimator$$\hat{\theta}_{H}:=\hat{\theta}_{H}(\eta _{n})=\bar{y}\boldsymbol{1}(\left\vert \bar{y}\right\vert >\sigma \eta _{n})$$which uses the value of $\sigma $. The LASSO (or soft-thresholding) estimator $\tilde{\theta}_{S}$ is given by $$\tilde{\theta}_{S}:=\tilde{\theta}_{S}(\eta _{n})=\limfunc{sign}(\bar{y})(\left\vert \bar{y}\right\vert -\hat{\sigma}\eta _{n})_{+}$$and its infeasible version by$$\hat{\theta}_{S}:=\hat{\theta}_{S}(\eta _{n})=\limfunc{sign}(\bar{y})(\left\vert \bar{y}\right\vert -\sigma \eta _{n})_{+}.$$Here $\limfunc{sign}(x)$ is defined as $-1$, $0$, and $1$ in case $x<0$, $x=0 $, and $x>0$, respectively, and $z_{+}$ is shorthand for $\max \{z,0\}$. The adaptive LASSO estimator $\tilde{\theta}_{A}$ in this simple model is given by
$$\tilde{\theta}_{A}:=\tilde{\theta}_{A}(\eta _{n})=\bar{y}(1-\hat{\sigma}^{2}\eta _{n}^{2}/\bar{y}^{2})_{+}=\left\{
\begin{array}{cl}
0 & \text{if }\;|\bar{y}|\leq \hat{\sigma}\eta _{n} \\
\bar{y}-\hat{\sigma}^{2}\eta _{n}^{2}/\bar{y} & \text{if}\;\;|\bar{y}|>\hat{\sigma}\eta _{n},\end{array}\right.$$
and its infeasible counterpart by$$\hat{\theta}_{A}:=\hat{\theta}_{A}(\eta _{n})=\bar{y}(1-\sigma ^{2}\eta
_{n}^{2}/\bar{y}^{2})_{+}=\left\{
\begin{array}{cl}
0 & \text{if }\;|\bar{y}|\leq \sigma \eta _{n} \\
\bar{y}-\sigma ^{2}\eta _{n}^{2}/\bar{y} & \text{if}\;\;|\bar{y}|>\sigma
\eta _{n}.\end{array}\right.$$It coincides with the nonnegative Garotte in this simple model. For the feasible estimators we always need to assume $n\geq 2$, whereas for the infeasible estimators also $n=1$ is admissible.
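For readers who wish to experiment with these estimators, a minimal Python sketch of the infeasible (known-$\sigma$) versions is given below; the function names and the illustrative inputs are ours and not part of the formal development.

```python
import numpy as np

def theta_hard(ybar, eta, sigma=1.0):
    """Hard-thresholding: ybar * 1(|ybar| > sigma*eta)."""
    return ybar if abs(ybar) > sigma * eta else 0.0

def theta_soft(ybar, eta, sigma=1.0):
    """Soft-thresholding (LASSO): sign(ybar) * (|ybar| - sigma*eta)_+."""
    return np.sign(ybar) * max(abs(ybar) - sigma * eta, 0.0)

def theta_adaptive(ybar, eta, sigma=1.0):
    """Adaptive LASSO: ybar * (1 - sigma^2 eta^2 / ybar^2)_+."""
    if abs(ybar) <= sigma * eta:
        return 0.0
    return ybar - (sigma * eta) ** 2 / ybar

# Illustrative call: all three shrink ybar = 0.8 towards zero when eta = 0.5.
for f in (theta_hard, theta_soft, theta_adaptive):
    print(f.__name__, f(0.8, 0.5))
```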
Note that $\eta _{n}$ plays the rôle of a tuning parameter and it is most natural to let the estimators depend on the tuning parameter only via $\sigma \eta _{n}$ and $\hat{\sigma}\eta _{n}$, respectively, in order to take account of the scale of the data. This makes the estimators mentioned above scale equivariant. We shall often suppress dependence of the estimators on $\eta _{n}$ in the notation. In the following let $P_{n,\theta
,\sigma }$ denote the distribution of the sample when $\theta $ and $\sigma $ are the true parameters. Furthermore, let $\Phi $ denote the standard normal cumulative distribution function.
We also note the following obvious fact: Since hard- and soft-thresholding operate in a coordinatewise fashion, the results given below also apply mutatis mutandis to linear regressions with non-orthogonal regressors. Of course, the soft-thresholding estimator then no longer coincides with the LASSO estimator. We refrain from spelling out details.
Confidence Intervals: Known-Variance Case\[finite\_sample\]
===========================================================
In this section we consider the case where the variance $\sigma ^{2}$ is known, $n\geq 1$ holds, and we are interested in the finite-sample coverage properties of intervals of the form $[\hat{\theta}-\sigma a_{n},\hat{\theta}+\sigma b_{n}]$ where $a_{n}$ and $b_{n}$ are nonnegative real numbers and $\hat{\theta}$ stands for any one of the estimators $\hat{\theta}_{H}=\hat{\theta}_{H}(\eta _{n})$, $\hat{\theta}_{S}=\hat{\theta}_{S}(\eta _{n})$, or $\hat{\theta}_{A}=\hat{\theta}_{A}(\eta _{n})$. We shall also consider one-sided intervals $(-\infty ,\hat{\theta}+\sigma c_{n}]$ and $[\hat{\theta}-\sigma c_{n},\infty )$ with $0\leq c_{n}<\infty $. Let $p_{n}(\theta
;\sigma ,\eta _{n},a_{n},b_{n})=P_{n,\theta ,\sigma }\left( \theta \in
\lbrack \hat{\theta}-\sigma a_{n},\hat{\theta}+\sigma b_{n}]\right) $ denote the coverage probability. Due to the above-noted scale equivariance of the estimator $\hat{\theta}$, it is obvious that $$p_{n}(\theta ;\sigma ,\eta _{n},a_{n},b_{n})=p_{n}(\theta /\sigma ;1,\eta
_{n},a_{n},b_{n})$$holds, and the same is true for the one-sided intervals. In particular, it follows that the infimal coverage probabilities $\inf_{\theta \in \mathbb{R}}p_{n}(\theta ;\sigma ,\eta _{n},a_{n},b_{n})$ do not depend on $\sigma $. Therefore, we shall assume without loss of generality that $\sigma =1$ for the remainder of this section and we shall write $P_{n,\theta }$ for $P_{n,\theta ,1}$.
Infimal coverage probabilities in finite samples\[inf\_cov\_prob\]
------------------------------------------------------------------
We begin with soft-thresholding. Let $C_{S,n}$ denote the interval $[\hat{\theta}_{S}-a_{n},\hat{\theta}_{S}+b_{n}]$. We first determine the infimum of the coverage probability $p_{S,n}(\theta ):=p_{S,n}(\theta ;1,\eta
_{n},a_{n},b_{n})=P_{n,\theta }\left( \theta \in C_{S,n}\right) $ of this interval.
\[lasso\] For every $n\geq 1$, the infimal coverage probability of the interval $C_{S,n}$ is given by$$\inf_{\theta \in \mathbb{R}}p_{S,n}(\theta )=\left\{
\begin{array}{cc}
\Phi (n^{1/2}(a_{n}-\eta _{n}))-\Phi (n^{1/2}(-b_{n}-\eta _{n})) & \text{if
\ }a_{n}\leq b_{n} \\
\Phi (n^{1/2}(b_{n}-\eta _{n}))-\Phi (n^{1/2}(-a_{n}-\eta _{n})) & \text{if
\ }a_{n}>b_{n}.\end{array}\right. \label{infimal_soft_asym}$$
As a point of interest we note that $p_{S,n}(\theta )$ is a piecewise constant function with jumps at $\theta =-a_{n}$ and $\theta =b_{n}$.
Next we turn to hard-thresholding. Let $C_{H,n}$ denote the interval $[\hat{\theta}_{H}-a_{n},\hat{\theta}_{H}+b_{n}]$. The infimum of the coverage probability $p_{H,n}(\theta ):=p_{H,n}(\theta ;1,\eta
_{n},a_{n},b_{n})=P_{n,\theta }\left( \theta \in C_{H,n}\right) $ of this interval has been obtained in Proposition 3.1 in Pötscher (2009), which we repeat for convenience.
\[hard\] For every $n\geq 1$, the infimal coverage probability of the interval $C_{H,n}$ is given by$$\begin{aligned}
&&\inf_{\theta \in \mathbb{R}}p_{H,n}(\theta ) \label{infimal_hard_asymm} \\
&=&\left\{
\begin{array}{ll}
\Phi (n^{1/2}(a_{n}-\eta _{n}))-\Phi (-n^{1/2}b_{n}) & \text{if \ \ }\eta
_{n}\leq a_{n}+b_{n}\text{ \ and \ }a_{n}\leq b_{n} \\
\Phi (n^{1/2}(b_{n}-\eta _{n}))-\Phi (-n^{1/2}a_{n}) & \text{if \ \ }\eta
_{n}\leq a_{n}+b_{n}\text{ \ and \ }a_{n}>b_{n} \\
0 & \text{if \ \ }\eta _{n}>a_{n}+b_{n}.\end{array}\right. \notag\end{aligned}$$
For later use we observe that the interval $C_{H,n}$ has positive infimal coverage probability if and only if the length of the interval $a_{n}+b_{n}$ is larger than $\eta _{n}$. As a point of interest we also note that the coverage probability $p_{H,n}(\theta )$ is discontinuous (with discontinuity points at $\theta =-a_{n}$ and $\theta =b_{n}$). Furthermore, as discussed in Pötscher (2009), the infimum in (\[infimal\_hard\_asymm\]) is attained if $\eta _{n}>a_{n}+b_{n}$, but not in case $\eta _{n}\leq
a_{n}+b_{n}$.
Finally, we consider the adaptive LASSO. Let $C_{A,n}$ denote the interval $[\hat{\theta}_{A}-a_{n},\hat{\theta}_{A}+b_{n}]$. The infimum of the coverage probability $p_{A,n}(\theta ):=p_{A,n}(\theta ;1,\eta
_{n},a_{n},b_{n})=P_{n,\theta }\left( \theta \in C_{A,n}\right) $ of this interval is given next.
\[adLASSOinf\] For every $n\geq 1$, the infimal coverage probability of $C_{A,n}$ is given by$$\inf_{\theta \in \mathbb{R}}p_{A,n}(\theta )=\Phi (n^{1/2}(a_{n}-\eta
_{n}))-\Phi \left( n^{1/2}\left( (a_{n}-b_{n})/2-\sqrt{((a_{n}+b_{n})/2)^{2}+\eta _{n}^{2}}\right) \right)$$if $a_{n}\leq b_{n}$, and by$$\inf_{\theta \in \mathbb{R}}p_{A,n}(\theta )=\Phi (n^{1/2}(b_{n}-\eta
_{n}))-\Phi \left( n^{1/2}\left( (b_{n}-a_{n})/2-\sqrt{((a_{n}+b_{n})/2)^{2}+\eta _{n}^{2}}\right) \right)$$if $a_{n}>b_{n}$.
We note that $p_{A,n}$ is continuous except at $\theta =b_{n}$ and $\theta
=-a_{n}$, and that the infimum of $p_{A,n}$ is not attained, as can be seen from a simple refinement of the proof of Proposition \[adLASSOinf\].
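The infimal-coverage formulas of Propositions \[lasso\], \[hard\], and \[adLASSOinf\] are straightforward to evaluate numerically; the following Python sketch (our own, with illustrative values of $n$, $\eta_n$, $a_n$, $b_n$, and $\sigma=1$) collects them in one place.

```python
from math import sqrt
from scipy.stats import norm

Phi = norm.cdf

def inf_cov_soft(n, eta, a, b):
    """Infimal coverage of [theta_S - a, theta_S + b], Proposition [lasso]."""
    a, b = min(a, b), max(a, b)          # the formula is symmetric in (a, b)
    return Phi(sqrt(n) * (a - eta)) - Phi(sqrt(n) * (-b - eta))

def inf_cov_hard(n, eta, a, b):
    """Infimal coverage of [theta_H - a, theta_H + b], Proposition [hard]."""
    if eta > a + b:
        return 0.0
    a, b = min(a, b), max(a, b)
    return Phi(sqrt(n) * (a - eta)) - Phi(-sqrt(n) * b)

def inf_cov_adaptive(n, eta, a, b):
    """Infimal coverage of [theta_A - a, theta_A + b], Proposition [adLASSOinf]."""
    a, b = min(a, b), max(a, b)
    return Phi(sqrt(n) * (a - eta)) - Phi(
        sqrt(n) * ((a - b) / 2 - sqrt(((a + b) / 2) ** 2 + eta ** 2)))

# Illustrative values (n, eta_n, a_n, b_n are ours, sigma = 1).
n, eta = 100, 0.3
print([round(f(n, eta, 0.4, 0.4), 4)
       for f in (inf_cov_soft, inf_cov_hard, inf_cov_adaptive)])
```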
\[open\](i) If we consider the open interval $C_{S,n}^{o}=(\hat{\theta}_{S}-a_{n},\hat{\theta}_{S}+b_{n})$ the formula for the coverage probability becomes $$\begin{aligned}
P_{n,\theta }\left( \theta \in C_{S,n}^{o}\right) &=&[\Phi
(n^{1/2}(a_{n}-\eta _{n}))-\Phi (n^{1/2}(-b_{n}-\eta _{n}))]\boldsymbol{1}(\theta \leq -a_{n}) \\
&+&[\Phi (n^{1/2}(a_{n}+\eta _{n}))-\Phi (n^{1/2}(-b_{n}-\eta _{n}))]\boldsymbol{1}(-a_{n}<\theta <b_{n}) \\
&+&[\Phi (n^{1/2}(a_{n}+\eta _{n}))-\Phi (n^{1/2}(-b_{n}+\eta _{n}))]\boldsymbol{1}(b_{n}\leq \theta ).\end{aligned}$$As a consequence, the infimal coverage probability of $C_{S,n}^{o}$ is again given by (\[infimal\_soft\_asym\]). A fortiori, the half-open intervals $(\hat{\theta}_{n}-a_{n},\hat{\theta}_{n}+b_{n}]$ and $[\hat{\theta}_{n}-a_{n},\hat{\theta}_{n}+b_{n})$ then also have infimal coverage probability given by (\[infimal\_soft\_asym\]).
\(ii) For the open interval $C_{H,n}^{o}=(\hat{\theta}_{H}-a_{n},\hat{\theta}_{H}+b_{n})$ the coverage probability satisfies $$\begin{gathered}
P_{n,\theta }\left( \theta \in C_{H,n}^{o}\right) =P_{n,\theta }\left(
\theta \in C_{H,n}\right) \\
-[\boldsymbol{1}(\theta =b_{n})+\boldsymbol{1}(\theta =-a_{n})][\Phi
(n^{1/2}(-\theta +\eta _{n}))-\Phi (n^{1/2}(-\theta -\eta _{n}))].\end{gathered}$$Inspection of the proof of Proposition 3.1 in Pötscher (2009) then shows that $C_{H,n}^{o}$ has the same infimal coverage probability as $C_{H,n}$. However, now the infimum is always a minimum. Furthermore, the half-open intervals $(\hat{\theta}_{H}-a_{n},\hat{\theta}_{H}+b_{n}]$ and $[\hat{\theta}_{H}-a_{n},\hat{\theta}_{H}+b_{n})$ then a fortiori have infimal coverage probability given by (\[infimal\_hard\_asymm\]); for these intervals the infimum is attained if $\eta _{n}>a_{n}+b_{n}$, but not necessarily if $\eta
_{n}\leq a_{n}+b_{n}$.
\(iii) If $C_{A,n}^{o}$ denotes the open interval $(\hat{\theta}_{A}-a_{n},\hat{\theta}_{A}+b_{n})$, the formula for the coverage probability becomes$$\begin{gathered}
P_{n,\theta }\left( \theta \in C_{A,n}^{o}\right) = \\
\left\{
\begin{array}{ll}
\Phi \left( n^{1/2}\gamma ^{(-)}(\theta ,-a_{n})\right) -\Phi \left(
n^{1/2}\gamma ^{(-)}(\theta ,b_{n})\right) & \text{if }\;\theta \leq -a_{n}
\\
\Phi \left( n^{1/2}\gamma ^{(+)}(\theta ,-a_{n})\right) -\Phi \left(
n^{1/2}\gamma ^{(-)}(\theta ,b_{n})\right) & \text{if }\;-a_{n}<\theta
<b_{n} \\
\Phi \left( n^{1/2}\gamma ^{(+)}(\theta ,-a_{n})\right) -\Phi \left(
n^{1/2}\gamma ^{(+)}(\theta ,b_{n})\right) & \text{if }\;\theta \geq b_{n},\end{array}\right. \end{gathered}$$where $\gamma ^{(-)}$ and $\gamma ^{(+)}$ are defined in (\[gamma1\]) and (\[gamma2\]) in the Appendix. Again the coverage probability is continuous except at $\theta =b_{n}$ and $\theta =-a_{n}$ (and is continuous everywhere in the trivial case $a_{n}=b_{n}=0$). It is now easy to see that the infimal coverage probability of $C_{A,n}^{o}$ coincides with the infimal coverage probability of the closed interval $C_{A,n}$, the infimum of the coverage probability of $C_{A,n}^{o}$ now always being a minimum. Furthermore, the half-open intervals $(\hat{\theta}_{A}-a_{n},\hat{\theta}_{A}+b_{n}]$ and $[\hat{\theta}_{A}-a_{n},\hat{\theta}_{A}+b_{n})$ a fortiori have the same infimal coverage probability as $C_{A,n}$ and $C_{A,n}^{o}$.
\(iv) The one-sided intervals $(-\infty ,\hat{\theta}_{S}+c_{n}]$, $(-\infty ,\hat{\theta}_{S}+c_{n})$, $[\hat{\theta}_{S}-c_{n},\infty )$, $(\hat{\theta}_{S}-c_{n},\infty )$, $(-\infty ,\hat{\theta}_{H}+c_{n}]$, $(-\infty ,\hat{\theta}_{H}+c_{n})$, $[\hat{\theta}_{H}-c_{n},\infty )$, $(\hat{\theta}_{H}-c_{n},\infty )$, $(-\infty ,\hat{\theta}_{A}+c_{n}]$, $(-\infty ,\hat{\theta}_{A}+c_{n})$, $(\hat{\theta}_{A}-c_{n},\infty )$, and $[\hat{\theta}_{A}-c_{n},\infty )$, with $c_{n}$ a nonnegative real number, have infimal coverage probability $\Phi (n^{1/2}(c_{n}-\eta _{n}))$. This is easy to see for soft-thresholding, follows from the reasoning in Pötscher (2009) for hard-thresholding, and for the adaptive LASSO follows by similar, but simpler, reasoning as in the proof of Proposition \[adLASSOinf\].
Symmetric intervals are shortest\[symm\]
----------------------------------------
For the two-sided confidence sets considered above, we next show that given a prescribed infimal coverage probability the symmetric intervals are shortest. We then show that these shortest intervals are longer than the standard interval based on the maximum likelihood estimator and quantify the excess length of these intervals.
\[short\]For every $n\geq 1$ and every $\delta $ satisfying $0<\delta <1$ we have:
\(a) Among all intervals $C_{S,n}$ with infimal coverage probability not less than $\delta $ there is a unique shortest interval $C_{S,n}^{\ast }=[\hat{\theta}_{S}-a_{n,S}^{\ast },\hat{\theta}_{S}+b_{n,S}^{\ast }]$ characterized by $a_{n,S}^{\ast }=b_{n,S}^{\ast }$ with $a_{n,S}^{\ast }$ being the unique solution of $$\Phi (n^{1/2}(a_{n}-\eta _{n}))-\Phi (n^{1/2}(-a_{n}-\eta _{n}))=\delta .
\label{short_a_S}$$The interval $C_{S,n}^{\ast }$ has infimal coverage probability equal to $\delta $ and $a_{n,S}^{\ast }$ is positive.
\(b) Among all intervals $C_{H,n}$ with infimal coverage probability not less than $\delta $ there is a unique shortest interval $C_{H,n}^{\ast }=[\hat{\theta}_{H}-a_{n,H}^{\ast },\hat{\theta}_{H}+b_{n,H}^{\ast }]$ characterized by $a_{n,H}^{\ast }=b_{n,H}^{\ast }$ with $a_{n,H}^{\ast }$ being the unique solution of $$\Phi (n^{1/2}(a_{n}-\eta _{n}))-\Phi (-n^{1/2}a_{n})=\delta .
\label{short_a_H}$$The interval $C_{H,n}^{\ast }$ has infimal coverage probability equal to $\delta $ and $a_{n,H}^{\ast }$ satisfies $a_{n,H}^{\ast }>\eta _{n}/2$.
\(c) Among all intervals $C_{A,n}$ with infimal coverage probability not less than $\delta $ there is a unique shortest interval $C_{A,n}^{\ast }=[\hat{\theta}_{A}-a_{n,A}^{\ast },\hat{\theta}_{A}+b_{n,A}^{\ast }]$ characterized by $a_{n,A}^{\ast }=b_{n,A}^{\ast }$ with $a_{n,A}^{\ast }$ being the unique solution of$$\Phi (n^{1/2}(a_{n}-\eta _{n}))-\Phi \left( -n^{1/2}\sqrt{a_{n}^{2}+\eta
_{n}^{2}}\right) =\delta . \label{short_a_A}$$The interval $C_{A,n}^{\ast }$ has infimal coverage probability equal to $\delta $ and $a_{n,A}^{\ast }$ is positive.
In the statistically uninteresting case $\delta =0$ the interval with $a_{n}=b_{n}=0$ is the unique shortest interval in all three cases. However, in the case of the hard-thresholding estimator, any interval with $a_{n}=b_{n}$ and $a_{n}\leq \eta _{n}/2$ also has infimal coverage probability equal to zero.
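Since the left-hand sides of (\[short\_a\_S\])-(\[short\_a\_A\]) are strictly increasing in $a_{n}$, the shortest symmetric half-lengths can be computed by one-dimensional root finding; a sketch (Python with SciPy, illustrative parameter values of our own choosing, $\sigma =1$) is given below. Its output also illustrates the ordering of the half-lengths discussed next.

```python
from math import sqrt
from scipy.optimize import brentq
from scipy.stats import norm

Phi = norm.cdf

def half_length(kind, n, eta, delta):
    """Solve (short_a_S), (short_a_H) or (short_a_A) for the symmetric half-length."""
    r = sqrt(n)
    eq = {
        "soft":     lambda a: Phi(r * (a - eta)) - Phi(r * (-a - eta)) - delta,
        "hard":     lambda a: Phi(r * (a - eta)) - Phi(-r * a) - delta,
        "adaptive": lambda a: Phi(r * (a - eta)) - Phi(-r * sqrt(a * a + eta * eta)) - delta,
    }[kind]
    return brentq(eq, 1e-12, eta + 100.0 / r)   # each left-hand side is increasing in a

# Illustrative values of n, eta_n and delta (ours).
n, eta, delta = 100, 0.3, 0.95
a_S = half_length("soft", n, eta, delta)
a_H = half_length("hard", n, eta, delta)
a_A = half_length("adaptive", n, eta, delta)
a_ML = norm.ppf((1 + delta) / 2) / sqrt(n)
print(a_H, a_A, a_S, a_ML)   # expect a_H > a_A > a_S > a_ML
```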
Given that the distributions of the estimation errors $\hat{\theta}_{S}-\theta $, $\hat{\theta}_{H}-\theta $, and $\hat{\theta}_{A}-\theta $ are not symmetric (see Pötscher and Leeb (2009), Pötscher and Schneider (2009)), it may seem surprising at first glance that the shortest confidence intervals are symmetric. Some intuition for this phenomenon can be gained on the grounds that the distributions of the estimation errors under $\theta =\tau $ and $\theta =-\tau $ are mirror-images of one another.
The above theorem shows that given a prespecified $\delta $ ($0<\delta <1$), the shortest confidence set with infimal coverage probability equal to $\delta $ based on the soft-thresholding (LASSO) estimator is shorter than the corresponding interval based on the adaptive LASSO estimator, which in turn is shorter than the corresponding interval based on the hard-thresholding estimator. All three intervals are longer than the corresponding standard confidence interval based on the maximum likelihood estimator. That is, $$a_{n,H}^{\ast }>a_{n,A}^{\ast }>a_{n,S}^{\ast }>n^{-1/2}\Phi ^{-1}((1+\delta
)/2).$$Figure 1 below shows $n^{1/2}$ times the half-length of the shortest $\delta
$-level confidence intervals based on hard-thresholding, adaptive LASSO, soft-thresholding, and the maximum likelihood estimator, respectively, as a function of $n^{1/2}\eta _{n}$ for various values of $\delta $. The graphs illustrate that the intervals based on hard-thresholding, adaptive LASSO, and soft-thresholding substantially exceed the length of the maximum likelihood based interval except if $n^{1/2}\eta _{n}$ is very small. For large values of $n^{1/2}\eta _{n}$ the graphs suggest a linear increase in the length of the intervals based on the penalized estimators. This is formally confirmed in Section \[asy\_length\] below.
![$n^{1/2}a_{n,H}^{\ast }$, $n^{1/2}a_{n,A}^{\ast }$, $n^{1/2}a_{n,S}^{\ast }$ as a function of $n^{1/2}\eta _{n}$ for coverage probabilities $\protect\delta =0.5$, $0.8$, $0.9$, $0.95$. The horizontal line at height $\Phi^{-1}((1+\delta)/2)$ indicates $n^{1/2}$ times the half-length of the standard maximum likelihood based interval.](IVlengths2.ps){width="\textwidth"}
### Asymptotic behavior of the length\[asy\_length\]
It is well-known that as $n\rightarrow \infty $ two different regimes for the tuning parameter $\eta _{n}$ can be distinguished. In the first regime $\eta _{n}\rightarrow 0$ and $n^{1/2}\eta _{n}\rightarrow e$, $0<e<\infty $. This choice of tuning parameter leads to estimators $\hat{\theta}_{S}$, $\hat{\theta}_{H}$, and $\hat{\theta}_{A}$ that perform conservative model selection. In the second regime $\eta _{n}\rightarrow 0$ and $n^{1/2}\eta
_{n}\rightarrow \infty $, leading to estimators $\hat{\theta}_{S}$, $\hat{\theta}_{H}$, and $\hat{\theta}_{A}$ that perform consistent model selection (also known as the ‘sparsity property’); that is, with probability approaching $1$, the estimators are exactly zero if the true value $\theta
=0 $, and they are different from zero if $\theta \neq 0$. See Pötscher and Leeb (2009) and Pötscher and Schneider (2009) for a detailed discussion. We now discuss the asymptotic behavior, under the two regimes, of the half-length $a_{n,S}^{\ast }$, $a_{n,H}^{\ast }$, and $a_{n,A}^{\ast
} $ of the shortest intervals $C_{S,n}^{\ast }$, $C_{H,n}^{\ast }$, and $C_{A,n}^{\ast }$ with a fixed infimal coverage probability $\delta $, $0<\delta <1$.
If $\eta _{n}\rightarrow 0$ and $n^{1/2}\eta _{n}\rightarrow e$, $0<e<\infty
$, then it follows immediately from Theorem \[short\] that $n^{1/2}a_{n,S}^{\ast }$, $n^{1/2}a_{n,H}^{\ast }$, and $n^{1/2}a_{n,A}^{\ast
}$ converge to the unique solutions of $$\Phi (a-e)-\Phi (-a-e)=\delta , \label{asy_half-lenght_S_0}$$$$\Phi (a-e)-\Phi (-a)=\delta , \label{asy_half-lenght_H_0}$$and$$\Phi \left( \sqrt{a^{2}+e^{2}}\right) -\Phi (-a+e)=\delta ,
\label{asy_half-lenght_A_0}$$respectively. \[Actually, this is even true if $e=0$.\] Hence, while $a_{n,H}^{\ast }$, $a_{n,A}^{\ast }$, and $a_{n,S}^{\ast }$ are larger than the half-length $n^{-1/2}\Phi ^{-1}((1+\delta )/2)$ of the standard interval, they are of the same order $n^{-1/2}$.
The situation is different, however, if $\eta _{n}\rightarrow 0$ but $n^{1/2}\eta _{n}\rightarrow \infty $. In this case Theorem \[short\] shows that $$\Phi (n^{1/2}(a_{n,S}^{\ast }-\eta _{n}))\rightarrow \delta$$since $n^{1/2}(-a_{n,S}^{\ast }-\eta _{n})\leq -n^{1/2}\eta _{n}\rightarrow
-\infty $. In other words, $$a_{n,S}^{\ast }=\eta _{n}+n^{-1/2}\Phi ^{-1}(\delta )+o(n^{-1/2}).
\label{asy_half-length_S}$$Similarly, noting that $n^{1/2}a_{n,H}^{\ast }>n^{1/2}\eta _{n}/2\rightarrow
\infty $, we get$$a_{n,H}^{\ast }=\eta _{n}+n^{-1/2}\Phi ^{-1}(\delta )+o(n^{-1/2});
\label{asy_half-length_H}$$and since $n^{1/2}\sqrt{a_{n}^{2}+\eta _{n}^{2}}\geq n^{1/2}\eta
_{n}\rightarrow \infty $ we obtain$$a_{n,A}^{\ast }=\eta _{n}+n^{-1/2}\Phi ^{-1}(\delta )+o(n^{-1/2}).
\label{asy_half-length_A}$$\[Actually, the condition $\eta _{n}\rightarrow 0$ has not been used in the derivation of (\[asy\_half-length\_S\])-(\[asy\_half-length\_A\]).\] Hence, the intervals $C_{S,n}^{\ast }$, $C_{H,n}^{\ast }$, and $C_{A,n}^{\ast }$ are asymptotically of the same length. They are also longer than the standard interval by an order of magnitude: the ratio of each of $a_{n,S}^{\ast }$ ($a_{n,H}^{\ast }$, $a_{n,A}^{\ast }$, respectively) to the half-length of the standard interval is $n^{1/2}\eta _{n}$, which diverges to infinity. Hence, when the estimators $\hat{\theta}_{S}$, $\hat{\theta}_{H} $, and $\hat{\theta}_{A}$ are tuned to possess the ‘sparsity property’, the corresponding confidence sets become very large. For the particular intervals considered here this is a refinement of a general result in Pötscher (2009) for confidence sets based on arbitrary estimators possessing the ‘sparsity property’. \[We note that the sparsely tuned hard-thresholding estimator or the sparsely tuned adaptive LASSO (under an additional condition on $\eta _{n}$) are known to possess the so-called ‘oracle property’. In light of the ‘oracle property’ it is sometimes argued in the literature that valid confidence intervals based on these estimators with length proportional to $n^{-1/2}$ can be obtained. However, in light of the above discussion such intervals necessarily have infimal coverage probability that converges to zero and thus are not valid. This once more shows that *fixed-parameter* asymptotic results like the ‘oracle’ property can be dangerously misleading.\]
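The accuracy of the approximation $\eta _{n}+n^{-1/2}\Phi ^{-1}(\delta )$ in the ‘sparse’ regime can be checked numerically; the following sketch reuses the `half_length` helper from the previous snippet and an illustrative tuning sequence $\eta _{n}=n^{-1/4}$ of our own choosing.

```python
from math import sqrt
from scipy.stats import norm

# Requires the half_length helper from the previous sketch.
delta = 0.95
for n in (10**2, 10**4, 10**6):
    eta = n ** -0.25         # assumed tuning: eta_n -> 0, sqrt(n)*eta_n -> infinity
    approx = eta + norm.ppf(delta) / sqrt(n)
    exact = {k: half_length(k, n, eta, delta) for k in ("soft", "hard", "adaptive")}
    print(n, round(approx, 5), {k: round(v, 5) for k, v in exact.items()})
# All three exact half-lengths approach eta_n + n^{-1/2} Phi^{-1}(delta), and exceed
# the maximum likelihood half-length by a factor of order sqrt(n) * eta_n.
```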
A simple asymptotic confidence interval
---------------------------------------
The results for the finite-sample confidence intervals given in Section \[inf\_cov\_prob\] required a detailed case-by-case analysis based on the finite-sample distribution of the estimator on which the interval is based. If the estimators $\hat{\theta}_{S}$, $\hat{\theta}_{H}$, and $\hat{\theta}_{A}$ are tuned to possess the ‘sparsity property’, i.e., if the tuning parameter satisfies $\eta _{n}\rightarrow 0$ and $n^{1/2}\eta
_{n}\rightarrow \infty $, a simple asymptotic confidence interval construction relying on asymptotic results obtained in Pötscher and Leeb (2009) and Pötscher and Schneider (2009) is possible as shown below. An advantage of this construction is that it easily extends to other estimators like the smoothly clipped absolute deviation (SCAD) estimator when tuned to possess the ‘sparsity property’.
As shown in Pötscher and Leeb (2009) and Pötscher and Schneider (2009), the uniform rate of consistency of the ‘sparsely’ tuned estimators $\hat{\theta}_{S}$, $\hat{\theta}_{H}$, and $\hat{\theta}_{A}$ is not $n^{1/2} $, but only $\eta _{n}^{-1}$; furthermore, the limiting distributions of these estimators under the appropriate $\eta _{n}^{-1}$-scaling and under a moving-parameter asymptotic framework are always concentrated on the interval $[-1,1]$. These facts can be used to obtain the following result.
\[asy\]Suppose $\eta _{n}\rightarrow 0$ and $n^{1/2}\eta _{n}\rightarrow
\infty $. Let $\hat{\theta}$ stand for any of the estimators $\hat{\theta}_{S}(\eta _{n})$, $\hat{\theta}_{H}(\eta _{n})$, or $\hat{\theta}_{A}(\eta
_{n})$. Let $d$ be a real number, and define the interval $D_{n}=[\hat{\theta}-d\eta _{n},\hat{\theta}+d\eta _{n}]$. If $d>1$, the interval $D_{n}$ has infimal coverage probability converging to $1$, i.e., $$\lim_{n\rightarrow \infty }\inf_{\theta \in \mathbb{R}}P_{n,\theta }(\theta
\in D_{n})=1\text{.}$$If $d<1$, $$\lim_{n\rightarrow \infty }\inf_{\theta \in \mathbb{R}}P_{n,\theta }(\theta
\in D_{n})=0\text{.}$$
The asymptotic distributional results in the above proposition do not provide information on the case $d=1$. However, from the finite-sample results in Section \[inf\_cov\_prob\] we see that in this case the infimal coverage probability of $D_{n}$ converges to $1/2$.
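As a quick sanity check of Proposition \[asy\], the following Monte Carlo sketch (our own illustration, using the soft-thresholding estimator, $\sigma =1$, a single parameter value $\theta =1$ rather than the infimum over $\mathbb{R}$, and an assumed tuning sequence) shows the two regimes $d>1$ and $d<1$.

```python
import numpy as np

rng = np.random.default_rng(0)

def soft(ybar, eta):
    return np.sign(ybar) * np.maximum(np.abs(ybar) - eta, 0.0)

def coverage(n, eta, d, theta, reps=100_000):
    ybar = rng.normal(theta, 1.0 / np.sqrt(n), size=reps)   # sigma = 1
    theta_hat = soft(ybar, eta)
    return np.mean(np.abs(theta_hat - theta) <= d * eta)

n = 10_000
eta = n ** -0.25       # 'sparse' tuning: eta_n -> 0 and sqrt(n)*eta_n -> infinity
theta = 1.0            # a fixed parameter value well outside [-eta, eta]
print("d = 1.2:", coverage(n, eta, 1.2, theta))   # close to 1
print("d = 0.8:", coverage(n, eta, 0.8, theta))   # close to 0
```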
Since the interval $D_{n}$ for $d>1$ has asymptotic infimal coverage probability equal to one, one may wonder how much cruder this interval is compared to the finite-sample intervals $C_{S,n}^{\ast }$, $C_{H,n}^{\ast }$, and $C_{A,n}^{\ast }$ constructed in Section \[symm\], which have infimal coverage probability equal to a prespecified level $\delta $, $0<\delta <1$: The ratio of the half-length of $D_{n}$ to the half-length of the corresponding interval $C_{S,n}^{\ast }$, $C_{H,n}^{\ast }$, and $C_{A,n}^{\ast }$ is $$d(1+O(n^{-1/2}\eta _{n}^{-1}))=d(1+o(1))$$as can be seen from equations (\[asy\_half-length\_S\]), ([asy\_half-length\_H]{}), and (\[asy\_half-length\_A\]). Since $d$ can be chosen arbitrarily close to one, this ratio can be made arbitrarily close to one. This may sound somewhat strange, since we are comparing an interval with asymptotic infimal coverage probability $1$ with the shortest finite-sample confidence intervals that have a fixed infimal coverage probability $\delta $ less than $1$. The reason for this phenomenon is that, in the relevant moving-parameter asymptotic framework, the distribution of $\hat{\theta}-\theta $ is made up of a bias-component which in the worst case is of the order $\eta _{n}$ and a random component which is of the order $n^{-1/2}$. Since $\eta _{n}\rightarrow 0$ and $n^{1/2}\eta _{n}\rightarrow \infty $, the deterministic bias-component dominates the random component. This can also be gleaned from equations (\[asy\_half-length\_S\]), ([asy\_half-length\_H]{}), and (\[asy\_half-length\_A\]), where the level $\delta
$ enters the formula for the half-length only in the lower order term.
We note that using Theorem 19 in Pötscher and Leeb (2009) the same proof immediately shows that Proposition \[asy\] also holds for the smoothly clipped absolute deviation (SCAD) estimator when tuned to possess the ‘sparsity property’. In fact, the argument in the proof of the above proposition can be applied to a large class of post-model-selection estimators based on a consistent model selection procedure.
\(i) Suppose $D_{n}^{\prime }=[\hat{\theta}-d_{1}\eta _{n},\hat{\theta}+d_{2}\eta _{n}]$ where $\hat{\theta}$ stands for any of the estimators $\hat{\theta}_{S}$, $\hat{\theta}_{H}$, or $\hat{\theta}_{A}$. If $\min
(d_{1},d_{2})>1$, then the limit of the infimal coverage probability of $D_{n}^{\prime }$ is $1$; if $\max (d_{1},d_{2})<1$ then this limit is zero. This follows immediately from an inspection of the proof of Proposition [asy]{}.
\(ii) Proposition \[asy\] also remains correct if $D_{n}$ is replaced by the corresponding open interval. A similar comment applies to the open version of $D_{n}^{\prime }$.
Confidence Intervals: Unknown-Variance Case
===========================================
In this section we consider the case where the variance $\sigma ^{2}$ is unknown, $n\geq 2$, and we are interested in the coverage properties of intervals of the form $[\tilde{\theta}-\hat{\sigma}a_{n},\tilde{\theta}+\hat{\sigma}a_{n}]$ where $a_{n}$ is a nonnegative real number and $\tilde{\theta}
$ stands for any one of the estimators $\tilde{\theta}_{H}=\tilde{\theta}_{H}(\eta _{n})$, $\tilde{\theta}_{S}=\tilde{\theta}_{S}(\eta _{n})$, or $\tilde{\theta}_{A}=\tilde{\theta}_{A}(\eta _{n})$. For brevity we only consider symmetric intervals. A similar argument as in the known-variance case shows that we can assume without loss of generality that $\sigma =1$, and we shall do so in the sequel; in particular, this argument shows that the infimum with respect to $\theta $ of the coverage probability does not depend on $\sigma $.
Soft-thresholding
-----------------
Consider the interval $E_{S,n}=\left[ \tilde{\theta}_{S}-\hat{\sigma}a_{n},\tilde{\theta}_{S}+\hat{\sigma}a_{n}\right] $ where $a_{n}$ is a nonnegative real number and $\tilde{\theta}_{S}=\tilde{\theta}_{S}(\eta _{n})$. We then have$$P_{n,\theta }\left( \theta \in E_{S,n}\right) =\int_{0}^{\infty }P_{n,\theta
}\left( \theta \in E_{S,n}\left\vert \hat{\sigma}=s\right. \right) h_{n}(s)ds$$where $h_{n}$ is the density of $\hat{\sigma}$, i.e., $h_{n}$ is the density of the square root of a chi-square distributed random variable with $n-1$ degrees of freedom divided by the degrees of freedom. In view of independence of $\hat{\sigma}$ and $\bar{y}$ we obtain the following representation of the finite-sample coverage probability$$\begin{aligned}
P_{n,\theta }\left( \theta \in E_{S,n}\right) &=&\int_{0}^{\infty
}P_{n,\theta }\left( \theta \in \left[ \hat{\theta}_{S}(s\eta _{n})-sa_{n},\hat{\theta}_{S}(s\eta _{n})+sa_{n}\right] \right) h_{n}(s)ds \notag \\
&=&\int_{0}^{\infty }p_{S,n}\left( \theta ;1,s\eta _{n},sa_{n},sa_{n}\right)
h_{n}(s)ds \label{cov_unknown_S}\end{aligned}$$where $p_{S,n}$ is given in (\[cov\_S\]) in the Appendix.
We next determine the infimal coverage probability of $E_{S,n}$ in finite samples: It follows from (\[cov\_S\]), the dominated convergence theorem, and symmetry of the standard normal distribution that$$\begin{aligned}
\inf_{\theta \in \mathbb{R}}P_{n,\theta }\left( \theta \in E_{S,n}\right)
&\leq &\lim_{\theta \rightarrow \infty }\int_{0}^{\infty }p_{S,n}\left(
\theta ;1,s\eta _{n},sa_{n},sa_{n}\right) h_{n}(s)ds \notag \\
&=&\int_{0}^{\infty }\lim_{\theta \rightarrow \infty }p_{S,n}\left( \theta
;1,s\eta _{n},sa_{n},sa_{n}\right) h_{n}(s)ds \notag \\
&=&\int_{0}^{\infty }[\Phi (n^{1/2}s(a_{n}-\eta _{n}))-\Phi
(n^{1/2}s(-a_{n}-\eta _{n}))]h_{n}(s)ds \notag \\
&=&T_{n-1}(n^{1/2}(a_{n}-\eta _{n}))-T_{n-1}(n^{1/2}(-a_{n}-\eta _{n})),
\label{upper}\end{aligned}$$where $T_{n-1}$ is the cdf of a Student $t$-distribution with $n-1$ degrees of freedom. Furthermore, (\[infimal\_soft\_asym\]) shows that $$p_{S,n}\left( \theta ;1,s\eta _{n},sa_{n},sa_{n}\right) \geq \Phi
(n^{1/2}s(a_{n}-\eta _{n}))-\Phi (n^{1/2}s(-a_{n}-\eta _{n}))$$holds, whence we obtain from (\[cov\_unknown\_S\]) and (\[upper\]) the following expression for the infimal coverage probability of $E_{S,n}$:$$\inf_{\theta \in \mathbb{R}}P_{n,\theta }\left( \theta \in E_{S,n}\right)
=T_{n-1}(n^{1/2}(a_{n}-\eta _{n}))-T_{n-1}(n^{1/2}(-a_{n}-\eta _{n}))
\label{finite}$$for every $n\geq 2$. Remark \[open\] shows that the same relation is true for the corresponding open and half-open intervals.
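Formula (\[finite\]) is easy to evaluate; a small sketch (illustrative values, our own) compares it with the corresponding known-variance expression (\[infimal\_soft\_asym\]).

```python
from math import sqrt
from scipy.stats import norm, t

def inf_cov_soft_unknown(n, eta, a):
    """Infimal coverage of E_{S,n}, equation (finite), unknown-variance case."""
    r = sqrt(n)
    return t.cdf(r * (a - eta), df=n - 1) - t.cdf(r * (-a - eta), df=n - 1)

def inf_cov_soft_known(n, eta, a):
    """Known-variance counterpart from (infimal_soft_asym) with a_n = b_n = a."""
    r = sqrt(n)
    return norm.cdf(r * (a - eta)) - norm.cdf(r * (-a - eta))

# Illustrative values (ours): the two expressions come together as n grows.
for n in (10, 50, 500):
    print(n, round(inf_cov_soft_unknown(n, 0.3, 0.6), 4),
          round(inf_cov_soft_known(n, 0.3, 0.6), 4))
```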
Relation (\[finite\]) shows the following: suppose $1/2\leq \delta <1$ and $a_{n,S}^{\ast }$ solves (\[short\_a\_S\]), i.e., the corresponding interval $C_{S,n}^{\ast }$ has infimal coverage probability equal to $\delta $. Let $a_{n,S}^{\ast \ast }$ be the (unique) solution to $$T_{n-1}(n^{1/2}(a_{n}-\eta _{n}))-T_{n-1}(n^{1/2}(-a_{n}-\eta _{n}))=\delta ,$$i.e., the corresponding interval $E_{S,n}^{\ast \ast }=\left[ \tilde{\theta}_{S}-\hat{\sigma}a_{n,S}^{\ast \ast },\tilde{\theta}_{S}+\hat{\sigma}a_{n,S}^{\ast \ast }\right] $ has infimal coverage probability equal to $\delta $. Then $a_{n,S}^{\ast \ast }\geq a_{n,S}^{\ast }$ holds in view of Lemma \[l\_5\] in the Appendix. I.e., given the same infimal coverage probability $\delta \geq 1/2$, the expected length of the interval $E_{S,n}^{\ast \ast }$ based on $\tilde{\theta}_{S}$ is not smaller than the length of the interval $C_{S,n}^{\ast }$ based on $\hat{\theta}_{S}$.
Since $\left\Vert \Phi -T_{n-1}\right\Vert _{\infty }=\sup_{x\in \mathbb{R}}\left\vert \Phi (x)-T_{n-1}(x)\right\vert \rightarrow 0$ for $n\rightarrow
\infty $ holds by Polya’s theorem, the following result is an immediate consequence of (\[finite\]), Proposition \[lasso\], and Remark \[open\].
\[soft\_unknown\]For every sequence $a_{n}$ of nonnegative real numbers we have with $E_{S,n}=\left[ \tilde{\theta}_{S}-\hat{\sigma}a_{n},\tilde{\theta}_{S}+\hat{\sigma}a_{n}\right] $ and $C_{S,n}=\left[ \hat{\theta}_{S}-a_{n},\hat{\theta}_{S}+a_{n}\right] $ that$$\inf_{\theta \in \mathbb{R}}P_{n,\theta }\left( \theta \in E_{S,n}\right)
-\inf_{\theta \in \mathbb{R}}P_{n,\theta }\left( \theta \in C_{S,n}\right)
\rightarrow 0$$as $n\rightarrow \infty $. The analogous results hold for the corresponding open and half-open intervals.
We discuss this theorem together with the parallel results for hard-thresholding and adaptive LASSO based intervals in Section \[disc\].
Hard-thresholding
-----------------
Consider the interval $E_{H,n}=\left[ \tilde{\theta}_{H}-\hat{\sigma}a_{n},\tilde{\theta}_{H}+\hat{\sigma}a_{n}\right] $ where $a_{n}$ is a nonnegative real number and $\tilde{\theta}_{H}=\tilde{\theta}_{H}(\eta _{n})$. We then have analogously as in the preceding subsection that$$P_{n,\theta }\left( \theta \in E_{H,n}\right) =\int_{0}^{\infty
}p_{H,n}\left( \theta ;1,s\eta _{n},sa_{n},sa_{n}\right) h_{n}(s)ds.$$Note that $p_{H,n}\left( \theta ;1,s\eta _{n},sa_{n},sa_{n}\right) $ is symmetric in $\theta $ and for $\theta \geq 0$ is given by (see Pötscher (2009))$$\begin{aligned}
&&p_{H,n}\left( \theta ;1,s\eta _{n},sa_{n},sa_{n}\right) \\
&=&\left\{ \Phi (n^{1/2}(-\theta +s\eta _{n}))-\Phi (n^{1/2}(-\theta -s\eta
_{n}))\right\} \boldsymbol{1}\left( 0\leq \theta \leq sa_{n}\right) \\
&&+\max \left[ 0,\Phi (n^{1/2}sa_{n})-\Phi (n^{1/2}(-\theta +s\eta _{n}))\right] \boldsymbol{1}\left( sa_{n}<\theta \leq s\eta _{n}+sa_{n}\right) \\
&&+\left\{ \Phi (n^{1/2}sa_{n})-\Phi (-n^{1/2}sa_{n})\right\} \boldsymbol{1}\left( s\eta _{n}+sa_{n}<\theta \right)\end{aligned}$$in case $\eta _{n}>2a_{n}$, by$$\begin{aligned}
&&p_{H,n}\left( \theta ;1,s\eta _{n},sa_{n},sa_{n}\right) \\
&=&\left\{ \Phi (n^{1/2}(-\theta +s\eta _{n}))-\Phi (n^{1/2}(-\theta -s\eta
_{n}))\right\} \boldsymbol{1}\left( 0\leq \theta \leq s\eta
_{n}-sa_{n}\right) \\
&&+\left\{ \Phi (n^{1/2}sa_{n})-\Phi (n^{1/2}(-\theta -s\eta _{n}))\right\}
\boldsymbol{1}\left( s\eta _{n}-sa_{n}<\theta \leq sa_{n}\right) \\
&&+\left\{ \Phi (n^{1/2}sa_{n})-\Phi (n^{1/2}(-\theta +s\eta _{n}))\right\}
\boldsymbol{1}\left( sa_{n}<\theta \leq s\eta _{n}+sa_{n}\right) \\
&&+\left\{ \Phi (n^{1/2}sa_{n})-\Phi (-n^{1/2}sa_{n})\right\} \boldsymbol{1}\left( s\eta _{n}+sa_{n}<\theta \right)\end{aligned}$$if $a_{n}\leq \eta _{n}\leq 2a_{n}$, and by $$\begin{aligned}
&&p_{H,n}\left( \theta ;1,s\eta _{n},sa_{n},sa_{n}\right) \\
&=&\left\{ \Phi (n^{1/2}sa_{n})-\Phi (-n^{1/2}sa_{n})\right\} \left\{
\boldsymbol{1}\left( 0\leq \theta \leq sa_{n}-s\eta _{n}\right) +\boldsymbol{1}\left( s\eta _{n}+sa_{n}<\theta \right) \right\} \\
&&+\left\{ \Phi (n^{1/2}sa_{n})-\Phi (n^{1/2}(-\theta -s\eta _{n}))\right\}
\boldsymbol{1}\left( sa_{n}-s\eta _{n}<\theta \leq sa_{n}\right) \\
&&+\left\{ \Phi (n^{1/2}sa_{n})-\Phi (n^{1/2}(-\theta +s\eta _{n}))\right\}
\boldsymbol{1}\left( sa_{n}<\theta \leq s\eta _{n}+sa_{n}\right)\end{aligned}$$if $\eta _{n}<a_{n}$. In the subsequent theorems we consider only the case where $\eta _{n}\rightarrow 0$ as this is the only interesting case from an asymptotic perspective: note that any of the penalized maximum likelihood estimators considered in this paper is inconsistent for $\theta $ if $\eta
_{n}$ does not converge to zero.
\[hard\_unknown\]Suppose $\eta _{n}\rightarrow 0$. For every sequence $a_{n}$ of nonnegative real numbers we have with $E_{H,n}=\left[ \tilde{\theta}_{H}-\hat{\sigma}a_{n},\tilde{\theta}_{H}+\hat{\sigma}a_{n}\right] $ and $C_{H,n}=\left[ \hat{\theta}_{H}-a_{n},\hat{\theta}_{H}+a_{n}\right] $ that$$\inf_{\theta \in \mathbb{R}}P_{n,\theta }\left( \theta \in E_{H,n}\right)
-\inf_{\theta \in \mathbb{R}}P_{n,\theta }\left( \theta \in C_{H,n}\right)
\rightarrow 0$$as $n\rightarrow \infty $. The analogous results hold for the corresponding open and half-open intervals.
Adaptive LASSO
--------------
Consider the interval $E_{A,n}=[\tilde{\theta}_{A}-\hat{\sigma}a_{n},\tilde{\theta}_{A}+\hat{\sigma}a_{n}]$ where $a_{n}$ is a nonnegative real number and $\tilde{\theta}_{A}=\tilde{\theta}_{A}(\eta _{n})$. We then have analogously as in the preceding subsections that $$P_{n,\theta }(\theta \in E_{A,n})=\int_{0}^{\infty }p_{A,n}(\theta ;1,s\eta
_{n},sa_{n},sa_{n})h_{n}(s)ds$$where $p_{A,n}$ is given in (\[cov\]) in the Appendix.
\[adaptive\_unknown\]Suppose $\eta _{n}\rightarrow 0$. For every sequence $a_{n}$ of nonnegative real numbers we have with $E_{A,n}=\left[ \tilde{\theta}_{A}-\hat{\sigma}a_{n},\tilde{\theta}_{A}+\hat{\sigma}a_{n}\right] $ and $C_{A,n}=\left[ \hat{\theta}_{A}-a_{n},\hat{\theta}_{A}+a_{n}\right] $ that$$\inf_{\theta \in \mathbb{R}}P_{n,\theta }\left( \theta \in E_{A,n}\right)
-\inf_{\theta \in \mathbb{R}}P_{n,\theta }\left( \theta \in C_{A,n}\right)
\rightarrow 0$$as $n\rightarrow \infty $. The analogous results hold for the corresponding open and half-open intervals.
Discussion\[disc\]
------------------
Theorems \[soft\_unknown\], \[hard\_unknown\], and \[adaptive\_unknown\] show that the results in Section \[finite\_sample\] carry over to the unknown-variance case in an asymptotic sense: For example, suppose $0<\delta
<1$, and $a_{n,S}$ ($a_{n,H}$, $a_{n,A}$, respectively) is such that $E_{S,n} $ ($E_{H,n}$, $E_{A,n}$, respectively) has infimal coverage probability converging to $\delta $. Then, for a regime where $n^{1/2}\eta
_{n}\rightarrow e$ with $0\leq e<\infty $, it follows that $n^{1/2}a_{n,S}$, $n^{1/2}a_{n,H}$, and $n^{1/2}a_{n,A}$ have limits that solve ([asy\_half-lenght\_S\_0]{})-(\[asy\_half-lenght\_A\_0\]), respectively; that is, they have the same limits as $n^{1/2}a_{n,S}^{\ast }$, $n^{1/2}a_{n,H}^{\ast
}$, and $n^{1/2}a_{n,A}^{\ast }$, which are $n^{1/2}$ times the half-length of the shortest $\delta $-confidence intervals $C_{S,n}^{\ast }$, $C_{H,n}^{\ast }$, and $C_{A,n}^{\ast }$, respectively, in the known-variance case. Furthermore, for a regime where $n^{1/2}\eta _{n}\rightarrow \infty $ it follows that $a_{n,S}$, $a_{n,H}$, and $a_{n,A}$ satisfy ([asy\_half-length\_S]{})-(\[asy\_half-length\_A\]), respectively (where we also assume $\eta _{n}\rightarrow 0$ for hard-thresholding and the adaptive LASSO). Hence, $a_{n,S}$, $a_{n,H}$, and $a_{n,A}$ on the one hand, and $a_{n,S}^{\ast }$, $a_{n,H}^{\ast }$, and $a_{n,A}^{\ast }$ on the other hand have again the same asymptotic behavior. Furthermore, Theorems [soft\_unknown]{}, \[hard\_unknown\], and \[adaptive\_unknown\] show that Proposition \[asy\] immediately carries over to the unknown-variance case.
Appendix
========
**Proof of Proposition \[lasso\]:** Using the expression for the finite sample distribution of $n^{1/2}(\hat{\theta}_{S}-\theta )$ given in Pötscher and Leeb (2009) and noting that this distribution function has a jump at $-n^{1/2}\theta $ we obtain$$\begin{aligned}
p_{S,n}(\theta ) &=&[\Phi (n^{1/2}(a_{n}-\eta _{n}))-\Phi
(n^{1/2}(-b_{n}-\eta _{n}))]\boldsymbol{1}(\theta <-a_{n}) \notag \\
&+&[\Phi (n^{1/2}(a_{n}+\eta _{n}))-\Phi (n^{1/2}(-b_{n}-\eta _{n}))]\boldsymbol{1}(-a_{n}\leq \theta \leq b_{n}) \notag \\
&+&[\Phi (n^{1/2}(a_{n}+\eta _{n}))-\Phi (n^{1/2}(-b_{n}+\eta _{n}))]\boldsymbol{1}(b_{n}<\theta ). \label{cov_S}\end{aligned}$$It follows that $\inf_{\theta \in \mathbb{R}}p_{S,n}(\theta )$ is as given in the proposition. $\ \blacksquare $
**Proof of Proposition \[adLASSOinf\]:** The distribution function $F_{A,n,\theta }(x)=P_{n,\theta }(n^{1/2}(\hat{\theta}_{A}-\theta
)\leq x)$ of the adaptive LASSO estimator is given by $$\begin{aligned}
\boldsymbol{1}(x+n^{1/2}\theta &\geq &0)\Phi \left( -((n^{1/2}\theta -x)/2)+\sqrt{((n^{1/2}\theta +x)/2)^{2}+n\eta _{n}^{2}}\right) + \\
\boldsymbol{1}(x+n^{1/2}\theta &<&0)\Phi \left( -((n^{1/2}\theta -x)/2)-\sqrt{((n^{1/2}\theta +x)/2)^{2}+n\eta _{n}^{2}}\right) \end{aligned}$$(see Pötscher and Schneider (2009)). Hence, the coverage probability $p_{A,n}(\theta )=F_{A,n,\theta }(n^{1/2}a_{n})-\lim_{x\rightarrow
(-n^{1/2}b_{n})_{-}}F_{A,n,\theta }(x)$ is$$p_{A,n}(\theta )=\left\{
\begin{array}{ll}
\Phi \left( n^{1/2}\gamma ^{(-)}(\theta ,-a_{n})\right) -\Phi \left(
n^{1/2}\gamma ^{(-)}(\theta ,b_{n})\right) & \text{if }\;\theta <-a_{n} \\
\Phi \left( n^{1/2}\gamma ^{(+)}(\theta ,-a_{n})\right) -\Phi \left(
n^{1/2}\gamma ^{(-)}(\theta ,b_{n})\right) & \text{if }\;-a_{n}\leq \theta
\leq b_{n} \\
\Phi \left( n^{1/2}\gamma ^{(+)}(\theta ,-a_{n})\right) -\Phi \left(
n^{1/2}\gamma ^{(+)}(\theta ,b_{n})\right) & \text{if }\;\theta >b_{n}.\end{array}\right. \label{cov}$$Here$$\begin{aligned}
\gamma ^{(-)}(\theta ,x) &=&-((\theta +x)/2)-\sqrt{((\theta -x)/2)^{2}+\eta
_{n}^{2}} \label{gamma1} \\
\gamma ^{(+)}(\theta ,x) &=&-((\theta +x)/2)+\sqrt{((\theta -x)/2)^{2}+\eta
_{n}^{2}}, \label{gamma2}\end{aligned}$$which are clearly smooth functions of $(\theta ,x)$. Observe that $\gamma
^{(-)}$ and $\gamma ^{(+)}$ are nonincreasing in $\theta \in \mathbb{R}$ (for every $x\in \mathbb{R}$). As a consequence, we obtain for $-a_{n}\leq
\theta \leq b_{n}$ the lower bound$$\begin{aligned}
p_{A,n}(\theta ) &\geq &\Phi \left( n^{1/2}\gamma
^{(+)}(b_{n},-a_{n})\right) -\Phi \left( n^{1/2}\gamma
^{(-)}(-a_{n},b_{n})\right) \notag \\
&=&\Phi \left( n^{1/2}\left( (a_{n}-b_{n})/2+\sqrt{((a_{n}+b_{n})/2)^{2}+\eta _{n}^{2}}\right) \right) \notag \\
&&-\Phi \left( n^{1/2}\left( (a_{n}-b_{n})/2-\sqrt{((a_{n}+b_{n})/2)^{2}+\eta _{n}^{2}}\right) \right) . \label{lower_bound}\end{aligned}$$
Consider first the case where $a_{n}\leq b_{n}$. We then show that $p_{A,n}(\theta )$ is nonincreasing on $(-\infty ,-a_{n})$: The derivative $dp_{A,n}(\theta )/d\theta $ is given by $$\begin{aligned}
&&dp_{A,n}(\theta )/d\theta = \\
&&n^{1/2}[\phi (n^{1/2}\gamma ^{(-)}(\theta ,-a_{n}))\partial \gamma
^{(-)}(\theta ,-a_{n})/\partial \theta -\phi (n^{1/2}\gamma ^{(-)}(\theta
,b_{n}))\partial \gamma ^{(-)}(\theta ,b_{n})/\partial \theta ]\end{aligned}$$where $\phi $ denotes the standard normal density function. Using the relation $a_{n}\leq b_{n}$, elementary calculations show that $$\partial \gamma ^{(-)}(\theta ,-a_{n})/\partial \theta \leq \partial \gamma
^{(-)}(\theta ,b_{n})/\partial \theta \text{ \ \ \ for }\theta \in (-\infty
,-a_{n})\text{.}$$Furthermore, given $a_{n}\leq b_{n}$, it is not too difficult to see that $\left\vert \gamma ^{(-)}(\theta ,-a_{n})\right\vert \leq \left\vert \gamma
^{(-)}(\theta ,b_{n})\right\vert $ for $\theta \in (-\infty ,-a_{n})$ (cf. Lemma \[l\_1\] below), which implies that$$\phi (n^{1/2}\gamma ^{(-)}(\theta ,-a_{n}))\geq \phi (n^{1/2}\gamma
^{(-)}(\theta ,b_{n})).$$The last two displays together with the fact that $\partial \gamma
^{(-)}(\theta ,-a_{n})/\partial \theta $ as well as $\partial \gamma
^{(-)}(\theta ,b_{n})/\partial \theta $ are less than or equal to zero, imply that $dp_{A,n}(\theta )/d\theta \leq 0$ on $(-\infty ,-a_{n})$. This proves that $$\inf_{\theta <-a_{n}}p_{A,n}(\theta )=\lim_{\theta \rightarrow
(-a_{n})_{-}}p_{A,n}(\theta )=c$$with $$c=\Phi \left( n^{1/2}(a_{n}-\eta _{n})\right) -\Phi \left( n^{1/2}\left(
(a_{n}-b_{n})/2-\sqrt{((a_{n}+b_{n})/2)^{2}+\eta _{n}^{2}}\right) \right) .
\label{c}$$Since the lower bound given in (\[lower\_bound\]) is not less than $c$, we have $$\inf_{\theta \leq b_{n}}p_{A,n}(\theta )=\inf_{\theta <-a_{n}}p_{A,n}(\theta
)=c.$$It remains to show that $p_{A,n}(\theta )\geq c$ for $\theta >b_{n}$. From (\[cov\]) and (\[c\]) after rearranging terms we obtain for $\theta
>b_{n} $$$\begin{aligned}
p_{A,n}(\theta )-c &=&\left[ \Phi (n^{1/2}\gamma ^{(+)}(\theta
,-a_{n}))-\Phi (n^{1/2}\gamma ^{(-)}(-a_{n},-a_{n}))\right] - \\
&&\left[ \Phi (n^{1/2}\gamma ^{(+)}(\theta ,b_{n}))-\Phi (n^{1/2}\gamma
^{(-)}(-a_{n},b_{n}))\right] .\end{aligned}$$It is elementary to show that $\gamma ^{(+)}(\theta ,-a_{n}))\geq \gamma
^{(-)}(-a_{n},-a_{n})=a_{n}-\eta _{n}$ and $\gamma ^{(+)}(\theta
,b_{n}))\geq \gamma ^{(-)}(-a_{n},b_{n})$. We next show that$$\gamma ^{(+)}(\theta ,-a_{n})-\gamma ^{(-)}(-a_{n},-a_{n}))\geq \gamma
^{(+)}(\theta ,b_{n})-\gamma ^{(-)}(-a_{n},b_{n}). \label{comp_lenght}$$To establish this note that (\[comp\_lenght\]) can equivalently be rewritten as$$f(0)+f((\theta +a_{n})/2)\geq f((\theta -b_{n})/2)+f((a_{n}+b_{n})/2)
\label{ineq_f}$$where $f(x)=(x^{2}+\eta _{n}^{2})^{1/2}$. Observe that $0\leq (\theta
-b_{n})/2\leq (\theta +a_{n})/2$ holds since $0\leq a_{n}\leq b_{n}<\theta $. Writing $(\theta -b_{n})/2$ as $\lambda (\theta +a_{n})/2+(1-\lambda )0$ with $0\leq \lambda \leq 1$ gives $(a_{n}+b_{n})/2=(1-\lambda )(\theta
+a_{n})/2+\lambda 0$. Because $f$ is convex, the inequality (\[ineq\_f\]) and hence (\[comp\_lenght\]) follows.
Next observe that in case $a_{n}\geq \eta _{n}$ we have (using monotonicity of $\gamma ^{(+)}(\theta ,b_{n})$)$$0\leq \gamma ^{(-)}(-a_{n},-a_{n})=a_{n}-\eta _{n}\leq b_{n}-\eta
_{n}=-\gamma ^{(+)}(b_{n},b_{n})\leq -\gamma ^{(+)}(\theta ,b_{n})
\label{ineq_1}$$for $\theta >b_{n}$. In case $a_{n}<\eta _{n}$ we have (using $\gamma
^{(-)}(\theta ,x)=\gamma ^{(-)}(x,\theta )$ and monotonicity of $\gamma
^{(-)}$ in its first argument)$$\gamma ^{(-)}(-a_{n},b_{n})\leq \gamma ^{(-)}(-a_{n},-a_{n})=a_{n}-\eta
_{n}<0, \label{ineq_3}$$and (using monotonicity of $\gamma ^{(+)}$) $$\gamma ^{(-)}(-a_{n},b_{n})\leq -\gamma ^{(+)}(b_{n},-a_{n})\leq -\gamma
^{(+)}(\theta ,-a_{n}) \label{ineq_4}$$for $\theta >b_{n}$. Applying Lemma \[l\_2\] below with $\alpha
=n^{1/2}\gamma ^{(-)}(-a_{n},-a_{n})$, $\beta =n^{1/2}\gamma ^{(+)}(\theta
,-a_{n})$, $\gamma =n^{1/2}\gamma ^{(-)}(-a_{n},b_{n})$, and $\delta
=n^{1/2}\gamma ^{(+)}(\theta ,b_{n})$ and using (\[comp\_lenght\])-([ineq\_4]{}), establishes $p_{A,n}(\theta )-c\geq 0$. This completes the proof in case $a_{n}\leq b_{n}$.
The case $a_{n}>b_{n}$ follows from the observation that (\[cov\]) remains unchanged if $a_{n}$ and $b_{n}$ are interchanged and $\theta $ is replaced by $-\theta $. $\ \blacksquare $
\[l\_1\] Suppose $a_{n}\leq b_{n}$. Then $\left\vert \gamma ^{(-)}(\theta
,-a_{n})\right\vert \leq \left\vert \gamma ^{(-)}(\theta ,b_{n})\right\vert $ holds for $\theta \in (-\infty ,-a_{n})$.
Squaring both sides of the claimed inequality shows that the claim is equivalent to$$a_{n}^{2}/2-(a_{n}-\theta )\sqrt{((a_{n}+\theta )/2)^{2}+\eta ^{2}}\leq
b_{n}^{2}/2+(b_{n}+\theta )\sqrt{((b_{n}-\theta )/2)^{2}+\eta ^{2}}.$$But, for $\theta <-a_{n}$, the left-hand side of the preceding display is not larger than$$a_{n}^{2}/2+(a_{n}+\theta )\sqrt{((a_{n}-\theta )/2)^{2}+\eta ^{2}}.$$Since $a_{n}^{2}/2\leq b_{n}^{2}/2$, it hence suffices to show that $$-(a_{n}+\theta )\sqrt{((a_{n}-\theta )/2)^{2}+\eta ^{2}}\geq -(b_{n}+\theta )\sqrt{((b_{n}-\theta )/2)^{2}+\eta ^{2}}$$for $\theta <-a_{n}$. This is immediately seen by distinguishing the cases where $-b_{n}\leq \theta <-a_{n}$ and where $\theta <-b_{n}$, and observing that $a_{n}\leq b_{n}$.
The following lemma is elementary to prove.
\[l\_2\] Suppose $\alpha $, $\beta $, $\gamma $, and $\delta $ are real numbers satisfying $\alpha \leq \beta $, $\gamma \leq \delta $, and $\beta
-\alpha \geq \delta -\gamma $. If $0\leq \alpha \leq -\delta $, or if $\gamma \leq \alpha \leq 0$ and $\gamma \leq -\beta $, then $\Phi (\beta
)-\Phi (\alpha )\geq \Phi (\delta )-\Phi (\gamma )$.
**Proof of Theorem \[short\]:** (a) Since $\delta $ is positive, any solution to (\[short\_a\_S\]) has to be positive. Now the equation (\[short\_a\_S\]) has a unique solution $a_{n,S}^{\ast }$, since (\[short\_a\_S\]) as a function of $a_{n}\in \lbrack 0,\infty )$ is easily seen to be strictly increasing with range $[0,1)$. Furthermore, the infimal coverage probability (\[infimal\_soft\_asym\]) is a continuous function of the pair $(a_{n},b_{n})$ on $[0,\infty )\times \lbrack 0,\infty )$. Let $K\subseteq \lbrack 0,\infty )\times \lbrack 0,\infty )$ consist of all pairs $(a_{n},b_{n})$ such that (i) the corresponding interval $[\hat{\theta}_{S}-a_{n},\hat{\theta}_{S}+b_{n}]$ has infimal coverage probability not less than $\delta $, and (ii) the length $a_{n}+b_{n}$ is less than or equal $2a_{n,S}^{\ast }$. Then $K$ is compact. It is also nonempty as the pair $(a_{n,S}^{\ast },a_{n,S}^{\ast })$ belongs to $K$. Since the length $a_{n}+b_{n}$ is obviously continuous, it follows that there is a pair $(a_{n}^{o},b_{n}^{o})\in K$ having minimal length within $K$. Since confidence sets corresponding to pairs not belonging to $K$ always have length larger than $2a_{n,S}^{\ast }$, the pair $(a_{n}^{o},b_{n}^{o})$ gives rise to an interval with shortest length within the set of all intervals with infimal coverage probability not less than $\delta $. We next show that $a_{n}^{o}=b_{n}^{o}$ must hold: Suppose not, then we may assume without loss of generality that $a_{n}^{o}<b_{n}^{o}$, since ([infimal\_soft\_asym]{}) remains invariant under permutation of $a_{n}^{o}$ and $b_{n}^{o}$. But now increasing $a_{n}^{o}$ by $\varepsilon >0$ and decreasing $b_{n}^{o}$ by the same amount such that $a_{n}^{o}+\varepsilon
<b_{n}^{o}-\varepsilon $ holds, will result in an interval of the same length with infimal coverage probability$$\Phi (n^{1/2}(a_{n}^{o}+\varepsilon -\eta _{n}))-\Phi
(n^{1/2}(-(b_{n}^{o}-\varepsilon )-\eta _{n})).$$This infimal coverage probability will be strictly larger than$$\Phi (n^{1/2}(a_{n}^{o}-\eta _{n}))-\Phi (n^{1/2}(-b_{n}^{o}-\eta _{n}))\geq
\delta$$provided $\varepsilon $ is chosen sufficiently small. But then, by continuity of the infimal coverage probability as a function of $a_{n}$ and $b_{n}$, the interval $[\hat{\theta}_{S}-a_{n}^{o}-\varepsilon ,\hat{\theta}_{S}+b_{n}^{\prime }-\varepsilon ]$ with $\varepsilon <b_{n}^{\prime
}<b_{n}^{o}$ will still have infimal coverage probability not less than $\delta $ as long as $b_{n}^{\prime }$ is sufficiently close to $b_{n}^{o}$; at the same time this interval will be shorter than the interval $[\hat{\theta}_{S}-a_{n}^{o},\hat{\theta}_{S}+b_{n}^{o}]$. This leads to a contradiction and establishes $a_{n}^{o}=b_{n}^{o}$. By what was said at the beginning of the proof, it is now obvious that $a_{n}^{o}=b_{n}^{o}=a_{n,S}^{\ast }$ must hold, thus also establishing uniqueness. The last claim is obvious in view of the construction of $a_{n,S}^{\ast }$.
\(b) Since $\delta $ is positive, any solution to (\[short\_a\_H\]) has to be larger than $\eta _{n}/2$. Now equation (\[short\_a\_H\]) has a unique solution $a_{n,H}^{\ast }$, since (\[short\_a\_H\]) as a function of $a_{n}\in \lbrack \eta _{n}/2,\infty )$ is easily seen to be strictly increasing with range $[0,1)$. Furthermore, define $K$ similarly as in the proof of part (a). Then, by the same reasoning as in (a), the set $K$ is compact and non-empty, leading to a pair $(a_{n}^{o},b_{n}^{o})$ that gives rise to an interval with shortest length within the set of all intervals with infimal coverage probability not less than $\delta $. We next show that $a_{n}^{o}=b_{n}^{o}$ must hold: Suppose not, then we may again assume without loss of generality that $a_{n}^{o}<b_{n}^{o}$. Note that $a_{n}^{o}+b_{n}^{o}>\eta _{n}$ must hold, since the infimal coverage probability of the corresponding interval is positive by construction. Since all this entails $\left\vert a_{n}^{o}-\eta _{n}\right\vert <b_{n}^{o}$, increasing $a_{n}^{o}$ by $\varepsilon >0$ and decreasing $b_{n}^{o}$ by the same amount such that $a_{n}^{o}+\varepsilon <b_{n}^{o}-\varepsilon $ holds, will result in an interval of the same length with infimal coverage probability$$\begin{aligned}
\Phi (n^{1/2}(a_{n}^{o}+\varepsilon -\eta _{n}))-\Phi
(-n^{1/2}(b_{n}^{o}-\varepsilon )) &>& \\
\Phi (n^{1/2}(a_{n}^{o}-\eta _{n}))-\Phi (-n^{1/2}b_{n}^{o}) &\geq &\delta\end{aligned}$$provided $\varepsilon $ is chosen sufficiently small. By continuity of the infimal coverage probability as a function of $a_{n}$ and $b_{n}$, the interval $[\hat{\theta}_{S}-a_{n}^{o}-\varepsilon ,\hat{\theta}_{S}+b_{n}^{\prime }-\varepsilon ]$ with $\varepsilon <b_{n}^{\prime
}<b_{n}^{o}$ will still have infimal coverage probability not less than $\delta $ as long as $b_{n}^{\prime }$ is sufficiently close to $b_{n}^{o}$; at the same time this interval will be shorter than the interval $[\hat{\theta}_{S}-a_{n}^{o},\hat{\theta}_{S}+b_{n}^{o}]$, leading to a contradiction thus establishing $a_{n}^{o}=b_{n}^{o}$. As in (a) it now follows that $a_{n}^{o}=b_{n}^{o}=a_{n,H}^{\ast }$ must hold, thus also establishing uniqueness. The last claim is then obvious in view of the construction of $a_{n,H}^{\ast }$.
\(c) Since $\delta $ is positive, it is easy to see that any solution to ([short\_a\_A]{}) has to be positive. Now equation (\[short\_a\_A\]) has a unique solution $a_{n,A}^{\ast }$, since (\[short\_a\_A\]) as a function of $a_{n}\in \lbrack 0,\infty )$ is strictly increasing with range $[0,1)$. Furthermore, the infimal coverage probability as given in Proposition [adLASSOinf]{} is a continuous function of the pair $(a_{n},b_{n})$ on $[0,\infty )\times \lbrack 0,\infty )$. Define $K$ similarly as in the proof of part (a). Then by the same reasoning as in (a), the set $K$ is compact and non-empty, leading to a pair $(a_{n}^{o},b_{n}^{o})$ that gives rise to an interval with shortest length within the set of all intervals with infimal coverage probability not less than $\delta $. We next show that $a_{n}^{o}=b_{n}^{o}$ must hold: Suppose not, then we may again assume without loss of generality that $a_{n}^{o}<b_{n}^{o}$. But now increasing $a_{n}^{o}$ by $\varepsilon >0$ and decreasing $b_{n}^{o}$ by the same amount such that $a_{n}^{o}+\varepsilon <b_{n}^{o}-\varepsilon $ holds, will result in an interval of the same length with infimal coverage probability$$\begin{aligned}
&&\Phi (n^{1/2}(a_{n}^{o}+\varepsilon -\eta _{n}))-\Phi \left( n^{1/2}\left(
\varepsilon +(a_{n}^{o}-b_{n}^{o})/2-\sqrt{((a_{n}^{o}+b_{n}^{o})/2)^{2}+\eta _{n}^{2}}\right) \right) > \\
&&\Phi (n^{1/2}(a_{n}^{o}-\eta _{n}))-\Phi \left( n^{1/2}\left(
(a_{n}^{o}-b_{n}^{o})/2-\sqrt{((a_{n}^{o}+b_{n}^{o})/2)^{2}+\eta _{n}^{2}}\right) \right) \geq \delta ,\end{aligned}$$provided $\varepsilon $ is chosen sufficiently small. This is so since $a_{n}^{o}<b_{n}^{o}$ implies $$\left\vert a_{n}^{o}-\eta _{n}\right\vert <\left\vert
(a_{n}^{o}-b_{n}^{o})/2-\sqrt{((a_{n}^{o}+b_{n}^{o})/2)^{2}+\eta _{n}^{2}}\right\vert$$as is easily seen. But then, by continuity of the infimal coverage probability as a function of $a_{n}$ and $b_{n}$, the interval $[\hat{\theta}_{S}-a_{n}^{o}-\varepsilon ,\hat{\theta}_{S}+b_{n}^{\prime }-\varepsilon ]$ with $\varepsilon <b_{n}^{\prime }<b_{n}^{o}$ will still have infimal coverage probability not less than $\delta $ as long as $b_{n}^{\prime }$ is sufficiently close to $b_{n}^{o}$; at the same time this interval will be shorter than the interval $[\hat{\theta}_{S}-a_{n}^{o},\hat{\theta}_{S}+b_{n}^{o}]$. This leads to a contradiction and establishes $a_{n}^{o}=b_{n}^{o}$. As in (a) it now follows that $a_{n}^{o}=b_{n}^{o}=a_{n,A}^{\ast }$ must hold, thus also establishing uniqueness. The last claim is obvious in view of the construction of $a_{n,A}^{\ast }$. $\ \blacksquare $
**Proof of Proposition \[asy\]:** Let $$c=\liminf_{n\rightarrow \infty }\inf_{\theta \in \mathbb{R}}P_{n,\theta
}\left( -d\leq \eta _{n}^{-1}(\hat{\theta}-\theta )\leq d\right) .$$By definition of $c$, we can find a subsequence $n_{k}$ and elements $\theta
_{n_{k}}\in \mathbb{R}$ such that$$P_{n_{k},\theta _{n_{k}}}\left( -d\leq \eta _{n_{k}}^{-1}(\hat{\theta}-\theta _{n_{k}})\leq d\right) \rightarrow c$$for $k\rightarrow \infty $. Now, by Theorem 17 (for $\hat{\theta}=\hat{\theta}_{H}$), Theorem 18 (for $\hat{\theta}=\hat{\theta}_{S}$), and Remark 12 in Pötscher and Leeb (2009), and by Theorem 6 (for $\hat{\theta}=\hat{\theta}_{A}$) and Remark 7 in Pötscher and Schneider (2009), any accumulation point of the distribution of $\eta _{n_{k}}^{-1}(\hat{\theta}-\theta
_{n_{k}})$ with respect to weak convergence is a probability measure concentrated on $[-1,1]$. Since $d>1$, it follows that $c=1$ must hold, which proves the first claim. We next prove the second claim. In view of Theorem 17 (for $\hat{\theta}=\hat{\theta}_{H}$) and Theorem 18 (for $\hat{\theta}=\hat{\theta}_{S}$) in Pötscher and Leeb (2009), and in view of Theorem 6 (for $\hat{\theta}=\hat{\theta}_{A}$) in Pötscher and Schneider (2009) it is possible to choose a sequence $\theta _{n}\in \mathbb{R}$ such that the distribution of $\eta _{n}^{-1}(\hat{\theta}-\theta _{n})$ converges to point mass located at one of the endpoints of the interval $[-1,1]$. But then clearly $$P_{n,\theta _{n}}\left( -d\leq \eta _{n}^{-1}(\hat{\theta}-\theta _{n})\leq
d\right) \rightarrow 0$$for $d<1$ which implies the second claim. $\ \blacksquare $
**Proof of Theorem \[hard\_unknown\]:** We prove the result for the closed interval. Inspection of the proof together with Remark \[open\] then gives the result for the open and half-open intervals.
Step 1: Observe that for every $s>0$ and $n\geq 2$ we have from the above formulae for $p_{H,n}$ that$$\lim_{\theta \rightarrow \infty }p_{H,n}\left( \theta ;1,s\eta
_{n},sa_{n},sa_{n}\right) =\Phi (n^{1/2}sa_{n})-\Phi (-n^{1/2}sa_{n}).$$By the dominated convergence theorem it follows that for $\theta \rightarrow
\infty $$$\begin{aligned}
P_{n,\theta }\left( \theta \in E_{H,n}\right) &=&\int_{0}^{\infty
}p_{H,n}\left( \theta ;1,s\eta _{n},sa_{n},sa_{n}\right) h_{n}(s)ds \\
&\rightarrow &\int_{0}^{\infty }\left[ \Phi (n^{1/2}sa_{n})-\Phi
(-n^{1/2}sa_{n})\right] h_{n}(s)ds \\
&=&T_{n-1}(n^{1/2}a_{n})-T_{n-1}(-n^{1/2}a_{n}).\end{aligned}$$Hence, $$\inf_{\theta \in \mathbb{R}}P_{n,\theta }\left( \theta \in C_{H,n}\right)
\leq \lim_{\theta \rightarrow \infty }p_{H,n}\left( \theta ;1,\eta
_{n},a_{n},a_{n}\right) =\Phi (n^{1/2}a_{n})-\Phi (-n^{1/2}a_{n})$$and$$\inf_{\theta \in \mathbb{R}}P_{n,\theta }\left( \theta \in E_{H,n}\right)
\leq T_{n-1}(n^{1/2}a_{n})-T_{n-1}(-n^{1/2}a_{n})\leq \Phi
(n^{1/2}a_{n})-\Phi (-n^{1/2}a_{n}), \label{upper_bound}$$the last inequality following from well-known properties of $T_{n-1}$, cf. Lemma \[l\_5\] below. This proves the theorem in case $n^{1/2}a_{n}\rightarrow 0$ for $n\rightarrow \infty $.
Step 2: For every $s>0$ and $n\geq 2$ we have from (\[infimal\_hard\_asymm\])$$\begin{aligned}
\inf_{\theta \in \mathbb{R}}P_{n,\theta }\left( \theta \in C_{H,n}\right)
&=&\inf_{\theta \in \mathbb{R}}p_{H,n}\left( \theta ;1,\eta
_{n},a_{n},a_{n}\right) \notag \\
&=&\max \left[ \Phi (n^{1/2}a_{n})-\Phi (-n^{1/2}(a_{n}-\eta _{n})),0\right]
\label{low_bound1}\end{aligned}$$and$$\inf_{\theta \in \mathbb{R}}p_{H,n}\left( \theta ;1,s\eta
_{n},sa_{n},sa_{n}\right) =\max \left[ \Phi (n^{1/2}sa_{n})-\Phi
(n^{1/2}(-sa_{n}+s\eta _{n})),0\right] .$$Furthermore,$$\begin{aligned}
\inf_{\theta \in \mathbb{R}}P_{n,\theta }\left( \theta \in E_{H,n}\right)
&\geq &\int_{0}^{\infty }\inf_{\theta \in \mathbb{R}}p_{H,n}\left( \theta
;1,s\eta _{n},sa_{n},sa_{n}\right) h_{n}(s)ds \notag \\
&=&\int_{0}^{\infty }\max \left[ \Phi (n^{1/2}sa_{n})-\Phi
(n^{1/2}(-sa_{n}+s\eta _{n})),0\right] h_{n}(s)ds \notag \\
&=&\max \left[ \int_{0}^{\infty }\left[ \Phi (n^{1/2}sa_{n})-\Phi
(n^{1/2}(-sa_{n}+s\eta _{n}))\right] h_{n}(s)ds,0\right] \notag \\
&=&\max \left[ T_{n-1}(n^{1/2}a_{n})-T_{n-1}(-n^{1/2}(a_{n}-\eta _{n})),0\right] . \label{low_bound2}\end{aligned}$$If $n^{1/2}(a_{n}-\eta _{n})\rightarrow \infty $, then the far right-hand sides of (\[low\_bound1\]) and (\[low\_bound2\]) converge to $1$, since $\left\Vert \Phi -T_{n-1}\right\Vert _{\infty }\rightarrow 0$ as $n\rightarrow \infty $ by Polya’s Theorem and since $n^{1/2}a_{n}\geq
n^{1/2}(a_{n}-\eta _{n})$. This proves the theorem in case $n^{1/2}(a_{n}-\eta _{n})\rightarrow \infty $.
Step 3: If $n^{1/2}\eta _{n}\rightarrow 0$, then (\[low\_bound1\]) and the fact that $\Phi $ is globally Lipschitz show that $\inf_{\theta \in \mathbb{R}}P_{n,\theta }\left( \theta \in C_{H,n}\right) $ differs from $\Phi
(n^{1/2}a_{n})-\Phi (-n^{1/2}a_{n})$ only by a term that is $o(1)$. Similarly, (\[upper\_bound\]), (\[low\_bound2\]), the fact that $\left\Vert \Phi -T_{n-1}\right\Vert _{\infty }\rightarrow 0$ as $n\rightarrow \infty $ by Polya’s theorem, and the global Lipschitz property of $\Phi $ show that the same is true for $\inf_{\theta \in \mathbb{R}}P_{n,\theta }\left( \theta \in E_{H,n}\right) $, proving the theorem in case $n^{1/2}\eta _{n}\rightarrow 0$.
Step 4: By a subsequence argument and Steps 1-3 it remains to prove the theorem under the assumption that $n^{1/2}a_{n}$ and $n^{1/2}\eta _{n}$ are bounded away from zero by a finite positive constant $c_{1}$, say, and that $n^{1/2}(a_{n}-\eta _{n})$ is bounded from above by a finite constant $c_{2}$, say. It then follows that $a_{n}/\eta _{n}$ is bounded by a finite positive constant $c_{3}$, say. For given $\varepsilon >0$ set $\theta
_{n}(\varepsilon )=a_{n}(1+2c(\varepsilon )n^{-1/2})$ where $c(\varepsilon )$ is the constant given in Lemma \[l\_3\]. We then have for $s\in \lbrack
1-c(\varepsilon )n^{-1/2},1+c(\varepsilon )n^{-1/2}]$$$sa_{n}<\theta _{n}(\varepsilon )\leq s(\eta _{n}+a_{n})$$whenever $n>n_{0}(c(\varepsilon ),c_{3})$. Without loss of generality we may choose $n_{0}(c(\varepsilon ),c_{3})$ large enough such that also $1-c(\varepsilon )n^{-1/2}>0$ holds for $n>n_{0}(c(\varepsilon ),c_{3})$. Consequently, we have (observing that $\max (0,x)$ has Lipschitz constant $1$ and $\Phi $ has Lipschitz constant $(2\pi )^{-1/2}$) for every $s\in \lbrack
1-c(\varepsilon )n^{-1/2},1+c(\varepsilon )n^{-1/2}]$ and $n>n_{0}(c(\varepsilon ),c_{3})$$$\begin{aligned}
&&\left\vert p_{H,n}\left( \theta _{n}(\varepsilon );1,s\eta
_{n},sa_{n},sa_{n}\right) -p_{H,n}\left( \theta _{n}(\varepsilon );1,\eta
_{n},a_{n},a_{n}\right) \right\vert \\
&=&\left\vert \max (0,\Phi (n^{1/2}sa_{n})-\Phi (n^{1/2}(-\theta
_{n}(\varepsilon )+s\eta _{n})))-\max (0,\Phi (n^{1/2}a_{n})-\Phi
(n^{1/2}(-\theta _{n}(\varepsilon )+\eta _{n})))\right\vert \\
&\leq &\left\vert \left[ \Phi (n^{1/2}sa_{n})-\Phi (n^{1/2}(-\theta
_{n}(\varepsilon )+s\eta _{n}))\right] -\left[ \Phi (n^{1/2}a_{n})-\Phi
(n^{1/2}(-\theta _{n}(\varepsilon )+\eta _{n}))\right] \right\vert \\
&\leq &(2\pi )^{-1/2}n^{1/2}(a_{n}+\eta _{n})\left\vert s-1\right\vert \leq
(2\pi )^{-1/2}c(\varepsilon )(a_{n}+\eta _{n})\leq (2\pi
)^{-1/2}c(\varepsilon )(c_{3}+1)\eta _{n}.\end{aligned}$$It follows that for every $n>n_{0}(c(\varepsilon ),c_{3})$$$\begin{aligned}
&&\inf_{\theta \in \mathbb{R}}\int_{0}^{\infty }p_{H,n}\left( \theta
;1,s\eta _{n},sa_{n},sa_{n}\right) h_{n}(s)ds \\
&\leq &\int_{0}^{\infty }p_{H,n}\left( \theta _{n}(\varepsilon );1,s\eta
_{n},sa_{n},sa_{n}\right) h_{n}(s)ds \\
&=&\int_{1-c(\varepsilon )n^{-1/2}}^{1+c(\varepsilon )n^{-1/2}}p_{H,n}\left(
\theta _{n}(\varepsilon );1,s\eta _{n},sa_{n},sa_{n}\right) h_{n}(s)ds \\
&&+\int_{\left\{ s:\left\vert s-1\right\vert \geq c(\varepsilon
)n^{-1/2}\right\} }p_{H,n}\left( \theta _{n}(\varepsilon );1,s\eta
_{n},sa_{n},sa_{n}\right) h_{n}(s)ds \\
&=&B_{1}+B_{2}.\end{aligned}$$Clearly, $0\leq B_{2}\leq \varepsilon $ holds, cf. Lemma \[l\_3\], and for $B_{1}$ we have$$\begin{aligned}
&&\left\vert B_{1}-p_{H,n}\left( \theta _{n}(\varepsilon );1,\eta
_{n},a_{n},a_{n}\right) \right\vert \\
&\leq &\left\vert \int_{1-c(\varepsilon )n^{-1/2}}^{1+c(\varepsilon
)n^{-1/2}}\left[ p_{H,n}\left( \theta _{n}(\varepsilon );1,s\eta
_{n},sa_{n},sa_{n}\right) -p_{H,n}\left( \theta _{n}(\varepsilon );1,\eta
_{n},a_{n},a_{n}\right) \right] h_{n}(s)ds\right\vert +\varepsilon \\
&\leq &(2\pi )^{-1/2}c(\varepsilon )(c_{3}+1)\eta _{n}+\varepsilon \end{aligned}$$for $n>n_{0}(c(\varepsilon ),c_{3})$. It follows that $$\begin{aligned}
&&\inf_{\theta \in \mathbb{R}}\int_{0}^{\infty }p_{H,n}\left( \theta
;1,s\eta _{n},sa_{n},sa_{n}\right) h_{n}(s)ds \\
&\leq &p_{H,n}\left( \theta _{n}(\varepsilon );1,\eta
_{n},a_{n},a_{n}\right) +(2\pi )^{-1/2}c(\varepsilon )(c_{3}+1)\eta
_{n}+2\varepsilon \end{aligned}$$holds for $n>n_{0}(c(\varepsilon ),c_{3})$. Now $$\begin{aligned}
p_{H,n}\left( \theta _{n}(\varepsilon );1,\eta _{n},a_{n},a_{n}\right)
&=&\max (0,\Phi (n^{1/2}a_{n})-\Phi (n^{1/2}(-\theta _{n}(\varepsilon )+\eta
_{n}))) \\
&=&\max (0,\Phi (n^{1/2}a_{n})-\Phi (n^{1/2}(-a_{n}(1+2c(\varepsilon
)n^{-1/2})+\eta _{n}))).\end{aligned}$$But this differs from $\inf_{\theta \in \mathbb{R}}P_{n,\theta }\left(
\theta \in C_{H,n}\right) =\max (0,\Phi (n^{1/2}a_{n})-\Phi
(n^{1/2}(-a_{n}+\eta _{n})))$ by at most$$\begin{aligned}
&&\left\vert \Phi (n^{1/2}(-a_{n}+\eta _{n}))-\Phi
(n^{1/2}(-a_{n}(1+2c(\varepsilon )n^{-1/2})+\eta _{n}))\right\vert \\
&\leq &(2\pi )^{-1/2}2c(\varepsilon )a_{n}\leq (2\pi )^{-1/2}2c(\varepsilon
)c_{3}\eta _{n}.\end{aligned}$$Consequently, for $n>n_{0}(c(\varepsilon ),c_{3})$ $$\begin{aligned}
\inf_{\theta \in \mathbb{R}}P_{n,\theta }\left( \theta \in E_{H,n}\right)
&=&\inf_{\theta \in \mathbb{R}}\int_{0}^{\infty }p_{H,n}\left( \theta
;1,s\eta _{n},sa_{n},sa_{n}\right) h_{n}(s)ds \\
&\leq &\max (0,\Phi (n^{1/2}a_{n})-\Phi (n^{1/2}(-a_{n}+\eta _{n})))+(2\pi
)^{-1/2}c(\varepsilon )(3c_{3}+1)\eta _{n}+2\varepsilon \\
&=&\inf_{\theta \in \mathbb{R}}P_{n,\theta }\left( \theta \in C_{H,n}\right)
+(2\pi )^{-1/2}c(\varepsilon )(3c_{3}+1)\eta _{n}+2\varepsilon .\end{aligned}$$On the other hand, $$\begin{aligned}
\inf_{\theta \in \mathbb{R}}P_{n,\theta }\left( \theta \in E_{H,n}\right)
&=&\inf_{\theta \in \mathbb{R}}\int_{0}^{\infty }p_{H,n}\left( \theta
;1,s\eta _{n},sa_{n},sa_{n}\right) h_{n}(s)ds \\
&\geq &\int_{0}^{\infty }\inf_{\theta \in \mathbb{R}}p_{H,n}\left( \theta
;1,s\eta _{n},sa_{n},sa_{n}\right) h_{n}(s)ds \\
&=&\int_{0}^{\infty }\max (0,\Phi (n^{1/2}sa_{n})-\Phi (n^{1/2}s(-a_{n}+\eta
_{n})))h_{n}(s)ds \\
&=&\max (0,T_{n-1}(n^{1/2}a_{n})-T_{n-1}(n^{1/2}(-a_{n}+\eta _{n}))) \\
&\geq &\max (0,\Phi (n^{1/2}a_{n})-\Phi (n^{1/2}(-a_{n}+\eta
_{n})))-2\left\Vert \Phi -T_{n-1}\right\Vert _{\infty } \\
&=&\inf_{\theta \in \mathbb{R}}P_{n,\theta }\left( \theta \in C_{H,n}\right)
-2\left\Vert \Phi -T_{n-1}\right\Vert _{\infty }.\end{aligned}$$Since $\eta _{n}\rightarrow 0$ and $\left\Vert \Phi -T_{n-1}\right\Vert
_{\infty }\rightarrow 0$ for $n\rightarrow \infty $ and since $\varepsilon $ was arbitrary the proof is complete. $\ \blacksquare $
**Proof of Theorem \[adaptive\_unknown\]:** We prove the result for the closed interval. Inspection of the proof together with Remark \[open\] then gives the result for the open and half-open intervals.
Step 1: Observe that for every $s>0$ and $n\geq 2$ we have from (\[cov\]) that$$\lim_{\theta \rightarrow \infty }p_{A,n}\left( \theta ;1,s\eta
_{n},sa_{n},sa_{n}\right) =\Phi (n^{1/2}sa_{n})-\Phi (-n^{1/2}sa_{n}).$$Then exactly the same argument as in the proof of Theorem \[hard\_unknown\] shows that $\inf_{\theta \in \mathbb{R}}P_{n,\theta }\left( \theta \in
C_{A,n}\right) $ as well as $\inf_{\theta \in \mathbb{R}}P_{n,\theta }\left(
\theta \in E_{A,n}\right) $ converge to zero for $n\rightarrow \infty $ if $n^{1/2}a_{n}\rightarrow 0$, thus proving the theorem in this case. For later use we note that this reasoning in particular gives$$\inf_{\theta \in \mathbb{R}}P_{n,\theta }\left( \theta \in E_{A,n}\right)
\leq T_{n-1}(n^{1/2}a_{n})-T_{n-1}(-n^{1/2}a_{n})\leq \Phi
(n^{1/2}a_{n})-\Phi (-n^{1/2}a_{n}). \label{star}$$
Step 2: By Proposition \[adLASSOinf\] we have for every $s>0$ and $n\geq 1$$$\inf_{\theta \in \mathbb{R}}p_{A,n}\left( \theta ;1,s\eta
_{n},sa_{n},sa_{n}\right) =\Phi (n^{1/2}s\sqrt{a_{n}^{2}+\eta _{n}^{2}})-\Phi (n^{1/2}s(-a_{n}+\eta _{n})).$$Arguing as in the proof of Theorem \[hard\_unknown\] we then have$$\begin{aligned}
\inf_{\theta \in \mathbb{R}}P_{n,\theta }\left( \theta \in C_{A,n}\right)
&=&\inf_{\theta \in \mathbb{R}}p_{A,n}\left( \theta ;1,\eta
_{n},a_{n},a_{n}\right) \notag \\
&=&\Phi (n^{1/2}\sqrt{a_{n}^{2}+\eta _{n}^{2}})-\Phi (n^{1/2}(-a_{n}+\eta
_{n})) \label{low_bound3}\end{aligned}$$and$$\begin{aligned}
\inf_{\theta \in \mathbb{R}}P_{n,\theta }\left( \theta \in E_{A,n}\right)
&\geq &\int_{0}^{\infty }\inf_{\theta \in \mathbb{R}}p_{A,n}\left( \theta
;1,s\eta _{n},sa_{n},sa_{n}\right) h_{n}(s)ds \notag \\
&=&T_{n-1}(n^{1/2}\sqrt{a_{n}^{2}+\eta _{n}^{2}})-T_{n-1}(n^{1/2}(-a_{n}+\eta _{n})). \label{low_bound4}\end{aligned}$$If $n^{1/2}(a_{n}-\eta _{n})\rightarrow \infty $, then the far right-hand sides of (\[low\_bound3\]) and (\[low\_bound4\]) converge to $1$, since $\left\Vert \Phi -T_{n-1}\right\Vert _{\infty }\rightarrow 0$ as $n\rightarrow \infty $ by Polya’s Theorem and since $n^{1/2}\sqrt{a_{n}^{2}+\eta _{n}^{2}}\geq n^{1/2}a_{n}\rightarrow \infty $ and $n^{1/2}(-a_{n}+\eta _{n})\rightarrow -\infty $. This proves the theorem in case $n^{1/2}(a_{n}-\eta _{n})\rightarrow \infty $.
Step 3: Analogous to the corresponding step in the proof of Theorem \[hard\_unknown\], using (\[low\_bound3\]), (\[star\]), (\[low\_bound4\]), and additionally noting that $0\leq n^{1/2}\sqrt{a_{n}^{2}+\eta _{n}^{2}}-n^{1/2}a_{n}\leq n^{1/2}\eta _{n}$, the theorem is proved in the case $n^{1/2}\eta _{n}\rightarrow 0$.
Step 4: Similar as in the proof of Theorem \[hard\_unknown\] it remains to prove the theorem under the assumption that $n^{1/2}a_{n}\geq c_{1}>0$, $n^{1/2}\eta _{n}\geq c_{1}$, and that $n^{1/2}(a_{n}-\eta _{n})\leq
c_{2}<\infty $. Again, it then follows that $0\leq a_{n}/\eta _{n}\leq
c_{3}<\infty $. For given $\varepsilon >0$ set $\theta _{n}(\varepsilon
)=a_{n}(1+2c(\varepsilon )n^{-1/2})$ where $c(\varepsilon )$ is the constant given in Lemma \[l\_3\]. We then have for $s\in \lbrack 1-c(\varepsilon
)n^{-1/2},1+c(\varepsilon )n^{-1/2}]$$$sa_{n}<\theta _{n}(\varepsilon )$$for all $n$. Choose $n_{0}(c(\varepsilon ))$ large enough such that $1-c(\varepsilon )n^{-1/2}>1/2$ holds for $n>n_{0}(c(\varepsilon ))$. Consequently, for every $s\in \lbrack 1-c(\varepsilon
)n^{-1/2},1+c(\varepsilon )n^{-1/2}]$ and $n>n_{0}(c(\varepsilon ))$ we have from (\[cov\]) (observing that $\Phi $ has Lipschitz constant $(2\pi
)^{-1/2}$)$$\begin{aligned}
&&|p_{A,n}(\theta _{n}(\varepsilon );1,s\eta
_{n},sa_{n},sa_{n})-p_{A,n}(\theta _{n}(\varepsilon );1,\eta
_{n},a_{n},a_{n})| \\[1ex]
&\leq &(2\pi )^{-1/2}n^{1/2}\left( \left\vert s-1\right\vert
a_{n}+\left\vert \sqrt{(\theta _{n}(\varepsilon )+sa_{n})^{2}/4+s^{2}\eta
_{n}^{2}}-\sqrt{(\theta _{n}(\varepsilon )+a_{n})^{2}/4+\eta _{n}^{2}}\right\vert +\right. \\
&&\left. \left\vert \sqrt{(\theta _{n}(\varepsilon )-sa_{n})^{2}/4+s^{2}\eta
_{n}^{2}}-\sqrt{(\theta _{n}(\varepsilon )-a_{n})^{2}/4+\eta _{n}^{2}}\right\vert \right) .\end{aligned}$$We note the elementary inequality $\left\vert x^{1/2}-y^{1/2}\right\vert
\leq 2^{-1}z^{-1/2}\left\vert x-y\right\vert $ for positive $x$, $y$, $z$ satisfying $\min (x,y)\geq z$. Using this inequality with $z=(1-c(\varepsilon )n^{-1/2})^{2}\eta _{n}^{2}$ twice, we obtain for every $s\in \lbrack 1-c(\varepsilon )n^{-1/2},1+c(\varepsilon )n^{-1/2}]$ and $n>n_{0}(c(\varepsilon ))$$$\begin{aligned}
&&|p_{A,n}(\theta _{n}(\varepsilon );1,s\eta
_{n},sa_{n},sa_{n})-p_{A,n}(\theta _{n}(\varepsilon );1,\eta
_{n},a_{n},a_{n})| \\
&\leq &(2\pi )^{-1/2}n^{1/2}|s-1|\left( a_{n}+\left[ (1-c(\varepsilon
)n^{-1/2})^{2}\eta _{n}^{2}\right] ^{-1/2}\left[ \theta _{n}(\varepsilon
)a_{n}/2+(s+1)\left( (a_{n}^{2}/4)+\eta _{n}^{2}\right) \right] \right) .\end{aligned}$$
Since $1-c(\varepsilon )n^{-1/2}>1/2$ for $n>n_{0}(c(\varepsilon ))$ by the choice of $n_{0}(c(\varepsilon ))$ and since $a_{n}/\eta _{n}\leq c_{3}$ we obtain$$\begin{aligned}
&&|p_{A,n}(\theta _{n}(\varepsilon );1,s\eta
_{n},sa_{n},sa_{n})-p_{A,n}(\theta _{n}(\varepsilon );1,\eta
_{n},a_{n},a_{n})| \notag \\
&\leq &(2\pi )^{-1/2}c(\varepsilon )\left( a_{n}+2\eta _{n}^{-1}\left[
a_{n}^{2}+(5/2)((a_{n}^{2}/4)+\eta _{n}^{2})\right] \right) \notag \\
&\leq &(2\pi )^{-1/2}c(\varepsilon )\left( c_{3}+(13/4)c_{3}^{2}+5\right)
\eta _{n}=c_{4}(\varepsilon )\eta _{n} \label{lip_adalasso}\end{aligned}$$for every $n>n_{0}(c(\varepsilon ))$ and $s\in \lbrack 1-c(\varepsilon
)n^{-1/2},1+c(\varepsilon )n^{-1/2}]$.
Now, $$\begin{aligned}
& \inf_{\theta \in \mathbb{R}}\int_{0}^{\infty }p_{A,n}(\theta ;1,s\eta
_{n},sa_{n},sa_{n})h_{n}(s)ds \\
& \leq \int_{0}^{\infty }p_{A,n}(\theta _{n}(\varepsilon );1,s\eta
_{n},sa_{n},sa_{n})h_{n}(s)ds \\
& =\int_{1-c(\varepsilon )n^{-1/2}}^{1+c(\varepsilon
)n^{-1/2}}p_{A,n}(\theta _{n}(\varepsilon );1,s\eta
_{n},sa_{n},sa_{n})h_{n}(s)ds \\[1ex]
& +\int_{|s-1|\geq c(\varepsilon )n^{-1/2}}p_{A,n}(\theta _{n}(\varepsilon
);1,s\eta _{n},sa_{n},sa_{n})h_{n}(s)ds \\
& =:B_{1}+B_{2}.\end{aligned}$$Clearly, $0\leq B_{2}\leq \varepsilon $ holds by the choice of $c(\varepsilon )$, see Lemma \[l\_3\]. For $B_{1}$ we have using (\[lip\_adalasso\])$$\begin{aligned}
&&|B_{1}-p_{A,n}(\theta _{n}(\varepsilon );1,\eta _{n},a_{n},a_{n})| \\
&\leq &\int_{1-c(\varepsilon )n^{-1/2}}^{1+c(\varepsilon
)n^{-1/2}}|p_{A,n}(\theta _{n}(\varepsilon );1,s\eta
_{n},sa_{n},sa_{n})-p_{A,n}(\theta _{n}(\varepsilon );1,\eta
_{n},a_{n},a_{n})|h_{n}(s)ds+\varepsilon \\
&\leq &c_{4}(\varepsilon )\eta _{n}+\varepsilon \end{aligned}$$for $n>n_{0}(c(\varepsilon ))$. It follows that $$\begin{aligned}
&&\inf_{\theta \in \mathbb{R}}\int_{0}^{\infty }p_{A,n}(\theta ;1,s\eta
_{n},sa_{n},sa_{n})h_{n}(s)ds \\
&\leq &p_{A,n}(\theta _{n}(\varepsilon );1,\eta
_{n},a_{n},a_{n})+c_{4}(\varepsilon )\eta _{n}+2\varepsilon \end{aligned}$$holds for $n>n_{0}(c(\varepsilon ))$. Furthermore, the absolute difference between $p_{A,n}(\theta _{n}(\varepsilon );1,\eta _{n},a_{n},a_{n})$ and $\inf_{\theta \in \mathbb{R}}P_{n,\theta }\left( \theta \in C_{A,n}\right) $ can be bounded as follows: Using Proposition \[adLASSOinf\], (\[cov\]), observing that $\Phi $ has Lipschitz constant $(2\pi )^{-1/2}$, and using the elementary inequality noted earlier twice with $z=\eta _{n}^{2}$ we obtain$$\begin{aligned}
&&\left\vert p_{A,n}(\theta _{n}(\varepsilon );1,\eta _{n},a_{n},a_{n})-\Phi
\left( n^{1/2}\sqrt{a_{n}^{2}+\eta _{n}^{2}}\right) +\Phi \left(
n^{1/2}(-a_{n}+\eta _{n})\right) \right\vert \\
&\leq &(2\pi )^{-1/2}n^{1/2}\left\vert -a_{n}c(\varepsilon )n^{-1/2}+\sqrt{a_{n}^{2}(1+c(\varepsilon )n^{-1/2})^{2}+\eta _{n}^{2}}-\sqrt{a_{n}^{2}+\eta
_{n}^{2}}\right\vert \\
&&+(2\pi )^{-1/2}n^{1/2}\left\vert \sqrt{(a_{n}c(\varepsilon
)n^{-1/2})^{2}+\eta _{n}^{2}}-\sqrt{(a_{n}c(\varepsilon )n^{-1/2}+\eta
_{n})^{2}}\right\vert \\
&\leq &(2\pi )^{-1/2}\left( 2a_{n}c(\varepsilon )+(2\eta
_{n})^{-1}a_{n}^{2}(2c(\varepsilon )+c(\varepsilon )^{2}n^{-1/2})\right) \\
&\leq &(2\pi )^{-1/2}\left( 2c_{3}c(\varepsilon
)+2^{-1}c_{3}^{2}(2c(\varepsilon )+c(\varepsilon )^{2})\right) \eta
_{n}=c_{5}(\varepsilon )\eta _{n}.\end{aligned}$$Consequently, for $n>n_{0}(c(\varepsilon ))$ $$\begin{aligned}
& \inf_{\theta \in \mathbb{R}}\int_{0}^{\infty }p_{A,n}(\theta ;1,s\eta
_{n},sa_{n},sa_{n})h_{n}(s)ds \\
& \leq \Phi (n^{1/2}\sqrt{a_{n}^{2}+\eta _{n}^{2}})-\Phi
(n^{1/2}(-a_{n}+\eta _{n})) \\
& +\left( c_{4}(\varepsilon )+c_{5}(\varepsilon )\right) \eta
_{n}+2\varepsilon .\end{aligned}$$On the other hand, $$\begin{aligned}
& \inf_{\theta \in \mathbb{R}}\int_{0}^{\infty }p_{A,n}(\theta ;1,s\eta
_{n},sa_{n},sa_{n})h_{n}(s)ds \\
& \geq \int_{0}^{\infty }\inf_{\theta \in \mathbb{R}}p_{A,n}(\theta ;1,s\eta
_{n},sa_{n},sa_{n})h_{n}(s)ds \\
& =\int_{0}^{\infty }\left[ \Phi (n^{1/2}s\sqrt{a_{n}^{2}+\eta _{n}^{2}})-\Phi (n^{1/2}s(-a_{n}+\eta _{n}))\right] h_{n}(s)ds \\
& =T_{n-1}(n^{1/2}\sqrt{a_{n}^{2}+\eta _{n}^{2}})-T_{n-1}(n^{1/2}(-a_{n}+\eta _{n})) \\
& \geq \Phi (n^{1/2}\sqrt{a_{n}^{2}+\eta _{n}^{2}})-\Phi
(n^{1/2}(-a_{n}+\eta _{n}))-2\Vert \Phi -T_{n-1}\Vert _{\infty }.\end{aligned}$$Since $\eta _{n}\rightarrow 0$ and $\left\Vert \Phi -T_{n-1}\right\Vert
_{\infty }\rightarrow 0$ for $n\rightarrow \infty $ and since $\varepsilon $ was arbitrary the proof is complete. $\ \blacksquare $
\[l\_3\]Suppose $\sigma =1$. Then for every $\varepsilon >0$ there exists a $c=c(\varepsilon )>0$ such that $$\int_{\max (0,1-cn^{-1/2})}^{1+cn^{-1/2}}h_{n}(s)ds\geq 1-\varepsilon$$holds for every $n\geq 2$.
By the central limit theorem and the delta-method, $n^{1/2}(\hat{\sigma}-1)$ converges in distribution to a normal limit. It follows that $n^{1/2}(\hat{\sigma}-1)$ is (uniformly) tight. In other words, for every $\varepsilon >0$ we can find a real number $c>0$ such that, for all $n\geq 2$,$$\Pr \left( \left\vert n^{1/2}(\hat{\sigma}-1)\right\vert \leq c\right) \geq
1-\varepsilon .$$
\[l\_5\] Suppose $n\geq 2$ and $x\geq y\geq 0$. Then$$T_{n-1}(x)\leq \Phi (x)$$and$$T_{n-1}(x-y)-T_{n-1}(-x-y)\leq \Phi (x-y)-\Phi (-x-y).$$
The first claim is well-known, see, e.g., Kagan and Nagaev (2008). The second claim follows immediately from the first claim, since by symmetry of $\Phi $ and $T_{n-1}$ we have $$\begin{aligned}
&&\Phi (x-y)-\Phi (-x-y)-\left( T_{n-1}(x-y)-T_{n-1}(-x-y)\right) \\
&=&\left[ \Phi (x-y)-T_{n-1}(x-y)\right] +\left[ \Phi (x+y)-T_{n-1}(x+y)\right] \geq 0.\end{aligned}$$
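The following numerical sanity check is not part of the argument above; it merely illustrates both claims of Lemma \[l\_5\], under the assumption (suggested by the notation and by the Kagan and Nagaev (2008) reference) that $T_{n-1}$ denotes the cdf of Student's $t$ distribution with $n-1$ degrees of freedom.

```python
import numpy as np
from scipy.stats import norm, t

# Illustrative check of Lemma [l_5] on a small grid of n, x and y with x >= y >= 0.
for n in (2, 5, 20):
    x = np.linspace(0.0, 4.0, 17)
    # first claim: T_{n-1}(x) <= Phi(x) for x >= 0
    assert np.all(t.cdf(x, df=n - 1) <= norm.cdf(x) + 1e-12)
    for y in (0.0, 0.5, 1.5):
        xx = x[x >= y]
        # second claim: T_{n-1}(x-y) - T_{n-1}(-x-y) <= Phi(x-y) - Phi(-x-y)
        lhs = t.cdf(xx - y, df=n - 1) - t.cdf(-xx - y, df=n - 1)
        rhs = norm.cdf(xx - y) - norm.cdf(-xx - y)
        assert np.all(lhs <= rhs + 1e-12)
print("both inequalities of Lemma [l_5] hold on the tested grid")
```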
[99]{} Fan, J. & R. Li (2001): Variable selection via nonconcave penalized likelihood and its oracle properties. *Journal of the American Statistical Association* 96, 1348-1360.
Frank, I. E. & J. H. Friedman (1993): A statistical view of some chemometrics regression tools (with discussion). *Technometrics *35, 109-148.
Joshi, V. M. (1969): Admissibility of the usual confidence sets for the mean of a univariate or bivariate normal population. *Annals of Mathematical Statistics* 40, 1042-1067.
Kagan, A. & A. V. Nagaev (2008): A lemma on stochastic majorization and properties of the Student distribution. *Theory of Probability and its Applications *52, 160-164.
Knight, K. & W. Fu (2000): Asymptotics of lasso-type estimators. *Annals of Statistics *28, 1356-1378.
Leeb, H. & B. M. Pötscher (2008): Sparse estimators and the oracle property, or the return of Hodges’ estimator. *Journal of Econometrics *142, 201-211.
Pötscher, B. M. (2009): Confidence sets based on sparse estimators are necessarily large. *Sankhya* 71-A, 1-18.
Pötscher, B. M. & H. Leeb (2009): On the distribution of penalized maximum likelihood estimators: the LASSO, SCAD, and thresholding. *Journal of Multivariate Analysis *100, 2065-2082.
Pötscher, B. M. & U. Schneider (2009): On the distribution of the adaptive LASSO estimator. *Journal of Statistical Planning and Inference *139, 2775-2790.
Tibshirani, R. (1996): Regression shrinkage and selection via the lasso. *Journal of the Royal Statistical Society Series B* 58, 267-288.
Zou, H. (2006): The adaptive lasso and its oracle properties. *Journal of the American Statistical Association* 101, 1418-1429.
[^1]: Department of Statistics, University of Vienna, Universitätsstrasse 5, A-1010 Vienna. Phone: +431 427738640. E-mail: benedikt.poetscher@univie.ac.at
[^2]: Institute for Mathematical Stochastics, Georg-August-University Göttingen, Goldschmidtstraße 7, D-37077 Göttingen. Phone: +49 55139172107. E-mail: ulrike.schneider@math.uni-goettingen.de
[^3]: Earlier versions of this paper were circulated under the title “Confidence Sets Based on Penalized Maximum Likelihood Estimators”.
|
---
abstract: 'We study the dependence of the zeros of eigenfunctions of the Sturm-Liouville problem on the parameters that define the boundary conditions. As a corollary, we obtain the Sturm oscillation theorem, which states that the $n$-th eigenfunction has $n$ zeros.'
address:
- 'Tigran Harutyunyan Faculty of Mathematics and Mechanics, Yerevan State University, 1 Alex Manoogian, 0025, Yerevan, Armenia'
- 'Avetik Pahlevanyan Faculty of Mathematics and Mechanics, Yerevan State University, 1 Alex Manoogian, 0025, Yerevan, Armenia'
- 'Yuri Ashrafyan Faculty of Mathematics and Mechanics, Yerevan State University, 1 Alex Manoogian, 0025, Yerevan, Armenia'
author:
- 'Tigran Harutyunyan, Avetik Pahlevanyan, Yuri Ashrafyan'
title: 'On the “movement” of the zeros of eigenfunctions of the Sturm-Liouville problem'
---
Let us consider the Sturm-Liouville boundary value problem $L\left(q, \alpha, \beta \right):$ $$\label{eq1}
ly \equiv - y'' + q\left( x \right)y = \mu y, \; 0<x<\pi, \; \mu \in \mathbb{C},$$ $$\label{eq2}
y\left( 0 \right)\cos \alpha + y'\left( 0 \right)\sin \alpha = 0, \; \alpha \in \left( {0,\pi } \right],$$ $$\label{eq3}
y\left( \pi \right)\cos \beta + y'\left( \pi \right)\sin \beta = 0, \; \beta \in \left[ {0,\pi } \right),$$ where $q \in L_{\mathbb{R}}^1\left[ {0,\pi } \right],$ i.e. $q$ is a real-valued, summable function on $\left[0, \pi\right]$.
By $L\left(q,\alpha ,\beta \right)$ we also denote the self-adjoint operator corresponding to the problem (\[eq1\])–(\[eq3\]).
It is well known that the problem $L\left(q, \alpha, \beta \right)$ has a countable set of simple, real eigenvalues (see, e.g. [@Levitan_Sargsyan:1988; @Marchenko:1977; @Freiling_Yurko:2001; @Harutyunyan:2008]), which we denote by ${\mu }_n\left(q,\alpha ,\beta \right),$ $n=0,1,\dots$ (emphasizing the dependence on $q,$ $\alpha$ and $\beta$), and enumerate in increasing order: $$\label{eq4}
\mu_0 \left(q,\alpha,\beta\right) < \mu_1 \left(q,\alpha,\beta\right) < \dots < \mu_n \left(q,\alpha,\beta\right) < \dots \; .$$ In the papers [@Harutyunyan:2008; @Harutyunyan_Navasardyan:2000] the concept of the eigenvalues function (EVF) of a family of Sturm-Liouville operators $\left\{L\left(q, \alpha, \beta\right); \alpha \in \left(0,\pi\right], \beta \in \left[0,\pi\right)\right\}$ was introduced. For fixed $q$ it is a function of two variables $\gamma$ and $\delta$ $\left(\gamma=\alpha+\pi n \in \left(0,\infty\right), \delta=\beta-\pi m \in \left(-\infty, \pi\right)\right)$ determined through the eigenvalues $\mu_n \left(q,\alpha,\beta\right),$ $n=0,1,2,\dots,$ by the following formula: $$\label{eq5}
\mu \left(\gamma,\delta\right)=\mu\left(\alpha+\pi n, \beta-\pi m \right):= \mu_{n+m} \left(q,\alpha,\beta\right), \; n,m=0,1,2,\dots.$$ It is proved there that this function is analytic with respect to $\gamma$ and $\delta;$ it is strictly increasing in $\gamma$ and strictly decreasing in $\delta.$
It is also known (see, e.g. [@Levitan_Sargsyan:1988]) that every nontrivial solution $y\left(x,\mu\right)$ of the equation (\[eq1\]) may have only simple zeros (if $y\left(x_0,\mu\right)=0,$ then $y'\left(x_0,\mu\right)\neq 0$), and (see, e.g. [@Ghazaryan:2002]) every solution $y\left(x,\mu\right)$ is a continuously differentiable function with respect to the pair of variables $x$ and $\mu.$ Therefore, by applying the implicit function theorem (see, e.g. [@Fikhtengolts:1966 p. 452]), we get that the zeros of the solution $y\left(x,\mu\right)$ are continuously differentiable functions of $\mu.$ A function $x=x\left(\mu\right)$ such that the identity $y\left(x\left(\mu\right),\mu\right) \equiv 0$ holds for all $\mu$ from some interval $\left(a,b\right)$ is called a solution of the equation $y\left(x,\mu\right)=0;$ differentiating this identity with respect to $\mu$ we obtain: $$\label{eq6}
\cfrac{dy\left(x\left(\mu\right), \mu\right)}{d\mu}=\cfrac{\partial y\left(x\left(\mu\right), \mu\right)}{\partial x} \, \cfrac{dx\left(\mu\right)}{d\mu}+\cfrac{\partial y\left(x\left(\mu\right), \mu\right)}{\partial \mu} \equiv 0, \; \mu \in \left(a,b\right).$$ Let us denote $\cfrac{\partial y\left(x,\mu\right)}{\partial \mu}:=\dot{y}\left(x,\mu\right)$ and write the identity (\[eq6\]) in the following form: $$\label{eq7}
\cfrac{dx\left(\mu\right)}{d\mu}=\dot{x}\left(\mu\right)=-\cfrac{\dot{y}\left(x\left(\mu\right), \mu\right)}{y'\left(x\left(\mu\right), \mu\right)}, \; \mu \in \left(a,b\right).$$ On the other hand, let us write down the fact that $y\left(x,\mu\right)$ is a solution of the equation (\[eq1\]), i.e. $$\label{eq8}
-y''\left(x, \mu \right)+q\left(x\right)y\left(x,\mu \right)\equiv \mu y\left(x,\mu \right), \; 0<x<\pi, \; \mu \in \mathbb{C},$$ and differentiating this identity with respect to $\mu$ we obtain: $$\label{eq9}
-\dot{y}'' \left(x, \mu \right)+q\left(x\right)\dot{y}\left(x, \mu \right) \equiv y\left(x, \mu \right)+\mu \dot{y}\left(x,\mu \right).$$Multiplying (\[eq8\]) by $\dot{y},$ (\[eq9\]) by $y$ and subtracting the first of the resulting identities from the second, we get $$y''\left(x, \mu \right)\dot{y}\left(x, \mu \right)-\dot{y}''\left(x, \mu \right)y\left(x, \mu \right) \equiv y^{2} \left(x, \mu \right), \; 0<x<\pi, \; \mu \in {\mathbb C},$$ i.e. $$\label{eq10}
\cfrac{d}{dx} \left[y'\left(x, \mu \right)\dot{y}\left(x, \mu \right)-\dot{y}'\left(x, \mu \right)y\left(x, \mu \right)\right] \equiv y^{2} \left(x, \mu \right).$$ If we integrate this identity with respect to $x$ from $0$ to $a$ $\left(0 \leq a \leq \pi \right),$ then $$\begin{gathered}
\label{eq11}
y'\left(a, \mu \right)\dot{y}\left(a, \mu \right)-\dot{y}'\left(a, \mu \right)y\left(a, \mu \right)-y'\left(0, \mu \right)\dot{y}\left(0, \mu \right)+\\
+\dot{y}'\left(0, \mu \right)y\left(0, \mu \right)=\int_{0}^{a} y^{2} \left(x, \mu \right)dx,\end{gathered}$$ and if we integrate with respect to $x$ from $a$ to $\pi,$ then we get $$\begin{gathered}
\label{eq12}
y'\left(\pi, \mu \right)\dot{y}\left(\pi, \mu \right)-\dot{y}'\left(\pi, \mu \right)y\left(\pi, \mu \right)-y'\left(a, \mu \right)\dot{y}\left(a, \mu \right)+\\
+\dot{y}'\left(a, \mu \right)y\left(a, \mu \right)=\int_{a}^{\pi} y^{2} \left(x, \mu \right)dx.\end{gathered}$$Now, as $y\left(x,\mu\right)$ let us take $y=\varphi\left(x, \mu, \alpha, q\right),$ the solution of the equation (\[eq1\]) satisfying the following initial conditions: $$\label{eq13}
\varphi \left(0, \mu, \alpha, q\right)=\sin \alpha, \;\;\; \varphi'\left(0, \mu, \alpha, q\right)=-\cos \alpha.$$ It is easy to see that the eigenfunctions of the problem $L\left(q,\alpha,\beta\right)$ are obtained from the solution $\varphi\left(x, \mu, \alpha, q\right)$ at $\mu=\mu_n\left(q,\alpha,\beta\right)$ (here we use (\[eq5\])), i.e. $$\begin{gathered}
\label{eq14}
\varphi_{n} \left(x, q, \alpha, \beta \right):=\varphi_{n} \left(x\right)=\varphi \left(x, \mu_{n} \left(q, \alpha, \beta \right), \alpha, q\right)=\\
=\varphi \left(x, \mu \left(\alpha+\pi n, \beta \right), \alpha, q\right)=\varphi \left(x, \mu \left(\alpha, \beta-\pi n\right), \alpha, q\right)=\\
=\left. \varphi \left(x, \mu \left(\alpha, \delta\right), \alpha, q\right)\right|_{\delta=\beta-\pi n} =\left. \varphi \left(x, \mu \left(\alpha, \delta \right)\right)\right|_{\delta =\beta-\pi n} =\\
=\varphi \left(x, \mu \left(\alpha, \beta -\pi n\right)\right).\end{gathered}$$ Let $0 \leq x_{n}^{0} < x_{n}^{1} < \dots < x_{n}^{m} \leq \pi$ be the zeros of the eigenfunction $\varphi_{n} \left(x, q, \alpha, \beta \right)=\varphi \left(x, \mu \left(\alpha, \beta-\pi n\right)\right),$ i.e. $\varphi_{n} \left(x_{n}^{k}, q, \alpha, \beta \right)=\varphi \left(x_{n}^{k}, \mu \left(\alpha, \beta-\pi n \right) \right)=0,$ $k=0,1,\dots,m.$
Let $q, \alpha, n$ be fixed. We consider the following questions:
- How do the zeros $x_{n}^{k}=x_{n}^{k} \left(\beta \right),$ $k=0,1,\dots,m,$ change when $\beta$ varies over $\left[0,\pi\right)?$
- How many zeros does the $n$-th eigenfunction $\varphi _{n} \left(x,q,\alpha ,\beta \right)$ have, i.e. what is the value of $m?$
By taking $y=\varphi_n\left(x\right)$ and $a=x_{n}^{k}$ $\left(k=0,1,\dots,m\right)$ in (\[eq11\]), we obtain $$\begin{gathered}
\label{eq15}
\varphi'_{n}\left(x_{n}^{k} \right)\dot{\varphi}_{n}\left(x_{n}^{k} \right)-\dot{\varphi}'_{n}\left(x_{n}^{k} \right)\varphi_{n}\left(x_{n}^{k} \right)-\varphi'_{n}\left(0 \right)\dot{\varphi}_{n}\left(0 \right)+\\
+\dot{\varphi}'_{n}\left(0 \right)\varphi_{n}\left(0 \right)=\int_{0}^{x_{n}^{k}} \varphi_{n}^{2} \left(x\right)dx.\end{gathered}$$ Since the initial conditions (\[eq13\]) hold for all $\mu \in \mathbb{C},$ we have $\dot{\varphi}_{n} \left(0\right)=0$ and $\dot{\varphi}'_{n} \left(0\right)=0.$ Also, taking into account that $\varphi_{n} \left(x_{n}^{k} \right)=0,$ from (\[eq15\]) we get $$\label{eq16}
\varphi'_{n} \left(x_{n}^{k} \right)\dot{\varphi}_{n} \left(x_{n}^{k} \right)=\int_{0}^{x_{n}^{k}} \varphi_{n}^{2} \left(x\right)dx.$$ Since the zeros of the solutions are simple, $\varphi'_{n} \left(x_{n}^{k} \right) \neq 0,$ and therefore from (\[eq16\]) we obtain the following equality: $$\label{eq17}
\cfrac{\dot{\varphi}_{n} \left(x_{n}^{k} \right)}{\varphi'_{n} \left(x_{n}^{k} \right)} =\cfrac{1}{\left(\varphi'_{n} \left(x_{n}^{k} \right)\right)^2} \int_{0}^{x_{n}^{k}} \varphi_{n}^{2}\left(x\right)dx.$$ Now, from (\[eq7\]) and (\[eq17\]), we obtain $$\label{eq18}
\dot{x}_{n}^{k} \left(\mu_{n} \right)=\left. \cfrac{dx_{n}^{k} \left(\mu \right)}{d\mu} \right|_{\mu=\mu_{n}} =-\cfrac{\dot{\varphi}_{n} \left(x_{n}^{k} \right)}{\varphi'_{n} \left(x_{n}^{k} \right)} =-\cfrac{1}{\left(\varphi'_{n} \left(x_{n}^{k}\right)\right)^2} \int_{0}^{x_{n}^{k}} \varphi_{n}^{2}\left(x\right)dx,$$ i.e. zeros $x_{n}^{k} \left(\mu_{n} \right),$ $k=0,1,\dots,m$ of the eigenfunction $\varphi_{n} \left(x\right)$ are decreasing if the eigenvalue $\mu_{n} \left(q, \alpha, \beta \right)$ is increasing, which in its turn means that $$\label{eq19}
\dot{x}_{n}^{k} \left(\mu_{n} \left(q, \alpha, \beta \right)\right) \leq 0.$$
Let us note that the equality $\dot{x}_{n}^{k} \left(\mu_{n} \right)=0$ may occur only at $x_{n}^{k}=0,$ i.e., when $x=0$ is a zero of the eigenfunction $\varphi_{n} \left(x\right),$ and this is the case at $\alpha =\pi$ $\left(\gamma=\pi l, \; l=1,2,\dots\right).$
Meanwhile, in the inequality $\dot{x}_{n}^{k} \left(\mu_{n} \right)=\dot{x}_{n}^{k} \left(\mu_{n} \left(q, \alpha, \beta \right)\right) \leq 0,$ the variable $\mu_{n}$ changes with $q,$ $\alpha$ and $\beta.$ More precisely, if under some change of these three variables $\mu_{n} \left(q, \alpha, \beta \right)$ increases, then the zeros of the eigenfunction $\varphi_{n} \left(x\right)$ move to the left, and if $\mu_{n} \left(q, \alpha, \beta \right)$ decreases, then the zeros of the eigenfunction $\varphi_{n} \left(x\right)$ move to the right.
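For instance (a check not spelled out above), for $q \equiv 0$ and $\alpha=\pi$ the initial conditions (\[eq13\]) give, for $\mu>0,$ $\varphi \left(x, \mu, \pi, 0\right)=\sin \left(\sqrt{\mu}\, x\right)/\sqrt{\mu},$ whose zeros $x_{k}\left(\mu\right)=\pi k/\sqrt{\mu}$ are strictly decreasing in $\mu,$ in agreement with (\[eq18\]).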
In the work [@Poschel_Trubowitz:1987], it was proved for $q \in L_{\mathbb{R}}^{2}\left[0,\pi\right]$ that the $n$-th eigenfunctions of the problems $L\left(q, \pi, 0 \right)$ and $L\left(0, \pi, 0 \right)$ have the same number of zeros. But it is easy to see that the same proof remains valid for $q \in L_{\mathbb{R}}^{1}\left[0,\pi\right].$
It is easy to calculate that the eigenvalues of the problem $L\left(0,\pi,0\right)$ are $\mu_n\left(0,\pi,0\right)=\left(n+1\right)^2$ and the eigenfunctions are $$\begin{gathered}
\label{eq20}
\varphi_{n} \left(x\right)=\varphi \left(x, \mu_{n} \left(0, \pi, 0 \right), \pi, 0 \right)=\\
=\varphi \left( x, \left(n+1\right)^{2}, \pi, 0 \right)=\cfrac{\sin \left(n+1\right)x}{n+1}, \; n=0,1,2,\dots.\end{gathered}$$
The zeros of this eigenfunction are $x_{n}^{k}=\cfrac{\pi k}{n+1}, k=0,1,\dots,n+1,$ i.e. the $n$-th eigenfunction of the problem $L\left(0,\pi,0\right)$ has $n+2$ zeros in $\left[0,\pi\right],$ two of which are $0$ and $\pi,$ and $n$ zeros in $\left(0,\pi\right).$
Therefore, the $n$-th eigenfunction $\varphi_{n} \left(x, q, \pi, 0 \right)$ of the problem $L\left(q, \pi, 0\right)$ has two zeros at the endpoints of $\left[0,\pi\right],$ i.e. $x_{n}^{0} \left(q, \pi, 0\right)=0,$ $x_{n}^{n+1} \left(q, \pi, 0\right)=\pi,$ and also $n$ zeros in the interval $\left(0, \pi\right).$
As $\beta$ increases from $0$ to $\pi,$ the eigenvalue $\mu_n \left(q,\pi,\beta\right)$ decreases continuously (with respect to $\beta$) from $\mu_n \left(q,\pi,0\right)$ to $\mu_n \left(q,\pi,\pi\right)=\mu\left(\pi,\pi-\pi n\right)=\mu\left(\pi,0-\left(n-1\right)\pi\right)=\mu_{n-1} \left(q,\pi,0\right)$ (see [@Ghazaryan:2002]) and, according to (\[eq19\]), the zeros of the function $\varphi_n\left(x,q,\pi,\beta\right)$ are increasing, i.e. are moving to the right (all but the leftmost zero $x_n^0=0$). In particular, the rightmost zero $x_n^{n+1}=\pi,$ by moving to the right, leaves the segment $\left[0,\pi\right],$ and $n+1$ zeros remain in $\left[0,\pi\right]$ (one at $x_n^0=0$ and $n$ zeros in the interval $\left(0,\pi\right)$). The preceding zero reaches $\pi$ when the relation $\varphi_n\left(\pi\right)=\varphi\left(\pi,\mu_n \left(q,\pi,\beta\right),\pi,q\right)=c_n \psi_n \left(\pi\right) \sin \beta=0$ again occurs (see below), and this is possible only when $\beta$ reaches $\pi$ (and $\mu_n \left(q,\pi,\beta\right),$ by decreasing, reaches $\mu_n \left(q,\pi,\pi\right)=\mu_{n-1} \left(q,\pi,0\right)$). Then the eigenfunction $\varphi_n \left(x\right)=\varphi\left(x,\mu_n \left(q,\pi,\beta\right),\pi,q\right)$ will smoothly transform into the eigenfunction $\varphi\left(x,\mu_n \left(q,\pi,\pi \right),\pi,q\right)=\varphi\left(x,\mu_{n-1} \left(q,\pi,0 \right),\pi,q\right),$ which has $n+1$ zeros in $\left[0,\pi \right],$ two of which are the endpoints $0$ and $\pi,$ and $n-1$ zeros are in $\left(0,\pi \right).$ Thus, the oscillation theorem is proved for all $L\left(q,\pi,\beta\right),$ $\beta \in \left[0,\pi\right).$
Now, as $y\left(x,\mu\right)$ let us take $y=\psi\left(x, \mu, \beta, q\right),$ the solution of the equation (\[eq1\]) satisfying the following initial conditions: $$\label{eq21}
\psi \left(\pi, \mu, \beta, q\right)=\sin \beta, \;\;\; \psi'\left(\pi, \mu, \beta, q\right)=-\cos \beta.$$ It is easy to see that the eigenvalues $\mu_n=\mu_n \left(q,\alpha,\beta\right),$ $n=0,1,\dots,$ of the problem $L\left(q,\alpha,\beta\right)$ are the zeros of the entire function $$\label{eq22}
\Psi \left(\mu \right)=\Psi \left(\mu, \alpha, \beta, q\right)=\psi \left(0, \mu, \beta, q\right)\cos \alpha +\psi'\left(0, \mu, \beta, q\right)\sin \alpha,$$ and the eigenfunctions, corresponding to these eigenvalues, are obtained by the formula $$\label{eq23}
\psi_{n} \left(x\right)=\psi \left(x, \mu_{n} \left(q, \alpha, \beta \right), \beta, q\right), \; n=0,1,\dots.$$ Since all eigenvalues $\mu_n$ are simple, then eigenfunctions $\varphi_n \left(x\right)$ and $\psi_n \left(x\right)$ corresponding to the same eigenvalue $\mu_n$ are linearly dependent, i.e. there exist the constants $c_n=c_n \left(q,\alpha,\beta\right),$ $n=0,1,\dots,$ such that $$\label{eq24}
\varphi_{n} \left(x\right)=c_{n} \psi_{n} \left(x\right), \; n=0,1,\dots.$$ This implies that $\varphi_{n} \left(x,q,\alpha,\beta \right)$ and $\psi_{n} \left(x,q,\alpha,\beta \right)$ have equal number of zeros.
Let $0 \leq x_n^m < x_n^{m-1} < \dots <x_n^0 \leq \pi$ be the zeros of the eigenfunction $\psi_{n} \left(x,q,\alpha,\beta \right)$ i.e. $\psi_{n} \left(x_n^k, q, \alpha, \beta \right)=0,$ $k=0,1,\dots,m.$
By taking $y=\psi_{n} \left(x\right)=\psi \left(x, \mu_n \left(q, \alpha, \beta \right), \beta, q \right)$ in the identity (\[eq12\]), we obtain $$\begin{gathered}
\label{eq25}
\psi'_{n} \left(\pi \right)\dot{\psi}_{n} \left(\pi \right)-\dot{\psi}'_{n} \left(\pi \right)\psi_{n} \left(\pi \right)-\psi'_{n} \left(x_{n}^{k} \right)\dot{\psi}_{n} \left(x_{n}^{k} \right)+\\
+\dot{\psi}'_{n} \left(x_{n}^{k} \right)\psi_{n} \left(x_{n}^{k} \right)=\int_{x_{n}^{k}}^{\pi}\psi_{n}^{2} \left(x\right)dx.\end{gathered}$$ From (\[eq21\]) we get that $\dot{\psi}_{n} \left(\pi\right)=0$ and $\dot{\psi}'_{n} \left(\pi\right)=0.$ And since $\psi_{n} \left(x_n^k\right)=0,$ the equality (\[eq25\]) takes the form $$\label{eq26}
-\psi'_{n} \left(x_{n}^{k} \right)\dot{\psi}_{n} \left(x_{n}^{k} \right)=\int_{x_{n}^{k}}^{\pi} \psi_{n}^{2} \left(x\right)dx.$$ As all zeros $x_n^k$ are simple, i.e. $\psi'_{n} \left(x_n^k\right) \neq 0,$ dividing both sides of the last equality by $\left(\psi'_{n} \left(x_n^k\right)\right)^2,$ we obtain $$\label{eq27}
\cfrac{\dot{\psi}_{n} \left(x_{n}^{k} \right)}{\psi'_{n} \left(x_{n}^{k} \right)} =-\cfrac{1}{\left(\psi'_{n} \left(x_{n}^{k} \right)\right)^2} \int_{x_{n}^{k}}^{\pi} \psi_{n}^{2} \left(x\right)dx.$$ Now from (\[eq7\]), by taking $y=\psi_n \left(x\right),$ $x=x_n^k,$ we have $$\label{eq28}
\dot{x}_{n}^{k} \left(\mu_{n} \right)=\left. \cfrac{dx_{n}^{k} \left(\mu \right)}{d\mu} \right|_{\mu =\mu_{n}} =-\cfrac{\dot{\psi}_{n} \left(x_{n}^{k} \right)}{\psi'_{n} \left(x_{n}^{k} \right)} =\cfrac{1}{\left(\psi'_{n} \left(x_{n}^{k} \right)\right)^2} \int_{x_{n}^{k}}^{\pi} \psi_{n}^{2} \left(x\right)dx \geq 0,$$ i.e. the zeros $x_n^k \left(\mu_n\right),$ $k=0,1,\dots,m$ of the eigenfunction $\psi_n \left(x\right)$ are increasing, if the eigenvalue $\mu_n$ is increasing. Note that the equality $\dot{x}_n^k \left(\mu_n\right)=0$ is possible only when $x_n^0 \left(\mu_n\right)=\pi,$ but it holds when $\beta=0$ $\left(\delta=-\pi l, \; l=0,1,2,\dots\right).$
While studying the dependence of the zeros of the eigenfunctions on $\alpha,$ it is convenient to use formula (\[eq28\]), because the eigenfunctions $\psi_n \left(x\right)$ have fixed values $\psi_n \left(\pi, \mu, \beta\right)=\sin \beta$ and $\psi'_n \left(\pi, \mu, \beta\right)=-\cos \beta$ for all $\mu \in \mathbb{C},$ i.e. all $\psi_n \left(x\right)$ satisfy the initial conditions $\psi_n \left(\pi\right)=\sin \beta$ and $\psi'_n \left(\pi\right)=-\cos \beta,$ which means that new zeros can neither enter nor leave (neither appear nor disappear) through the endpoint $\pi$ of the segment $\left[0,\pi\right]$ (when $\alpha$ is changed). Thus, with increasing $\alpha,$ the eigenvalues $\mu_n \left(q, \alpha, \beta\right)$ (with fixed $q$ and $\beta$) are increasing and, according to (\[eq28\]) (i.e. $\dot{x}_n^k \left(\mu_n\right) \geq 0$), the zeros of the eigenfunction $\psi_n \left(x\right)$ are moving to the right (i.e. are increasing). At the same time, the values $\psi \left(\pi\right)=\sin \beta$ and $\psi' \left(\pi\right)=-\cos \beta$ do not change, the number of the zeros increases, and these zeros can neither “collide” nor “split”, as they are simple. That is why new zeros can appear only by entering through the left endpoint $0$ of the segment $\left[0,\pi\right]$ and moving to the right (“condensing” the existing zeros accordingly). New zeros enter through the left endpoint $0$ of the segment $\left[0,\pi\right]$ only when $\psi_n \left(0\right)=0,$ but since $\psi_n \left(0\right)=c_n \varphi_n \left(0\right)=c_n \sin \alpha$ $\left(c_n \neq 0\right),$ the equality $\psi_n \left(0\right)=0$ is possible only when $\sin \alpha=0,$ i.e. in our notations only when $\alpha=\pi$ (and because $\mu_n \left(q, 0, \beta\right)=\mu \left(0+\pi n, \beta\right)=\mu \left(\pi+\left(n-1\right)\pi, \beta\right)=\mu_{n-1} \left(q, \pi, \beta\right)$) or when $\alpha=0.$
Hence, when $\alpha=\pi,$ the eigenfunction $\psi_n \left(x, q, \pi, \beta\right)$ as well as $\varphi_n \left(x, q, \pi, \beta\right)$ has $n$ zeros in $\left(0,\pi\right)$ and one zero at the left endpoint $x=0$. Moreover, $\psi_n \left(x, q, \pi, \beta\right)=\psi \left(x, \mu_n \left(q, \pi, \beta\right), \beta, q\right)=\psi \left(x, \mu_{n+1} \left(q, 0, \beta\right), \beta, q\right)=\psi_{n+1} \left(x, q, 0, \beta\right).$ With increasing $\alpha$ from $0$ to $\pi$ the eigenvalue $\mu_{n+1} \left(q, 0, \beta\right)$ is increasing (continuously with respect to $\alpha$) to $\mu_{n+1} \left(q, \pi, \beta \right),$ and the leftmost zero $x=0$, by moving to the right, enters $\left(0,\pi\right),$ i.e. there are $n+1$ zeros of the eigenfunction $\psi_n \left(x, q, \alpha, \beta\right)$ in $\left(0,\pi\right)$ (and another fixed zero $x=\pi,$ if $\beta=0$). A new zero appears at the left endpoint $x=0$ when $\alpha$ reaches $\pi.$
Thus, we obtain the following oscillation theorem:
\[thm1\] The eigenfunctions of the problem $L\left(q, \alpha, \beta\right)$ corresponding to the $n$-th eigenvalue $\mu_n \left(q, \alpha, \beta\right),$ $n=0,1,2,\dots,$ have exactly $n$ zeros in $\left(0,\pi\right).$ All these zeros are simple. If $\alpha=\pi$ and $\beta=0,$ then the $n$-th eigenfunction has $n+2$ zeros in $\left[0,\pi\right],$ and if either $\alpha=\pi,$ $\beta \in \left(0,\pi\right)$ or $\beta=0,$ $\alpha \in \left(0,\pi\right),$ then the $n$-th eigenfunction has $n+1$ zeros in $\left[0,\pi\right].$
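As a simple numerical illustration of Theorem \[thm1\] (not part of the proof), one can discretise the Dirichlet case $\alpha=\pi,$ $\beta=0$ by central differences and count the sign changes of the eigenvectors of the resulting tridiagonal matrix; the potential $q$ and the grid size in the sketch below are arbitrary illustrative choices.

```python
import numpy as np

# Discretise -y'' + q(x) y = mu y on (0, pi) with y(0) = y(pi) = 0
# (i.e. alpha = pi, beta = 0) and count the sign changes of the n-th
# eigenvector, which should equal the number n of interior zeros.
N = 800                           # number of interior grid points (illustrative)
h = np.pi / (N + 1)
x = np.linspace(h, np.pi - h, N)
q = 5.0 * np.cos(3.0 * x)         # an arbitrary real-valued summable potential

A = (np.diag(2.0 / h**2 + q)
     - np.diag(np.ones(N - 1) / h**2, 1)
     - np.diag(np.ones(N - 1) / h**2, -1))
mu, Y = np.linalg.eigh(A)         # eigenvalues in increasing order

for n in range(6):
    y = Y[:, n]
    print(n, round(mu[n], 3), int(np.sum(y[:-1] * y[1:] < 0)))  # expect n sign changes
```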
The oscillation properties of the solutions of the problem $L\left(q, \alpha, \beta\right)$ (Sturm theory), the study of which was initiated by Sturm in [@Sturm1:1836; @Sturm2:1836], are outlined in the monographic literature (see, e.g. [@Levitan_Sargsyan:1988; @Sansone:1953; @Coddington_Levinson:1955]) for continuous $q.$ In recent years, the study was mostly focused on the cases when $q$ is bounded or $q \in L_{\mathbb{R}}^{2} \left[0,\pi\right]$ (see, e.g. [@Poschel_Trubowitz:1987; @Simon:2005]), but many studies (see [@Hinton:2005] and the references therein) implicitly assume that Sturm’s oscillation theorem (that the $n$-th eigenfunction has $n$ zeros) is also true for $q \in L_{\mathbb{R}}^{1} \left[0,\pi\right],$ although a rigorous proof is not available in the literature.
Our oscillation theorem is true for all $q \in L_{\mathbb{R}}^{1} \left[0,\pi\right].$
[14]{}
Levitan, B.M. and Sargsyan, I.S. *Sturm-Liouville and Dirac operators*, Nauka, Moscow, (in Russian), 1988.
Marchenko, V.A. *Sturm-Liouville Operators and Applications*, Naukova Dumka, Kiev, (in Russian), 1977.
Freiling, G. and Yurko, V. *Inverse Sturm-Liouville Problem and Their Applications*, NOVA Science Publishers, New York, 2001.
Harutyunyan, T.N. “The Dependence of the Eigenvalues of the Sturm-Liouville Problem on Boundary Conditions.” *Matematicki Vesnik*, 60, no. 4, (2008): 285–294.
Ghazaryan, H.G., Hovhannisyan, A.H., Harutyunyan, T.N., Karapetyan, G.A. *Ordinary Differential Equations*, Zangak-97, Yerevan, (in Armenian), 2002.
Fikhtengolts, G.M. *Differential and integral calculus, vol. I*, Fizmatlit, Moscow, (in Russian), 1966.
Harutyunyan, T.N. and Navasardyan, H.R. “Eigenvalue function of a family of Sturm-Liouville operators.” *Izv. Nats. Akad. Nauk Armenii Mat.*, 35, no. 5, (in Russian), (2000): 1–11.
Pöschel, J. and Trubowitz, E. *Inverse spectral theory*, Academic Press, Inc., Boston, MA, 1987.
Sturm, C. “Mémoire sur les Équations différentielles linéaires du second ordre.” *J. Math. Pures Appl.*, 1, (1836): 106–186.
Sturm, C. “Mémoire sur une classe d’Équations à différences partielles.” *J. Math. Pures Appl.*, 1, (1836): 373–444.
Sansone, G. *Ordinary differential equations*, Izd. Inostr. Lit., Moscow, (in Russian), 1953.
Coddington, E. and Levinson N. *Theory of Ordinary Differential Equations*, McGraw Hill Book Company, New York, 1955.
Simon, B. “Sturm oscillation and comparison theorems.” *Sturm-Liouville Theory: Past and Present (Birkhäuser Verlag)*, (2005): 29–43.
Hinton, D. “Sturm’s 1836 oscillation results. Evolution of the theory.” *Sturm-Liouville Theory: Past and Present (Birkhäuser Verlag)*, (2005): 1–27.
|
---
abstract: 'We derive a new high-order compact finite difference scheme for option pricing in stochastic volatility models. The scheme is fourth order accurate in space and second order accurate in time. Under some restrictions, theoretical results like unconditional stability in the sense of von Neumann are presented. Where the analysis becomes too involved we validate our findings by a numerical study. Numerical experiments for the European option pricing problem are presented. We observe fourth order convergence for non-smooth payoff.'
address:
- 'Department of Mathematics, University of Sussex, Pevensey II, Brighton, BN1 9QH, United Kingdom.'
- |
Institut de Mathématiques de Toulouse\
Université de Toulouse et CNRS (UMR 5219), France.
author:
- 'Bertram D[ü]{}ring'
- Michel Fournié
title: 'High-order compact finite difference scheme for option pricing in stochastic volatility models'
---
Option pricing, compact finite difference discretizations, mixed derivatives, high-order scheme. 65M06, 65M12, 91B28
Introduction
============
The traditional approach to price derivative assets or options is to specify an asset price process exogenously by a stochastic diffusion process and then price by no-arbitrage arguments. The seminal example of this approach is Black & Scholes’ paper [@BlaSch73] on pricing of European-style options. This approach leads to simple, explicit pricing formulas. However, empirical research has revealed that they are not able to explain important effects in real financial markets, e.g. the volatility smile (or skew) in option prices.
In real financial markets, not only are asset returns subject to risk, but the estimate of the riskiness itself is typically subject to significant uncertainty. To incorporate such an additional source of randomness into an asset pricing model, one has to introduce a second risk factor. This also makes it possible to fit higher moments of the asset return distribution. The most prominent work in this direction is the Heston model [@Hes93]. Such models are based on a two-dimensional stochastic diffusion process with two Brownian motions with correlation $\rho$, i.e., $dW^{(1)}(t)dW^{(2)}(t)=\rho\, dt,$ on a given filtered probability space for the stock price $S=S(t)$ and the stochastic volatility $\sigma=\sigma(t)$ $$\begin{aligned}
dS(t)& =\bar{\mu} S(t)\,dt +\sqrt{\sigma(t)} S(t)\,dW^{(1)}(t),\\
d\sigma(t)& =a(\sigma(t)) \,dt+b(\sigma(t))\,dW^{(2)}(t),\end{aligned}$$ where $\bar{\mu}$ is the drift of the stock, $a(\sigma)$ and $b(\sigma)$ are the drift and the diffusion coefficient of the stochastic volatility.
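For orientation only (the present paper works with the pricing partial differential equation derived below, not with Monte Carlo), a path of this two-factor diffusion can be simulated by a simple Euler-Maruyama scheme. The drift $a$, the diffusion coefficient $b$ and all numerical values in the following sketch are placeholder choices, not parameters taken from the paper.

```python
import numpy as np

# Placeholder model ingredients (illustration only): a mean-reverting drift and
# a CIR-type diffusion coefficient for the volatility factor sigma(t).
mu_bar, rho, T = 0.05, -0.5, 1.0
a = lambda sig: 2.0 * (0.04 - sig)
b = lambda sig: 0.3 * np.sqrt(max(sig, 0.0))

def euler_path(S0=100.0, sig0=0.04, n_steps=250, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    S, sig = S0, sig0
    for _ in range(n_steps):
        z1, z2 = rng.standard_normal(2)
        dW1 = np.sqrt(dt) * z1
        dW2 = np.sqrt(dt) * (rho * z1 + np.sqrt(1.0 - rho**2) * z2)  # gives dW1 dW2 = rho dt
        S += mu_bar * S * dt + np.sqrt(max(sig, 0.0)) * S * dW1
        sig += a(sig) * dt + b(sig) * dW2
        sig = max(sig, 0.0)       # crude fix to keep the volatility factor nonnegative
    return S, sig

print(euler_path())               # one terminal sample of (S(T), sigma(T))
```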
Application of Itô’s Lemma leads to partial differential equations of the following form $$\label{P0}
V_t+\frac12 S^2\sigma V_{SS}+\rho b(\sigma) \sqrt{\sigma}S V_{S\sigma}+\frac12 b^2(\sigma)
V_{\sigma\sigma}+a(\sigma)V_\sigma+rSV_S-rV=0,$$ where $r$ is the (constant) riskless interest rate. Equation (\[P0\]) has to be solved for $S,\sigma>0,\,0 \leq t \leq T$ and subject to final and boundary conditions which depend on the specific option that is to be priced.
For some models and under additional restrictions, closed-form solutions to (\[P0\]) can be obtained by Fourier methods (e.g. [@Hes93], [@Due09]). Another approach is to derive approximate analytic expressions, see e.g. [@BeGoMi10] and the literature cited therein. In general, however (even in the Heston model [@Hes93] when its parameters are not constant), equation (\[P0\]) has to be solved numerically. Moreover, many (so-called American) options feature an additional early exercise right. Then one has to solve a free boundary problem which consists of (\[P0\]) and an early exercise constraint for the option price. Also for this problem one typically has to resort to numerical approximations.
In the mathematical literature, there are many papers on numerical methods for option pricing, mostly addressing the one-dimensional case of a single risk factor and using standard, second order finite difference methods (see, e.g., [@TavRan00] and the references therein). More recently, high-order finite difference schemes (fourth order in space) were proposed that use a compact stencil (three points in space). In the present context see, e.g., [@TaGoBh08] for linear and [@DuFoJu04; @DuFoJu03; @LiaKha09] for fully nonlinear problems.
There are fewer works considering numerical methods for option pricing in stochastic volatility models, i.e., for two spatial dimensions. The finite difference approaches that are used are often standard, low order methods (second order in space) and provide little numerical analysis or convergence results. Other approaches include finite element-finite volume [@ZvFoVe98], multigrid [@ClaPar99], sparse wavelet [@HiMaSc05], or spectral methods [@ZhuKop10].
Let us review some of the related finite difference literature. Different efficient methods for solving the American option pricing problem for the Heston model are compared in [@IkoToi07]. The article focusses on the treatment of the early exercise free boundary and uses a second order finite difference discretization. In [@HouFou07] different, low order ADI (alternating direction implicit) schemes are adapted to the Heston model to include the mixed spatial derivative term. While most of [@TaGoBh08] focusses on a high-order compact scheme for the standard (one-dimensional) case, the stochastic volatility (two-dimensional) case is also considered in a short remark [@TaGoBh08 Section 5]. However, the final scheme there is of second order only due to the low order approximation of the cross diffusion term.
The originality of the present work consists in proposing a new, [ *high-order compact finite difference scheme*]{} for (two-dimensional) option pricing models with [*stochastic volatility*]{}. It should be emphasised that although our presentation is focused on the Heston model, our methodology naturally adapts to other stochastic volatility models. We derive a new compact scheme that is fourth order accurate in space and second order accurate in time. The stability analysis of the scheme is a difficult task due to the multi-dimensional context, variable coefficients and the nature of the boundary conditions. Under additional assumptions (zero correlation, periodic boundary conditions), we establish theoretical results like unconditional stability in the sense of von Neumann (for ‘frozen coefficients’). We discuss this in the numerical part.
This paper is organised as follows. In the next section, we recall the Heston model from [@Hes93] and its closed form solution for the constant parameters case. In Section \[probsection\] we introduce new independent variables to transform the partial differential equation to a more tractable form. In Section \[HOCsection\] we derive the new high-order compact scheme. We analyse its necessary stability condition in section \[numanalsection\]. Numerical experiments that confirm the good properties of the method are presented in Section \[numsection\]. We give numerical results for the European option pricing problem with non-smooth payoff and observe fourth order convergence. Section \[concsection\] concludes.
Heston model
============
Let us recall the Heston model from [@Hes93] on which we will focus our presentation. Consider a two-dimensional standard Brownian motion $W=(W^{(1)},W^{(2)})$ with correlation $dW^{(1)}(t)dW^{(2)}(t)=\rho dt$ on a given filtered probability space. Assuming a specific form of the drift $a(\sigma)$ and the diffusion coefficient $b(\sigma)$ of the stochastic volatility, the value of the underlying asset in [@Hes93] is characterised by $$\begin{aligned}
dS(t)& =\bar{\mu} S(t) \,dt+\sqrt{\sigma(t)} S(t)\,dW^{(1)}(t),\quad \nonumber \\
\label{SDEs}
d\sigma(t)& =\kappa^*(\theta^*-\sigma(t)) \,dt+v\sqrt{\sigma(t)}\,dW^{(2)}(t),
\end{aligned}$$ for $0< t\leq T$ with $S(0),\sigma(0)>0$ and $\bar{\mu}$, $\kappa^*$, $v$ and $\theta^*$ the drift, the mean reversion speed, the volatility of volatility and the long-run mean of $\sigma,$ respectively.
Note that our method carries over to other stochastic volatility models with different choices of the drift and the diffusion coefficient of the stochastic volatility, e.g., the GARCH diffusion model $$\label{eq:garchmodel}
d\sigma(t)=\kappa^*(\theta^*-\sigma(t))\,dt+v\sigma(t)\,dW^{(2)}(t),$$ or the so-called 3/2-model $$\label{eq:32model}
d\sigma(t)= \kappa^*\sigma(t) (\theta^*-\sigma(t))\,dt+v{\sigma(t)}^{3/2}\,dW^{(2)}(t),$$ in a natural way (see also Remark \[otherstochmodels\] at the end of section \[HOCsection:derivation\]).
In the Heston model, it follows by Itô’s lemma and standard arbitrage arguments that any derivative asset $V=V(S,\sigma,t)$ solves the following partial differential equation $$\begin{gathered}
V_t+\frac12 S^2\sigma V_{SS}+\rho v {\sigma}S V_{S\sigma}+\frac12 v^2\sigma
V_{\sigma\sigma}+rSV_S\\
+\big[\kappa^* (\theta^*-\sigma)-\lambda(S,\sigma,t)\big]V_\sigma-rV=0, \label{P1}
\end{gathered}$$ which has to be solved for $S,\sigma>0$, $0 \leq t < T$ and subject to a suitable final condition, e.g., $$V(S,\sigma,T)=\max(K-S,0),$$ in case of a European put option (with $K$ denoting the strike price). In (\[P1\]), $\lambda(S,\sigma,t)$ denotes the market price of volatility risk. While in principle it could be estimated from market data, this is difficult in practice and the results are controversial. Therefore, one typically assumes a risk premium that is proportional to $\sigma$ and chooses $\lambda(S,\sigma,t)=\lambda_0\sigma$ for some constant $\lambda_0$. To streamline the presentation, we restrict ourselves to this important case, although our scheme applies to general functional forms $\lambda=\lambda(S,\sigma,t).$
The ‘boundary’ conditions in the case of the put option read as follows
$$\begin{aligned}
V(0,\sigma,t)&=Ke^{-r(T-t)},& &T> t\geq 0,\;\sigma>0,
\label{boundary1}\\
V(S,\sigma,t)&\to 0,& &T> t\geq 0,\;\sigma>0,\; \text{as } S\to\infty,
\label{boundary2}\\
V_\sigma(S,\sigma,t)&\to 0,& &T> t\geq 0,\;S>0,\; \text{as } \sigma \to\infty.
\label{boundary4}
\end{aligned}$$
The remaining boundary condition at $\sigma=0$ can be obtained by looking at the formal limit $\sigma\to 0$ in (\[P1\]), i.e., $$V_t+rSV_S+\kappa^*\theta^* V_\sigma-rV= 0,\quad T> t\geq 0,\;S>0,\; \text{as } \sigma\to 0.
\label{boundary3}$$ This boundary condition is used frequently, e.g. in [@IkoToi07; @ZvFoVe98]. Alternatively, one can use a homogeneous Neumann condition [@ClaPar99], i.e., $$V_\sigma(S,\sigma,t) \to 0, \quad T> t\geq0,\;S>0,\; \text{as } \sigma\to 0.$$
For [*constant*]{} parameters, one can employ Fourier transform techniques and obtain a system of ordinary differential equations which can be solved analytically [@Hes93]. By inverting the transform one arrives at a closed-form solution of (\[P1\]), where the European put option price $V$ is given by $$\label{HestonFormula}
V(S,\sigma,t)=Ke^{-r(T-t)}\mathcal{I}_2 -S\mathcal{I}_1,$$ with ($k=1,2$) $$\begin{aligned}
\mathcal{I}_k&=\frac12+\frac1\pi\int_0^\infty\mathrm{Re}\biggl[\frac{e^{-i\xi
\ln( K)}f_k(\xi)}{i\xi}\biggr]\,d\xi,\label{I1I2}\\
f_k(\xi)&=\exp\big(C_k(T-t,\xi)+\sigma D_k(T-t,\xi)+i\xi \ln S\big),\nonumber\\
C_k(\tau,\xi)&=r\xi i \tau+\frac{\kappa^*\theta^*}{v^2}\Bigl[(b_k+d_k)\tau -2\ln\Bigl(\frac{1-g_ke^{d_k\tau}}{1-g_k}\Bigr)\Bigr],\;
D_k(\tau,\xi)= \frac{b_k+d_k}{v^2} \frac{1-{e^{d_k\tau}}}{1-g_k{e^{d_k\tau}}},\nonumber \\
g_k&=\frac{b_k+d_k}{b_k-d_k},\quad
d_k=\sqrt{ \left( {\xi}^{2}\mp i\xi \right) { v}^{2}+ b_k^{2}}\,,\quad
b_k=\kappa^*+\lambda_0-\rho v(i\xi+\delta_{1k}).\nonumber
\end{aligned}$$ Here, $\delta_{i,j}$ denotes Kronecker’s delta.
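The semi-analytical formula above can be evaluated directly by numerical quadrature. The following minimal sketch transcribes $\mathcal{I}_1$ and $\mathcal{I}_2$ as written; the parameter names `kappa_s`, `theta_s` and `lam0` (standing for $\kappa^*$, $\theta^*$ and $\lambda_0$), the choice of quadrature routine, and the upper sign in $d_k$ for $k=1$ are illustrative assumptions rather than part of the original presentation.

```python
# Illustrative sketch: evaluating the put price (HestonFormula) by quadrature.
# kappa_s, theta_s, lam0 stand for kappa^*, theta^*, lambda_0; the upper sign
# in d_k is taken for k = 1.
import numpy as np
from scipy.integrate import quad

def heston_put(S, K, sigma, tau, r, kappa_s, theta_s, v, rho, lam0=0.0):
    def integrand(xi, k):
        delta_1k = 1.0 if k == 1 else 0.0
        sgn = -1.0 if k == 1 else 1.0              # the \mp sign: minus for k = 1
        b = kappa_s + lam0 - rho * v * (1j * xi + delta_1k)
        d = np.sqrt((xi**2 + sgn * 1j * xi) * v**2 + b**2)
        g = (b + d) / (b - d)
        C = (r * xi * 1j * tau
             + kappa_s * theta_s / v**2
             * ((b + d) * tau - 2.0 * np.log((1 - g * np.exp(d * tau)) / (1 - g))))
        D = (b + d) / v**2 * (1 - np.exp(d * tau)) / (1 - g * np.exp(d * tau))
        f = np.exp(C + sigma * D + 1j * xi * np.log(S))
        return (np.exp(-1j * xi * np.log(K)) * f / (1j * xi)).real

    I1 = 0.5 + quad(integrand, 0, np.inf, args=(1,), limit=200)[0] / np.pi
    I2 = 0.5 + quad(integrand, 0, np.inf, args=(2,), limit=200)[0] / np.pi
    return K * np.exp(-r * tau) * I2 - S * I1

# Example call with the default parameters of Table 1 (sigma set to theta^*):
# heston_put(S=100.0, K=100.0, sigma=0.1, tau=0.5, r=0.05,
#            kappa_s=2.0, theta_s=0.1, v=0.1, rho=-0.5)
```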
Transformation of the equation and boundary conditions {#probsection}
======================================================
Under the transformation of variables $$\label{trafo}
x=\ln \Big(\frac SK\Big),\quad \tilde t=T-t,
\quad u=\exp(r\tilde t)\frac VK,$$ (we immediately drop the tilde in the following) we arrive at $$\begin{gathered}
\label{P2}
u_t-\frac12 \sigma \bigl(u_{xx}+2\rho vu_{x\sigma}+v^2u_{\sigma\sigma}\bigr)\\
+\Big(\frac12 \sigma-r \Big)u_x-\big[\kappa^*\theta^*-(\kappa^* +\lambda_0)\sigma\big]u_\sigma=0,
\end{gathered}$$ which is now posed on ${{\mathbb{R}}}\times{{\mathbb{R}}}^+\times(0,T).$ We study the problem using the modified parameters $$\kappa=\kappa^*+\lambda_0,\quad \theta=\frac{\kappa^*\theta^*}{\kappa^*+\lambda_0},$$ which is both convenient and standard practice. For similar reasons, some authors set the market price of volatility risk to zero. Equation (\[P2\]) can then be written as $$\label{P3}
u_t-\frac12 \sigma \bigl(u_{xx}+2\rho vu_{x\sigma}+v^2u_{\sigma\sigma}\bigr)+\Big(\frac12 \sigma-r\Big)u_x-\kappa \big[\theta -\sigma\big]u_\sigma=0.$$ The problem is completed by the following initial and boundary conditions: $$\begin{aligned}
u(x,\sigma,0) &=\max (1-\exp (x),0),& & x\in{{\mathbb{R}}},\;\sigma>0,\nonumber \\
u(x,\sigma,t) &\to 1,& & x\to -\infty ,\;\sigma>0,\;t>0,\nonumber \\
u(x,\sigma,t) &\to 0,& & x\to +\infty ,\;\sigma>0,\;t>0,\nonumber \\
u_\sigma(x,\sigma,t) &\to 0,& & x\in{{\mathbb{R}}},\;\sigma\to \infty,\;t>0,\nonumber\\
u_\sigma(x,\sigma,t) &\to 0,& & x\in{{\mathbb{R}}},\;\sigma\to 0,\;t>0.\nonumber
\end{aligned}$$
High-order compact scheme {#HOCsection}
=========================
For the discretization, we replace ${{\mathbb{R}}}$ by $[ -R_1,R_1] $ and ${{\mathbb{R}}}^+$ by $[L_2,R_2]$ with $R_1,R_2>L_2>0$ . For simplicity, we consider a uniform grid $Z=\{ x_{i}\in \left[ -R_1,R_1 \right]:$ $x_{i}=ih_1$, $i=-N,\dots,N\}\times\{ \sigma_{j}\in \left[L_2,R_2 \right]:$ $\sigma_{j}=L_2+jh_2$, $j=0,\dots,M\}$ consisting of $(2N+1)\times
(M+1)$ grid points, with $R_1=Nh_1,$ $R_2=L_2+Mh_2$ and with space steps $h_{1}$, $h_{2}$ and time step $k$. Let $u_{i,j}^{n}$ denote the approximate solution of (\[P3\]) at $(x_{i},\sigma_j)$ and time $ t_{n}=nk$ and let $u^{n}=(u_{i,j}^{n})$.
We impose artificial boundary conditions in a classical manner rigorously studied for a class of Black-Scholes equations in [@KanNic00]. The boundary conditions on the grid are treated as follows. Due to the compactness of the scheme, the treatment of the Dirichlet boundary conditions is minimal. It is straightforward to consider Dirichlet boundary conditions without introduction of numerical error by imposing $$u_{-N,j}^{n}=1-e^{rt_{n}-Nh}, \quad u_{+N,j}^{n}= 0,\quad (j=0,\dots,M).$$ At the other boundaries we impose homogeneous Neumann boundary conditions. The treatment of homogeneous Neumann conditions requires more attention. Indeed, no values are prescribed. The values of the unknown on the boundaries must be set by extrapolation from values in the interior. Then a numerical error is introduced, and the main consideration is that the order of extrapolation should be high enough not to affect the overall order of accuracy. We refer to the paper of Gustafsson [@GusBC], which discusses the influence of the order of the approximation on the global convergence rate, to justify our choice of fourth order extrapolation formulae. By Taylor expansion, if we cancel the first derivatives on the boundaries, it is straightforward to verify $$u_{i,0}^{n}=\frac{18}{11}u_{i,1}^{n}-\frac{9}{11}u_{i,2}^{n}+\frac{2}{11}u_{i,3}^{n},\quad
(i=-N+1,\dots,N-1),$$ and $$u_{i,M}^n = \frac{18}{11} u_{i,M-1}^n - \frac{9}{11} u_{i,M-2}^n +
\frac{2}{11} u_{i,M-3}^n, \quad (i=-N+1,\dots,N-1).$$
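To illustrate how these extrapolation formulae enter an implementation, the following minimal sketch updates the two $\sigma$-boundary rows of a solution array for the interior $x$-indices; the array layout is an illustrative assumption.

```python
# Illustrative sketch: fourth order one-sided extrapolation at the two
# sigma-boundaries of a solution array u[i, j], where i indexes the x-grid
# and j the sigma-grid; only interior x-indices are updated.
import numpy as np

def apply_neumann_sigma(u):
    u[1:-1, 0]  = (18.0 * u[1:-1, 1]  - 9.0 * u[1:-1, 2]  + 2.0 * u[1:-1, 3])  / 11.0
    u[1:-1, -1] = (18.0 * u[1:-1, -2] - 9.0 * u[1:-1, -3] + 2.0 * u[1:-1, -4]) / 11.0
    return u
```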
Derivation of the high-order scheme for the elliptic problem {#HOCsection:derivation}
------------------------------------------------------------
First we introduce the high-order compact finite difference discretization for the stationary, elliptic problem with Laplacian operator which appears after the variable transformation $y=\sigma/v$. Equation (\[P3\]) is then reduced to the two-dimensional elliptic equation $$\label{eq:convection}
-\frac{1}{2} v y (u_{xx}+u_{yy}) - \rho v y u_{xy}+\Big(\frac12 v
y-r\Big)u_x-\kappa \frac{\theta -vy}{v}u_y=f(x,y),$$ with the same boundary conditions.\
The fourth order compact finite difference scheme uses a nine-point computational stencil consisting of the reference grid point $(i,j)$ and its eight nearest neighbouring points.
The idea behind the derivation of the high-order compact scheme is to use the differential equation as an auxiliary relation to obtain finite difference approximations for the high-order derivatives in the truncation error. Inclusion of these expressions in a central difference method for equation (\[eq:convection\]) increases the order of accuracy, typically to $\mathcal{O}(h^4),$ while retaining a compact stencil defined by nodes surrounding a grid point.
Introducing a uniform grid with mesh spacing $h=h_1=h_2$ in both the $x$- and $y$-direction, the standard central difference approximation to equation (\[eq:convection\]) at grid point $(i,j)$ is $$\begin{gathered}
\label{eq:central}
-\frac{1}{2} v y_j \bigl(\delta_x^2u_{i,j}+\delta_y^2u_{i,j}\bigr) - \rho v y_j
\delta_x\delta_y u_{i,j}\\
+\Big(\frac12 vy_j-r\Big)\delta_x u_{i,j}-\kappa \frac{\theta -vy_j}{v}\delta_y u_{i,j}
- \tau_{i,j}= f_{i,j},
\end{gathered}$$ where $\delta_x$ and $\delta_x^2$ ($\delta_y$ and $\delta_y^2$, respectively) denote the first and second order central difference approximations with respect to $x$ (with respect to $y$). The associated truncation error is given by $$\begin{gathered}
\label{eq:tau}
\tau_{i,j} =
\frac{1}{24}vyh^{2}
(u_{xxxx} + u_{yyyy}) +\frac{1}{6}\rho vy h^{2}(u_{xyyy} + u_{xxxy})\\
+\frac{1}{12}( 2\,r-vy ) h^{2} u_{xxx} +\frac{1}{6}{\frac {\kappa ( \theta -vy ) }{v}}h^{2}u_{yyy} +\mathcal{O}(h^4).
\end{gathered}$$ For the sake of readability, here and in the following we omit the subindices $j$ and $(i,j)$ on $y_j$ and $u_{i,j}$ (and its derivatives), respectively. We now seek second-order approximations to the derivatives appearing in (\[eq:tau\]). Differentiating equation (\[eq:convection\]) once with respect to $x$ and $y,$ respectively, yields $$\begin{aligned}
\label{eq:dx1}
u_{xxx}=& -u_{xyy} -2\rho u_{xxy} -{\frac { 2r-vy }{vy}}u_{xx}+2\,{\frac
{\kappa ( vy-\theta )}{{v}^{2}y}}u_{xy}-{\frac {2}{vy}}f_x,\\
\nonumber
u_{yyy} =& -u_{xxy}-2\rho u_{xyy} -\frac{1}{y} u_{xx}-\frac{2 \kappa ( \theta -vy) +v^2}{v^2y} u_{yy} \\
\label{eq:dx2}
& \hspace*{2.5cm} -\frac{2 \rho v +2r- vy}{vy} u_{xy}+\frac{1}{y} u_x+\frac{2 \kappa }{vy}u_y-\frac{2}{vy}f_y.
\end{aligned}$$ Differentiating equations (\[eq:dx1\]) and (\[eq:dx2\]) with respect to $y$ and $x,$ respectively, and adding the two expressions, we obtain $$\begin{gathered}
\label{eq:dxy}
u_{{{\it xyyy}}}+u_{{{\it xxxy}}}=\frac{vy+2r}{2v{y}^{2}}u_{xx}+\frac{\kappa (\theta +vy)}{v^2y^2} u_{xy}-
\frac{4\kappa (\theta -vy)+v^2}{2v^2y}u_{xyy}\\
-\frac{\rho v+2r-vy}{vy}u_{xxy}-2\rho u_{xxyy}-\frac {1}{2y}u_{xxx}+\frac{1}{vy^2}f_{x} - \frac{2}{vy}f_{xy}.
\end{gathered}$$ Notice that all the terms in the right hand sides of (\[eq:dx1\])-(\[eq:dxy\]) have compact $\mathcal{O}(h^2)$ approximations at node $(i,j)$ using finite differences based on $\delta_x$, $\delta_x^2$, $\delta_y$, $\delta_y^2$. We have, for example, ${u_{xxy}}_{i,j}=\delta_x^2\delta_y u_{i,j}+\mathcal{O}(h^2).$ By differentiating equation (\[eq:convection\]) twice with respect to $x$ and $y$, respectively, and adding the two expressions, we obtain $$\begin{gathered}
\label{eq:dxxyy}
u_{{{\it xxxx}}}+u_{{{\it yyyy}}}=-2 \rho u_{{{\it xyyy}}}-2 \rho
u_{{{\it xxxy}}}-2 u_{{{\it xxyy}}}+2{\frac {
( \kappa vy- {v}^{2}- \kappa \theta ) }{{
v}^{2}y}}u_{{{\it xxy}}} \\
-{\frac { ( 2 r-vy ) }{vy}}u_{{{\it xxx}}}+2{\frac { ( \kappa vy- {v}^{2}-\kappa \theta ) }{{v}^{2}y}}u_{{{\it yyy}}}
-{\frac { ( -vy+4 \rho v+2 r ) }{vy}}u_{{{\it xyy}}} \\
+4 {\frac {\kappa }{vy}}u_{{{\it yy}}} + \frac {2}{y}u_{{{\it xy}}}
- {\frac {2}{vy}}(f_{{{\it xx}}}+f_ {{{\it yy}}}).
\end{gathered}$$ Again, using (\[eq:dx1\])-(\[eq:dxy\]), the right hand side can be approximated up to $\mathcal{O}(h^2)$ within the nine-point compact stencil. Substituting equations (\[eq:dx1\])-(\[eq:dxxyy\]) into equation (\[eq:tau\]) and simplifying yields a new expression for the error term $\tau_{i,j}$ that consists only of terms which are either
- terms of order $\mathcal{O}(h^4)$, or
- terms of order $\mathcal{O}(h^2)$ multiplied by derivatives of $u$ which can be approximated up to $\mathcal{O}(h^2)$ within the nine-point compact stencil.
Hence, substituting the central $\mathcal{O}(h^2)$ approximations to the derivatives in this new expression for the error term and inserting it into (\[eq:central\]) yields the following $\mathcal{O}(h^4)$ approximation to the initial partial differential equation (\[eq:convection\]), $$\begin{aligned}
&-\frac{1}{24}\frac {h^2((vy_j-2r)^2-4\rho vr-2\kappa ( vy_j-\theta ) -2v^2)+12v^2y_j^2 }{vy_j}{\delta^2_x u_{i,j} }\nonumber \\
&-\frac{1}{12}\frac {h^2(2\kappa ^{2}(vy_j-\theta )^2-\kappa v^3y_j-\kappa \theta v^2
-v^4)+6v^4y_j^2 }{v^3y_j}{\delta^2_y u_{i,j}}\nonumber\\
&-\frac{1}{12}h^{2}vy_j(1+2\rho^2) {\delta^2_x\delta^2_y u_{i,j}}\nonumber\\
&+\frac{h^2}{6}\frac { (\kappa (vy_j-\theta )+v\rho(vy_j-2r) )}{v} {\delta^2_x\delta_y u_{i,j}}\nonumber\\
&+\frac{{h}^{2}}{12}\frac { (4\kappa \rho(vy_j-\theta )+v(vy_j-2r))}{v} {\delta_x
\delta^2_y u}_{i,j}\nonumber\\
&-\frac16 \frac{h^2(\kappa (vy_j-2r)(vy_j-\theta )-\kappa v^2y_j\rho-v^3\rho-v^2r) +6v^3y_j^2\rho}{{v}^{2}y_j}\delta_x \delta_y u_{i,j}\nonumber\\
&+\frac{1}{12}\frac{6v^2y_j^2-12vy_jr-h^2[v^2+\kappa (vy_j-\theta )]}{vy_j}{\delta_x u_{i,j}}\nonumber\\
&+\frac{\kappa }{6}{\frac {(6v^2y_j^2-6vy_j\theta - h^2[v^2+\kappa (vy_j-\theta )]) }{{v}^{2}y_j}}\delta_y u_{i,j}\nonumber\\
=& f_{i,j}
+\frac{{h}^{2}}{6}{\frac {\rho}{v}} \delta_x\delta_y f_{i,j}
-\frac{{h}^{2}}{6}{\frac {( {v}^{2}+\kappa ( vy_j-\theta ))}{{v}^{2}y_j}} \delta_y f_{i,j}\nonumber \\
&-\frac{{h}^{2}}{12}{\frac { (2\rho v-2r+vy_j)}{vy_j}} \delta_x
f_{i,j}
\label{eq:scheme2}
+ \frac{{h}^{2}}{12} \delta_x^2 f_{i,j} +\frac{{h}^{2}}{12} \delta_y^2 f_{i,j}.
\end{aligned}$$ The fourth order compact finite difference scheme (\[eq:scheme2\]) considered at the mesh point $(i,j)$ involves the eight nearest neighbouring mesh points. Associated with the shape of the computational stencil, we introduce indexes for each node from zero to eight, $$\label{eq:coeffnumber}
\left (
\begin{array}{ccc}
\begin{array}{rcl}
u_{i-1,j+1}=u_6\\
u_{i-1,j}=u_3\\
u_{i-1,j-1}=u_7\\
\end{array}
&
\begin{array}{rcl}
u_{i, j+1}=u_2 \\
u_{i, j}=u_0 \\
u_{i, j-1}=u_{4} \\
\end{array}
&
\begin{array}{rcl}
u_{i+1,j+1}=u_5\\
u_{i+1,j}=u_1\\
u_{i+1,j-1}=u_8\\
\end{array}
\end{array}
\right ).$$ With this indexing, the scheme (\[eq:scheme2\]) is defined by $$\label{eq:stencil2}
\sum_{l=0}^8 \alpha_l u_l = \sum_{l=0}^8 \gamma_l f_l,$$ where the coefficients $\alpha_l$ and $\gamma_l$ are given by $$\begin{aligned}
\alpha_0=&\bigg( {\frac {4 {\kappa }^{2}+{v}^{2}}{12v}}-{\frac {v
(2 {\rho}^{2}-5 ) }{3{h}^{2}}} \bigg) y_j\\
&-{\frac {
\kappa {v}^{2}+2 {\kappa }^{2}\theta +{v}^{2}r}{3{v}^{2}}}+{\frac
{-{v}^{4}+{\kappa }^{2}{\theta }^{2}-{v}^{3}r\rho+{v}^{2}{r}^{2}}{3{v}^{3
}y_j}},\\
\alpha_{1,3}=&\bigg( -\frac v{24}+{\frac {\pm \frac 16v\mp\frac 13\kappa \rho}{h}}+{\frac {v
( \rho^2-1 ) }{3{h}^{2}}} \bigg) y_j
\mp\frac {\kappa h}{24}+\frac {\kappa } {12}+\frac r6\\
&\mp{\frac {v r-\kappa \theta \rho}{3vh}}\mp{\frac { ( {v}^{2}-\kappa \theta
) h}{24vy_j}}-{\frac {-2 rv\rho+\kappa \theta +2 {r}^{2}-{v}
^{2}}{12vy_j}} ,\\
\alpha_{2,4}=&\bigg( -{\frac {{\kappa }^{2}}{6v}}+{\frac {\pm\frac 13\kappa \mp\frac 16\rho
v}{h}}+{\frac {v ( \rho^2-1 ) }
{3{h}^{2}}} \bigg) y_j\mp{\frac {{\kappa }^{2}h}{12v}}+{\frac {
\kappa ( {v}^{2}+4 \kappa \theta ) }{12{v}^{2}}}\\
&\mp{
\frac {rv\rho-\kappa \theta }{3vh}}\mp{\frac {\kappa
( {v}^{2}-\kappa \theta ) h}{12{v}^{2}y_j}}+{\frac {
( 2 \kappa \theta +{v}^{2} ) ( {v}^{2}-\kappa \theta ) }{12{v}^{3}y_j}},\\
\alpha_{5,7}=&\bigg( -\frac {\kappa }{24}\pm{\frac { ( 2 \rho+1 ) (
2 \kappa +v ) }{24h}}-{\frac {v ( \rho+1 )
( 2 \rho+1 ) }{12{h}^{2}}} \bigg) y_j\\&+{\frac {\kappa
( \rho v+2 r+\theta ) }{24v}}
\mp{\frac { ( 2
\rho+1 ) ( \kappa \theta +vr ) }{12vh}}+{\frac {
{v}^{2}r+{v}^{3}\rho-2 r\kappa \theta }{24{v}^{2}y_j}},\\
\alpha_{6,8}=& \bigg( \frac {\kappa }{24}\pm{\frac { ( 2 \rho-1 ) (
-2 \kappa +v ) }{24h}}-{\frac {v ( 2 \rho-1 )
( \rho-1 ) }{12{h}^{2}}} \bigg) y_j\\
&-{\frac {\kappa
( \rho v+2 r+\theta ) }{24v}}
\mp{\frac { ( 2
\rho-1 ) ( vr-\kappa \theta ) }{12vh}}-{\frac {{v}^{2}r+{v}^{3}\rho-2
r\kappa \theta }{24{v}^{2}y_j}},
\end{aligned}$$ and $$\begin{aligned}
\gamma_0 & = \frac{2}{3},\qquad
\gamma_5 =\gamma_7=\frac{\rho}{24},& \gamma_6 &=\gamma_8=-\frac{\rho}{24},\\
\gamma_{1,3} &= \frac1{12}\mp \frac{h}{24}\pm\frac1{12}{\frac { ( r-\rho v
) h}{vy_j}},&
\gamma_{2,4} &= \frac1{12}\mp\frac1{12}{\frac{\kappa
h}{v}}\mp\frac1{12}{\frac { ({v}^{2} -\kappa \theta )
h}{{v}^{2}y_j}}.
\end{aligned}$$ When multiple indexes are used with $\pm$ and $\mp$ signs, the first index corresponds to the upper sign.
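To make the use of these coefficients concrete, the following minimal sketch assembles the nine-point scheme (\[eq:stencil2\]) into sparse matrices for the interior grid points; `alpha(l, j)` and `gamma(l, j)` are placeholder callables assumed to return the coefficients $\alpha_l$ and $\gamma_l$ above at $\sigma$-row $j$, and the row-major grid numbering is an implementation choice.

```python
# Illustrative sketch: assembling (eq:stencil2) as sparse matrices A u = G f.
# alpha(l, j) and gamma(l, j) are placeholders for the coefficients above.
import scipy.sparse as sp

# stencil offsets (di, dj) matching the numbering in (eq:coeffnumber)
OFFSETS = {0: (0, 0), 1: (1, 0), 2: (0, 1), 3: (-1, 0), 4: (0, -1),
           5: (1, 1), 6: (-1, 1), 7: (-1, -1), 8: (1, -1)}

def assemble(nx, ny, alpha, gamma):
    idx = lambda i, j: i * ny + j                 # row-major grid numbering
    A = sp.lil_matrix((nx * ny, nx * ny))
    G = sp.lil_matrix((nx * ny, nx * ny))
    for i in range(1, nx - 1):                    # interior nodes only;
        for j in range(1, ny - 1):                # boundary rows handled separately
            for l, (di, dj) in OFFSETS.items():
                A[idx(i, j), idx(i + di, j + dj)] = alpha(l, j)
                G[idx(i, j), idx(i + di, j + dj)] = gamma(l, j)
    return A.tocsr(), G.tocsr()
```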
\[otherstochmodels\] The derivation of the scheme in this section can be modified to accommodate other stochastic volatility models, e.g., the GARCH diffusion model (\[eq:garchmodel\]) or the 3/2-model (\[eq:32model\]). For these models the structure of the relevant partial differential equations remains the same; only the coefficients of the derivatives have to be modified accordingly. Similarly, the coefficients of the derivatives in (\[eq:dx1\])-(\[eq:dxxyy\]) have to be modified. Substituting these in the modified expression for the truncation error, one obtains $\mathcal{O}(h^4)$ approximations equivalent to (\[eq:scheme2\]).
High-order scheme for the parabolic problem
-------------------------------------------
The high-order compact approach presented in the previous section can be extended to the parabolic problem directly by considering the time derivative in place of $f(x,y)$. Any time integrator can be implemented to solve the problem as presented in [@SpotzCarey]. We consider the most common class of methods involving two time levels. For example, differencing at $t_{{\mu}}=(1-{\mu})t^n + {\mu} t^{n+1}$, where $0 \leq {\mu} \leq 1$ and the superscript $n$ denotes the time level, yields a class of integrators that include the forward Euler ($\mu = 0$), Crank-Nicolson ($\mu=1/2$) and backward Euler ($\mu = 1$) schemes. We use the notation $\delta^+_t u^n = \frac{u^{n+1}-u^{n}}{k}$. Then the resulting fully discrete difference scheme for node $(i,j)$ at the time level $n$ becomes $$\sum_{l=0}^8 \mu \alpha_l u_l^{n+1} + (1-\mu) \alpha_l u_l^{n} =
-\sum_{l=0}^8 \gamma_l \delta^+_t u_l^n,$$ which can be written in the form (after multiplying by $24 v^3h^2yk$) $$\label{eq:hocscheme}
\sum_{l=0}^8 \beta_l u_l^{n+1} = \sum_{l=0}^8 \zeta_l u_l^n.$$ The coefficients $\beta_l,$ $\zeta_l$ are numbered according to the indexes and are given by $$\begin{aligned}
\beta_0 =& ( ( ( 2 {y_j}^{2}-8 ) {v}^{4}+ ( ( -
8 \kappa -8 r ) y_j-8 \rho r ) {v}^{3}+ ( 8 {\kappa
}^{2}{y_j}^{2}+8 {r}^{2} ) {v}^{2}\\
&-16 {\kappa }^{2}\theta vy_j +8 {\kappa }^{2}{\theta }^{2} ) \mu k+16 {v}^{3}y_j ) {h}^{2}+
( -16 {\rho}^{2}+40 ) {y_j}^{2}{v}^{4}\mu k\\
\beta_{1,3} =&\pm ( (\kappa \theta {v}^{2} -{v}^{4}-\kappa y_j{v}^{3}
) \mu k- ( y_j+2 \rho ) {v}^{3}+2 {v}^{2}r
) {h}^{3}+ ( ( ( -{y_j}^{2}+2 ) {v}^{4}\\
&+ ( ( 4 r+2 \kappa ) y_j+4 \rho r ) {v}^{3}-
( 2 \kappa \theta +4 {r}^{2} ) {v}^{2} ) \mu k+2
{v}^{3}y_j ) {h}^{2}\\
&\pm ( 4 {v}^{4}{y_j}^{2}+ ( -8 {y_j}^{
2}\kappa \rho-8 y_jr ) {v}^{3}+8 y_j\kappa \theta \rho {v}^{2}
) \mu kh+ (8 {\rho}^{2}-8 )
{y_j}^{2}{v}^{4}\mu k,\\
\beta_{2,4} =& \pm ( ( 2 {\kappa }^
{2}\theta v-2 {\kappa }^{2}{v}^{2}y_j-2 {v}^{3}\kappa ) \mu k-2 {v}^{2}y_j\kappa +2 v\kappa \theta -2 {v
}^{3} ) {h}^{3}+ ( ( 2 {v}^{4}\\
&+2 \kappa y_j{v}^{3}+ ( -4 {\kappa }^{2}{y_j}^{2}+2 \kappa \theta ) {v}^{2}+8 {
\kappa }^{2}\theta vy_j-4 {\kappa }^{2}{\theta }^{2} ) \mu k+2 {v
}^{3}y_j ) {h}^{2}\\
&\pm ( ( 8 {y_j}^{
2}\kappa +8 y_j\rho r ) {v}^{3}-4 {v}^{4}{y_j}^{2}\rho-8 {v}^{2}y_j\kappa \theta
) \mu kh+ ( 8 {\rho}^{2}-8 ) {y_j}^{2}{v}^{4}\mu k,\\
\beta_{5,7} =& ( ( {v}^{4}\rho+ ( -{y}^{2}\kappa +\kappa y_j\rho+r
) {v}^{3}+ ( \theta +2 r ) \kappa y_j{v}^{2}-2 r
\kappa \theta v ) \mu k\\
&+{v}^{3}\rho y_j ) {h}^{2}\pm ( ( 2 \rho+1 ) {y_j}^{2}{v}^{4}+ ( ( 2+4
\rho ) \kappa {y_j}^{2}+ ( -4 \rho r-2 r ) y_j
) {v}^{3}\\
&+ ( -2 \theta -4 \theta \rho ) \kappa y_j{
v}^{2} ) \mu kh+ ( -2-4 {\rho}^{2}-6 \rho ) {y_j}^{2
}{v}^{4}\mu k,\\
\beta_{6,8} =& ( ( -{v}^{4}\rho+ ( {y_j}^{2}\kappa -\kappa y_j\rho-r
) {v}^{3}+ ( -\theta -2 r ) \kappa y_j{v}^{2}+2 r
\kappa \theta v ) \mu k\\
&-{v}^{3}\rho y_j ) {h}^{2}
\pm ( ( 2 \rho-1 ) {y_j}^{2}{v}^{4}+ ( ( 2-4
\rho ) \kappa {y_j}^{2}+( 2 r-4 \rho r ) y_j
) {v}^{3}\\
&+ ( 4 \theta \rho-2 \theta ) \kappa y_j{v
}^{2} ) \mu kh+ ( -4 {\rho}^{2}+6 \rho-2 ) {y_j}^{2}
{v}^{4}\mu k,
\end{aligned}$$ and $$\begin{aligned}
\zeta_0 =& 16v^3y_jh^2+(1-\mu)k( ( ( 8-2 {y_j}^{2} ) {v}^{4}+ ( ( 8 \kappa
+8 r ) y_j+8 \rho r ) {v}^{3}\\
&+ ( -8 {r}^{2}-8 {
\kappa }^{2}{y_j}^{2} ) {v}^{2}+16 {\kappa }^{2}\theta vy_j-8 {
\kappa }^{2}{\theta }^{2} ) {h}^{2}+ ( -40+16 {\rho}^{2}
) {y_j}^{2}{v}^{4}
),\\
\zeta_{1,3} =&\pm(2r-(y_j+2\rho)v)v^2h^3+2v^3y_jh^2+(1-\mu)k ( \pm (
{v}\kappa y_j+ {v}^{2} -\kappa \theta ){v}^{2}{h}^{3}\\
&+ ( {v}^{2}{y_j}^{2}- ( 4 r+2 \kappa
) vy_j+ 4 {r}^{2}+2 \kappa \theta -2 {v}^{2}-4
\rho vr ) {v}^{2} {h}^{2}\\
&\pm ( ( -4 {v}+8
\kappa \rho ) {v}^{3}{y_j}^{2}+ ( -8 \kappa \theta
\rho+8 vr ) {v}^{2}y_j ) h+ ( 8 {v}^{2}-8 {v}^{2}{
\rho}^{2} ) {v}^{2}{y_j}^{2}),
\\
\zeta_{2,4} =&\pm(2v\kappa \theta -2v^2y_j\kappa -2v^3)h^3+2v^3y_jh^2+(1-\mu)k ( \pm 2(
{v}^{3}\kappa - {\kappa }^{2}\theta v\\
&+ {\kappa }^{2}{v}^{
2}y_j ) {h}^{3}+ ( 4 {\kappa }^{2}{v}^{2}{y_j}^{2}- ( 2 {v}^{2}+8 {\kappa }\theta )\kappa v y_j
+2 \kappa \theta (2 {\kappa }{\theta }- {v}^{2})-2 {v}^{4} ) {h}^{2}\\
&\pm ( ( -8 {v}^{3}\kappa +4 {v}^{4}\rho ) {y_j}^{2}+
( 8 \kappa \theta {v}^{2}-8 {v}^{3}\rho r ) y_j
) h+ ( -8 {v}^{4}{\rho}^{2}+8 {v}^{4} ) {y_j}^{2}),\\
\zeta_{5,7} =&v^3\rho y_jh^2+(1-\mu)k ( ( {v}^{3}{y_j}^{2}\kappa -v ( v\kappa \theta +2 r\kappa v+
\kappa {v}^{2}\rho ) y_j\\
&-v ( {v}^{2}r-2 r\kappa \theta +{v}
^{3}\rho ) ) {h}^{2}\pm ( -v ( 2 {v}^{3}\rho+{v}
^{3}+4 \kappa {v}^{2}\rho+2 {v}^{2}\kappa ) {y_j}^{2}\\
&+v (
2 v\kappa \theta +4 v\kappa \theta \rho+4 {v}^{2}\rho r+2 {v}^
{2}r ) y_j ) h+v ( 2 {v}^{3}+6 {v}^{3}\rho+4 {v}^{3
}{\rho}^{2} ) {y_j}^{2}), \\
\zeta_{6,8} =&-v^3\rho y_jh^2 +(1-\mu)k ( ( -{v}^{3}{y_j}^{2}\kappa +v ( v\kappa \theta +2 r\kappa v+
\kappa {v}^{2}\rho ) y_j\\
&+v ( {v}^{2}r-2 r\kappa \theta +{v}
^{3}\rho ) ) {h}^{2}
\pm ( v ( -2 {v}^{3}\rho+{v}
^{3}+4 \kappa {v}^{2}\rho-2 {v}^{2}\kappa ) {y_j}^{2}\\
&+v (
2 v\kappa \theta -4 v\kappa \theta \rho+4 {v}^{2}\rho r-2 {v}^{
2}r ) y_j ) h+v ( 2 {v}^{3}-6 {v}^{3}\rho+4 {v}^{3}
{\rho}^{2} ) {y_j}^{2}).\end{aligned}$$ When multiple indexes are used with $\pm$ and $\mp$ signs, the first index corresponds to the upper sign. Choosing $\mu=1/2,$ i.e., in the Crank-Nicolson case, the resulting scheme is of order two in time and of order four in space.
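A possible time-marching loop for (\[eq:hocscheme\]) is sketched below: with sparse matrices `B` and `Z` built from the $\beta_l$ and $\zeta_l$ coefficients (e.g. with an assembler like the one sketched above), each step amounts to one sparse linear solve. The reuse of a precomputed LU factorisation and the `apply_bc` callback are implementation assumptions, not prescribed by the scheme.

```python
# Illustrative sketch of the time marching for (eq:hocscheme): B and Z hold the
# beta_l and zeta_l coefficients; apply_bc imposes the Dirichlet values and the
# Neumann extrapolation of the previous section.
from scipy.sparse.linalg import splu

def march(B, Z, u0, n_steps, apply_bc):
    solve = splu(B.tocsc()).solve          # coefficients are time-independent
    u = u0.copy()
    for _ in range(n_steps):
        u = apply_bc(solve(Z @ u))
        # for a Rannacher start-up, B and Z would be re-assembled with mu = 1
    return u
```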
Stability analysis {#numanalsection}
------------------
Besides the multi-dimensionality the initial-boundary-value problem features two main difficulties for its stability analysis: the coefficients are non-constant and the boundary conditions are not periodic. In this section, we consider the von Neumann stability analysis (see, e.g., [@StrikwerdaBook]) even if the problem considered does not satisfy periodic boundary conditions. This approach is extensively used in the literature and yields good criteria on the robustness of the scheme. Other approaches which take into account the boundary conditions like normal mode analysis [@GKS] are beyond the scope of the present paper (we refer to [@FournieRigal] for normal mode analysis for a high-order compact scheme).
To consider the variable coefficients, the principle of ‘frozen coefficients’ (the variable coefficient problem is stable if all the ‘frozen’ problems are stable) [@GKS; @StrikwerdaBook] is employed. It should be noted that, in the discrete case, this principle is far from trivial. The most general statements are given in [@GKS; @Magnus; @Wade; @StrikewedaWade] and references therein for hyperbolic problems. For parabolic problems in the discrete case we refer to [@RicMor67; @Widlund65]. Using the frozen coefficients approach gives a necessary stability condition, and slightly strengthened stability for frozen coefficients is sufficient to ensure overall stability [@RicMor67]. We now turn to the von Neumann stability analysis. We rewrite $u^n_{i,j}$ as $$\label{eq:Unwave}
u^n_{i,j}=g^n e^{Iiz_1 + Ijz_2},$$ where $I$ is the imaginary unit, $g^n$ is the amplitude at time level $n$, and $z_1={2\pi h}/{\lambda_1}$ and $z_2={2\pi h}/{\lambda_2}$ are phase angles with wavelengths $\lambda_1$ and $\lambda_2,$ in the range $[0,2\pi[$, respectively. Then the scheme is stable if for all $z_1$ and $z_2$ the amplification factor $G={g^{n+1}}/{g^{n}}$ satisfies the relation $$\label{eq:vNstab}
|G|^2 - 1 \leq 0.$$ An expression for $G$ can be found using (\[eq:Unwave\]) in (\[eq:hocscheme\]).
Our aim is to prove von Neumann stability (for ‘frozen coefficients’) without restrictions on the time step size. To show that (\[eq:vNstab\]) holds, we would need to study the (formidable) expression for the amplification factor $G$ (not given here), which consists of polynomials of order up to six in 13 variables. To reduce the high number of parameters in the following analysis, we assume here a zero interest rate $r=0$ and choose the parameter ${\mu}=1/2$ (Crank-Nicolson case). Even then, a complete analysis for non-zero correlation seems out of reach at present, but we are able to show the following result.
\[thm:stability\] For $r=\rho=0$ and $\mu=1/2$ (Crank-Nicolson), the scheme (\[eq:hocscheme\]) satisfies the stability condition (\[eq:vNstab\]).
Let us define new variables $$\begin{aligned}
c_1=\cos\left(\frac{z_1}{2}\right),&\quad c_2=\cos\left(\frac{z_2}{2}\right),\quad
s_1=\sin\left(\frac{z_1}{2}\right),\quad s_2=\sin\left(\frac{z_2}{2}\right),\\ W&=\frac {2 \left( \theta-vy \right) }{v}s_2,\quad
V= \frac {2vy}{\kappa}s_1,\end{aligned}$$ which allow us to express $G$ in terms of $h,k,\kappa,V,W$ and trigonometric functions only. This reduces the number of variables in the amplification factor from ten to nine. The new variable $V$ has constant positive sign contrary to $W$.
In the new variables the stability criterion (\[eq:vNstab\]) of the scheme can be written as $$\label{eq:vNstab2}
\frac{-8kh^2(n_4h^2+n_2)}{d_6h^6 + d_4h^4 + d_2h^2 + d_0} \leq 0,$$ with $$\begin{aligned}
n_4 & =-4\,V{\kappa}^{3}{\it f_3}\,s_1^{3}{W}^{2}-{V}^{3}{\kappa}^{3}{
\it f_4}\,s_1^{3},\quad
n_2 = -4\,{V}^{3}{\kappa}^{3}{\it f_2}\,{\it f_1}\,{\it s_1},\\
d_6 & = 4\, \left( -2\,W{\it c_2}+V{\it c_1} \right) ^{2}{\kappa}^{2}s_1^{4},\\
d_4 &= \frac{1}{4}\,{\kappa}^{4}s_1^{4} \left( {V}^{2}-4\,V{\it c_1}\,W{\it
c_2}+4\,{W}^{2} \right) ^{2}{k}^{2}\\
&\quad -4\,V{\kappa}^{3}s_1^{3} \left( {\it f_4}\,{V}^{2}+4\,{\it f_3}
\,{W}^{2} \right) k +16\,{\kappa}^{2}{V}^{2}f_2^{2}s_1^{2},\\
d_2 &= {V}^{2}{\kappa}^{4}s_1^{2} \left( {V}^{2}{\it f_6}-36\,V{\it
c_1}\,W{\it c_2}+4\,{\it f_5}\,{W}^{2} \right) {k}^{2}-16\,{V}^{3}{
\kappa}^{3}{\it f_2}\,{\it f_1}\,{\it s_1}\,k,\\
d_0 & =4\,{V}^{4}{\kappa}^{4}f_1^{2}{k}^{2},\end{aligned}$$
where $f_1,$ $f_2,$ $f_3,$ $f_4,$ $f_5,$ and $f_6$ have constant sign and are defined by $$\begin{aligned}
f_1 &= 2c_1^{2}c_2^{2}+c_1^{2}+c_2^{2}-4 \leq 0,&
f_2 &=c_1^{2}+c_2^{2}+1\geq 0, \\
f_3 &=2c_1^{2}c_2^{2}-c_1^{2}-1 \leq 0,&
f_4 &=2c_1^{2}c_2^{2}-c_2^{2}-1 \leq 0,\\
f_5 &=4c_1^{4}c_2^{2}-2c_1^{2}-c_2^{2}+8 \geq 0,&
f_6 &= 4c_1^{2}{{c_2}}^{4}-2c_2^{2}-c_1^{2}+8\geq 0.\end{aligned}$$ We observe that we can restrict our analysis (except for $d_2$, treated below) to the trigonometric functions $s_1$, $s_2$, $c_1,$ and $c_2$ in the reduced range $[0,1]$ (${z_1}/{2}$ and ${z_2}/{2}$ are in $[0,\pi[$, even exponents for the cosine functions). It is straightforward to verify that $n_4,$ $n_2,$ $d_6,$ $d_4,$ and $d_0$ are positive. It remains to prove that $d_2=d_{22}k^2 + d_{21}k$ is positive as well. Indeed, $d_{21}\geq 0$ and $d_{22}$ is a polynomial of degree two in $W$ having a positive leading order coefficient. The minimum value of $d_{22}$ is given by $$m=2{V}^{4}{\kappa}^{4}s_1^{2}f_1f_7/f_5$$ with $f_7 =4c_2^{4}c_1^{4}-2c_1^{4}c_2^{2}-2c_1^{2}c_2^{4}+6c_1^{2}c_2^{2}+c_1^{2}+c_2^{2}-8\leq0.$ Hence, $m$ is positive and then $d_2$ is positive as well. Therefore, the numerator in (\[eq:vNstab2\]) is negative and the denominator in (\[eq:vNstab2\]) is positive, which completes the proof.
\
For non-zero correlation the situation becomes more involved. Additional terms appear in the expression for the amplification factor $G$ and we face an additional degree of freedom through $\rho$. Since we have proven condition (\[eq:vNstab\]) for $\rho=0$, it seems reasonable to assume it also holds at least for values of $\rho$ close to zero. In practical applications, however, correlation can be strongly negative. Few theoretical results can be obtained; we recall the following lemma from [@DuFo12].
\[partStabLemma\] For any $\rho$, $r=0$, and $\mu=1/2$ (Crank-Nicolson) it holds: if either $c_1=\pm1$ or $c_2=\pm 1$ or $y=0$, then the stability condition (\[eq:vNstab\]) is satisfied.
See Lemma 1 in [@DuFo12].
In [@DuFo12], we have reformulated condition (\[eq:vNstab\]) into a constrained optimisation problem and have employed a line-search global-optimisation algorithm to find the maxima. We have found that the stability condition was always satisfied. The maxima for each $\rho\in [-1,0]$ were always negative but very close to zero. This result is in agreement with Lemma \[partStabLemma\] (in fact, $|G|^2-1=0$ for $y=0$). Our conjecture from these results is that the stability condition is satisfied also for non-vanishing correlation, although it will be hard to give an analytical proof.
In our numerical experiments we observe stability also for a general choice of parameters. To validate the stability property of the scheme also for general parameters, we perform additional numerical tests in section \[numsection\].
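Such a numerical check of condition (\[eq:vNstab\]) can be organised as in the following minimal sketch; `beta(l)` and `zeta(l)` are placeholder callables assumed to return the frozen-coefficient values of $\beta_l$ and $\zeta_l$, and a simple grid search over the phase angles stands in for the global optimisation used in [@DuFo12].

```python
# Illustrative sketch: sampling |G|^2 - 1 over the phase angles for one frozen
# parameter set.  beta(l) and zeta(l) are placeholders for the coefficients
# above; OFFSETS is the stencil numbering of (eq:coeffnumber).
import numpy as np

OFFSETS = {0: (0, 0), 1: (1, 0), 2: (0, 1), 3: (-1, 0), 4: (0, -1),
           5: (1, 1), 6: (-1, 1), 7: (-1, -1), 8: (1, -1)}

def amplification(z1, z2, beta, zeta):
    e = {l: np.exp(1j * (di * z1 + dj * z2)) for l, (di, dj) in OFFSETS.items()}
    return sum(zeta(l) * e[l] for l in e) / sum(beta(l) * e[l] for l in e)

def max_growth(beta, zeta, n=128):
    zs = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    return max(abs(amplification(z1, z2, beta, zeta)) ** 2 - 1.0
               for z1 in zs for z2 in zs)   # non-positive if (eq:vNstab) holds
```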
Numerical results {#numsection}
=================
Numerical convergence
---------------------
Parameter                   Value
--------------------------- ---------------
strike price                $K=100$
time to maturity            $T=0.5$
interest rate               $r=0.05$
volatility of volatility    $v=0.1$
mean reversion speed        $\kappa =2$
long-run mean of $\sigma$   $\theta =0.1$
correlation                 $\rho=-0.5$
: Default parameters for numerical simulations.[]{data-label="defaulttable"}
In this section we perform a numerical study to compute the order of convergence of the scheme (\[eq:hocscheme\]). Due to the compact discretization the resulting linear systems have a good sparsity pattern and can be solved very efficiently. We compute the $l_2$ norm error $\varepsilon_2$ and the maximum norm error $\varepsilon_\infty$ of the numerical solution with respect to a numerical reference solution on a fine grid. We fix the parabolic mesh ratio $k/h^2$ to a constant value, which is natural for parabolic PDEs and for our scheme, which is of order $\mathcal{O}(k^2)$ in time and $\mathcal{O}(h^4)$ in space. Then, asymptotically, we expect these errors to converge as $\varepsilon = Ch^m$ for some constants $C$ and $m$. This implies $\ln(\varepsilon) = \ln(C) + m \ln(h).$ Hence, the double-logarithmic plot of $\varepsilon$ against $h$ should be asymptotic to a straight line with slope $m$. This gives a method for experimentally determining the order of the scheme.
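In practice the slope $m$ can be extracted by a least-squares fit of the logarithms; the helper below is a small illustration of this procedure and is not part of the scheme itself.

```python
# Illustrative sketch: experimental convergence order from the log-log fit
# described above, given mesh widths hs and the corresponding errors errs.
import numpy as np

def convergence_order(hs, errs):
    m, _ = np.polyfit(np.log(hs), np.log(errs), 1)   # slope m, intercept ln(C)
    return m
```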
Figure \[fig:sol\] shows the numerical solution for the European option price at time $T=0.5$ using the parameters from Table \[defaulttable\].
![Numerical solution for the European option price.[]{data-label="fig:sol"}](solution2.eps){width="65.00000%"}
We refer to Figure \[fig:numconv1\] and Figure \[fig:numconv2\] for the results of the numerical convergence study using the default parameters from Table \[defaulttable\]. For the parameter $\mu$, we use a Rannacher time-stepping choice [@Ran84], i.e., we start with four fully implicit quarter time steps ($\mu=1$) and then continue with Crank-Nicolson ($\mu=1/2$).
![$l_2$-error vs. $h$.[]{data-label="fig:numconv1"}](l2err.eps){width="65.00000%"}
![$l_\infty$-error vs. $h$.[]{data-label="fig:numconv2"}](linferr.eps){width="65.00000%"}
For comparison we conducted additional experiments using a standard, second order scheme (based on the central difference discretization where we neglect the truncation error). We observe that the numerical convergence order agrees well with the theoretical order of the schemes. It is important to choose the mesh in such a way that the singular point of the initial condition is not a point of the mesh. The construction of such a mesh is always possible in a simple manner. Then the non-smooth payoff can be directly considered in our scheme and we observe fourth order numerical convergence.
Without this constraint on the mesh, i.e. when the singular point of the payoff is a mesh point, the rate of convergence is reduced to two. However, it is possible to recover the fourth order convergence with such a mesh if the initial data are smoothed.
The numerical convergence analysis also shows the superior efficiency of the high-order scheme compared to a standard second order discretization. In each time step of each scheme a linear system has to be solved. For both schemes this requires the same computational time for the same dimension. To achieve the same level of accuracy the new scheme requires significantly less grid points, or in other words, the computational time to obtain a given accuracy level is greatly reduced by using the high-order scheme.
Numerical stability analysis
----------------------------
In our stability analysis in section \[numanalsection\], we have proven the stability result Theorem \[thm:stability\] for $r=\rho=0.$ To validate this property for general parameters, we perform additional numerical tests. We compute numerical solutions for varying values of the parabolic mesh ratio $k/h^2$ and the mesh width $h.$ Plotting the associated $l_2$ norm errors in the $k/h^2$-$h$ plane should allow us to detect stability restrictions depending on $k/h^2$ or oscillations that occur for high cell Reynolds numbers (large $h$). This approach for a numerical stability study was also used in [@DuFoJu03].
![$l_2$ norm error in the $k/h^2$-$h$-plane for $\rho=-0.5$ (top) and $\rho=0$ (bottom).[]{data-label="fig:numstability"}](stability.eps){width="65.00000%"}
We perform numerical experiments for $\rho=0$ and $\rho=-0.5$. For the other parameters, we use again the default parameters from Table \[defaulttable\]. The results are shown in Figure \[fig:numstability\]. For both cases, $\rho=0$ and $\rho=-0.5,$ the errors show a similar behaviour, being slightly larger for non-vanishing correlation. There is almost no dependence of the error on the parabolic mesh ratio $k/h^2,$ which confirms numerically that regular solutions can be obtained without restriction on the time step size. For larger values of $h,$ which also result in a higher cell Reynolds number, the error grows gradually, and no oscillation in the numerical solutions occurs. Based on these results and the findings in [@DuFo12], we conjecture that the stability condition (\[eq:vNstab\]) also holds for a general choice of parameters.
Conclusion {#concsection}
==========
We have presented a new high-order compact finite difference scheme for option pricing under stochastic volatility that is fourth order accurate in space and second order accurate in time. We have conducted a von Neumann stability analysis (for ‘frozen coefficients’ and periodic boundary data) and proved unconditional stability for vanishing correlation. In our numerical experiments we observe a stable behaviour also for a general choice of parameters. Additional numerical tests presented here and the results of subsequent research reported in [@DuFo12] suggest that the scheme is also von Neumann stable for non-zero correlation. In our numerical convergence study we obtain fourth order numerical convergence for the non-smooth payoffs which are typical in option pricing.
It would be interesting to consider extensions of this scheme to non-uniform grids and to the American option pricing problem, where early exercise of the option is possible. An approach to the first would be to introduce a transformation of the partial differential equation from a non-uniform grid to a uniform grid [@Fournie00]. Then our high order compact methodology can be applied to this transformed partial differential equation. This is, however, not straightforward as the derivatives of the transformation appear in the truncation error and due to the presence of the cross-derivative terms. One cannot proceed to cancel terms in the truncation error in a similar fashion as in the current paper, and the derivation of a high-order compact scheme becomes much more involved. For the second extension, the American option pricing problem, one has to solve a free boundary problem. It can be written as a linear complementarity problem which can be discretised using the scheme (\[eq:hocscheme\]). To retain the high-order convergence one would need to combine the high-order discretization with a high-order resolution of the free boundary. Both extensions are beyond the scope of the present paper, and we leave them for future research.
[[**Acknowledgement.**]{} Bertram D[ü]{}ring acknowledges partial support from the Austrian Science Fund (FWF), grant P20214, and from the Austrian-Croatian Project HR 01/2010 of the Austrian Exchange Service (ÖAD). The authors are grateful to the anonymous referees for helpful remarks and suggestions.]{}
[99]{}
E. Benhamou, E. Gobet, and M. Miri. Time Dependent Heston Model, [*SIAM J. Finan. Math.*]{} **1**, 289-325, 2010.
F. Black and M. Scholes. The pricing of options and corporate liabilities. *J. Polit. Econ.* **81**, 637-659, 1973.
N. Clarke and K. Parrott. Multigrid for American option pricing with stochastic volatility. [*Appl. Math. Finance*]{} **6**(3), 177–195, 1999.
B. Düring and M. Fournié. On the stability of a compact finite difference scheme for option pricing. In: [*Progress in Industrial Mathematics at ECMI 2010*]{}, M. Günther et al. (eds.), pp. 215-221, Springer, Berlin, Heidelberg, 2012.
B. Düring, M. Fournié, and A. Jüngel. Convergence of a high-order compact finite difference scheme for a nonlinear Black-Scholes equation. [*Math. Mod. Num. Anal.*]{} **38**(2), 359–369, 2004.
B. Düring, M. Fournié, and A. Jüngel. High-order compact finite difference schemes for a nonlinear Black-Scholes equation. [*Intern. J. Theor. Appl. Finance*]{} **6**(7), 767–789, 2003.
B. Düring. Asset pricing under information with stochastic volatility. [*Rev. Deriv. Res.*]{} **12**(2), 141–167, 2009.
M. Fournié. High order conservative difference methods for 2d drift-diffusion model on non-uniform grid. [ *Appl. Numer. Math.*]{} **33**(1-4), 381–392, 2000.
M. Fournié and A. Rigal. High Order Compact Schemes in Projection Methods for Incompressible Viscous Flows, [*Commun. Comput. Phys.*]{} **9**(4), 994–1019, 2011.
B. Gustafsson, H.-O. Kreiss, and J. Oliger. [*Time Dependent Problems and Difference Methods*]{}, Wiley-Interscience, 1996.
B. Gustafsson. The convergence rate for difference approximation to general mixed initial-boundary value problems. [ *SIAM J. Numer. Anal.*]{} **18**(2), 179–190, 1981.
S.L. Heston. A closed-form solution for options with stochastic volatility with applications to bond and currency options. [*Review of Financial Studies*]{} **6**(2), 327–343, 1993.
K.J. in’t Hout and S. Foulon. ADI finite difference schemes for option pricing in the Heston model with correlation. [*Int. J. Numer. Anal. Mod.*]{} **7**, 303–320, 2010.
S. Ikonen and J. Toivanen. Efficient numerical methods for pricing American options under stochastic volatility. [*Numer. Methods Partial Differential Equations*]{} **24**(1), 104–126, 2008.
N. Hilber, A. Matache, and C. Schwab. Sparse wavelet methods for option pricing under stochastic volatility. [*J. Comput. Financ.*]{} **8**(4), 1–42, 2005.
W. Liao and A.Q.M. Khaliq. High-order compact scheme for solving nonlinear Black-Scholes equation with transaction cost. [ *Int. J. Comput. Math.*]{} **86**(6), 1009–1023, 2009.
P. Kangro and R. Nicolaides. Far field boundary conditions for Black-Scholes equations. [*SIAM J. Numer. Anal.*]{} **38**, 1357–1368, 2000.
S. Mishra and M. Svärd. On stability of numerical schemes via frozen coefficients and the magnetic induction equations. [*BIT Numer. Math.*]{} **50**, 85–108, 2010.
R. Rannacher. Finite element solution of diffusion problems with irregular data. [*Numer. Math.*]{} **43**(2), 309–327, 1984.
R.D. Richtmyer and K.W. Morton. [*Difference Methods for Initial Value Problems*]{}. Interscience, New York, 1967.
W.F. Spotz and C.F. Carey. Extension of high-order compact schemes to time-dependent problems. [*Numer. Methods Partial Differential Equations*]{} **17**(6), 657–672, 2001.
J.C. Strikwerda. [*Finite difference schemes and partial differential equations*]{}. Second edition. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 2004.
J.C. Strikwerda and B.A. Wade. An extension of the Kreiss matrix theorem. [*SIAM J. Numer. Anal.*]{} **25**(6), 1272–1278, 1988.
D.Y. Tangman, A. Gopaul, and M. Bhuruth. Numerical pricing of options using high-order compact finite difference schemes. [*J. Comp. Appl. Math.*]{} **218**(2), 270–280, 2008.
D. Tavella and C. Randall. *Pricing Financial Instruments: The Finite Difference Method*. John Wiley & Sons, 2000.
B.A. Wade. Symmetrizable finite difference operators. [*Math. Comput.*]{} **54**, 525–543, 1990.
O.B. Widlund. Stability of parabolic difference schemes in the maximum norm. [*Numer. Math.*]{} **8**, 186–202, 1966.
W. Zhu and D.A. Kopriva. A spectral element approximation to price European options with one asset and stochastic volatility. [*J. Sci. Comput.*]{} **42**(3), 426–446, 2010.
R. Zvan, P.A. Forsyth and K.R. Vetzal. Penalty methods for American options with stochastic volatility. [ *J. Comp. Appl. Math.*]{} **91**(2), 199–218, 1998.
---
author:
- 'Jean-François Tremblay$^{*}$, Martin Béland$^{+}$, François Pomerleau$^{*}$, Richard Gagnon$^{\dagger}$, Philippe Giguère [^1]'
bibliography:
- 'references.bib'
title: Automatic 3D Mapping for Tree Diameter Measurements in Inventory Operations
---
Introduction {#sec:intro}
============
Forestry is an important industry in many countries. In 2016, it accounted for about 13 billion USD in Canada’s economy and a similar figure in Sweden’s exports of wood products. Yet, worker shortages and high turnover rates coupled with long training times are threatening many operations in this industry. Recent progress in field robotics, such as 3D mapping, has the potential to improve forestry operations while reducing the demand for labor. Furthermore, these 3D mapping technologies could be used to estimate wood biomass for carbon accounting purposes [@biomass]. From a scientific point of view, studying the wider context of field robotics in forests is interesting because of the new challenges it generates. For instance, localization and mapping is more difficult in unstructured environments [@Pomerleau2013].
A key component in modern forest operations is forest inventory [@tlsinventory]. It consists in identifying specific attributes of trees. Some of these attributes, such as species, can be estimated using cameras and advanced computer vision techniques [@mathieu]. Others, such as tree diameters, can be extracted from lidar point clouds, as they contain metric information. We conjecture that the ability for an autonomous system to process geometric information via 3D mapping is one of the key elements in the development of future intelligent forest machinery. In our immediate case of forest inventory, this would enable automatic or computer-assisted tree selection for forest harvesting equipment. At the moment, deciding which trees to harvest in a partial cut scenario is performed manually by a technician. This operation has been identified as expensive, time consuming, and as yielding different results depending on the technician [@markingunreliable]. We believe that this could be addressed by equipping harvesting machinery with the proper sensors and algorithms. This paper explores the use of automatic map building with a standard set of robotic sensors, within the context of forest inventory. Although full 3D maps are produced in the process, as shown in , we limit our quantitative study to a single standard attribute in the forest inventory: DBH (diameter at breast height) measurements. The DBH is arguably the most important tree characteristic used for tree selection and wood volume prediction in the forest industry [@west2009tree]. Typical requirements for diameter measurement accuracy are around of error [@tlsinventory], but can be as high as for American diameter classes [@americandbh]. Early work on tree diameter estimation from lidars focused on terrestrial laser scanning (TLS) [@tlsinventory]. The TLS sensor, which is mounted on a tripod, is manually moved by an operator. Once the individual scans are registered using markers manually installed in the environment, one obtains a very precise 3D map of the environment. However, this data collection approach is significantly more tedious and time consuming than mobile mapping techniques.
We propose to use an ICP-based 3D mapping approach for forest mapping. A map in 2D might not be sufficient for forest mapping or forest robotics in general. Indeed, forests are rarely flat and, even when they are, obstacles on the ground will result in the robot not being level, as will be shown in our dataset. Our dataset includes one particularly steep forest where it is not clear how any 2D approach could work. Point clouds generated from this approach tend to be noisier than TLS point clouds [@tlsinventory]; this causes problems in the accuracy of diameter extraction. From the generated 3D maps, we automatically estimate tree diameters, comparing several approaches. We validate our complete approach on an extensive dataset of 11 trajectories through four forests of varying topographies, ages, species compositions and densities.
Our contributions are as follows:
we test ICP mapping in different forest types for the first time and provide insight about its performance and limitations;
we propose a new robust approach to DBH estimation, based on the median of several cylinder fittings, designed to perform well on noisy maps, including those built with ICP;
we perform extensive validation of different tree diameter estimation methods from noisy lidar data, and identify which ones perform best; and
we provide recommendations on trajectories and field deployment for diameter estimation from robot mapping.
Related works {#sec:relatedworks}
=============
Before measuring tree DBHs from lidar-equipped mobile platforms, one needs to create a map from the observations. @jagbrant2015lidar used a 2D lidar, combined with GPS and an IMU only for localization, to detect trees in orchards. This approach is not optimal in natural forests as GPS performance is affected by heavy forest canopy. @tsubouchi2014forest bridged the gap to natural forests, using a 2D lidar on a pan-tilt unit combined with a tripod. The scans were taken in a static manner as opposed to from a moving robot. The mapping was done using a combination of the tree’s location and diameter. @tang2015slam created maps for trajectories traveling on a road through the forest as opposed to inside the forest itself. This trajectory was selected to improve GPS reception. Importantly, they did not perform full 3D mapping which, as we claim, is key to enabling robotics in forests. @calderscomp performed an extensive comparison between a commercial GeoSLAM handheld scanner and TLS. Their testing was limited to circular plots with radius. They also noted that the approach still failed on two of the forest plots tested. More recently, @seiki2017backpack scanned a forest with a 2D lidar and a pan-tilt unit on a backpack, using a technique called LOAM [@loam]. Other work using graph-SLAM has been done in [@marek].
After generating a 3D map, one needs to use either a circle fitting or cylinder fitting algorithm to estimate DBHs. @jfrDetectionAndDTM presented a method to detect and segment trees in static lidar scans. They tested diameter estimation using cone and cylinder fitting for five sites. They reported a of more than , which is not sufficiently accurate for forest inventory. @tsubouchi2014forest performed least square 2D circle fitting for diameter estimation. @calderscomp used Computree [@computree], which was developed for TLS data, for terrain height models and diameter measurements. @seiki2017backpack employed “the Point Cloud Library RANSAC cylinder fitting method”. In [@marek], two circle fitting methods were validated: the Pratt fit [@pratt1987direct] and least square circle fitting.
Another important aspect is the rigorous analysis of the mapping and diameter extraction method, both in terms of forest variety and the number of trees tested. @jfrDetectionAndDTM had five test sites, containing 113 trees. In [@tsubouchi2014forest], testing was done in one forest with no branches or vegetation occluding trunks, validating against nine measured trees. @calderscomp had the most complete dataset, with 10 test sites consisting of a circle of radius containing a total of 331 trees. While @tang2015slam did not assess their diameter measurements, they measured the position of 224 trees with a total station along one trajectory. While the results in [@seiki2017backpack] were encouraging, the validation was conducted on one forest site with seven reference trees. [@marek] evaluated their work in one site under near-perfect conditions: no branches occluding the stem and no ground vegetation causing occlusion.
Methods {#subsec:methods}
=======
We describe here our data processing pipeline, starting with map generation, followed by tree segmentation and breast height determination. Finally, we present our different diameter estimation algorithms.
Iterative closest point mapping {#sec:mapping}
-------------------------------
Our 3D mapping method relies on a modified version of `ethz-icp-mapping` [@icpmapping], which uses the ICP algorithm as the registration solution. ICP takes as input a reading point cloud $\mathbf{Q} \in \mathbb{R}^{3 \times m}$ (i.e., the current lidar view of the robot) containing $m$ points, a map point cloud $\mathbf{M}' \in \mathbb{R}^{3 \times l}$ containing $l$ points, and an initial pose estimate $\mathbf{\hat{T}} \in \text{SE}(3)$ to estimate the pose of the robot in the map. To compute the initial estimate, we fused our inertial and wheel odometry measurements. Reading point clouds are filtered for dynamic elements and maps are uniformly downsampled to keep computation time reasonable. The mapping was not performed in real time, as we prioritized map quality over computation time. The algorithm is summarized below.
**Initialization:** $\mathbf{T}_0 \gets \mathbf{1}$, $\mathbf{M}'_0 \gets \mathrm{inputFilters}(\mathbf{Q}_0)$.
**For** each new scan $\mathbf{Q}_i$, $i=1,\dots,t$:
1. $\mathbf{\hat{T}}_{i} = \mathbf{O}_{i-1}^{-1} \mathbf{O}_{i} \mathbf{T}_{i-1}$;
2. $\mathbf{Q}'_i = \mathrm{inputFilters}(\mathbf{Q}_i)$;
3. $\mathbf{T}_i = \mathrm{icp}(\mathbf{Q}'_i, \mathbf{M}_{i-1}, \mathbf{\hat{T}}_i)$;
4. $\mathbf{M}_i = \begin{pmatrix} \mathbf{M}_{i-1} & | & \mathbf{T}_i \mathbf{Q}'_i \end{pmatrix}$;
5. $\mathbf{M}'_i = \mathrm{reduceDensity}(\mathbf{M}_i)$.
**Output:** final map $\mathbf{M}'_t$ and trajectory $\mathbf{T}_{0:t}$.
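In code, the mapping loop can be organised as in the following minimal sketch; `input_filters`, `icp` and `reduce_density` are placeholders for the corresponding filtering, registration and downsampling stages, and storing the point clouds in homogeneous coordinates is an illustrative assumption.

```python
# Illustrative sketch of the mapping loop above.  Point clouds are stored as
# 4 x m arrays in homogeneous coordinates; input_filters, icp and reduce_density
# are placeholder stages, and odometry[i] is the 4 x 4 pose estimate O_i used
# to seed the registration.
import numpy as np

def build_map(scans, odometry, input_filters, icp, reduce_density):
    T = np.eye(4)                                   # T_0 = identity
    trajectory = [T]
    M = input_filters(scans[0])                     # M'_0
    for i in range(1, len(scans)):
        T_hat = np.linalg.inv(odometry[i - 1]) @ odometry[i] @ T
        Q = input_filters(scans[i])
        T = icp(Q, M, T_hat)                        # register the scan in the map
        M = reduce_density(np.hstack((M, T @ Q)))   # merge and downsample
        trajectory.append(T)
    return M, trajectory
```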
Point selection for DBH estimation {#sec:slice-description}
------------------------------
The first step in estimating from a 3D map is to segment trees. Although automatic methods exist [@jfrDetectionAndDTM; @computree], we chose to perform the tree segmentation manually. Our motivation is to validate diameter estimation methods also on less visible trees regardless of segmentation quality. Our manual segmentation comes in the form of 3D bounding-boxes around trunks, which were manually adjusted. These bounding-boxes can include branches and noise, which will be outliers stressing the estimation methods.
To estimate the DBHs, we had to locate the breast height of each tree, defined as above ground level. This implies estimating the ground level at each tree location in the point cloud using a digital terrain model (DTM). Several algorithms have been designed for this purpose [@jfrDetectionAndDTM], from which we chose the raster-based method. The ground height for a given tree was the value of the DTM at the $(x,y)$ position of the center of the manually drawn bounding box. Then, we selected every point in the tree bounding box which was between $h/2$ below breast height and $h/2$ above, where $h$ represents a section thickness. Selecting this thickness $h$ was a trade-off between inducing an error from the change in diameter along a tree’s height and the fact that cylinder fitting performs better as more points are available. Points resulting from this selection are colored in . Those points were finally used to estimate the DBH by one of the cylinder fitting methods described below.
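Once the ground level is known, the slice extraction reduces to a simple window on the height coordinate, as in the minimal sketch below; the 1.3 m breast-height offset is the usual forestry convention and is an assumption here, since the exact value is not shown in this excerpt.

```python
# Illustrative sketch of the point selection above: keep the points of a
# segmented tree lying within a slice of thickness h centred at breast height.
# The 1.3 m above-ground offset is an assumption (the usual forestry convention).
import numpy as np

def breast_height_slice(points, ground_z, h, breast_height=1.3):
    """points: (n, 3) array of one tree's segmented points (x, y, z)."""
    z_bh = ground_z + breast_height
    mask = np.abs(points[:, 2] - z_bh) <= h / 2.0
    return points[mask]
```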
Least square cylinder fitting {#sec:cylinder-fitting}
-----------------------------
As commonly done [@jfrDetectionAndDTM; @seiki2017backpack; @computree], we formulate tree diameter estimation as cylinder fitting. Fitting cylinders to point clouds is a fairly well studied problem [@lukacs1998faithful]. Let $\mathbf{P} = \begin{pmatrix} \mathbf{p}_1 & \mathbf{p}_2 & \dots & \mathbf{p}_n \end{pmatrix} \in \mathbb{R}^{3 \times n}$ be the slice in our point cloud described in the previous subsection, containing the $n$ points belonging to one tree. We also have $\mathbf{N} = \begin{pmatrix} \mathbf{n}_1 & \mathbf{n}_2 & \dots & \mathbf{n}_n \end{pmatrix} \in \mathbb{R}^{3 \times n}$, the surface normals for each $\mathbf{p}_i$. We used the spectral decomposition of the covariance matrix of the $q$-nearest neighbors of each $\mathbf{p}_i$ to estimate $\mathbf{N}$. The eigenvector associated with the smallest eigenvalue of this matrix is the direction of least variance, corresponding to the estimated normal of the surface. A cylinder used to fit $\mathbf{P}$ can be represented in multiple ways. For this work, we parametrize a cylinder as $(\mathbf{a},\mathbf{c},r)$ where $\mathbf{a} \in \mathbb{R}^3 : \left\Vert \mathbf{a} \right\Vert_2 = 1$ is the cylinder axis direction, $\mathbf{c} \in \mathbb{R}^3$ is any point on the cylinder axis and $r \in \mathbb{R}^+$ is the cylinder radius. This parametrization has seven parameters, with one degree of freedom removed by the unit-norm constraint on the axis; the last degree of freedom can be removed by imposing $\mathbf{a} \cdot \mathbf{c} = 0$, which comes naturally when solving for $\mathbf{c}$ in the next cylinder fitting method presented. From there, we investigated four methods to find those parameters from the point cloud $\mathbf{P}$.
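A compact sketch of the normal estimation described above is given below, assuming a k-d tree (SciPy's `cKDTree`) for the $q$-nearest-neighbor search.

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(P, q=20):
    """Estimate surface normals N for a (3, n) point cloud P.

    For each point, the normal is the eigenvector of the covariance matrix of
    its q nearest neighbours associated with the smallest eigenvalue, i.e. the
    direction of least local variance.
    """
    pts = P.T                                   # (n, 3) layout for the k-d tree
    tree = cKDTree(pts)
    _, idx = tree.query(pts, k=q)               # q nearest neighbours of each point
    normals = np.empty_like(pts)
    for i, neigh in enumerate(idx):
        local = pts[neigh] - pts[neigh].mean(axis=0)
        cov = local.T @ local / q
        eigval, eigvec = np.linalg.eigh(cov)    # eigenvalues in ascending order
        normals[i] = eigvec[:, 0]               # smallest-eigenvalue eigenvector
    return normals.T                            # (3, n), matching the text's N
```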
**1) Finding the axis using surface normals** — The *linear least square* method ($A_{LLS}$) [@li2018supervised] needs the surface normals $\mathbf{N}$. This axis-finding method is based on the fact that if $\mathbf{P}$ and $\mathbf{N}$ represent a perfect cylinder, then all normals $\mathbf{n}_i$, $i = 1 \dots n$ will lie on a plane passing through the origin for which the normal will be $\mathbf{a}$. Therefore, finding the optimal axis $\mathbf{a}^*$ for a cylinder can be done by solving $$\mathbf{a}^* = \operatorname*{arg\,min}_{\mathbf{a}} \left\Vert \mathbf{N^{\intercal}a} \right\Vert_2 .
\vspace{-7pt}$$ As it turns out, $\mathbf{a}^*$ is the right singular vector of $\mathbf{N}^{\intercal}$ (equivalently, the left singular vector of $\mathbf{N}$) associated with the smallest singular value. A useful property of the $A_{LLS}$ method is that it is linear. We can then project $\mathbf{P}$ on a plane perpendicular to $\mathbf{a}^*$; the resulting 2D point cloud can be used to fit a circle using any of the methods discussed in the next paragraph. This circle fitting step finds the remaining parameters $r$ and $\mathbf{c}$. Another approach ($A_N$) to finding the cylinder axis is to assume that the tree is perfectly vertical, leading to the simplification $\mathbf{a} = \begin{pmatrix} 0 & 0 & 1 \end{pmatrix}$. This approach was employed in [@tsubouchi2014forest; @marek].
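Both the $A_{LLS}$ axis estimate and the projection onto the plane perpendicular to it fit in a few lines of NumPy; this is only a sketch of the linear step, with the circle fit itself left to the next paragraph.

```python
import numpy as np

def axis_lls(N):
    """A_LLS cylinder axis from surface normals N (3, n): the unit vector a
    minimizing ||N^T a||_2, i.e. the right singular vector of N^T with the
    smallest singular value."""
    _, _, Vt = np.linalg.svd(N.T, full_matrices=False)
    return Vt[-1]

def project_to_plane(P, a):
    """Express the points P (3, n) in 2D coordinates of the plane through the
    origin perpendicular to the axis a (for the subsequent circle fit)."""
    a = a / np.linalg.norm(a)
    helper = np.array([1.0, 0.0, 0.0]) if abs(a[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    e1 = np.cross(a, helper); e1 /= np.linalg.norm(e1)
    e2 = np.cross(a, e1)
    return np.vstack([e1 @ P, e2 @ P])          # (2, n) planar coordinates
```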
**2) Circle fitting algorithms** — Once the axis $\mathbf{a}$ of a cylinder is known, one can project the points in $\mathbf{P}$ onto a plane perpendicular to this axis and then fit a circle to find the radius $r$ and center $\mathbf{c}$. This can be done using *iterative* or *algebraic* methods. The iterative methods minimize the sum of squares of the point-to-circle distances numerically and are consequently prone to local-minima issues. The algebraic methods instead solve the circle-fitting problem analytically, using an approximation of the point-to-circle distance [@pratt1987direct]. In our experiments, we used an algebraic fit called *Hyper*, abbreviated as $H$, introduced in [@al2009error]. Its authors prove that, unlike the fit of @pratt1987direct, it is essentially unbiased in the case of incomplete circle arcs.
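As an illustration of the algebraic approach, the snippet below implements the classical Kåsa linear least-squares circle fit. It is a simpler stand-in for the *Hyper* fit of @al2009error (which uses a different constraint and has smaller bias on short arcs), but it shares the key property of being non-iterative and free of local minima.

```python
import numpy as np

def fit_circle_kasa(xy):
    """Algebraic (Kasa) circle fit to 2D points xy of shape (2, n).

    Solves the linear least-squares system  x^2 + y^2 + D x + E y + F ~ 0
    and converts (D, E, F) into a center and radius.
    """
    x, y = xy
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x ** 2 + y ** 2)
    D, E, F = np.linalg.lstsq(A, b, rcond=None)[0]
    center = np.array([-D / 2.0, -E / 2.0])
    radius = np.sqrt(center @ center - F)
    return center, radius
```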
**3) Non-linear least square cylinder fitting** — This method, presented by @lukacs1998faithful, uses non-linear optimization to estimate the complete cylinder parameters. It relies on a point-to-cylinder distance: $d(\mathbf{p}_i; \mathbf{a}, \mathbf{c}, r) = \left\Vert \left(\mathbf{p}_i - \mathbf{c} \right) \times \mathbf{a} \right\Vert_2 - r.$ To find the cylinder, one then solves $$\label{eq-least-square-cylinder}
\mathbf{a}^*, \mathbf{c}^*, r^* = \operatorname*{arg\,min}_{\mathbf{a}, \mathbf{c}, r} \hspace{3pt} \sum_{i = 1 \dots n} d^2(\mathbf{p}_i; \mathbf{a}, \mathbf{c}, r) \underbrace{+ \sum_{i = 1 \dots n}(\mathbf{n}_i \cdot \mathbf{a})^2}_{\text{Normals loss (optional)}}.
\vspace{-7pt}$$ The second sum is optional, but can be added to the minimization to penalize cylinders which do not fit $\mathbf{N}$ well. To the best of our knowledge, this penalty has not been described elsewhere in the literature. For this paper, the original method without normals will be called $C_{NLS}$, while the minimization with the extra penalty will be called $C_{NLSN}$. We can solve this optimization problem in an unconstrained manner, by converting the problem to the cylinder parametrization from @lukacs1998faithful.
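A minimal sketch of this fit using `scipy.optimize.least_squares` is given below. For simplicity it keeps the redundant $(\mathbf{a}, \mathbf{c}, r)$ parametrization and normalizes the axis inside the residuals instead of converting to the unconstrained parametrization of @lukacs1998faithful; the optional normals term corresponds to the second sum in the equation above.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_cylinder_nls(P, N=None, a0=(0.0, 0.0, 1.0), r0=0.15):
    """Non-linear least-squares cylinder fit to points P (3, n).

    Residuals are the point-to-cylinder distances ||(p_i - c) x a|| - r; when
    the surface normals N (3, n) are provided, the residuals n_i . a are
    appended (the optional 'normals loss').
    """
    c0 = P.mean(axis=1)                       # initial point on the axis
    x0 = np.concatenate([np.asarray(a0, float), c0, [r0]])

    def residuals(x):
        a = x[:3] / np.linalg.norm(x[:3])     # keep the axis unit-length
        c, r = x[3:6], x[6]
        d = np.linalg.norm(np.cross(P.T - c, a), axis=1) - r
        return d if N is None else np.concatenate([d, N.T @ a])

    sol = least_squares(residuals, x0)
    a = sol.x[:3] / np.linalg.norm(sol.x[:3])
    return a, sol.x[3:6], abs(sol.x[6])       # axis, point on axis, radius
```

The estimated diameter is then simply twice the fitted radius.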
**4) Multiple cylinders voting** — We can fit multiple cylinders to the tree slice to improve the robustness of the estimate. In this case, we divide our tree slice vertically to form $n_{cyls}$ point clouds, and fit a cylinder to each one. Then, one can choose the median ($V_{median}$) or the mean ($V_{mean}$) of the diameters of these cylinders as the DBH.
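The voting step can be sketched as a thin wrapper around any of the single-slice estimators above (the function names below refer to the sketches in this section, not to an existing library):

```python
import numpy as np

def dbh_by_voting(P, fit_one, n_cyls=5, use_median=True, min_points=10):
    """Split the breast-height slice P (3, n) vertically into n_cyls sub-slices,
    fit one cylinder per sub-slice with fit_one(sub) -> (axis, point, radius),
    and vote on the diameter with the median (V_median) or the mean (V_mean)."""
    z = P[2]
    edges = np.linspace(z.min(), z.max(), n_cyls + 1)
    diameters = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sub = P[:, (z >= lo) & (z <= hi)]
        if sub.shape[1] < min_points:          # skip nearly empty sub-slices
            continue
        _, _, r = fit_one(sub)
        diameters.append(2.0 * r)
    vote = np.median if use_median else np.mean
    return vote(diameters)
```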
Experimental Setup {#sec:exp_setup}
==================
For each tree in our test sites, a forest technician identified all species and measured the diameter of trees using a specialized diameter tape. This information was engraved on a small metal marker attached to each tree. The only criteria for including a tree in the dataset were that (1) its diameter was greater than the minimum threshold and (2) the tree was standing.
We generated initial 3D maps from our robot observations, and then segmented every individual tree in these maps, assigning an ID to each tree. Afterwards, a stem map (i.e., a two-dimensional plot of the position of every individual tree and its ID) was generated and printed on paper. We then used this stem map in the field to associate each tree with its ID, and thus recover the measurement made by the technician. Unfortunately, even differential GPS cannot be used to localize trees sufficiently precisely for this task, due to canopy interference. In the end, the information for each tree included:

- an individual ID,
- its position in the 3D map in the form of a bounding box,
- its ground-truth DBH, and
- its species.
*Figure (label: f-ground-truth): overview of the ground-truthing workflow — delimit the site; perform trajectories with the robot; choose a reference trajectory and perform ICP mapping; manually segment trees in CloudCompare; perform data association. In parallel, manually measure DBHs and identify species, which also feeds the data association.*
We used a *Clearpath Husky A200* mobile robot to map the different forest sites. Its skid drive makes it appropriate for navigating rough forested environments and emulating forest machinery. The robot was equipped with a *Velodyne HDL32* lidar, an *Xsens MTI-30* IMU and wheel encoders for odometry. All processing was performed offline on a workstation with an AMD Ryzen 1700 and 64 GB of RAM.
*Figure: photographs of the four test sites — <span style="font-variant:small-caps;">Young</span>, <span style="font-variant:small-caps;">Mixed</span>, <span style="font-variant:small-caps;">Mature</span>, and <span style="font-variant:small-caps;">Maple</span> (figures/site\_photos/1–4).*
Experimental sites
------------------
We collected data on four significantly different sites, scanning 1.4 hectares in total. We manually measured and marked 943 trees. From these, 588 were above the minimum DBH threshold (trees below this threshold are not considered of commercial value) and were kept for our study. We chose sites that were different in terms of age, composition, density and topography, to identify how these factors could affect our diameter estimation and robot mapping. The first three sites were located at Forêt Montmorency, owned by Laval University. The last one was located on the University campus, as Forêt Montmorency contains little deciduous forest. We tested a number of trajectories for each site, to see their impact on diameter estimation. The different trajectories were placed in a common coordinate system for each site, using the approach proposed in [@tlr]. For our analysis, each tree observed in a given trajectory is considered as an individual tree observation; therefore, we have 2 to 4 observations for each tree in our dataset. By far, these four sites represent the largest DBH dataset from mobile lidar in the literature. We describe them below.
**1) Young balsam firs** (<span style="font-variant:small-caps;">Young</span>) — Despite being a plantation, the topography of this site was very rough, with an incline and mossy soil. The robot experienced frequent slippage, thus affecting odometry. There were a lot of lower branches occluding the trunks at breast height, which could affect our measurements. The trees were tightly planted, resulting in reduced visibility for the lidar. These factors made this site challenging both for perception and navigation. This site was mostly composed of balsam firs with some paper birch and measured $\times$ . Two trajectories were performed on this site: one big loop around the site, and a longer one where we looped around the site and also crossed it in the middle.
**2) Mixed boreal forest** (<span style="font-variant:small-caps;">Mixed</span>) — Despite being generally flat, this site had a lot of branches on the ground, making navigation difficult. The understory vegetation was also very dense in some places, thus limiting visibility. It was diverse in terms of tree species and age, consisting mainly of quaking aspens, balsam firs and spruces. The site was $\times$ . We conducted three trajectories: the first one was a simple loop around the site, while the other two tried to be more exhaustive.
**3) Mature boreal forest** (<span style="font-variant:small-caps;">Mature</span>) — There was an incline, not as sharp as in <span style="font-variant:small-caps;">Young</span>, and very irregular ground. This site had big trees with non-occluded trunks. Being a mature forest, there was a fair number of fallen trees which could block the robot. The site was mostly composed of balsam fir as well as white and black spruce. This site measured $\times$ . Two trajectories were performed, similarly as in <span style="font-variant:small-caps;">Young</span>.
**4) Mature natural maple forest** (<span style="font-variant:small-caps;">Maple</span>) — It was flat and easy to navigate, with very few obstacles on the ground. Consequently, we drove the robot at a faster speed (1 m/s) for much of the trajectories. It was a mature deciduous natural forest, composed mainly of sugar and red maple. The site was $\times$ and contained upwards of 1000 trees. To reduce the ground-truthing labor, we randomly selected 100 trees to measure. Two trajectories, similar to <span style="font-variant:small-caps;">Young</span> and <span style="font-variant:small-caps;">Mature</span>, were performed at the beginning of October with the leaves on the trees. The same trajectories were repeated at the beginning of November with no leaves left, to study the potential impact of leaves on diameter estimation.
![Diameter and species distribution for our test sites, with the tree count by species in parenthesis. Notice the species and diameter diversity in the different sites. This allowed us to verify the impact of species on diameter estimation, as bark texture impacts the point cloud produced by the lidar. Trees of less than 10 cm were segmented and measured, but were not used in this paper.[]{data-label="fig-site-composition"}](figures/site_composition/all){width="\linewidth"}
Results and discussion
======================
Comparing diameter estimation approaches
----------------------------------------
We tested different combinations of the methods presented in the previous section, to determine the best one. First, we tested the combination of $A_N$ and $H$, which can be summarized as fitting a circle to the $xy$-coordinates of the tree slice. Similarly, we tested $A_{LLS}$ and $H$. We also tested combining both methods above with $C_{NLS}$ and $C_{NLSN}$, using each of the former as the initial estimate for the two latter. All six resulting methods were combined with either $V_{median}$ or $V_{mean}$, resulting in 12 sets of results. All combinations used RANSAC as the outlier rejection method, with tolerance $\varepsilon$. We limited this comparison to trees observed from closer than a threshold distance. As will be shown below, this minimal observation distance has too large an impact on the estimation of the DBH, and we consider accurate measurement from larger distances currently infeasible. This meant discarding 143 out of our 1458 tree observations. We tested all of the following hyperparameter values: $q \in \{15, 20, 25\}$, $n_{cyls} \in \{1, 2, 3, 4, 5\}$, $h \in \{20,30,40,50,60\}$ cm, and $\varepsilon \in \{1,2,3\}$ cm.
Because $V_{mean}$ consistently underperformed $V_{median}$ in all of our tests, its performance is not reported in . The inferior performance of $V_{mean}$ was also confirmed by the fact that the number of vertical slices $n_{cyls} = 1$ was always the best choice in our hyperparameters exploration with $V_{mean}$, meaning that not using $V_{mean}$ was preferable to using it.
|                    | $A_{LLS}$ | $A_{N}$ | $A_{LLS} + C_{NLS}$ | $A_{N} + C_{NLS}$ | $A_{LLS} + C_{NLSN}$ | $A_{N} + C_{NLSN}$ |
|--------------------|-----------|---------|---------------------|-------------------|----------------------|--------------------|
| RMSE (cm)          | 5.08      | 4.41    | 3.76                | **3.45**          | 3.86                 | 3.66               |
| Bias (cm)          | -0.95     | 0.72    | -0.62               | -0.41             | -0.18                | **0.00**           |
| Fail rate (%)      | 11.41     | 13.61   | **6.23**            | 6.46              | 6.54                 | 7.53               |
| $q$                | 25        | *N/A*   | 25                  | *N/A*             | 25                   | 20                 |
| $n_{cyls}$         | 1         | 3       | 5                   | 5                 | 3                    | 5                  |
| $h$ (cm)           | 20        | 60      | 60                  | 60                | 40                   | 50                 |
| $\varepsilon$ (cm) | 3         | 3       | 2                   | 2                 | 2                    | 1                  |
[*Legend*: $A_{LLS}$–lin. l.-s. axis, $A_{N}$–vertical axis, $C_{NLSN}$–non-lin. l.-s. with normals, $C_{NLS}$–non-lin. l.-s. without normals.]{} \[tab:DBH\_Results\]
One can see that the best performing method is $A_N + C_{NLS}$ and that the worst is $A_{LLS}$. One conclusion from our comparison is that $V_{median}$ leads to better results. All of the methods, except $A_{LLS}$, performed better when $n_{cyls}$ was larger than one. Surprisingly, using a pure vertical tree axis ($A_N$) performed better than trying to take into account the stem direction ($A_{LLS}$), even as an initial estimate to non-linear cylinder fitting. This suggests that $A_{LLS}$ is not precise enough to estimate the stem direction accurately in noisy mobile lidar point clouds. However, the vast majority of our trees grew vertically in our dataset, thus there may be a bias favoring $A_N$.
Our best performing site was <span style="font-variant:small-caps;">Mature</span>, with its well-spaced trees and visible trunks. The figure below gives an example of the error distribution achieved in one trajectory with $A_{LLS} + H + C_{NLSN} + V_{median}$ in these ideal circumstances.
![image](figures/best_results){width="60.00000%"}
Factors impacting DBH estimation {#sec:impact-min-distance}
----------------------------
The following statistics were generated using the $A_{N} + H + C_{NLS} + V_{median}$ method, but similar observations can be made for the others. We tried to identify possible factors impacting the estimation of the DBH. For instance, we can see in the figure below the impact of the minimal observation distance and of the presence of foliage on the error distribution, for the <span style="font-variant:small-caps;">Maple</span> dataset. This minimal observation distance represents how close the robot was driven to a tree. We observe that the error becomes too high (i.e., more than an acceptable fraction of the DBH) for trees to which the robot has not gotten closer than a certain distance, particularly when trees have foliage; again, this is similar for the other three sites. The effect of foliage could be due to increased localization error incurred during the map creation process or to reduced trunk visibility.
![image](figures/leaf_onoff){width="60.00000%"} \[fig:distance\]
We also observed that localization error can reduce the diameter estimation accuracy. In <span style="font-variant:small-caps;">Mature</span>, the trajectory that looped around the site resulted in significant localization error, as stem sections of the same trees observed at the beginning of the trajectory were misaligned with observations made at the end of the trajectory. This caused a bias of for the loop, while the other two trajectories had respective biases of and . This trajectory was the only one in our dataset with such visible localization error.
Finally, another factor impacting the estimation of the DBH is the species, which we attribute to the influence of bark roughness. For example, the big red maples in <span style="font-variant:small-caps;">Maple</span> had a bias of , the quaking aspens of <span style="font-variant:small-caps;">Mixed</span> had a bias of , while the very smooth balsam firs overall had limited bias (), after removing the results from the problematic trajectory mentioned above. The same effect of bark texture on measurements has also been observed by @calderscomp.
Lessons learned
---------------
During the course of this work, we gained significant experience in field deployment of mobile robots in forests. Here are some lessons and recommendations that could be useful to anyone interested in deploying robots in forests for 3D mapping and inventory purposes, as well as more sophisticated experiments where observing trees is important.
- Ground roughness, such as branches, irregular ground or other obstacles, did not seem to have a significant impact on our mapping and diameter estimation, as our two best performing sites, <span style="font-variant:small-caps;">Mature</span> and <span style="font-variant:small-caps;">Young</span>, were also the ones where the robot had the most trouble navigating.
- Getting close to each tree is essential for DBH estimation: our experiments show that diameter estimation performance degrades rapidly for observations beyond and becomes unusable after . This could be due to the propagation of orientation estimation errors during the map building process, lidar beam width or point density reduction. In each site, more exhaustive trajectories always performed better than simple loops. For example, in <span style="font-variant:small-caps;">Mixed</span>, we even observed a negative bias caused by localization error in the simple loop.
- Our results in <span style="font-variant:small-caps;">Maple</span> were affected by the robot speed. At 1 m/s, the platform moves during each lidar scan, which is ignored by ICP. This limitation of the algorithm led to poorer results in what we thought should have been the best performing site. Solving this issue by inferring the movement of the platform during one scan, as done by @loam, is important if we want this approach to work on fast moving forest robots. It would also allow for faster data acquisition, and possibly lead to more accurate DBH estimation overall.
- Mobility in forests is challenging; we pushed our robot to its limit. Using continuous tracks would be ideal, as they are used on most forest machinery. In <span style="font-variant:small-caps;">Mature</span>, we had to slightly alter the environment for our robot by cutting three fallen trees (which are abundant in mature forests), while the other sites were navigable with no modifications. From a pure forest inventory standpoint, a backpack-mounted or handheld sensor system would be better suited, but it remains interesting nonetheless to study robot perception in such difficult conditions.
Conclusion {#sec:Conclusion}
==========
In this paper, we presented an ICP mapping approach for forests, and demonstrated that it produced maps that were accurate enough to perform tree diameter estimation, especially in mature, well-spaced forests, where we reached an accuracy of . We also identified key challenges to address in robot mapping in forests: dealing with rough tree bark, reduced visibility in dense forests and estimating platform motion during one scan. We compared multiple diameter estimation methods, and concluded that fitting multiple cylinders using *Hyper* circle fitting combined with non-linear cylinder fitting, and taking the median of the diameters of those cylinders, works best. All of our methods were validated on the most extensive dataset of DBH measurements from mobile lidar in the literature.
Future work
-----------
3D mapping opens the door to future automation of forestry equipment. The next step would be localizing a forest harvester in our 3D point cloud in real time. More work is needed in this direction, as we did not attempt to make our mapping algorithm run in real time while producing maps accurate enough to measure the DBH of trees. Using trees as landmarks to take some pressure off ICP could be a solution. Furthermore, integrating the work by @mathieu to perform species classification would be beneficial for tree selection applications. Although we restricted our evaluation to DBH estimation and not the quality of the maps, note that the latter can play an important role in other automated tasks such as navigation and tree grasping. Evaluating the possible use of our maps for these purposes would be of interest. While we did not attempt to measure trees whose DBH was less than the minimum threshold, there is interest in being able to detect and measure those small trees for regeneration monitoring purposes, as well as to avoid damaging them during operations. More work is needed to achieve accurate measurement of those small trees from mobile lidar. Such accuracy could possibly be achieved by combining the range measurements of lidar with angular measurements made with a camera.
This work was supported by a Mitacs Accelerate grant and FORAC. We thank the Canadian Space Agency for lending us their Velodyne HDL-32, the Forêt Montmorency staff, especially Charles Villeneuve, for his help with tree measurements as well as Simon-Pierre Deschênes and Philippe Dandurand for their help with field work.
[^1]: Northern Robotics Laboratory, Université Laval + Department of Geomatics Sciences, Université Laval $\dagger$ Centre de Recherche Industrielle du Québec Communication e-mail: [jean-francois.tremblay.36@ulaval.ca ]{}
---
abstract: 'We study multiple-period Bloch states of a Bose-Einstein condensate with spatially periodic interatomic interaction. Solving the Gross-Pitaevskii equation for the continuum model, and also using a simplified discrete version of it, we investigate the energy-band structures and the corresponding stability properties. We observe a new “attraction-induced dynamical stability” mechanism caused by the localization of the density distribution in the attractive domains of the system and the isolation of these higher-density regions. This makes the superfluid stable near the zone boundary, and also enhances the stability of higher-periodic states if the nonlinear interaction strength is sufficiently high.'
author:
- Raka Dasgupta
- 'B. Prasanna Venkatesh'
- Gentaro Watanabe
title: 'Attraction-induced dynamical stability of a Bose-Einstein condensate in a nonlinear lattice'
---
Introduction
============
The study of nonlinear phenomena in Bose-Einstein condensates (BECs) of cold atomic gases has become a subject of immense interest, both from theoretical and experimental perspectives [@pethickbook; @kevrebook; @morsch_rmp]. Confining magnetic traps and/or optical lattices provide controllable externally applied potentials for a dilute BEC and appear as a linear term in the Gross-Pitaevskii (GP) equation governing the statics and dynamics of the order parameter. Further, the interatomic interactions lead to an atomic density dependent nonlinear term within the GP framework. The strength of this nonlinear term can be controlled by varying the scattering length via magnetic [@inouye; @court; @roberts; @moerd; @timmer] or optical [@fedichev; @bohn; @fatemi; @theis] Feshbach resonances [@bloch_rmp; @chin_rmp].
One intriguing aspect of cold atomic gases confined in optical lattices is the competition between the linear terms coming from the optical lattice and the nonlinear terms [@trombe1; @bronski; @kevre2] that allows for solitonic solutions [@burger; @denschlag; @khaykovich], loop structures in the energy bands [@wu02; @diakonov02; @mueller02; @pethick2; @carr1; @watanabe; @sarma], period doubling [@carr1; @pethick1; @yoon], etc., and also gives rise to dynamical instabilities [@wu2; @pethick2; @wu03; @modugno04; @desarlo05; @pethickbook]. Along this research direction, recently another interesting possibility has opened up where one may imagine having no linear periodic component at all (apart from the kinetic energy) in the GP equation but instead introducing periodicity in the system via a spatially periodic nonlinearity. Such a system is termed as a “nonlinear lattice” [@sakaguchi; @malomed_rmp; @wu1]. Here both the nonlinearity and the periodicity are generated by a single term. Experimentally it has been realized with optical Feshbach resonances, by means of pulsed optical standing waves [@takahashi].
A BEC with a spatially modulated interaction within a mean-field approximation is well described by the GP equation in one dimension (1D): $$\label{GP1}
i\hbar \dfrac{\partial \psi}{\partial t}=-\dfrac{\hbar^2}{2m}\dfrac{\partial^2}{\partial x^2}\psi+ (V_1+V_2 \mbox{cos}2k_0x)|\psi|^2\psi\, ,$$ which is valid when the average number of particles per site is much larger than unity, and density and temperature are sufficiently low so that the normal component is negligible. Here the nonlinear term comprises a constant and a periodically modulated component. It is assumed that both $V_1$ and $V_2$ are positive quantities that can be controlled experimentally. $k_0$ is connected to the period $d$ of the modulation by $k_0=\pi/d$ and it, in fact, is the wave number of the laser beam for the optical Feshbach resonance. $m$ is the mass of bosons and $\psi$ is the condensate wave function. In a recent work, the band structure and stability of this system were studied [@wu1], considering the Bloch wave solutions for the lowest-energy bands.
We study the same system but go beyond the usual Bloch states (we call them period-1 solutions) that have the same periodicity as that of the modulated interaction. It is known that for BECs in a periodic potential, in addition to the conventional Bloch states, stationary states with periods of twice, or even higher multiples of, the lattice period emerge as well [@pethick1]. Furthermore, these higher-period states have been shown to be energetically and dynamically stable in other systems, such as BECs with dipole-dipole interactions in optical lattices [@maluckov]. In the present work, for the case of a periodically modulated interaction, we explore the possibility of having period-doubled stationary states (termed period-2 solutions). Moreover, we make a comparison between the stability regions of the period-1 [@wu1] and period-2 energy bands. We find that while the stability of the period-1 solutions can be qualitatively explained in terms of the overall averaged interaction as described in earlier studies [@wu1], the stability of the period-2 solutions demands a more careful study of the dynamics of the system. We show that, in the period-2 case, the BECs localized at each cell are more isolated and that such isolation can stabilize the dynamics of the system, giving the central result of this paper: attraction-induced dynamical stability.
The paper is organized as follows. In Sec. \[sec:discrete\], the system is described using a discrete model to obtain a basic sketch of the energy bands and the overall stability trends. In Sec. \[sec:continuum\] we deal with the full continuum model for the system, and solve the GP equation to study the band structures and the stability conditions. In Sec. \[sec:mech\] the stability mechanism is explained from a physical standpoint. We summarize the results in Sec. \[sec:summary\].
The discrete model \[sec:discrete\]
===================================
Formalism
---------
We first consider a simplified version of the system, where the uniform component of the interaction is set to zero ($V_1=0$ and $V_2 \ne 0$), and map it onto a discrete model [@pethick2; @trombe2]. This is analogous to an optical lattice in 1D. We reduce the system with a spatially periodic interaction in the continuum representation to a discrete representation by sampling just two points per period of the interaction (the maxima and minima of the interaction). Thus, in this discrete model, the spacing between two sites is given by $\tilde{d}$, with the period of the interaction (i.e., the period of the original nonlinear lattice) $d = 2 \tilde{d}$. In this representation, the on-site interaction parameter alternates between $U$ and $-U$ at adjacent sites. To obtain periodic solutions, we can define a “supercell” that consists of two sites, with the lattice constant $d$. If, instead of regular Bloch solutions, we consider a $p$-periodic solution, the length of the supercell will be $pd$, containing $2p$ *discrete* lattice sites.
A simple Hamiltonian for such a discrete model describing tunneling and interaction in this situation can be written as [@pethick2; @trombe2] $$\begin{split}
H =& -K \sum_j (\psi_j^* \psi_{j+1} + \psi_{j+1}^* \psi_j) \\
& + \frac{U}{2} \left[\, \sum_{j={\rm even}}|\psi_j|^4 - \sum_{j={\rm odd}} |\psi_j|^4\, \right]\, ,
\label{hamilt}
\end{split}$$ where $\psi_j$ is the amplitude at site $j$. Here the first term in the equation signifies hopping between the nearest-neighbour sites characterized by the hopping parameter $K$, and the next term denotes the on-site inter-particle interaction. It is assumed that the odd-numbered sites are attractive, while the even-numbered sites are repulsive.
We aim to find stationary states with a fixed total number of particles. These are obtained by demanding that the variation of $H-\mu N$ ($\mu$ being the chemical potential) with respect to $\psi_j^*$ be zero. That is, $$\label{var1}
\begin{split}
U|\psi_j|^2\psi_j- K(\psi_{j+1}+\psi_{j-1})-\mu \psi_j=0\quad (\mbox{for even \textit{j}}),\\
-U|\psi_j|^2\psi_j- K(\psi_{j+1}+\psi_{j-1})-\mu \psi_j=0\quad (\mbox{for odd \textit{j}}).
\end{split}$$
Stationary solutions for the period-1 and period-2 states
---------------------------------------------------------
![Density distributions in the lowest band of the period-1 states as functions of $k$ for different values of $U\nu/2K$. Panels (a) and (b): $|g_1|^2$ (population in attractive site) and $|g_2|^2$ (population in repulsive site) for $U\nu/2K=6$, respectively. Panels (c) and (d): $|g_1|^2$ and $|g_2|^2$ for $U\nu/2K=0.75$, respectively.[]{data-label="p1den"}](density_set4p1g1_new.eps "fig:"){height="2.7cm"} ![](density_set4p1g2_new.eps "fig:"){height="2.7cm"} ![](density_set3p1g1_new.eps "fig:"){height="2.7cm"} ![](density_set3p1g2_new.eps "fig:"){height="2.7cm"}
We focus on two particular cases: 1) period-1 states (normal Bloch states), i.e., when the particle density has the same periodicity as that of the lattice, and 2) period-2 (period-doubled) states, i.e., when the particle density has twice the period of the lattice. We separate from $\psi_j$ a plane-wave part, $e^{i k j \tilde{d}}$, and write $\psi_j$ in a product form: $g_j e^{i k j \tilde{d}}$, where $\hbar k$ is the quasimomentum of the bulk superflow along the lattice direction and $g_j$ is the complex amplitude at site $j$.
The period-1 unit cell consists of two lattice sites. Since the periodic boundary condition implies that $g_j=g_{j+2}$, we have to solve Eq. (\[var1\]) for $g_1$ and $g_2$ only, subject to the condition $$|g_1|^2+|g_2|^2=\nu.$$ Here, $\nu$ is the total number of particles in the unit cell with two sites.
The populations $|g_1|^2$ and $|g_2|^2$ in the attractive and the repulsive sites, respectively, for the lowest Bloch band are given by $$\frac{|g_1|^2}{\nu} = n_+ \quad\mbox{and}\quad \frac{|g_2|^2}{\nu} = n_-
\label{eq:pop}$$ with $$n_{\pm} = \frac{1}{2} \left\{1 \pm \left[ \left(\frac{\cos{k\tilde{d}}}{U\nu/2K}\right)^2+1 \right]^{-1/2} \right\}\, .
\label{eq:npm}$$
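These closed-form populations are straightforward to evaluate numerically; the short script below, with $k$ expressed in units of $k_0 = \pi/d$ so that $k\tilde d = (\pi/2)\,k/k_0$, reproduces the qualitative behavior discussed next.

```python
import numpy as np

def populations(k_over_k0, beta):
    """Populations n_+ (attractive site) and n_- (repulsive site) from the
    closed-form expression above, with beta = U*nu/(2K) and the quasi-wave
    number k given in units of k0."""
    c = np.cos(0.5 * np.pi * k_over_k0)        # cos(k * d_tilde), d_tilde = d/2
    s = 1.0 / np.sqrt((c / beta) ** 2 + 1.0)
    return 0.5 * (1.0 + s), 0.5 * (1.0 - s)

k = np.linspace(-1.0, 1.0, 201)                # first Brillouin zone
for beta in (6.0, 0.75):
    n_plus, _ = populations(k, beta)
    print(f"U*nu/2K = {beta}: n_+ = {n_plus[100]:.3f} at k=0, "
          f"{n_plus[0]:.3f} at the zone edge")
```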
![Density distributions in the period-2 band for $U\nu/2K =6$: (a) $|g_1|^2$, (b) $|g_2|^2$, (c) $|g_3|^2$, and (d) $|g_4|^2$ (populations in the 1st attractive site, 1st repulsive site, 2nd attractive site, and the 2nd repulsive site, respectively).[]{data-label="p2den1"}](density_set4p2g1_new.eps "fig:"){height="2.6cm"} ![](density_set4p2g2_new.eps "fig:"){height="2.6cm"} ![](density_set4p2g3_new.eps "fig:"){height="2.6cm"} ![](density_set4p2g4_new.eps "fig:"){height="2.6cm"}
![The same as Fig. \[p2den1\] for $U\nu/2K =0.75$: (a) $|g_1|^2$, (b) $|g_2|^2$, (c) $|g_3|^2$, and (d) $|g_4|^2$. []{data-label="p2den2"}](density_set3p2g1_new.eps "fig:"){height="2.7cm"} ![](density_set3p2g2_new.eps "fig:"){height="2.7cm"} ![](density_set3p2g3_new.eps "fig:"){height="2.7cm"} ![](density_set3p2g4_new.eps "fig:"){height="2.7cm"}
The population density distributions for two different values of the dimensionless parameter $U\nu/2K$ are shown in Fig. \[p1den\] as functions of $k$ within the first Brillouin zone. We notice that when $U$ is sufficiently large \[Figs. \[p1den\](a) and \[p1den\](b)\], $|g_1|^2\approx\nu$ for all $k$ values. This can be easily understood from Eq. (\[hamilt\]): if $K\ll U$, putting all the particles in the attractive sites leads to the minimum-energy configuration of the system. In contrast, for smaller magnitudes of $U$, the kinetic-energy contribution also becomes significant. In this case, although at the zone edge most of the particles reside in the attractive sites, a sizable fraction of them is accumulated in the repulsive sites too, near the zone center \[Figs. \[p1den\](c) and \[p1den\](d)\].
For the period-2 case, the unit cell consists of four lattice sites. The periodic boundary condition implies that $g_j=g_{j+4}$. So we have to solve Eq. (\[var1\]) for $g_1$, $g_2$, $g_3$, and $g_4$, subject to the condition $$|g_1|^2+|g_2|^2+|g_3|^2+|g_4|^2=2\nu.$$ (Note that there is a factor of $2$ on the right-hand side since $\nu$ is defined as the number of particles per two-site unit cell.)
The distributions of $|g_1|^2$, $|g_2|^2$, $|g_3|^2$, and $|g_4|^2$ are shown for the period-doubled solutions with two different values of $U\nu/2K$ in Figs. \[p2den1\] and \[p2den2\]. For a large $U\nu/2K$ (Fig. \[p2den1\]), the total energy is lowered by putting as many particles as possible in one attractive site in each supercell, i.e., in every fourth site. At the zone edge, the repulsive sites are almost empty and at the zone center they acquire a small population (Fig. \[p2den1\]). For a smaller $U\nu/2K$ (Fig. \[p2den2\]), the distribution is slightly more even: although one attractive site in a four-site cell hosts the majority of the particles, all the other sites, too, contain non-negligible populations.
Once we solve for the ${g_j}$’s, we can obtain the energy bands using Eq. (\[hamilt\]) with appropriate boundary conditions. The energy per particle, scaled by $K$ is a function of the dimensionless parameter $U\nu/2K$. In Fig. \[bands\_d\], the period-1 (dotted line) and period-2 (solid line) bands are shown for four different values of $U\nu/2K$. We observe that when the nonlinear interaction term is large enough \[Fig. \[bands\_d\](a)\], the bands have a large separation between them and the period-2 band looks almost flat in comparison. For a relatively smaller value of $U\nu/2K$ \[Fig. \[bands\_d\](b)\], the gap between the two bands is narrower. Then if we keep lowering the value of $U\nu/2K$ \[Fig. \[bands\_d\](c)\], the two bands merge. In this case the period-2 band does not extend over the entire Brillouin zone, but appears in a small region centered around the zone edge, that shrinks further with decreasing $U\nu/2K$ \[Fig. \[bands\_d\](d)\].
![Energy per particle of period-1 (dotted lines) and period-2 (solid lines) solutions in units of $K$ for different values of $U\nu/2K$: (a) $U\nu/2K=6$, (b) $U\nu/2K=0.75$, (c) $U\nu/2K=0.5$, and (d) $U\nu/2K=0.1$.[]{data-label="bands_d"}](p12set4_new.eps "fig:") ![](p12set3_new.eps "fig:") ![](p12set2_new.eps "fig:") ![](p12set1_new.eps "fig:")
For a given value of $U\nu/2K$, the period-2 bands show more flatness than their period-1 counterparts. As mentioned already, for period-2 states the majority of the particles are stored in every fourth site, while for period-1 states it is every second site. Thus, in the case of period-2 states, the degree of isolation between the regions of large density is higher. This leads to a lower tunneling rate between consecutive sites. As a result, the energy bands are more flat for the period-2 case.
Also, a higher $U\nu/2K$ value leads to more relative flatness of the bands for both period-1 and period-2 solutions. This is because a large $U\nu/2K$ means that the on-site interaction term dominates over the hopping term and the stationary solutions are well approximated by the eigenstates of the on-site interaction term, which are independent of $k$. Another reason is that a large $U\nu/2K$ leads to repulsive sites being almost empty and the tunneling rate is suppressed.
Linear stability analysis
-------------------------
Let us now examine the stability of the stationary states of the system within the discrete model. There are two aspects: 1) energetic stability — whether the stationary states are at a local energy minimum against small perturbations, and 2) dynamical stability — whether they are stable under time evolution. As has been shown in general (see the Appendix of [@wu03]), energetic [*instability*]{} is a prerequisite for dynamical [*instability*]{}. Namely, if the system is energetically stable, the system is dynamically stable as well; however, the opposite is not the case.
Here we perform a linear stability analysis of the stationary states following the treatment in Refs. [@wu1; @pethick1; @wu2; @pethick2] (see also, e.g., Refs. [@pethickbook; @wu03; @nonlinlatrev]). Let $\delta\psi_{q,j}$ be the deviation from the stationary solution $\psi^{(0)}_j$ at a given $k$, $$\delta\psi_{q,j} = e^{ikj\tilde{d}} \left[ u_{q,j} e^{i q j \tilde{d}}+ {v_{q,j}}^*e^{-i q j \tilde{d}} \right], \label{eq:perturb}$$ where the amplitudes $u_{q,j}$ and $v_{q,j}$ have the same periodicity as the stationary solution, $j$ is the site index, and $\hbar q$ is the quasimomentum of the perturbation. Now the energy functional in Eq. (\[hamilt\]) is expanded to second order in $\delta\psi_{q,j}$, and we find $\delta E_c$, its deviation from the equilibrium energy per unit cell.
We can write $\delta E_c$ in a block-diagonal structure in $q$. For the period-1 case, it has the following form: $$\delta E_c= \begin{pmatrix}
u_{q,1}^{*}& v_{q,1}^{*}& u_{q,2}^{*}& v_{q,2}^{*}
\end{pmatrix}
M(q)
\begin{pmatrix}
u_{q,1}\\
v_{q,1}\\
u_{q,2}\\
v_{q,2}\\
\end{pmatrix}\, .$$ Because of the periodic boundary condition, we have $u_{q,j} =u_{q,j+2}$ and $v_{q,j}= v_{q,j+2}$. $M(q)$ is a $4\times4$ matrix, where $$\begin{aligned}
\begin{split}
[M(q)]_{11}&=[M(q)]_{22}=U(|g_1|^2-|g_2|^2);\\
[M(q)]_{12}&=[M(q)]_{21}=U |g_1|^2;\\
[M(q)]_{13}&=[M^*(q)]_{31}=-Ke^{i(k+q) \tilde{d}};\\
[M(q)]_{24}&=-[M^*(q)]_{42}=-Ke^{-i(k-q) \tilde{d}};\\
[M(q)]_{33}&=[M(q)]_{44}=-U (|g_1|^2+|g_2|^2);\\
[M(q)]_{34}&=[M(q)]_{43}=U |g_2|^2\, ,
\end{split}
\end{aligned}$$ and zero otherwise.
![Energetic stability diagrams for period-1 solutions for (a) $U\nu/2K=0.5$ and (b) $U\nu/2K=0.1$. Quasi-wave numbers $k$ and $q$ are in units of $k_0$. The gray-shaded regions are the energetically stable regions and the white regions are the energetically unstable regions. The contours show the minimum eigenvalue of the matrix $M(q)$ in units of $K$. []{data-label="d_enp1"}](enset2_p1.eps "fig:") ![](enset1_p1.eps "fig:")
![Dynamical stability diagrams for period-1 solutions for different values of $U\nu/2K$: (a) $U\nu/2K=6$, (b) $U\nu/2K=0.75$, (c) $U\nu/2K=0.5$, and (d) $U\nu/2K=0.1$. Quasi-wave numbers $k$ and $q$ are in units of $k_0$. The gray-shaded regions are the dynamically stable regions and the white regions are the dynamically unstable regions. The contours show the growth rate of the fastest growing mode, i.e., the maximum absolute value of the imaginary part of the eigenvalues of the matrix $M'(q)$ in units of $K$. []{data-label="d_dynp1"}](dynset4_p1.eps "fig:") ![](dynset3_p1.eps "fig:") ![](dynset2_p1.eps "fig:") ![](dynset1_p1.eps "fig:")
The condition for energetic stability of the system is that, all the eigenvalues of the matrix $M(q)$ are positive, since a negative eigenvalue means that there exist perturbations that can lower the energy of the system. We thus study the energetic stability by noting the lowest eigenvalue of $M(q)$. If this value is $<0$, there exists at least one negative eigenvalue of $M(q)$, which would render the system energetically unstable. On the other hand, if this value is $\geqslant 0$, the system is already in either a local or global energy minimum, and hence stable.
We observe that for $U\nu/2K=6$ and $0.75$, no energetically stable region is found for period-1 solutions. An energetically stable area starts to appear for sufficiently low values of $U\nu/2K$ between $U\nu/2K=0.75$ and $0.5$ \[see, e.g., $U\nu/2K=0.5$ and $0.1$ shown in Figs. \[d\_enp1\](a) and \[d\_enp1\](b), respectively\]. We show the instability contours, and the numbers on the lines mark the lowest eigenvalue of $M(q)$ for that parameter value. The stable regions are marked by the gray-shading.
We also consider the dynamical stability of the system under the same perturbation as Eq. (\[eq:perturb\]). The linearized time-dependent GP equation for the perturbations has the form $$i \dfrac{\partial}{\partial t}\begin{pmatrix}
u_{q,1}\\
v_{q,1}\\
u_{q,2}\\
v_{q,2}\\
\end{pmatrix}=
M'(q)
\begin{pmatrix}
u_{q,1}\\
v_{q,1}\\
u_{q,2}\\
v_{q,2}\\
\end{pmatrix}\, .$$ Here $M'(q)$, too, is a $4\times 4$ matrix, where $$M'(q)=\begin{pmatrix}
\sigma_z&0\\
0 & \sigma_z\\
\end{pmatrix}
M(q)$$ with $$\sigma_z =
\begin{pmatrix}
1 & 0\\
0 & -1
\end{pmatrix}.$$
The condition for dynamical stability is that all the eigenvalues of the matrix $M'(q)$ are real, since a complex eigenvalue means that the perturbation grows exponentially in time during the dynamical evolution. We note the maximum of the absolute values of the imaginary parts of these eigenvalues to find out the fastest growing mode in the system. When this value happens to be zero, we get complete dynamical stability.
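Numerically, both criteria reduce to simple eigenvalue checks. The helper below assumes the matrix $M(q)$ (Hermitian, $4\times4$ for period-1 states and $8\times8$ for period-2 states, with amplitudes ordered as $(u_{q,1}, v_{q,1}, u_{q,2}, v_{q,2}, \dots)$) has already been assembled from the matrix elements given above; the toy matrix at the end only illustrates the call and is not the physical $M(q)$.

```python
import numpy as np

def stability_flags(M, tol=1e-10):
    """Energetic and dynamical stability of a stationary state from M(q).

    Energetic stability : all eigenvalues of the Hermitian matrix M are >= 0.
    Dynamical stability : all eigenvalues of M' = diag(sigma_z, ..., sigma_z) M
    are real; the largest |Im| among them is the growth rate of the fastest
    growing mode.
    """
    n_sites = M.shape[0] // 2
    sigma = np.kron(np.eye(n_sites), np.diag([1.0, -1.0]))   # block-diagonal sigma_z
    min_eig = np.linalg.eigvalsh(M).min()
    growth = np.abs(np.linalg.eigvals(sigma @ M).imag).max()
    return min_eig >= -tol, growth <= tol, growth

# Toy illustration only (not the physical M(q) defined above):
M_toy = np.array([[1.0, 0.5, 0.0, 0.0],
                  [0.5, 1.0, 0.0, 0.0],
                  [0.0, 0.0, 0.3, 0.6],
                  [0.0, 0.0, 0.6, 0.3]])
print(stability_flags(M_toy))   # energetically and dynamically unstable
```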
The dynamical stability diagrams are shown in Fig. \[d\_dynp1\]. It is found that the $k=0$ state is always unstable, so the superfluidity is not sustained in the Brillouin-zone center. This matches with the results obtained in [@wu1], where they used the GP equation for the full continuum model to calculate the stationary states and study the corresponding stability properties. For higher values of $U\nu/2K$ \[e.g., Fig. \[d\_dynp1\](a)\], half the region between the Brillouin-zone center and the zone edge shows dynamical stability. If the value of $U\nu/2K$ is further reduced to $\sim 1$ \[Fig. \[d\_dynp1\](b)\], an instability island starts to grow from the zone edge. At even lower values of $U\nu/2K$, the instability region around the zone center starts to shrink \[Fig. \[d\_dynp1\](c)\], and we finally get a larger stability area \[Fig. \[d\_dynp1\](d)\]. Qualitatively, all these features are in agreement with the continuum-model results in [@wu1].
We follow the same procedure for period-2 solutions to find the energetic and dynamic instabilities, only now both $M(q)$ and $M'(q)$ are $8\times 8$ matrices. Moreover, for small values of $U\nu/2K$, the period-2 solutions do not exist for the entire Brillouin zone, but for a very small $k$-span near the zone edge.
![The same as Fig. \[d\_dynp1\] for period-2 solutions for (a) $U\nu/2K=6$, (b) $U\nu/2K=0.75$, (c) $U\nu/2K=0.5$, and (d) $U\nu/2K=0.1$. []{data-label="d_dynp2"}](dynset4_p2.eps "fig:") ![](dynset3_p2.eps "fig:") ![](dynset2_p2.eps "fig:") ![](dynset1_p2.eps "fig:")
As for the energetic stability, we now find that all the period-2 solutions are energetically unstable for the range of $U\nu/2K$ we are working with. For low $U\nu/2K$, the instability contours are horizontal. As $U\nu/2K$ is gradually increased, the contours become vertical, and the magnitude of the lowest eigenvalue of $M(q)$ (which is negative) becomes larger.
In the dynamical stability diagram for high nonlinearity \[e.g., $U\nu/2K=6$ shown in Fig. \[d\_dynp2\](a)\], the basic feature of the phase map remains the same as in the period-1 case. However, if we look at the contours of the fastest growing mode, here the value at $k=0$ is one order of magnitude smaller than the corresponding value for the period-1 case shown in Fig. \[d\_dynp1\](a) (25 times smaller if we consider high-$q$ perturbations). This point will be discussed in detail in Sec. \[sec:mech\].
The continuum model \[sec:continuum\]
=====================================
Formalism and stationary solutions
----------------------------------
Next we turn to the continuum model, starting from the GP equation in 1D \[Eq. (\[GP1\])\]: $$i \dfrac{\partial}{\partial t}\psi=-\dfrac{\partial^2}{\partial x^2}\psi+ (8 c_1+ 8 c_2 \cos 2x)|\psi|^2\psi\, .$$ Here all the energies are measured in units of the recoil energy $E_R= \hbar^2 k_0^2/2 m$. All lengths are in units of $1/k_0$, and the time $t$ is in units of $2m/k_0^2\hbar$. The wave function $\psi$ is in units of $\sqrt{n_0}$, $n_0$ being the average number density. Here $c_1=n_0 V_1/8 E_R$ and $c_2=n_0 V_2/8 E_R$ (following the notation of [@wu1]). Again, we find solutions of the Bloch form, $\psi=e^{ikx}\phi$, where $\phi$ has the same periodicity as that of the spatial modulation (period-1 solutions), twice its period (period-2 solutions), or an even higher multiple of it. To continue the analogy with the discrete model, we note that here, too, we can think of a “supercell”, its length being $pd$ for a period-$p$ solution.
We expand $\phi$ in terms of plane waves, $$\phi=\sum_{l= -l_{\rm max}}^{l_{\rm max}} a_l e^{i l x/p}$$ ($p$ is the periodicity of the solutions). Putting $p=1$ leads to the period-1 branches, while $p=2$ corresponds to period-doubled solutions. Here $l$ can take $2l_{\rm max}+1$ values. The coefficients $a_l$ have to satisfy the normalization condition, $\sum_l |a_l|^2=1$. The stationary solutions are obtained by means of a variational calculation [@pethickbook], so that the wave function $\psi(x)$ extremizes the total energy of the system.
![Energy per particle of period-1 (dotted lines) and period-2 (solid lines) solutions for different values of $c_2$ obtained from the continuum model: (a) $c_2=0.4$, (b) $c_2=0.1$, (c) $c_2=0.04$, and (d) $c_2=0.01$.[]{data-label="band_cont"}](cont_p12set4_new.eps "fig:") ![](cont_p12set3_new.eps "fig:") ![](cont_p12set2_new.eps "fig:") ![](cont_p12set1_new.eps "fig:")
In Fig. \[band\_cont\], we show the energy bands corresponding to period-1 and period-2 solutions, for four different values of $c_2$, taking $c_1$=0. Just like the discrete case, we find that when $c_2$ is large \[Figs. \[band\_cont\](a) and \[band\_cont\](b)\], the bands are widely separated. As we keep decreasing the value of $c_2$ \[Figs. \[band\_cont\](c) and \[band\_cont\](d)\], the two bands merge, and the region of the period-2 band starts diminishing. So our simplified discrete model can successfully capture all the essential features of the energy-band structures obtained from the full continuum calculation.
![Density distributions for (a) $c_2=0.4$, period-1, (b) $c_2=0.04$, period-1, (c) $c_2=0.4$, period-2, and (d) $c_2=0.04$, period-2, all for $k=0.5$ and $c_1=0$. Here $x$ is plotted in units of $1/k_0$, $n$ is in units of the average density $n_0$. []{data-label="c_den"}](contdensity_p1high.eps "fig:") ![](contdensity_p1low.eps "fig:") ![](contdensity_p2high_mod.eps "fig:") ![](contdensity_p2low_mod.eps "fig:")
Figure \[c\_den\] shows the nature of the density distribution in the continuum model, both for period-1 and period-2 solutions. It appears that for a fixed $c_1$, a larger $c_2$ makes the peaks sharper and more isolated in nature.
We have chosen the $c_2$ values exactly as in [@wu1], so that we can reproduce the stability diagrams from the period-1 case therein, before we proceed to solve for the period-2 case, and make a direct comparison. However, in this section we focus only on $c_1=0$ situations, because that corresponds to our discrete model of having alternate $U$ and $-U$ on-site interactions (a non-zero value of $c_1$ would mean that there is a difference in magnitude of the interaction strengths in the attractive and repulsive sites).
![Energetic stability diagrams for period-1 solutions for different values of $c_2$: (a) $c_2=0.04$ and (b) $c_2=0.01$. Quasi-wave numbers $k$ and $q$ are in units of $k_0$. The gray-shaded regions are the energetically stable regions and the white regions are the energetically unstable regions. The contours show the minimum eigenvalue of the matrix $M(q)$ in units of the recoil energy $E_R$. []{data-label="en_contp1"}](cont_enset2p1.eps "fig:") ![Energetic stability diagrams for period-1 solutions for different values of $c_2$: (a) $c_2=0.04$ and (b) $c_2=0.01$. Quasi-wave numbers $k$ and $q$ are in units of $k_0$. The gray-shaded regions are the energetically stable regions and the white regions are the energetically unstable regions. The contours show the minimum eigenvalue of the matrix $M(q)$ in units of the recoil energy $E_R$. []{data-label="en_contp1"}](cont_enset1p1.eps "fig:")
Stability analysis for the continuum model
------------------------------------------
Let $\delta\psi_{q}$ be the deviation from the stationary Bloch wave solution $\psi^{(0)}$ at a given $k$ for the continuum model. This can be written as $$\delta\psi_{q}(x) = e^{ikx} \left[ u(x,q) e^{i q x}+ {v}^*(x,q)e^{-i q x} \right],$$ where the amplitudes $u(x,q)$ and $v(x,q)$ are periodic functions of $x$ with the same periodicity as the stationary solutions. Similarly to the discrete model, the energy deviation from the stationary states per unit cell is given by $$\delta E_c=\int_{-p \pi/2}^{p \pi/2} dx \begin{pmatrix}
u^{*}& v^{*}
\end{pmatrix}
M(q)
\begin{pmatrix}
u\\
v\\
\end{pmatrix}$$ for $p$-periodic states.
We proceed exactly as in the discrete model and find the eigenvalues of $M(q)$, both for period-1 and period-2 solutions. If $M(q)$ has negative eigenvalues, the system is energetically unstable. In the period-1 case, a higher value of $c_2$ makes the system completely unstable energetically, while for smaller $c_2$ an energetically stable region (marked by the gray shading in Fig. \[en\_contp1\]) appears, as in [@wu1]. For period-2 cases, the solutions are always energetically unstable, at least for the range of $c_2$ we have chosen, namely $0.01 \leq c_2 \leq 0.4$. This agrees exactly with the result obtained in the discrete model.
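In practice, the energetic-stability test amounts to checking whether $M(q)$ is positive semi-definite for every perturbation wave number $q$. The sketch below illustrates such a scan; it is not the code used for the figures, and it assumes a model-specific (here hypothetical) helper `M_of_q(k, q)` that builds the Hermitian matrix $M(q)$ for the stationary state at quasi-wave number $k$.

```python
import numpy as np

def stability_map(M_of_q, k_grid, q_grid):
    """Minimum eigenvalue of M(q) on a (k, q) grid.

    Non-negative entries correspond to energetically stable points
    (the gray-shaded regions of the stability diagrams); eigvalsh is
    used because the quadratic form for delta E makes M(q) Hermitian.
    """
    return np.array([[np.linalg.eigvalsh(M_of_q(k, q)).min() for q in q_grid]
                     for k in k_grid])
```

Contouring this map over the $(k, q)$ plane yields diagrams of the type shown in Fig. \[en\_contp1\].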
The dynamical stability for period-1 and period-2 solutions is also studied. For the same perturbation $\delta\psi_q$, the time-dependent GP equation can be linearized as $$i \dfrac{\partial}{\partial t}\begin{pmatrix}
u\\
v\\
\end{pmatrix}=
M'(q)
\begin{pmatrix}
u\\
v\\
\end{pmatrix}\,$$ with $M'(q) \equiv \sigma_z M(q)$. If $M'(q)$ has complex eigenvalues, the perturbations blow up in the course of the time evolution, while if all the eigenvalues are real, the stationary solutions are dynamically stable. The fastest growing mode (the mode with the largest absolute value of the imaginary part of the eigenvalues) is also noted.
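The dynamical-stability test uses the same matrix, but through the non-Hermitian operator $M'(q)=\sigma_z M(q)$. A minimal sketch, again assuming the hypothetical helper `M_of_q(k, q)` and that the $(u, v)$ components are stacked so that $\sigma_z$ acts as $+1$ on the $u$ block and $-1$ on the $v$ block:

```python
import numpy as np

def growth_rate(M_of_q, k, q):
    """Growth rate of the fastest growing mode at (k, q).

    Returns the largest |Im(lambda)| among the eigenvalues of
    M'(q) = sigma_z M(q); zero means the stationary state is
    dynamically stable against perturbations with this q.
    """
    M = M_of_q(k, q)
    n = M.shape[0] // 2
    sigma_z = np.diag(np.concatenate([np.ones(n), -np.ones(n)]))
    return np.abs(np.linalg.eigvals(sigma_z @ M).imag).max()
```

Scanning this quantity over a $(k, q)$ grid produces contour plots of the kind shown in Figs. \[c\_dynp1\] and \[c\_dynp2\].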
![Dynamical stability diagrams for period-1 solutions for different values of $c_2$: (a) $c_2=0.4$, (b) $c_2=0.1$, (c)$c_2=0.04$, and (d) $c_2=0.01$. Quasi-wave numbers $k$ and $q$ are in units of $k_0$. The contours show the growth rate of the fastest growing mode, i.e., the maximum absolute value of the imaginary part of the eigenvalues of the matrix $M'(q)$, in units of the recoil energy $E_R$. []{data-label="c_dynp1"}](cont_dynset4_p1.eps "fig:") ![Dynamical stability diagrams for period-1 solutions for different values of $c_2$: (a) $c_2=0.4$, (b) $c_2=0.1$, (c)$c_2=0.04$, and (d) $c_2=0.01$. Quasi-wave numbers $k$ and $q$ are in units of $k_0$. The contours show the growth rate of the fastest growing mode, i.e., the maximum absolute value of the imaginary part of the eigenvalues of the matrix $M'(q)$, in units of the recoil energy $E_R$. []{data-label="c_dynp1"}](cont_dynset3_p1.eps "fig:") ![Dynamical stability diagrams for period-1 solutions for different values of $c_2$: (a) $c_2=0.4$, (b) $c_2=0.1$, (c)$c_2=0.04$, and (d) $c_2=0.01$. Quasi-wave numbers $k$ and $q$ are in units of $k_0$. The contours show the growth rate of the fastest growing mode, i.e., the maximum absolute value of the imaginary part of the eigenvalues of the matrix $M'(q)$, in units of the recoil energy $E_R$. []{data-label="c_dynp1"}](cont_dynset2_p1.eps "fig:") ![Dynamical stability diagrams for period-1 solutions for different values of $c_2$: (a) $c_2=0.4$, (b) $c_2=0.1$, (c)$c_2=0.04$, and (d) $c_2=0.01$. Quasi-wave numbers $k$ and $q$ are in units of $k_0$. The contours show the growth rate of the fastest growing mode, i.e., the maximum absolute value of the imaginary part of the eigenvalues of the matrix $M'(q)$, in units of the recoil energy $E_R$. []{data-label="c_dynp1"}](cont_dynset1_p1.eps "fig:")
![The same as Fig. \[c\_dynp1\] for period-2 solutions for (a) $c_2=0.4$, (b) $c_2=0.1$, (c) $c_2=0.04$, and (d) $c_2=0.01$. []{data-label="c_dynp2"}](cont_dynset4_p2.eps "fig:") ![The same as Fig. \[c\_dynp1\] for period-2 solutions for (a) $c_2=0.4$, (b) $c_2=0.1$, (c) $c_2=0.04$, and (d) $c_2=0.01$. []{data-label="c_dynp2"}](cont_dynset3_p2.eps "fig:") ![The same as Fig. \[c\_dynp1\] for period-2 solutions for (a) $c_2=0.4$, (b) $c_2=0.1$, (c) $c_2=0.04$, and (d) $c_2=0.01$. []{data-label="c_dynp2"}](cont_dynset2_p2.eps "fig:") ![The same as Fig. \[c\_dynp1\] for period-2 solutions for (a) $c_2=0.4$, (b) $c_2=0.1$, (c) $c_2=0.04$, and (d) $c_2=0.01$. []{data-label="c_dynp2"}](cont_dynset1_p2.eps "fig:")
For the period-1 case (Fig. \[c\_dynp1\]), the basic features (dynamical instability in half the region between the Brillouin-zone center and the zone edge for large $c_2$; the appearance of another instability island near the zone edge and the shrinking of both unstable domains as the value of $c_2$ is lowered) remain similar to the corresponding situation in the discrete model (Fig. \[d\_dynp1\]) and also agree with previous results in [@wu1]. Similarly, for the period-2 solutions, we find that the plots (Fig. \[c\_dynp2\]) look quite similar to the corresponding plots from the discrete case (Fig. \[d\_dynp2\]) up to moderate values of $U$. This again shows that the qualitative features of almost all the properties associated with the continuum model (energy-band structures, stability conditions) can be extracted from the simple discrete model. However, this breaks down when $U$ in the discrete model (or equivalently, $c_2$ in the continuum model) is too large. While in the discrete case we always find a region of dynamical stability, at large $c_2$ the continuum model has no stable region at all \[Fig. \[c\_dynp2\](a)\]. When we increase $c_2$ gradually from $0.1$ to $0.4$, the stable region vanishes altogether at $c_2=0.17$, and the instability contours gradually become horizontal. This point will be discussed further in the next section.
The mechanism behind dynamical stability \[sec:mech\]
=====================================================
We have come across a number of striking features while studying the dynamical stabilities both from the discrete and the continuum models. Here we recall some of them:
1\) The period-1 and period-2 states in the lowest energy band are always unstable at $k=0$ for purely sinusoidal modulations with $V_1=0$. These can, however, be stable for larger $k$ values.
2\) In the discrete model with $U\gg K$, the period-2 solutions are more dynamically stable than their period-1 counterparts.
3\) In the continuum model the period-2 solutions show greater dynamical stability (compared to the period-1 cases) up to a certain value of $c_2$, but beyond it they become completely unstable.
In this section we try to explain these features from a physical point of view, and also investigate situations with a non-zero $V_1$ (i.e., a constant component added to the periodic modulation) to obtain a better understanding of the stability mechanism.
The first feature is in complete contrast with BECs in periodic potentials, where the $k=0$ state is always dynamically stable. In [@wu1], the dynamical instability of the period-1 Bloch state at $k=0$ for this model with $V_1=0$ was explained in terms of the averaged interaction energy: it was argued that if the averaged interaction $E_{\rm int} \propto \int_{-p\pi/2}^{p\pi/2}(c_1+c_2 \cos 2x) |\psi|^4 \, dx$ over one period becomes negative, the $k=0$ state becomes unstable. In the case of $V_1=0$ (i.e., $c_1=0$), since the interaction energy (for both period-1 and period-2 solutions) averaged over one supercell is always negative for $k=0$, the system resembles a BEC with attractive interparticle interaction, which is dynamically unstable [@pethickbook].
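This criterion can be checked directly from a sampled stationary wave function. A minimal sketch, assuming `psi` holds samples of $\psi^{(0)}$ on a grid `x` covering exactly one cell $[-p\pi/2, p\pi/2]$ (the variable names are illustrative):

```python
import numpy as np

def averaged_interaction(psi, x, c1, c2):
    """Interaction energy averaged over one cell, up to a positive prefactor.

    `psi` holds samples of the stationary wave function on the grid `x`;
    a negative result mimics a net attractive condensate and, by the
    argument recalled above, flags the k = 0 state as unstable.
    """
    return np.trapz((c1 + c2 * np.cos(2.0 * x)) * np.abs(psi) ** 4, x)
```

For $c_1=0$ the sign is set by how much of $|\psi|^4$ accumulates in the attractive regions, where $\cos 2x<0$.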
Interestingly, although the lowest Bloch states are dynamically unstable at $k=0$, at larger values of $k$ these can be stable \[e.g., the gray-shaded regions in Figs. \[d\_dynp1\](a), \[d\_dynp2\](a), \[c\_dynp1\](a), and \[c\_dynp2\](b)\]. To explain this seemingly counterintuitive result, we go back to the population density distributions of the discrete model in Figs. \[p1den\], \[p2den1\], and \[p2den2\]. As we have already mentioned, at the zone edge the majority of particles accumulate in the attractive sites, leaving the repulsive sites nearly empty. Now, for a two-site cell, the transition amplitude between the states with populations $\{|g_1|^2, |g_2|^2\}$ and $\{|g_1|^2\pm 1, |g_2|^2\mp 1\}$ can be estimated as $\sim \sqrt{|g_1| |g_2|} K$. Having alternate empty sites means that the tunneling between neighboring sites is frozen, and the dynamical instability is suppressed. This “freezing” takes place for the four-site cell in the case of the period-2 solutions as well. In contrast, at the zone center with $k=0$, the population distribution is more even, and no sites are vacant. The tunneling is non-negligible, and the suppression of dynamical instability does not operate around this point. Since the isolation of the higher-density regions, which is responsible for the stability of the superfluid at higher $k$ values, is a result of the attractive interaction in alternate sites, this mechanism can be termed “attraction-induced dynamical stability.”
That the period-2 solutions are more stable than the period-1 solutions at higher $U$ values is a direct consequence of the very same mechanism. For period-2 solutions, the higher-density regions are more localized and isolated, i.e., most of the particles are hosted by every fourth site, while for period-1 solutions it is every second site. In the case of period-1 solutions, this particular stability mechanism is not very prominent near the zone center because the higher-density regions are not separated enough, and a larger $U$ \[Fig. \[d\_dynp1\](a)\] generates more instability than a smaller $U$ \[Fig. \[d\_dynp1\](b)\] for the same value of $k$. On the other hand, for period-2 solutions, a larger $U$ enhances the stability that was already present due to the higher degree of isolation between the higher-density regions. Thus, for period-2 solutions, the superfluid with a higher $U/K$ value \[Fig. \[d\_dynp2\](a)\] is more stable than its lower-$U/K$ counterpart \[Fig. \[d\_dynp2\](b)\].
Of course, factors other than the sign of the net interaction energy and the suppression of tunneling due to the isolation of the higher-density regions also determine the dynamical stability. When $U\nu/2K$ is sufficiently small, we observe that a dynamically unstable region appears near the zone edge. This suggests that several other factors are collectively responsible for complicated stability diagrams such as those in Figs. \[d\_dynp1\](b), (c), and (d). It is also worth mentioning that, similarly, higher-period solutions have been observed to be more stable for BECs in optical lattices with dipole-dipole interactions [@maluckov].
This attraction-induced dynamical stability is present in the continuum model as well. The difference is that the attractive and repulsive “sites” are no longer actual discrete lattice sites but domains. We observe that up to a certain value of $c_2$, increasing the strength of the attractive interaction enhances the stability of the period-2 states around the zone edge by suppressing the inter-site tunneling (Fig. \[c\_dynp2\]). However, if the nonlinear interaction term is increased beyond this point ($c_2\simeq 0.17$ here), another mechanism becomes important: the interaction among particles within a site. An increased attractive interaction then leads to the collapse of the BEC within a supercell. Since such intra-site degrees of freedom are completely absent in the discrete model, there is no analog of Fig. \[c\_dynp2\](a) there.
We also note that for higher values of $c_2$ (Fig. \[c\_den\]) the density distribution has very sharp peaks. As the value of $c_2$ is gradually decreased, those peaks broaden. This is another reason why the discrete model fails to mimic the continuum one for high $c_2$: expanding the sharp peaks requires a larger number of basis functions, and the single-band discrete model is insufficient to capture the actual behavior.
For the excited states, too, there is a departure from the prediction based on the averaged interaction. The period-1 and period-2 states in higher bands usually correspond to a positive average interaction energy, and yet we find the $k=0$ state to be dynamically unstable when $c_1=0$.
Next we consider adding a constant component to the periodic modulation, i.e., taking $c_1\ne 0$ in the continuum model. Although the $k=0$ state in the lowest band is always dynamically unstable for $c_1=0$, by gradually increasing $c_1$ one finally arrives at a critical value that stabilizes the system. In Fig. \[avint\], the solid curve gives the critical value of $c_1$ as a function of $c_2$. The yellow region bounded by the solid line is dynamically stable, and the white region is dynamically unstable. The dashed line marks the separation between average attractive and average repulsive interaction, i.e., the region below it is attractive and the region above is repulsive. So there is a correspondence between the overall interaction being repulsive and the system being dynamically stable for period-1 solutions at $k=0$ \[Fig. \[avint\](a)\]. This is in agreement with the results of [@wu1].
In Fig. \[avint\](b), we plot the same for $k=1$ (i.e., the zone boundary for period-1 states). Here, too, there appears to be a relation between the region of dynamical stability and the line where the averaged interaction changes sign. The difference is that the solid line now lies below the dashed line and the dynamically stable region expands. This can again be connected to the “attraction-induced dynamical stability”: near the zone edge there is an additional stability mechanism due to the isolation of the higher-density regions. Thus the system becomes stable even at a $c_1$ value slightly lower than the $c_1$ required to make the net interaction repulsive.
![(Color online) Dynamical stability and averaged interaction for (a) period-1 and $k=0$, (b) period-1 and $k=1$, and (c) period-2 and $k=0.5$. The dashed lines separate the regions of positive average interaction (above the line) and negative averaged interaction (below the line). The solid line separates the dynamically stable and the unstable regions, and the stable region is shaded in yellow.[]{data-label="avint"}](avintk0p1.eps "fig:") ![(Color online) Dynamical stability and averaged interaction for (a) period-1 and $k=0$, (b) period-1 and $k=1$, and (c) period-2 and $k=0.5$. The dashed lines separate the regions of positive average interaction (above the line) and negative averaged interaction (below the line). The solid line separates the dynamically stable and the unstable regions, and the stable region is shaded in yellow.[]{data-label="avint"}](avintk1p1.eps "fig:") ![(Color online) Dynamical stability and averaged interaction for (a) period-1 and $k=0$, (b) period-1 and $k=1$, and (c) period-2 and $k=0.5$. The dashed lines separate the regions of positive average interaction (above the line) and negative averaged interaction (below the line). The solid line separates the dynamically stable and the unstable regions, and the stable region is shaded in yellow.[]{data-label="avint"}](avintk5p2.eps "fig:")
The picture, however, changes for period-2 solutions. When $c_2$ is very low, the period-2 branch does not extend up to $k=0$, but rather appears only in a small region around the zone boundary. For a higher value of $c_2$, even though the period-2 branch exists at $k=0$, it is dynamically unstable for $c_1=0$. If we keep increasing $c_1$, the instability grows. Thus, there is no critical $c_1$ and no stable $k=0$ state in this parameter domain, although the averaged interaction can be either attractive or repulsive, depending on the choices of $c_1$ and $c_2$.
For period-2 solutions at $k=0.5$ (the zone boundary for period-2 states), the trend is completely opposite to the period-1 results. For $c_2\agt 0.07$, the solutions are dynamically stable even at $c_1=0$, and gradually become dynamically unstable if $c_1$ is increased above a certain value \[Fig. \[avint\](c)\]. Thus, we have a critical value of $c_1$ that marks the onset of dynamical instability. Below $c_2\simeq 0.07$, the solutions are dynamically unstable at $c_1=0$, and increasing $c_1$ makes them even more unstable. So unlike the period-1 cases, here the dynamically stable region (the region below, not above, the solid line, marked by yellow shading) does not correspond to an overall repulsive interaction \[Fig. \[avint\](c)\].
![Density distributions of period-2 states for $c_2=0.08$ and $k=0.5$ with $c_1=0$ (dashed curve) and $c_1=0.04$ (solid curve). The dashed curve belongs to the stable region and the solid one marks the onset of dynamical instability. Here $x$ is plotted in units of $1/k_0$, $n$ is in units of the average density $n_0$. []{data-label="twopeaks"}](twopeaks.eps)
In period-1 situations, the sign of the overall interaction matters in determining the dynamical stability: a repulsive interaction means a dynamically stable BEC. Since the “attraction-induced dynamical stability” is not the dominant behavior there (because the higher-density regions are not separated enough), the stability can more or less be accounted for by the sign of the net interaction alone. For period-2 solutions, however, a more complicated factor sets in. Since the period-2 case in general represents a higher degree of isolation between the higher-density regions (Fig. \[c\_den\]), the tunneling rate here plays a crucial role. For a large $c_2$, the peaks are sharper; the inter-site tunneling is suppressed and the system is more stable. As $c_2$ is decreased, the peaks spread out and overlap, enabling more tunneling of particles, and that leads to dynamical instability. That is why in Fig. \[avint\](c) the stability region appears on the higher-$c_2$ side below the solid line. That the shape of the peaks and the nature of their separation in the density distribution determine the dynamical stability can also be illustrated with Fig. \[twopeaks\]. The dashed curve of Fig. \[twopeaks\] corresponds to the density distribution at $k=0.5$ for $c_2=0.08$ and $c_1=0$, which falls in the stable region of Fig. \[avint\](c). If $c_1$ is increased above $0.04$, although the averaged interaction is now positive (and we could thus expect a stable BEC), we find the region dynamically unstable. Here the density distribution shows wider peaks (the solid curve) and a lesser degree of isolation, which results in more tunneling of particles and hence less stability. So we see that the attraction-induced dynamical stability is the key factor in describing the stability of period-2 states around the zone edge.
Finally, in a realistic experiment one may anticipate that a harmonic external trapping potential is present in addition to the periodic modulation. In such a trapped case, the key modifications would be to the density of states in the low-energy region and the emergence of quantum pressure due to the inhomogeneity of the system. However, these are relevant only to long-wavelength perturbations, while the fastest growing mode for the dynamical instability in our discussion has a short wavelength of the order of a lattice constant. Therefore, provided the oscillator length of the trap is much larger than the lattice constant, the dynamical stability of the nonlinear lattice in the presence of the harmonic trap could be reliably predicted within the local-density approximation using our results for the untrapped case.
Summary \[sec:summary\]
=======================
We have studied BECs in a nonlinear lattice, i.e., with a spatially periodic scattering length that can be realized via optical Feshbach resonances. Periodic and period-doubled solutions are obtained, both for a reduced discrete model and for the full continuum model. The energetic and dynamical stabilities of these stationary states are then examined. It is observed that the periodic nature of the interaction leads to a splitting of the BEC: most of the particles are stored in the attractive sites or domains. If these higher-density regions are not sufficiently isolated and inter-site tunneling is significant, then the dynamical stability of the superfluid can be qualitatively explained by the sign of the averaged interaction: a net repulsive BEC is stable and a net attractive one is unstable. However, when the higher-density regions are well separated, the inter-site tunneling is suppressed, and that enhances the dynamical stability of the system. This “attraction-induced dynamical stability” plays the dominant role near the zone edge for periodic solutions. It is also this mechanism that renders the higher-period solutions more dynamically stable when the nonlinear interaction term is strong enough, unless intra-site dynamics causes a collapse of the BEC.\
This work was supported by IBS through Project Code (Grant No. IBS-R024-D1); by the Zhejiang University 100 Plan; by the Junior 1000 Talents Plan of China; by the Max Planck Society, the Korea Ministry of Education, Science, and Technology (MEST), Gyeongsangbuk-Do, Pohang City, for the support of JRG at APCTP; and by Basic Science Research Program through National Research Foundation in Korea funded by MEST (Grant No. 2012R1A1A2008028). R.D. would like to acknowledge support from the Department of Science and Technology, Government of India in the form of an Inspire Faculty Award (Grant No. 04/2014/002342). P.V. is supported by the Austrian Federal Ministry of Science, Research, and Economy (BMWFW) and he would also like to thank Prof. Oriol Romero-Isart for support.
[20]{} C. J. Pethick and H. Smith, *Bose-Einstein Condensation in Dilute Gases*, 2nd ed. (Cambridge University Press, Cambridge, 2008). *Emergent Nonlinear Phenomena in Bose-Einstein Condensates: Theory and Experiment*, edited by P. G. Kevrekidis, D. J. Frantzeskakis, and R. Carretero-González (Springer-Verlag, Berlin Heidelberg, 2008). O. Morsch and M. Oberthaler, Rev. Mod. Phys. **78**, 179 (2006).
S. Inouye, M. R. Andrews, J. Stenger, H.-J. Miesner, D. M. Stamper-Kurn, and W. Ketterle, Nature **392**, 151 (1998). Ph. Courteille, R. S. Freeland, D. J. Heinzen, F. A. van Abeelen,and B. J. Verhaar, Phys. Rev. Lett. **81**, 69 (1998). J. L. Roberts, N. R. Claussen, J. P. Burke Jr., C. H. Greene, E. A. Cornell, and C. E. Wieman, Phys. Rev. Lett. **81**, 5109 (1998). A. J. Moerdijk, B. J. Verhaar, and A. Axelsson, Phys. Rev. A **51**, 4852 (1995). E. Timmermans, P. Tommasini, M. Hussein, and A. Kerman, Phys. Rep. **315**, 199 (1999).
P. O. Fedichev, Y. Kagan, G. V. Shlyapnikov, and J. T. M. Walraven, Phys. Rev. Lett. **77**, 2913 (1996). J. L. Bohn and P. S. Julienne, Phys. Rev. A **56**, 1486 (1997). F. K. Fatemi, K. M. Jones, and P. D. Lett, Phys. Rev. Lett., **85**, 4462 (2000). M. Theis, G. Thalhammer, K. Winkler, M. Hellwig, G. Ruff, R. Grimm, and J. H. Denschlag, Phys. Rev. Lett. **93**, 123001 (2004).
I. Bloch, J. Dalibard, and W. Zwerger, Rev. Mod. Phys. **80**, 885 (2008) C. Chin, R. Grimm, P. Julienne, and E. Tiesinga, Rev. Mod. Phys. **82**, 1225 (2010).
A. Trombettoni and A. Smerzi, Phys. Rev. Lett. **86**, 2353 (2001). J. C. Bronski, L. D. Carr, B. Deconinck, and J. N. Kutz, Phys. Rev. Lett. **86**, 1402 (2001). Z. Rapti, P. G. Kevrekidis, V. V. Konotop, and C. K. R. T. Jones, J. Phys. A **40**, 14151 (2007).
S. Burger, K. Bongs, S. Dettmer, W. Ertmer, K. Sengstock, A. Sanpera, G. V. Shlyapnikov, and M. Lewenstein, Phys. Rev. Lett. **83**, 5198 (1999). J. Denschlag, J. E. Simsarian, D. L. Feder, C. W. Clark, L. A. Collins, J. Cubizolles, L. Deng, E. W. Hagley, K. Helmerson, W. P. Reinhardt, S. L. Rolston, B. I. Schneider, and W. D. Phillips, Science **287**, 97 (2000). L. Khaykovich, F. Schreck, G. Ferrari, T. Bourdel, J. Cubizolles, L. D. Carr, Y. Castin, and C. Salomon, Science **296**, 1290 (2002).
B. Wu, R. B. Diener, and Q. Niu, Phys. Rev. A **65**, 025601 (2002). D. Diakonov, L. M. Jensen, C. J. Pethick, and H. Smith, Phys. Rev. A **66**, 013604 (2002). E. J. Mueller, Phys. Rev. A **66**, 063603 (2002). M. Machholm, C. J. Pethick, and H. Smith, Phys. Rev. A **67**, 053613 (2003). B. T. Seaman, L. D. Carr, and M. J. Holland, Phys. Rev. A, **72**, 033602 (2005). G. Watanabe, S. Yoon, and F. Dalfovo, Phys. Rev. Lett. **107**, 270404 (2011). H. Y. Hui, R. Barnett, J. V. Porto, and S. Das Sarma, Phys. Rev. A **86**, 063636 (2012).
M. Machholm, A. Nicolin, C. J. Pethick, and H. Smith, Phys. Rev. A **69**, 043604 (2004). S. Yoon, F. Dalfovo, T. Nakatsukasa, and G. Watanabe, New J. Phys. **18**, 023011 (2016).
B. Wu and Q. Niu, Phys. Rev. A **64**, 061603(R) (2001). B. Wu and Q. Niu, New J. Phys. **5**, 104 (2003). M. Modugno, C. Tozzo, and F. Dalfovo, Phys. Rev. A **70**, 043625 (2004). L. De Sarlo, L. Fallani, J. E. Lye, M. Modugno, R. Saers, C. Fort, and M. Inguscio, Phys. Rev. A **72**, 013603 (2005).
H. Sakaguchi and B. A. Malomed, Phys. Rev. E **72**, 046610 (2005). Y. V. Kartashov, B. A. Malomed, and L. Torner, Rev. Mod. Phys. **83**, 247 (2011). S. L. Zhang, Z. W Zhou, and B. Wu, Phys. Rev. A **87**, 013633 (2013).
R. Yamazaki, S. Taie, S. Sugawa, and Y. Takahashi, Phys. Rev. Lett. **105**, 050405 (2010).
A. Maluckov, G. Gligorić, L. Hadžievski, B. A. Malomed, and T. Pfau, Phys. Rev. Lett. [**108**]{}, 140402 (2012).
A. Smerzi and A. Trombettoni, Phys. Rev. A **68**, 023613 (2003).
G. Watanabe, B. P. Venkatesh, and R. Dasgupta, Entropy **18**, 118 (2016).
|
---
abstract: 'We report the absorption profile of isolated Flavin Adenine Dinucleotide (FAD) mono-anions recorded using Photo-Induced Dissociation action spectroscopy. In this charge state, one of the phosphoric acid groups is deprotonated and the chromophore itself is in its neutral oxidized state. These measurements cover the first four optical transitions of FAD with excitation energies from 2.3 to 6.0 eV (210–550 nm). The $S_0\rightarrow S_2$ transition is strongly blue-shifted relative to aqueous solution, supporting the view that this transition has significant charge-transfer character. The remaining bands are close to their solution-phase positions. This confirms that the large discrepancy between quantum chemical calculations of vertical transition energies and solution-phase band maxima can not be explained by solvent effects. We also report the luminescence spectrum of FAD mono-anions *in vacuo*. The gas-phase Stokes shift for $S_1$ is 3000 cm$^{-1}$, which is considerably larger than any previously reported for other molecular ions and consistent with a significant displacement of the ground and excited state potential energy surfaces. Consideration of vibronic structure is thus essential for simulating the absorption and luminescence spectra of flavins.'
author:
- 'L. Giacomozzi'
- 'C. Kj[æ]{}r'
- 'J. Langeland Knudsen'
- 'L. H. Andersen'
- 'S. Br[ø]{}ndsted Nielsen'
- 'M. H. Stockett'
title: 'Absorption and luminescence spectroscopy of mass-selected Flavin Adenine Dinucleotide mono-anions'
---
Introduction
============
Flavin Adenine Dinucleotide (FAD) is a ubiquitous redox cofactor serving many key metabolic roles for example as an electron acceptor in the citric acid cycle. FAD is a member of the flavin family which also includes Flavin Mononucleotide (FMN) and riboflavin (RF). These molecules all share the tri-cyclic iso-alloxazine chromophore, whose high reduction potential and multiple redox states give a versatility which lends itself to a wide variety of reactions. In addition, FAD and FMN act as blue light sensors in enzymes and proteins regulating DNA repair [@Massey2000], phototropism and circadian rhythms in plants [@Chaves2011] and the perception of magnetic fields by some migratory birds [@Solovyov2012; @Wiltschko2014].
As with other biochromophores such as chlorophyll [@Kjaer2016; @Wellman2017], the protein micro-environment may alter the electronic absorption and emission spectra of flavin cofactors [@Udvarhelyi2015]. For example, the cellular redox equilibrium may favor one or another resting redox state of flavin [@Liu2010], and these states have rather different optical properties. The redox-specificity of flavin fluorescence has been exploited in autofluorescence imaging applications, where flavins serve as a non-invasive intrinsic biomarker of metabolic activity [@Galban2016]. Even for a given redox state, significant differences in the optical spectra and excited state lifetimes of flavins in different proteins have been observed [@Kao2008]. In order to quantitatively understand such effects, the intrinsic optical spectra of isolated flavins are useful as a baseline for comparison. Such studies are readily compared to high-level theoretical calculations and eliminate the potentially confounding influence of a solvent. The electronic structure of flavins has been called “a difficult case for computational chemistry,”[@Wu2010] and many authors have bemoaned the dearth of experimental benchmarks [@Sikorska2004; @Hasegawa2007; @Salzmann2008]. To date, only a few optical spectra of isolated flavin-related compounds have been published, including fluorescence and fluorescence excitation spectra of lumiflavin in helium nanodroplets [@Vdovin2013], and action spectra of protonated lumichrome [@Guenther2017] (a flavin derivative lacking an N-10 substituent) and anionic FAD [@Stockett2017b] *in vacuo*. All of these studies examined only the lowest singlet excited state of the system in question.
Here, we give the full UV-Vis absorption profile of FAD mono-anions *in vacuo*, covering the first four bright transitions with excitation energies from 2.3 to 6.0 eV (210–550 nm). In addition, we report the luminescence spectrum of FAD mono-anions *in vacuo*. In this charge state, FAD is deprotonated on one of the phosphoric acid groups linking the flavin and adenine moieties (Figure \[fig\_pid\]), and the flavin chromophore is in its neutral, oxidized form [@Stockett2017b]. These new experimental results, in conjunction with previously reported solution-phase data, are used to critically evaluate the state of the art in modeling the electronic structure of flavins.
Experiments
===========
Absorption profile measurements were performed using two different instruments, the SepI accelerator mass spectrometer complex [@Stochkel2011; @Wyer2012] and the ELISA electrostatic ion storage ring [@Nielsen2001; @Andersen2002], both located at Aarhus University. In both cases, photo-absorption was measured indirectly by Photo-Induced Dissociation (PID) action spectroscopy. Flavin adenine dinucleotide disodium salt hydrate was purchased from Sigma Aldrich and dissolved in methanol. FAD anions were transferred to the gas phase via electrospray ionization and stored in a multipole ion trap, which was emptied every 25 ms (SepI) or 40 ms (ELISA). Ion bunches extracted from the trap were accelerated to kinetic energies of 50 keV (SepI) or 22 keV (ELISA) and the ions of interest were selected using a bending magnet according to their mass-to-charge ratio. A high-intensity pulsed laser system (EKSPLA OPO) was used to excite the mass-selected ion bunches *in vacuo*. In action spectroscopy, it is usually assumed that the electronically excited system ultra-rapidly crosses over to a highly vibrationally excited level of the ground electronic state (Internal Conversion), and that this vibrational energy is re-distributed over all internal degrees of freedom in a matter of picoseconds (Internal Vibrational Redistribution). During the $\sim$5 ns irradiation time, the ions may (or may not) absorb multiple photons sequentially, *i.e.* the ion returns to its electronic ground state in between each photon absorption. The deposited energy leads to unimolecular dissociation and/or thermionic electron emission on timescales ranging up to several milliseconds. By monitoring the yield of the photo-products (daughter ions or neutral fragments) as a function of excitation wavelength (corrected for the variation in laser power/photon flux across the spectral range), the so-called action spectrum is constructed. It should be kept in mind that the action spectrum may not perfectly reflect the absorption cross section due to limitations such as sampling time or alternative relaxation channels such as photon or electron emission. Additional experimental details are presented in the Supplementary Material.
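The only post-processing needed to turn the raw photo-product yields into an action spectrum is the normalization to photon flux mentioned above. A minimal sketch of this step (the variable names are illustrative, and a one-photon response is assumed):

```python
import numpy as np

H_C = 1.98645e-25  # Planck constant times speed of light, in J m

def action_spectrum(wavelength_nm, product_counts, pulse_energy_J):
    """Photo-product yield per photon versus excitation wavelength.

    `product_counts` is the background-subtracted ("laser on" minus
    "laser off") yield averaged over many injections at each wavelength.
    """
    photons_per_pulse = np.asarray(pulse_energy_J) * (
        np.asarray(wavelength_nm) * 1e-9) / H_C
    return np.asarray(product_counts) / photons_per_pulse
```

In a multi-photon regime, a simple division by the photon number would not be sufficient, and the power dependence of the yield would have to be checked explicitly.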
![Top: Structure of FAD mono-anions. Bottom: Photo-Induced Dissociation mass spectrum of FAD mono-anions (parent ion 784 $m/z$) recorded at SepI with 250 nm (5 eV, 1.2 mJ/pulse) excitation.[]{data-label="fig_pid"}](fad_struct_short.png "fig:"){width="0.99\columnwidth"} ![Top: Structure of FAD mono-anions. Bottom: Photo-Induced Dissociation mass spectrum of FAD mono-anions (parent ion 784 $m/z$) recorded at SepI with 250 nm (5 eV, 1.2 mJ/pulse) excitation.[]{data-label="fig_pid"}](uv_pid.pdf "fig:"){width="0.99\columnwidth"}
At SepI, daughter ions were separated using an electrostatic energy analyzer (mass resolving power $\sim$100) positioned after the laser-ion interaction region and counted with a channeltron detector. Every second ion bunch was irradiated with the laser and the difference in counts between the “laser-on” and “laser-off” injections is the photo-induced signal. The low background rate and high detection efficiency of daughter ions provide a better signal-to-noise ratio than measuring the depletion of parent ions, particularly for large molecules like FAD with many internal degrees of freedom and correspondingly low dissociation yields. The depletion of the parent FAD mono-anion ion beam measured with 210 nm excitation was 0.8$\pm 0.5\%$. The SepI instrument samples photo-induced dissociation occurring during the $\sim 10$ $\mu$s it takes for the ions to travel from the laser interaction region to the electrostatic analyzer. This limited sampling time could in principle skew the absorption profile towards the blue, an effect known as a kinetic shift. The PID mass spectrum recorded with 250 nm (5 eV) excitation is shown in Figure \[fig\_pid\]. The dominant daughter ion is that with 542 $m/z$. This corresponds to the loss of neutral lumichrome (the flavin rings plus a hydrogen atom), which is the main product of normal photolysis of flavins in solution [@Holzer2005].
![ELISA electrostatic ion storage ring. FAD mono-anions circulating in the ring are overlapped with a laser pulse in the upper straight section. Neutral products of dissociation occurring in the lower straight section are detected by the MCP.[]{data-label="fig_elisa"}](elisa.png){width="0.99\columnwidth"}
Figure \[fig\_elisa\] shows the ELISA electrostatic ion storage ring. FAD mono-anions circulate around the racetrack-like ring with a revolution time of about 60 $\mu$s. After the ions have been stored for 11 ms, the laser pulse is overlapped with the ion bunch in the upper straight section. Laser-excited ions may then continue to circulate for several ms before decaying. If they dissociate while in the lower straight section (*i.e.* after at least one half revolution), the neutral fragments are no longer affected by the electrostatic confinement fields and fly to the microchannel plate (MCP) detector mounted on this section.
![Top: PID action spectra for FAD mono-anions. The blue and red curves were recorded using SepI monitoring the yield of the daughter ion with 542 $m/z$. The green curve was recorded using ELISA monitoring the total neutral fragment yield. Bottom: Absorption cross section of FAD in neutral aqueous solution, adapted from Islam *et al.* [@Islam2003] The stick spectrum is the consensus of DFT calculations of vertical transition energies for lumiflavin *in vacuo* [@Neiss2003; @Sikorska2004; @Zenichowski2007; @Choe2007; @Salzmann2008; @Vdovin2013; @Zanetti-Polzi2017].[]{data-label="fig_abs"}](absgasaq.pdf){width="0.99\columnwidth"}
The luminescence spectrum of gas-phase FAD mono-anions was recorded using the LUNA luminescence spectrometer in Aarhus [@Stockett2016]. Ions were again produced by electrospray ionization and accumulated in a cylindrical Paul trap. The amplitude and DC offset of the radio frequency trapping voltage applied to the cylinder electrode were set to apply a low-mass cutoff of approximately 600 $m/z$, *i.e.* higher than any of the daughter ions observed in the PID mass spectrum (Figure \[fig\_pid\]). The luminescence signal rate was insufficient to further optimize the mass selection parameters. The trapped ions were excited at 445 nm by an EKSPLA OPO laser system. The laser pulse energy was reduced to 50 $\mu$J to limit multiple-photon absorption. Luminescence was collected through one of the end caps of the Paul trap, which is made of a wire mesh. An aspheric condenser lens mounted directly behind the mesh collimated the emission, which was transmitted through a vacuum window and a 450 nm longpass edge filter (to reduce scattered laser light) and coupled into the entrance slit of an Andor 303i Czerny-Turner imaging spectrograph equipped with a Newton EM electron multiplying CCD detector array. To correct for scattered laser light and other background sources, the experiment was repeated in alternating sets of 100 cycles with ions in the trap followed by 100 cycles with no ions (the trapping voltage was switched off). The difference between the “ions-on” and “ions-off” acquisitions is the luminescence signal.
Absorption Results and Discussion
=================================
In the upper panel of Figure \[fig\_abs\], PID action spectra of FAD mono-anions, recorded in three overlapping spectral regions, are shown. SepI was used for the wavelength ranges 210–350 nm and 420–550 nm, monitoring the yield of the daughter ion with 542 $m/z$ (lumichrome loss). ELISA was used in the range 309–550 nm, monitoring the total neutral fragment yield. The SepI data from 420 to 550 nm were reported previously [@Stockett2017b], and are reproduced here to show the consistency between the two measurement techniques. The excellent agreement in the low-energy range between the two datasets, which sample different dissociation timescales, indicates that our results are not strongly affected by any kinetic shift. The lower panel of Figure \[fig\_abs\] shows the absorption cross section of FAD in neutral aqueous solution adapted from Islam *et al.* [@Islam2003] With the exception of the $S_0\rightarrow S_2$ transition, which is red-shifted by 0.22 eV (23 nm) in solution, the band maxima are identical within experimental accuracies. Hints of vibronic structure are present in the gas-phase spectrum (upper panel), such as a minor peak at 411 nm, but the bands are generally broad and featureless, consistent with the solution-phase measurements (lower panel).
| $S_0\rightarrow$ | FAD$^a$ | FMN$^a$ | LF$^b$ [@Neiss2003; @Sikorska2004; @Zenichowski2007; @Choe2007; @Salzmann2008; @Vdovin2013; @Zanetti-Polzi2017] | RF [@Sikorska2005] | FMN [@Kammler2012] | FAD$^c$ [@Islam2003] |
|---|---|---|---|---|---|---|
| $S_1$ | 2.74 | - | 3.00 | 3.08 | 2.56/2.89 | 2.74 |
| $S_2$ | 3.59 | - | 3.84 | 3.68 | 3.26/3.3 | 3.37 |
| $S_{3a}$ | - | - | 4.70 | 4.83 | - | - |
| $S_{3b}$ | 4.75 | 4.69 | 4.88 | 4.89 | - | 4.73 |
| $S_4$ | 5.8 | 5.6 | 5.81 | 5.79 | - | 5.78 |
: Band maxima of action spectrum of FAD and FMN mono-anions *in vacuo* compared to electronic structure calculations of vertical transition energies for various flavins *in vacuo* and absorption band maxima of FAD in neutral aqueous solution. All in eV.\
$^a$Present work, experiment *in vacuo*, uncertainties implied by the number of significant digits\
$^b$Consensus of TD-DFT values from various authors, see Supplementary Material\
$^c$Experiment in aqueous solution[]{data-label="tab_gas"}
In Table \[tab\_gas\], our experimental results are compared with a survey of previously published calculations of gas-phase transition energies for various flavins. Most electronic structure calculations of flavins focus on lumiflavin (LF), the smallest subunit which shares the essential photophysical properties of the larger flavin cofactors. In LF, the ribityl sidechain at the N-10 position is replaced by a methyl group, which simplifies calculations. Most of the calculations of LF have been performed using Time Dependent Density Functional Theory (TD-DFT) methods [@Neiss2003; @Sikorska2004; @Zenichowski2007; @Choe2007; @Salzmann2008; @Vdovin2013; @Zanetti-Polzi2017] and show a high degree of consistency (see Supplementary Material for a complete tabulation). Indeed, the variation in transition energies amongst the various TD-DFT calculations is less than 5$\%$. For the sake of comparison, these transition energies (and their calculated transition $f$-values) have been simply averaged to give a “consensus” spectrum, which is presented in the lower panel of Figure \[fig\_abs\] [^1]. The consensus vertical transition energies (given in Table \[tab\_gas\]) overestimate the present experimental band maxima by about 0.3 eV for the first two bands and 0.1 eV for the UV bands.
All calculations agree that the bright transitions are due to $\pi\rightarrow\pi^*$ transitions. While not all authors provide detailed assignments, generally the $S_0\rightarrow S_1$ is considered to be the HOMO$\rightarrow$LUMO transition and $S_0\rightarrow S_2$ a transition to the LUMO from a lower orbital (usually HOMO-1) which is localized to the aromatic benzene-like ring of the flavin chromophore [@Hasegawa2007; @Zenichowski2007; @Salzmann2008]. This localization gives this transition a significant degree of charge transfer character [@Hasegawa2007], which is widely thought to be responsible for the solvatochromic behavior of $S_0\rightarrow S_2$. The HOMO and LUMO, in contrast, are both spread across the entire chromophore [@Hasegawa2007; @Zenichowski2007; @Salzmann2008] and this transition shows little solvatochromism in theory [@Zenichowski2007] or experiment [@Sikorska2004; @Sikorska2005; @Zirak2009]. The present experimental results qualitatively support this view, with a large blue-shift (0.22 eV) upon desolvation for $S_0\rightarrow S_2$, but no such shift for $S_0\rightarrow S_1$.
Several authors have investigated whether solvent effects can explain the large deviation between calculated vertical excitation energies and experimentally measured absorption band maxima in solution [@Hasegawa2007; @Zenichowski2007; @Wu2010; @Zanetti-Polzi2017]. The present results, however, show that such effects are small. As has been pointed out earlier [@Salzmann2009a], the absorption spectra of flavins are hardly affected by the solvent environment [@Koziol1965; @Koziol1966; @Sikorska2004; @Weigel2008; @Zirak2009], except of course for the $S_0\rightarrow S_2$ transition. Moreover, calculations [@Hasegawa2007] and measurements [@Stanley1999] find only a small increase in the permanent dipole moment of the flavin chromophore upon excitation to $S_1$. Solvatochromism measurements [@Zirak2009] actually imply a slight *decrease* in the dipole moment upon excitation, but with a large uncertainty. There is thus no reason to expect large solvent effects for $S_0\rightarrow S_1$, and indeed none are found in most calculations [@Hasegawa2007; @Zenichowski2007], or from the present gas-phase experiments.
Setting aside solvent effects, the vibronic structure of flavins must seriously be taken into account. As the density of vibrational levels in an electronically excited state increases strongly with energy, the absorption band maximum is usually observed to the blue of the 0-0 transition energy. The band maximum often roughly coincides with the vertical transition energy calculated in TD-DFT from the ground state equilibrium geometry, although clearly not in the case of flavins. Full calculations of broadened vibronic excitation spectra reported for flavins [@Salzmann2009a; @Klaumuenzer2010; @Goetze2013; @Karasulu2014] come closer to reproducing the profile and position of the present gas-phase results than “simple” TD-DFT. There remains some discrepancy in the 0-0 energy (0.05 eV [@Klaumuenzer2010] to 0.5 eV [@Karasulu2014], depending on the calculation) compared to that measured in He droplets for lumiflavin [@Vdovin2013], which is presumed to be due to limitations in the chosen functionals. These methods have struggled to include micro-solvation effects, and are rarely attempted for excited states higher than $S_1$. We hope the present contribution will serve as a benchmark for further refining these methods, as it is clear that careful consideration of vibronic activity is essential in modeling absorption by flavins.
Solvent effects are evidently important for modeling the $S_0\rightarrow S_2$ transition. The $S_2$ absorption band maximum of riboflavin varies from 332 nm (3.75 eV) in apolar dioxane to 367 nm (3.38 eV) in water [@Koziol1965; @Sikorska2005]. The present value for the gas phase is 346 nm (3.59 eV). Calculations [@Hasegawa2007] and measurements [@Stanley1999] find that the permanent dipole moment of the $S_2$ state is significantly higher than that of $S_0$, implying bulk polarization effects may be important. In addition, significant differences between polar aprotic solvents such as DMSO and polar protic solvents like water suggest that hydrogen bonding plays a role as well [@Zirak2009]. Calculations including both of these effects do well at reproducing the magnitude of the solvent shift of $S_0\rightarrow S_2$ [@Hasegawa2007; @Salzmann2008].
A few TD-DFT calculations have been performed on more complex flavins including the ribityl sidechain. The addition of the sidechain appears to have little influence on the HOMO and LUMO orbitals [@Sikorska2005; @Wolf2008; @Klaumuenzer2010; @Wu2010]. However, the sidechain may participate in other orbitals, notably including those involved in the $S_0\rightarrow S_2$ transition [@Wu2010; @Kammler2012]. This could affect the degree of charge transfer character in these transitions. Sikorska and coworkers reported theoretical spectra for both LF [@Sikorska2004] and RF [@Sikorska2005] and found $S_0\rightarrow S_2$ to be nearly 0.2 eV lower for RF, while the other transition energies agreed with the consensus for LF.
![Comparision of action spectrum of FAD and FMN mono-anions recorded at SepI. The spectrum for FAD was recorded monitoring the photo-induced yield of daughter ions with 542 $m/z$ (lumichrome loss, same data as main text). For FMN, the yield of 169 $m/z$ (loss of formylmethylflavin) was monitored.[]{data-label="fig_fmn"}](fad_fmn.pdf){width="0.99\columnwidth"}
Interpretation of the UV portion of the absorption profile is complicated by the presence of the adenine moiety in FAD and the relative lack of calculations and solution-phase data in this spectral region. Most calculations find the $S_0\rightarrow S_3$ band of the flavin chromophore to be composed of two transitions (labeled $S_{3a}$ and $S_{3b}$ in Table \[tab\_gas\]), which are not resolved in the present experiment. Adenine absorbs at wavelengths similar to those of flavins, with band maxima near 250 and 200 nm in the gas phase [@Li1987], but with lower absorption cross sections (in solution) [@Islam2003]. We are aware of no modern quantum chemical calculations of the full FAD system. To add another point of comparison, we recorded a PID action spectrum of FMN (which lacks the adenine part) mono-anions at SepI. Figure \[fig\_fmn\] shows the PID action spectra of FAD and FMN mono-anions recorded at SepI. The FAD spectrum is the same data presented in Figure \[fig\_abs\]; the solid line is a 5-point moving average. The action spectrum for FMN (parent mass 455 $m/z$) was recorded monitoring the yield of daughter ions with 169 $m/z$, the most prominent peak in the PID mass spectrum (not shown). This corresponds to the loss of formylmethylflavin, and was determined to be a 1-photon process. The error bars are the standard deviation of 6 individual scans. Comparing the action spectra, the presence of adenine leads to an apparent blue-shift in the $S_0\rightarrow S_3$ and $S_0\rightarrow S_4$ transitions of about 0.05 and 0.2 eV, respectively. The positions of the band maxima are given in Table \[tab\_gas\]. These results imply a larger discrepancy between theory and experiment for FMN than for FAD. It should be kept in mind, however, that the dissociation rates and competition with other channels like thermionic electron emission could be different for FAD and FMN, potentially skewing the absorption profiles. It is also not obvious whether the differences in the action spectra are due to absorption by adenine or to perturbation of the electronic structure of the flavin chromophore. In relation to this, we note that the present measurement of the $S_0\rightarrow S_1$ band maximum of FAD is slightly red-shifted (by $\sim 0.05$ eV) with respect to most solution-phase measurements of other flavins that lack the adenine moiety [@Sikorska2004; @Sikorska2005; @Zirak2009]. This shift (between FAD and flavins without adenine) is also found in solution-phase measurements [@Islam2003; @Hussain2016]. As adenine does not absorb in this spectral region, this shift may point to a small stabilizing effect of the adenine moiety on the ground state of the flavin chromophore in FAD.
Luminescence Results and Discussion
===================================
![Fluorescence spectrum of FAD mono-anions *in vacuo* (black, excitation wavelength 445 nm) and in aqueous solution, adapted from Islam *et al.* [@Islam2003] (red, excitation wavelength 428 nm).[]{data-label="fig_lumi"}](fad_luna.pdf){width="0.99\columnwidth"}
Figure \[fig\_lumi\] shows the luminescence spectrum of FAD mono-anions *in vacuo* excited at 445 nm, as well as that in aqueous solution (adapted from Islam *et al.* [@Islam2003]). Both spectra represent $S_1\rightarrow S_0$ fluorescence. To our knowledge, Figure \[fig\_lumi\] is the first reported gas-phase fluorescence spectrum of a completely bare (tag-free), naturally occurring biomolecular ion in its physiological charge state. The light grey line is the raw data from the CCD and the solid black line is a fit to the functional form
$$y=A\exp\left[-\exp\left(-(x-x_0)/w\right)-(x-x_0)/w\right],$$
as recommended by Greisch *et al.* [@Greisch2014] as an empirical tool for characterizing asymmetric luminescence bands. The fit gives the position of the band maximum $x_0=525\pm 2$ nm (2.37 eV). The position of the fluorescence maximum in water is 541 nm [@Islam2003], which is consistent with other flavins in water [@Zirak2009; @Weigel2008]. In contrast to the $S_0\rightarrow S_1$ absorption band maximum, the fluorescence band maximum of flavins varies significantly with solvent polarity, from 542 nm in water to 509.5 nm in benzene for riboflavin [@Zirak2009]. This suggests that the fluorescence solvatochromism is due to stabilization of the more polarizable excited state by solvent dipole rearrangement. The gas-phase Stokes shift of 3000 cm$^{-1}$ (0.37 eV) is consistent with that of riboflavin in the least polar solvents such as chloroform [@Zirak2009]. In polar, protic solvents, Stokes shifts close to 4000 cm$^{-1}$ are observed [@Zirak2009; @Weigel2008]. We note that the emission maximum and Stokes shift for riboflavin in DMSO are more similar to those in apolar solvents than in polar protic ones [@Zirak2009], again suggesting that micro-solvation effects (*e.g.* H-bonding) play some role.
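A fit of this form can be performed with standard least-squares tools. The sketch below, using `scipy.optimize.curve_fit`, is illustrative only: `wl` and `counts` stand for the background-subtracted emission spectrum, and the initial-guess values are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def lineshape(x, A, x0, w):
    """Empirical asymmetric band profile; its maximum lies at x = x0."""
    t = (x - x0) / w
    return A * np.exp(-np.exp(-t) - t)

# wl, counts = ...  # wavelength (nm) and background-subtracted CCD signal
# popt, pcov = curve_fit(lineshape, wl, counts, p0=[counts.max(), 520.0, 30.0])
# band_max, band_max_err = popt[1], np.sqrt(pcov[1, 1])
```

Since the profile peaks at $x=x_0$, the fitted $x_0$ directly gives the band maximum quoted above.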
Notably, the gas-phase Stokes shift for FAD is significantly larger than those measured for other complex molecular ions such as xanthene [@McQueen2010; @Forbes2011; @Kjaer2017] and phenoxazine [@Stockett2016b] laser dyes. Gas-phase Stokes shifts for rhodamine dyes, for example, have been found to range from 900 cm$^{-1}$ [@Wellman2015] to less than 500 cm$^{-1}$ [@Forbes2011]. The analysis of Klaumünzer *et al.* [@Klaumuenzer2010] indicates that several stretching modes of the iso-alloxazine chromophore with frequencies in the range 1400–1600 cm$^{-1}$ dominate the vibronic spectra of flavins and predicts a Stokes shift of 3400 cm$^{-1}$. As has been observed for other complex molecules [@Stockett2016b], these vibrational frequencies correspond to about half the value of the gas-phase Stokes shift. This helps us understand the discrepancy between calculated vertical transition energies and observed absorption band maxima, the correspondence between which relies on the assumption of high vibrational excitation upon absorption such that the vibrational wavefunctions peak at the classical turning points [@Davidson1998]. Although the ground and excited state structures of flavins may be significantly displaced from each other, the difference in energy between the vertical and adiabatic (0-0) transition is covered by only a few quanta of the most strongly coupled vibrational modes and thus fails to meet this criterion.
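As a consistency check, the quoted gas-phase Stokes shift follows directly from the measured band maxima (2.74 eV for absorption, from Table \[tab\_gas\], and 525 nm for emission):

```python
EV_TO_INVCM = 8065.54  # 1 eV expressed in cm^-1

def stokes_shift_invcm(abs_max_eV, em_max_nm):
    """Stokes shift from the absorption maximum (eV) and emission maximum (nm)."""
    return abs_max_eV * EV_TO_INVCM - 1.0e7 / em_max_nm

# stokes_shift_invcm(2.74, 525)  # ~3050 cm^-1, i.e. the ~3000 cm^-1 quoted above
```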
The fluorescence signal detected from FAD mono-anions is very weak. Determination of absolute fluorescence quantum yields of trapped ions is experimentally challenging as key parameters such as the number of ions in the trap and the overlap between the laser beam and the ion cloud are difficult to measure precisely. Instead, the “brightness” (the total integrated fluorescence signal per laser shot) is often used to compare the luminescence from different ions recorded under similar experimental conditions [@Yao2013; @Stockett2017a]. The brightness of FAD mono-anions is at least an order of magnitude lower than that of resorufin, an anionic xanthene dye [@Kjaer2017]. If we assume that the fluorescence quantum yield of gas-phase resorufin is the same as in aqueous solution (0.75 [@Bueno2002]), we can roughly estimate the gas-phase quantum yield of FAD to be about 0.1 (for additional details, see Supplementary Material).
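The scaling behind this estimate is a simple ratio of brightnesses anchored to the reference quantum yield. The sketch below shows only that bare scaling and omits the corrections described in the Supplementary Material, so it should not be read as the actual analysis used here:

```python
def quantum_yield_estimate(brightness_sample, brightness_ref, phi_ref):
    """Quantum-yield estimate by brightness comparison with a reference ion.

    Corrections for the relative excitation rates and collection
    efficiencies (see the Supplementary Material) are omitted.
    """
    return phi_ref * brightness_sample / brightness_ref
```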
While the quantum yields of riboflavin and FMN in neutral solutions are reasonably high (around 0.26 [@Drossler2002; @Islam2003; @Sikorska2005; @Zirak2009]), FAD is thought to exist in a stacked conformation in which electron transfer from the adenine moiety quenches the flavin excited state [@Berg2002; @Li2009]. This leads to a reduced fluorescence quantum yield of 0.033 [@Islam2003]. At reduced pH, the un-stacked conformation becomes dominant and the quantum yield rises to 0.13 [@Islam2003].
While our estimate of the gas-phase quantum yield is too crude to distinguish between stacked and non-stacked conformations, it is interesting to note that the quantum yield does not appear to be significantly lower than in solution. In contrast, *no* fluorescence was detected from FAD di-anions or, more remarkably, from FMN anions using the LUNA instrument. Using the same brightness comparison with resorufin, we can estimate an upper limit on the gas-phase quantum yield of FMN of 0.04, much less than in solution. This suggests that these ions may decay through some non-radiative channel (*e.g.* electron detachment or inter-system crossing) that is not competitive for FAD mono-anions. Changes in fluorescence quantum yield upon desolvation have been reported for other molecules [@McQueen2010; @Sciuto2015] and may be a more sensitive indicator of changes in photophysics than transition energies.
Conclusion
==========
We have reported the photo-induced dissociation action spectrum and the luminescence spectrum of FAD mono-anions *in vacuo*. These results confirm that the vertical transition energies calculated using various electronic structure methods overestimate the intrinsic absorption band maxima. Neglect of vibronic structure, rather than solvent effects, is the cause of this discrepancy. Bulk polarization and micro-solvation effects appear to be important only for the $S_0\rightarrow S_2$ transition, in agreement with calculations which indicate that this transition has significant charge-transfer character. Luminescence occurs with a considerably larger Stokes shift than previously reported for other complex molecular ions *in vacuo*, in what again appears to be a vibronic effect. No emission was seen from two other flavin anions. The observed micro-environmental sensitivity of the fluorescence portends a prominent role for gas-phase luminescence spectroscopy in unraveling the intrinsic photophysics of complex biomolecules.
Supplementary Material {#supplementary-material .unnumbered}
======================
The Supplementary Material includes additional experimental details, a tabulation of previously published calculations of transition energies of lumiflavin, and a description of our approach to estimating the gas-phase quantum yield of FAD.
Acknowledgements {#acknowledgements .unnumbered}
================
This work was supported by the Swedish Research Council (grant numbers 2016-03675 and 2016-04181) and the Danish Council for Independent Research (4181-00048B). LG thanks H. Zettergren of Stockholm University.
[^1]: The SAC-CI calculations by Hasegawa *et al.* [@Hasegawa2007] for LF, while agreeing with the qualitative description of the orbitals, yield significantly different transition energies, particularly for $S_0\rightarrow S_1$ which is 0.5 eV below the other calculations for LF and 0.28 eV below the present experimental value for FAD. This value is not included in the consensus spectrum in Figure \[fig\_abs\].
|
---
abstract: 'This paper proposes a supervisory control structure for networked systems with time-varying delays. The control structure, in which a supervisor triggers the most appropriate controller from a multi-controller unit, aims at improving the closed-loop performance relative to what can be obtained using a single robust controller. Our analysis considers average dwell-time switching and is based on a novel multiple Lyapunov-Krasovskii functional. We develop stability conditions that can be verified by semi-definite programming, and show that the associated state feedback synthesis problem also can be solved using convex optimization tools. Extensions of the analysis and synthesis procedures to the case when the evolution of the delay mode is described by a Markov chain are also developed. Simulations on small and large-scale networked control systems are used to illustrate the effectiveness of our approach.'
address:
- 'Automatic Control Laboratory, School of Electrical Engineering, KTH Royal Institute of Technology'
- 'Control Theory and Systems Biology, Department of Biosystems Science and Engineering, Swiss Federal Institute of Technology Zürich'
author:
- Burak Demirel
- Corentin Briat
- Mikael Johansson
bibliography:
- 'ACC12bibtex.bib'
nocite:
- '[@MoT:07; @HNX:07]'
- '[@HCB:06; @JFK+:09; @DBJ:12; @HDR+:11]'
- '[@XHH:00; @YWH+:06; @BLL:10; @WCW:06]'
- '[@WCW:06]'
title: 'Deterministic and Stochastic Approaches to Supervisory Control Design for Networked Systems with Time-Varying Communication Delays'
---
switched systems ,time-delay systems ,stochastic switched systems ,linear matrix inequality
Introduction {#sec:intro}
============
Networked control systems are distributed systems that use communication networks to exchange information between sensors, controllers and actuators [@ZBP:01; @WYB:02]. The networked control system architecture promises advantages in terms of increased flexibility, reduced wiring and lower maintenance costs, and is finding its way into a wide range of applications, from automobiles and transportation to process control and power systems, see *e.g.*, [@WYB:02] – [@NSS+:05].
The use of a shared communication medium introduces time-varying delays and information losses which may deteriorate the system’s performance, even to the point where the closed-loop system becomes unstable. A conservative approach is to design a robust controller that considers the worst-case delay. However, this might cause poor performance if the actual delay is only rarely close to its upper bound. Therefore, there is currently a renewed interest in adapting the control law to the delay evolution (*e.g.*, [@CHB:06] – [@KJF+:12]). Inspired by the communication delays that we have experienced in applications, see Figure \[fig:experimental\_delay\], we design a supervisory control scheme in the sense of [@Mor:96]. This control architecture consists of a finite number of controllers, each designed for a bounded delay variation (corresponding, *e.g.*, to low, medium and high network load) and a supervisor which orchestrates the switching among them.
The analysis of switched systems with fixed time-delays is challenging and has attracted significant attention in the literature, *e.g.*, [@HCB:06; @XiW:05; @SZH:06; @YaO:08]. Only recently, however, attempts to analyze switched systems with *time-varying* delays have begun to appear. Distinctively, [@JFK+:09] constructed multiple Lyapunov-Krasovskii functionals that guarantee closed-loop stability under a minimum dwell-time condition for interval time-varying delays. An alternative approach to deal with time-varying delays is to assume that they evolve according to a Markov chain and develop conditions that ensure (mean-square) stability, see *e.g.*, [@Nil:98] – [@BeB:98]. The work in [@Nil:98] assumed that the time delay never exceeds a sampling interval, modeled its evolution as a Markov process, and derived the associated LQG-optimal controller. However, this formulation is not able to deal with longer time-delays. The work in [@XHH:00] proposed a discrete-time Markovian jump linear system formulation, which allows longer (but bounded) time delays, and posed the design of a mode-dependent controller as a non-convex optimization problem. Complementary to these discrete-time formulations, [@BLL:10] – [@BeB:98] have investigated the mean-square stability of continuous-time linear systems with random time delays using stochastic Lyapunov-Krasovskii functionals. The papers [@CHB:08; @CHB:08b] have applied these techniques to networked control systems with random communication delays and synthesized mode-dependent controllers.
![The figure shows a recorded delay trace from the multi-hop wireless networking protocol used for networked control in [@WPJ+:07]. The delay exhibits distinct mode changes (here corresponding to one, two or three-hop communication) and varies around its piecewise constant mode-dependent mean. Similar behavior was reported by [@KTK+:08], who measured the delay of sensor data sent over a CAN bus. Their delay varied between 10-20 ms, but increased abruptly to around 150 ms under certain network conditions.[]{data-label="fig:experimental_delay"}](elsarticle-template-1-num-figure0)
In this paper, we analyze our proposed supervisory control structure by combining a novel multiple Lyapunov-Krasovskii functional with the assumption of average dwell-time switching. The average dwell-time concept, introduced in [@HeM:99], is a natural deterministic abstraction of load changes in communication networks, where minimal or maximal durations for certain traffic conditions are hard to guarantee. We demonstrate that the existence of a multiple Lyapunov-Krasovskii functional that ensures closed-loop stability under average dwell-time switching can be verified by solving a set of linear matrix inequalities. In addition, we show that the state feedback synthesis problem for the proposed supervisory control structure can also be solved via semi-definite programming. A similar analysis for Markovian time-delays is developed based on a slightly less powerful Lyapunov-Krasovskii functional than the one underpinning our deterministic analysis. Also in this case, we manage to design mode-dependent state feedback controllers using convex optimization.
The organization of the paper is as follows. Section 2.1 presents the supervisory control structure and formalizes the relevant analysis and synthesis problems in a deterministic setting. In Section 2.2, multiple Lyapunov-Krasovskii functionals are constructed for establishing exponential stability of supervisory control systems under average dwell-time switching. Additionally, LMI conditions that verify the existence of such a multiple Lyapunov functional are derived. State-feedback synthesis conditions are also given in Section 2.3. Section 3 develops a similar analysis framework for stochastic delays. Section 3.1 formulates the switched control system problem introduced in Section 2 as a Markovian jump linear system. Section 3.2 develops exponential mean-square stability conditions for the supervisory control system under stochastic delays. The corresponding state-feedback synthesis conditions are proposed in Section 3.3. Numerical examples are used to demonstrate the effectiveness of the proposed techniques in Section 4. Finally, Section 5 concludes the paper.
*Notation:* Throughout this paper, $\mathbb{R}^{n}$ denotes the *n*–dimensional Euclidean space, $\mathbb{R}^{m \times n}$ is the set of all $m \times n$ real matrices, and $\mathbb{S}^{n}_{++}$ denotes the cone of real symmetric positive definite matrices of dimension $n$. For a real square matrix $M$ we define $M^{\mathcal{S}}\triangleq M+M^{\intercal}$ where $M^{\intercal}$ is its transpose. Additionally, ’$\star$’ represents symmetric terms in symmetric matrices and in quadratic forms, $\otimes$ denotes the Kronecker product, and $\mathbb{R}_{\geq 0}~\big(\mathbb{R}_{>0}\big)$ is the set of nonnegative (positive) real numbers. Lastly, $\mathrm{col}(\lambda_{i})$ is the column vector with components $\lambda_{i}$.
![The general block scheme of the proposed supervisory control structure.[]{data-label="fig:control_structure"}](SupervisorBlockDiagram "fig:"){width="0.5\columnwidth"}\
Deterministic Switched Systems
==============================
System Modeling {#sec:sysmodel}
---------------
We consider the supervisory control system in Figure 2. Here, $G$ is the plant to be controlled, described by $$\dot{x}(t) = Ax(t) + Bu(t)$$ where $A\in \mathbb{R}^{n\times n}$ and $B\in \mathbb{R}^{n\times m}$. The network is modelled as a time-varying delay $\tau_{\sigma(t)}(t)$ where $\sigma:\mathbb{R}_{\geq0}\mapsto\mathcal{M}$ with $\mathcal{M}=\{1,\ldots,M\}$ is the mode (operating condition) of the network. We assume that the delay in each mode is bounded, $$h_{1}\leq\underline{h}_{\sigma(t)}\leq\tau_{\sigma(t)}(t)\leq \overline{h}_{\sigma(t)}\leq h_{M+1}\;.$$ The multi-controller unit uses the mode signal to select and apply the corresponding mode-dependent feedback law $$u(t) = K_{\sigma(t)}x\big( t-\tau_{\sigma(t)}(t) \big).$$ In this way, the closed-loop system is described by the following switched linear system with time-varying delay $$\begin{aligned}
%\renewcommand{\arraystretch}{1.0}
\setlength{\arraycolsep}{2.0pt}
\begin{array}{rll}
\Sigma_{1}: & \dot{x}(t)=Ax(t)+A_{\sigma(t)}x\big(t-\tau_{\sigma(t)}(t)\big), & \forall t\in\mathbb{R}_{\geq0} \\ \vspace*{1mm}
& x(t) = \varphi(t), & \forall t\in [-h_{M+1},0]
\end{array}\label{eq:SwitchedSys}\end{aligned}$$ where $A_{\sigma(t)}\triangleq BK_{\sigma(t)}\in\mathbb{R}^{n\times n}$ and $\varphi(t)\in \mathcal{C}\big([-h_{M+1},0],\mathbb{R}^{n}\big)$ is the initial function belonging to $\mathcal{C}\big([-h_{M+1},0],\mathbb{R}^{n}\big)$, the Banach space of continuous functions defined on $[-h_{M+1},0]$.
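To make the closed-loop model concrete, the sketch below integrates a two-mode instance of this switched delay system with a forward-Euler scheme; the plant matrix, the closed-loop matrices $A_i=BK_i$, the per-mode delays (taken constant within their intervals) and the periodic mode signal are illustrative assumptions, not data from this paper.

```python
import numpy as np

# Forward-Euler sketch of the switched delay system Sigma_1 (illustrative data only).
dt, T = 1e-3, 10.0
A  = np.array([[0.0, 1.0], [-2.0, -3.0]])        # assumed plant matrix
Ad = {1: 0.3 * np.eye(2), 2: -0.2 * np.eye(2)}   # assumed A_i = B K_i
tau = {1: 0.05, 2: 0.40}                         # constant representatives of [h_i, h_{i+1})

N = int(T / dt)
max_lag = int(max(tau.values()) / dt)
x = np.ones((N + max_lag, 2))                    # constant initial function phi(t) = [1, 1]
sigma = lambda t: 1 if (t % 4.0) < 2.0 else 2    # mode switches every 2 s

for k in range(max_lag, N + max_lag - 1):
    t = (k - max_lag) * dt
    i = sigma(t)
    lag = int(tau[i] / dt)
    x[k + 1] = x[k] + dt * (A @ x[k] + Ad[i] @ x[k - lag])

print("state at t = T:", x[-1])
```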
The system is exponentially stable under the switching signal $\sigma (t)$ if there exist positive constants $\gamma$ and $\alpha$ such that the solution $x(t)$ of the system satisfies $$\parallel x(t)\parallel \; \leq\gamma\parallel x(t_{0})\parallel_{\mathcal{C}}~e^{-\alpha (t-t_{0})}, \quad t\geq t_{0}$$ where $\parallel x(t_{0})\parallel_{\mathcal{C}}\triangleq \underset{-h_{M+1}\leq\theta\leq 0}{\sup}\{\parallel x(t_{0}+\theta)\parallel,\parallel\dot{x}(t_{0}+\theta)\parallel\}$.
In order to guarantee exponential stability, we will put restrictions on the switching signal $\sigma(t)$. Specifically, we will assume that the signal satisfies an average dwell-time condition in the following sense.
We denote the number of jumps of a switching signal $\sigma$ on the interval $(t,T)$ by $N_{\sigma}(T,t)$. Then we say that $\sigma$ has the *average dwell-time* $\tau_{a}$ if there exist two positive numbers $N_{0}$ and $\tau_{a}$ such that $$\begin{aligned}
N_{\sigma}(T,t)\leq N_{0}+\frac{T-t}{\tau_{a}}\,,\quad \forall T>t\geq 0\,.\end{aligned}$$ The set of all switching signals satisfying the above condition is denoted by $\mathcal{S}[\tau_{a}]$.
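For a recorded sequence of switching instants, the average dwell-time condition can be checked numerically; the helper below is a sketch that tests only the binding configurations ($t$ just before switch $a$, $T$ just after switch $b\geq a$), with illustrative switching times and $N_0=1$.

```python
def satisfies_adt(switch_times, tau_a, N0=1.0):
    """Check N_sigma(T, t) <= N0 + (T - t)/tau_a at the binding pairs of switching instants."""
    s = sorted(switch_times)
    for a in range(len(s)):
        for b in range(a, len(s)):
            if (b - a + 1) > N0 + (s[b] - s[a]) / tau_a:
                return False
    return True

# four switches within six seconds, candidate average dwell-time tau_a = 1 s
print(satisfies_adt([0.5, 1.7, 3.1, 6.0], tau_a=1.0, N0=1.0))  # True
```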
We consider two specific problems in this section. The first is to verify that the switched linear system is exponentially stable under average dwell-time switching. The second one is to design state feedback controllers for each mode such that the supervisory control system is exponentially stable with guaranteed convergence rate.
Exponential Stability Analysis Using Multiple Lyapunov – Krasovskii Functionals {#subsec:DeterStability}
-------------------------------------------------------------------------------
The exponential stability of the switched system is equivalent to the existence of a scalar $\alpha\in\mathbb{R}_{>0}$ such that $e^{\alpha t}||x(t)||$ converges asymptotically to zero for each $\sigma\in\mathcal{S}[\tau_{a}]$. To characterize the rate of convergence of the system , let us consider the change of variables $\xi(t)\triangleq e^{\alpha t}x(t)$. Then we have: $$\begin{aligned}
\dot{\xi}(t) =\alpha e^{\alpha t}x(t)+e^{\alpha t}\dot{x}(t) & = \alpha\xi(t)+e^{\alpha t}\big[ Ax(t)+A_{i}x\big(t-\tau_{i}(t)\big) \big] \nonumber\\
&=(\alpha I_{n}+A)\xi(t)+e^{\alpha \tau_{i}(t)}A_{i}\xi\big(t-\tau_{i}(t)\big)\,, \label{SwitchedSysTV}\end{aligned}$$ where $\tau_{i}(t)\in[h_{i},h_{i+1})~\forall i\in\mathcal{M}$. Note that the asymptotic stability of implies that the original system is exponentially stable with decay rate $\alpha$. However, this change of variables introduces a time-varying coefficient in the switched system model . Similar to [@SDR:04], we can exploit the (mode-dependent) bounds on the delay and rewrite in a polytopic form. Specifically, we express the term $e^{\alpha\tau_{i}(t)}$ as a convex combination of the bounds $e^{\alpha h_{i}}$ and $e^{\alpha h_{i+1}}$: $$e^{\alpha\tau_{i}(t)}=\lambda_{1}(t)e^{\alpha h_{i}}+\lambda_{2}(t)e^{\alpha h_{i+1}}\,,\quad \forall i\in\mathcal{M} \;,$$ where $\lambda_{1}(t),\lambda_{2}(t)\in\mathbb{R}_{\geq 0}$ and $\lambda_{1}(t)+\lambda_{2}(t)=1,~\forall t\in\mathbb{R}_{\geq 0}$. The delayed differential equation is then rewritten as $$\begin{aligned}
\dot{\xi}(t)=A_{\alpha}\xi(t)+\sum_{j=1}^{2}\lambda_{j}(t)A_{\alpha_{ij}}\xi\big(t-\tau_{i}(t)\big) \;, \label{eq:SwitchedSysTV2}\end{aligned}$$ where $A_{\alpha}\triangleq (\alpha I_{n}+A)$ and $A_{\alpha_{ij}}\triangleq\varrho_{ij}A_{i}$ with $\varrho_{ij}\triangleq e^{\alpha h_{i+j-1}}$ when $\tau_{i}(t)\in [h_{i},h_{i+1}),~\forall i\in\mathcal{M}$.
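A small numerical sketch of this convex decomposition (with illustrative values of $\tau$, $h_i$, $h_{i+1}$ and $\alpha$) reads:

```python
import numpy as np

def convex_weights(tau, h_lo, h_hi, alpha):
    """lambda_1, lambda_2 >= 0 with lambda_1 + lambda_2 = 1 and
    exp(alpha*tau) = lambda_1*exp(alpha*h_lo) + lambda_2*exp(alpha*h_hi)."""
    e_lo, e_hi, e_tau = np.exp(alpha * h_lo), np.exp(alpha * h_hi), np.exp(alpha * tau)
    lam2 = (e_tau - e_lo) / (e_hi - e_lo)
    return 1.0 - lam2, lam2

lam1, lam2 = convex_weights(tau=0.25, h_lo=0.1, h_hi=0.4, alpha=0.8)
print(lam1, lam2)   # both lie in [0, 1] since h_lo <= tau <= h_hi and exp is monotone
```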
We combine a novel multiple Lyapunov-Krasovskii functional with the dwell-time approach of [@HeM:99] to establish exponential stability of the switched system . For ease of notation, we state the theorem for the case of two delay modes only, but the approach extends immediately to a system with $M$ modes.
\[thm:Switched\_System\_Stability\] There exists a finite constant $\tau_{a}$ such that the switched linear system is exponentially stable over $\mathcal{S}[\tau_{a}]$ with a given decay rate $\alpha>0$ for time-varying delays $\tau_{i}(t)\in[h_{i},h_{i+1})$, $\forall i\in\{1,2\}$ if there exist real matrices $P_{i}, Q_{ik}, R_{ik}, S_{ik}, T_{ik}\in\mathbb{S}_{++}^{n}$ and $Z_{ik}\in\mathbb{R}^{n\times n},~\forall i,k\in\{1,2\}$ and a constant scalar $\mu>1$ satisfying $P_{i}\leq\mu P_{j}$, $Q_{ik}\leq\mu Q_{jk}$, $R_{ik}\leq\mu R_{jk}$, $S_{ik}\leq\mu S_{jk}$ and $T_{ik}\leq\mu T_{jk}$, $\forall i, j,k\in\{1,2\}$ such that the LMIs $$\begin{aligned}
\renewcommand{\arraystretch}{1.2}
\left[\begin{array}{c:cc:ccc:ccccccc}
\Phi_{1} & P_{1}A_{\alpha_{1j}} & 0 & S_{11} & S_{12} & 0 & h_{1}A_{\alpha}^{\intercal}S_{11} & \delta_{1}A_{\alpha}^{\intercal}T_{11} & h_{2}A_{\alpha}^{\intercal}S_{12} & \delta_{2}A_{\alpha}^{\intercal}T_{12}\\ \hdashline
\star & -\Upsilon_{11}^{\mathcal{S}} & 0 & \Upsilon_{11}^{\intercal} & \Upsilon_{11} & 0 & h_{1}A_{\alpha_{1j}}^{\intercal}S_{11} & \delta_{1}A_{\alpha_{1j}}^{\intercal}T_{11} & h_{2}A_{\alpha_{1j}}^{\intercal}S_{12} & \delta_{2}A_{\alpha_{1j}}^{\intercal}T_{12} \\
\star & \star & -\Upsilon_{12}^{\mathcal{S}} & 0 & \Upsilon_{12}^{\intercal} & \Upsilon_{12} & 0 & 0 & 0 & 0 \\ \hdashline
\star & \star & \star & -\Xi_{11} & Z_{11} & 0 & 0 & 0 & 0 & 0 \\
\star & \star & \star & \star & -\Xi_{12} & Z_{12} & 0 & 0 & 0 & 0\\
\star & \star & \star & \star & \star & -\Xi_{13} & 0 & 0 & 0 & 0 \\ \hdashline
\star & \star & \star & \star & \star & \star & -S_{11} & 0 & 0 & 0 \\
\star & \star & \star & \star & \star & \star & \star & -T_{11} & 0 & 0 \\
\star & \star & \star & \star & \star & \star & \star & \star & -S_{12} & 0 \\
\star & \star & \star & \star & \star & \star & \star & \star & \star & -T_{12}
\end{array}\right] < 0\label{eq:thm1a} \\
\renewcommand{\arraystretch}{1.2}
\left[\begin{array}{c:cc:ccc:ccccccc}
\Phi_{2} & 0 & P_{2}A_{\alpha_{2j}} & S_{21} & S_{22} & 0 & h_{1}A_{\alpha}^{\intercal}S_{21} & \delta_{1}A_{\alpha}^{\intercal}T_{21} & h_{2}A_{\alpha}^{\intercal}S_{22} & \delta_{2}A_{\alpha}^{\intercal}T_{22}\\ \hdashline
\star & -\Upsilon_{21}^{\mathcal{S}} & 0 & \Upsilon_{21}^{\intercal} & \Upsilon_{21} & 0 & 0 & 0 & 0 & 0 \\
\star & \star & -\Upsilon_{22}^{\mathcal{S}} & 0 & \Upsilon_{22}^{\intercal} &\Upsilon_{22} & h_{1}A_{\alpha_{2j}}^{\intercal}S_{21} & \delta_{1}A_{\alpha_{2j}}^{\intercal}T_{21} & h_{2}A_{\alpha_{2j}}^{\intercal}S_{22} & \delta_{2}A_{\alpha_{2j}}^{\intercal}T_{22} \\ \hdashline
\star & \star & \star & -\Xi_{21} & Z_{21} & 0 & 0 & 0 & 0 & 0 \\
\star & \star & \star & \star & -\Xi_{22} & Z_{22} & 0 & 0 & 0 & 0\\
\star & \star & \star & \star & \star & -\Xi_{23} & 0 & 0 & 0 & 0 \\ \hdashline
\star & \star & \star & \star & \star & \star & -S_{21} & 0 & 0 & 0 \\
\star & \star & \star & \star & \star & \star & \star & -T_{21} & 0 & 0 \\
\star & \star & \star & \star & \star & \star & \star & \star & -S_{22} & 0 \\
\star & \star & \star & \star & \star & \star & \star & \star & \star & -T_{22}
\end{array}\right] < 0\label{eq:thm1b} \end{aligned}$$ $$\begin{aligned}
\begin{bmatrix}
T_{ij}&Z_{ij} \\ \star&T_{ij}
\end{bmatrix} \geq 0 \label{eq:thm1c} \end{aligned}$$ where $\Phi_{i}=P_{i}A_{\alpha}+A_{\alpha}^{\intercal}P_{i}+\sum_{k=1}^{2}\big(Q_{ik}+R_{ik}-S_{ik}\big)$, $\Upsilon_{i1}=T_{i1}-Z_{i1}$, $\Upsilon_{i2}=T_{i2}-Z_{i2}$, $\Xi_{i1}=Q_{i1}+T_{i1}+S_{i1}$, $\Xi_{i2}=Q_{i2}+S_{i2}+\sum_{k=1}^{2}T_{ik}+R_{i1}$ and $\Xi_{i3}=R_{i2}+T_{i2}$ hold for all $i,j\in\{1,2\}$.
**Proof.** Our claim follows if we can find Lyapunov-Krasovskii functionals $V_{i}(t)$ that guarantee decay rate $\alpha$ while in mode $i$ and a constant $\mu >1$ such that $V_{i}(t)\leq\mu V_{j}(t)~\forall i,j\in\{1,2\}$. Then, by [@HeM:99 Theorem 1], is exponentially stable for every switching signal $\sigma$ with average dwell-time $\tau_{a}>\tau_{a}^{\ast}=\frac{\ln\mu}{\alpha}$.
We consider the following Lyapunov-Krasovskii functional, inspired from [@Sha:08], $V_{i}:\mathbb{R}^{n}\rightarrow\mathbb{R}_{\geq 0},~\forall i\in\mathcal{M}$ : $$\begin{gathered}
V_{i}(t)=\xi^{\intercal}(t)P_{i}\xi(t) + \sum_{k=1}^{2}\int_{t-h_{k}}^{t}\xi^{\intercal}(s)Q_{ik}\xi(s)ds +\sum_{k=1}^{2}\int_{t-h_{k+1}}^{t}\xi^{\intercal}(s)R_{ik}\xi(s)ds \\
+ \sum_{k=1}^{2}\int_{-h_{k}}^{0}\int_{t+s}^{t}h_{k}\dot{\xi}^{\intercal}(\theta)S_{ik}\dot{\xi}(\theta)d\theta ds
+ \sum_{k=1}^{2}\int_{-h_{k+1}}^{-h_{k}}\int_{t+s}^{t}\overbrace{(h_{k+1}-h_{k})}^{\displaystyle{\delta_{k}}}\dot{\xi}^{\intercal}(\theta)T_{ik}\dot{\xi}(\theta)d\theta ds \;. \label{eq:LyapCand}\end{gathered}$$ The derivative of $V_{i}(t)$ along the trajectory of the system is given by $$\begin{gathered}
\dot{V}_{i}(t) = 2\dot{\xi}^{\intercal}(t)P_{i}\xi(t) + \xi^{\intercal}(t)\bigg[\sum_{k=1}^{2}\big( Q_{ik}+R_{ik} \big)\bigg]\xi(t) -\sum_{k=1}^{2}\xi^{\intercal}(t-h_{k})Q_{ik} \xi(t-h_{k})
-\sum_{k=1}^{2}\xi^{\intercal}(t-h_{k+1})R_{ik}\xi(t-h_{k+1}) \\
+\dot{\xi}^{\intercal}(t)\bigg[\sum_{k=1}^{2}\big(h_{k}^{2}S_{ik}+\delta_{k}^{2}T_{ik}\big)\bigg]\dot{\xi}(t) - \sum_{k=1}^{2}\int_{t-h_{k}}^{t}h_{k}\dot{\xi}^{\intercal}(s)S_{ik}\dot{\xi}(s)ds - \sum_{k=1}^{2}\int_{t-h_{k+1}}^{t-h_{k}}\delta_{k}\dot{\xi}^{\intercal}(s)T_{ik}\dot{\xi}(s)ds \;. \label{eq:dotV}\end{gathered}$$ Using Jensen’s inequality [@GKC:03], the integral term $\int_{t-h_{k}}^{t}h_{k}\dot{\xi}^{\intercal}(s)S_{ik}\dot{\xi}(s)ds$ in the preceding equality is bounded as $$-\int_{t-h_{k}}^{t}h_{k}\dot{\xi}^{\intercal}(s) S_{ik}\dot{\xi}(s)ds \leq -\big[\xi(t)-\xi(t-h_{k})\big]^{\intercal}S_{ik}\big[\xi(t)-\xi(t-h_{k})\big] \;.
\label{eq:Jensen}$$ To upperbound the integral term $\int_{t-h_{k+1}}^{t-h_{k}}\delta_{k}\dot{\xi}^{\intercal}(s)T_{ik}\dot{\xi}(s)ds$, we use the reciprocally convex combination from [@PKJ:11]: $$\begin{aligned}
-\int_{t-h_{k+1}}^{t-h_{k}}\delta_{k}\dot{\xi}^{\intercal}(s) T_{ik}\dot{\xi}(s)ds = &\; -\int_{t-h_{k+1}}^{t-\tau_{k}(t)}\delta_{k}\dot{\xi}^{\intercal}(s)T_{ik}\dot{\xi}(s)ds
-\int_{t-\tau_{k}(t)}^{t-h_{k}}\delta_{k}\dot{\xi}^{\intercal}(s)T_{ik}\dot{\xi}(s)ds \;, \nonumber \\
\leq &\; -
\begin{bmatrix} \xi(t-h_{k})-\xi(t-\tau_{k}(t)) \\ \xi(t-\tau_{k}(t))-\xi(t-h_{k+1}) \end{bmatrix}^{\intercal}
\underbrace{\begin{bmatrix} T_{ik} & Z_{ik} \\ \star & T_{ik} \end{bmatrix}}_{\geq 0}
\begin{bmatrix} \xi(t-h_{k})-\xi(t-\tau_{k}(t)) \\ \xi(t-\tau_{k}(t))-\xi(t-h_{k+1}) \end{bmatrix}\;.
\label{eq:Reciprocal}\end{aligned}$$ Substituting and into , we compute an upper bound for the Lyapunov functional as $$\begin{aligned}
\renewcommand{\arraystretch}{2}
\setlength{\arraycolsep}{1.2pt}
\begin{array}{rl}
\dot{V}_{i}(t)\leq & \xi^{\intercal}(t)\bigg[A_{\alpha}^{\intercal}P_{i}+P_{i}A_{\alpha}+\sum_{k=1}^{2}\big(Q_{ik}+R_{ik}-S_{ik}\big) + A_{\alpha}^{\intercal}\sum_{k=1}^{2}\big(h_{k}^{2}S_{ik}+\delta_{k}^{2}T_{ik}\big)A_{\alpha}\bigg]\xi(t) \\
& +2\xi^{\intercal}(t)\bigg[P_{i}\sum_{j=1}^{2}\big(\lambda_{j}(t)A_{\alpha_{ij}}\big) + A_{\alpha}^{\intercal}\sum_{k=1}^{2}\big(h_{k}^{2}S_{ik} + \delta_{k}^{2}T_{ik}\big)\sum_{j=1}^{2}\big(\lambda_{j}(t)A_{\alpha_{ij}}\big)\bigg]\xi(t-\tau_{i}(t)) \\
& +\xi^{\intercal}(t-\tau_{i}(t))\sum_{j=1}^{2}\big(\lambda_{j}(t)A_{\alpha_{ij}}\big)^{\intercal}\bigg[\sum_{k=1}^{2}\big(h_{k}^{2}S_{ik}+\delta_{k}^{2}T_{ik}\big)\bigg]\sum_{j=1}^{2}\big(\lambda_{j}(t)A_{\alpha_{ij}}\big)\xi(t-\tau_{i}(t)) \\
& -\sum_{k=1}^{2}\xi^{\intercal}(t-\tau_{k}(t))\Big(2T_{ik}-Z_{ik}-Z_{ik}^{\intercal}\Big)\xi(t-\tau_{k}(t)) +2\xi^{\intercal}(t)\sum_{k=1}^{2}S_{ik}\xi(t-h_{k}) \\
& -\sum_{k=1}^{2}\xi^{\intercal}(t-h_{k})\Big(Q_{ik}+S_{ik}+T_{ik}\Big)\xi(t-h_{k}) - \sum_{k=1}^{2}\xi^{\intercal}(t-h_{k+1})\Big(R_{ik}+T_{ik}\Big)\xi(t-h_{k+1}) \\
& +2\sum_{k=1}^{2}\xi^{\intercal}(t-\tau_{k}(t))\Big(T_{ik}-Z_{ik}\Big)\xi(t-h_{k+1}) + 2\sum_{k=1}^{2}\xi^{\intercal}(t-h_{k})Z_{ik}\xi(t-h_{k+1}) \\
& + 2\sum_{k=1}^{2}\xi^{\intercal}(t-\tau_{k}(t))\Big(T_{ik}-Z_{ik}^{\intercal}\Big)\xi(t-h_{k}) \triangleq \psi^{\intercal}(t)\widetilde{\Gamma}_{i}(t)\psi(t)\;,
\end{array}\end{aligned}$$ where $\psi(t)=\mbox{col}\big\{\xi(t),\xi(t-\tau_{1}(t)),\xi(t-\tau_{2}(t)),\xi(t-h_{1}),\xi(t-h_{2}),\xi(t-h_{3})\big\}$. Note that the time derivative of $V_{i}(t)$ is bounded by a quadratic function in $\psi(t)$, *i.e.*, $$\begin{aligned}
\dot{V}_{i}(t)\leq \psi^{\intercal}(t)\widetilde{\Gamma}_{i}(t)\psi(t)\;,\end{aligned}$$ with $$\begin{aligned}
\widetilde{\Gamma}_{i}(t)=\lambda_{1}(t)\widetilde{\Gamma}_{i1} + \lambda_{2}(t)\widetilde{\Gamma}_{i2}\end{aligned}$$ for all $i\in\{1,2\}$. Then, for two different modes, we form the following two matrices: $$\renewcommand{\arraystretch}{1.15}
\setlength{\arraycolsep}{1.25pt}
\widetilde{\Gamma}_{1j}=
\left[\begin{array}{c:cc:ccc}
\Phi_{1} & P_{1}A_{\alpha_{1j}} & 0 & S_{11} & S_{12} & 0 \\ \hdashline
\star & -\Upsilon_{11}^{\mathcal{S}} & 0 & \Upsilon_{11} ^{\intercal} & \Upsilon_{11} & 0 \\
\star & \star & -\Upsilon_{12}^{\mathcal{S}} & 0 & \Upsilon_{12}^{\intercal} & \Upsilon_{12} \\ \hdashline
\star & \star & \star & -\Xi_{11} & Z_{11} & 0 \\
\star & \star & \star & \star & -\Xi_{12} & Z_{12} \\
\star & \star & \star & \star & \star & -\Xi_{13}
\end{array}\right] + \phi_{1}^{\intercal}\sum_{k=1}^{2}\Big(h_{k}^{2}S_{1k}+\delta_{k}^{2}T_{1k}\Big)\phi_{1} \;,$$ where $\Upsilon_{1j}\triangleq T_{1j}-Z_{1j}$ and $\phi_{1}=\big[~A_{\alpha}~A_{\alpha_{1j}}~0_{n\times 4n}~\big]$ for all $j\in\{1,2\}$, and $$\renewcommand{\arraystretch}{1.15}
\setlength{\arraycolsep}{1.25pt}
\widetilde{\Gamma}_{2j}=
\left[\begin{array}{c:cc:ccc}
\Phi_{2} & 0 & P_{2}A_{\alpha_{2j}} & S_{21} & S_{22} & 0 \\ \hdashline
\star & -\Upsilon_{21}^{\mathcal{S}} & 0 & \Upsilon_{21}^{\intercal} & \Upsilon_{21} & 0 \\
\star & \star & -\Upsilon_{22}^{\mathcal{S}} & 0 & \Upsilon_{22}^{\intercal} & \Upsilon_{22} \\ \hdashline
\star & \star & \star & -\Xi_{21} & Z_{21} & 0 \\
\star & \star & \star & \star & -\Xi_{22} & Z_{22} \\
\star & \star & \star & \star & \star & -\Xi_{23}
\end{array}\right] +\phi_{2}^{\intercal}\sum_{k=1}^{2}\Big(h_{k}^{2}S_{2k}+\delta_{k}^{2}T_{2k}\Big)\phi_{2} \;,$$ where $\Upsilon_{2j}\triangleq T_{2j}-Z_{2j}$ and $\phi_{2}=\big[~A_{\alpha}~0_{n}~A_{\alpha_{2j}}~0_{n\times 3n}~\big]$ for all $j\in\{1,2\}$.
By applying the Schur complement twice to $\widetilde{\Gamma}_{ij}$ to form $\Gamma_{ij}$, we arrive at the equivalent condition: $$\Gamma_{i}(t)=\lambda_{1}(t)\Gamma_{i1} + \lambda_{2}(t)\Gamma_{i2}< 0, \quad \forall i\in\{1,2\}\,.$$ As argued above, the condition is satisfied for all $\lambda_{i}(t)$ if $\Gamma_{i1}$ and $\Gamma_{i2}$ are both negative definite. By guaranteeing that $\Gamma_{i}(t)<0$, we ensure that the dynamics in each fixed mode is exponentially stable with decay rate $\alpha$. However, to guarantee stability for the switched system under the average dwell-time assumption, we also need to guarantee that $$V_{i}(t)\leq \mu V_{j}(t)\,, \quad \forall i,j\in\{1,2\} \label{eq:swicthed_cond}$$ for some $\mu>1$. Noting that $V_{i}(t)$ is linear in $P_{i}$, $Q_{ik}$, $R_{ik}$, $S_{ik}$ and $T_{ik}$, is implied by the following conditions: $$P_{i}\leq \mu P_{j}\,,~Q_{ik}\leq \mu Q_{jk}\,,~R_{ik}\leq \mu R_{jk}\,,~S_{ik}\leq \mu S_{jk}\,,~T_{ik}\leq \mu T_{jk}$$ for all $i,j,k\in\{1,2\}$. This concludes the proof.$\square$
The analysis procedure extends immediately to the system with $M$ modes. However, the LMIs grow in both size and number. In contrast to the two-mode case, we need to check $2M$ LMIs (extensions of , ) whose dimensions are $2(2M+1)n\times 2(2M+1)n$, $M^{2}$ supplementary LMIs (*e.g.*, ), and $M(4M^{2}-3M-1)$ additional LMIs (*e.g.*, $P_{i}\leq\mu P_{j}$). The LMIs use $M(5M+1)$ matrix variables, each with $n(n+1)/2$ decision variables.
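This bookkeeping is easily automated; the helper below simply evaluates the counts quoted in this remark for a given number of modes $M$ and state dimension $n$.

```python
def analysis_problem_size(M, n):
    """Size of the M-mode analysis conditions, using the counts quoted in the text."""
    main_lmis  = 2 * M                          # extensions of the two main LMIs
    main_dim   = 2 * (2 * M + 1) * n            # each main LMI is main_dim x main_dim
    supp_lmis  = M ** 2                         # reciprocally convex conditions
    order_lmis = M * (4 * M ** 2 - 3 * M - 1)   # e.g. P_i <= mu P_j
    matrices   = M * (5 * M + 1)                # symmetric matrix variables
    scalars    = matrices * n * (n + 1) // 2    # scalar decision variables
    return dict(main_lmis=main_lmis, main_dim=main_dim, supp_lmis=supp_lmis,
                order_lmis=order_lmis, decision_variables=scalars)

print(analysis_problem_size(M=3, n=4))
```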
A lower bound on the average dwell-time ensuring the global stability of the switched delay system is given by $\tau_a^{\circ}=\ln\mu/\alpha_{\circ}$ where $\alpha_{\circ}$ is the optimal value of the optimization problem $$\begin{aligned}
\begin{cases}
\begin{array}{cl}
\underset{\substack{P_{i}>0,Q_{ik}>0,\\ R_{ik}>0,S_{ik}>0,T_{ik}>0}}{\mbox{maximize}} & \alpha \\
\mbox{subject~to} &
\begin{array}{l}
\mathit{LMIs}~\eqref{eq:thm1a},~\eqref{eq:thm1b},~\mathit{and}~\eqref{eq:thm1c}, \\ P_{i}\leq\mu P_{j}, Q_{ik}\leq\mu Q_{jk}, R_{ik}\leq\mu R_{jk}, S_{ik}\leq\mu S_{jk}, T_{ik}\leq\mu T_{jk} \;\forall\; i,j,k\in \{1,2\} \;.
\end{array}
\end{array}
\end{cases} \label{eq:optprob}\end{aligned}$$
Due to the presence of multiple product terms $\alpha P_{i}$ and $e^{\alpha h_{i+j-1}}A_{i}P_{i}$ in and , the problem cannot be solved directly using semidefinite programming. However, the problem is easily seen to be quasi-convex. Hence, we can solve it by bisection in $\alpha$. Since the decay rate $\alpha$ is inversely proportional to $\tau_{a}$, this solution procedure gives us a lower bound on the allowable average dwell-time $\tau_{a}$. If we can guarantee that the average dwell-time between mode changes in the communication network is larger than this bound, then global stability of the closed-loop is guaranteed.
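The bisection-plus-feasibility pattern can be sketched as follows. For brevity, the feasibility oracle below checks only a drastically simplified single-mode, delay-free decay-rate LMI, $A^{\intercal}P+PA+2\alpha P\prec 0$, via CVXPY; it merely stands in for the full conditions of Theorem \[thm:Switched\_System\_Stability\], and the plant matrix and the value of $\mu$ are illustrative assumptions.

```python
import cvxpy as cp
import numpy as np

def decay_rate_feasible(A, alpha, eps=1e-6):
    """Feasibility of A'P + PA + 2*alpha*P < 0 with P > 0 (a stand-in for the full LMIs)."""
    n = A.shape[0]
    P = cp.Variable((n, n), symmetric=True)
    cons = [P >> eps * np.eye(n),
            A.T @ P + P @ A + 2 * alpha * P << -eps * np.eye(n)]
    prob = cp.Problem(cp.Minimize(0), cons)
    prob.solve(solver=cp.SCS)
    return prob.status in (cp.OPTIMAL, cp.OPTIMAL_INACCURATE)

def max_decay_rate(A, alpha_hi=10.0, tol=1e-3):
    """Bisection on alpha over [0, alpha_hi]: largest decay rate certified by the oracle."""
    lo, hi = 0.0, alpha_hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if decay_rate_feasible(A, mid) else (lo, mid)
    return lo

A = np.array([[0.0, 1.0], [-2.0, -3.0]])    # assumed plant matrix
alpha_star = max_decay_rate(A)
mu = 1.1                                    # assumed Lyapunov mismatch bound
print(alpha_star, np.log(mu) / alpha_star)  # decay rate and dwell-time bound ln(mu)/alpha
```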
State-Feedback Controller Design {#sec:SwitchedSyssSynthesis}
--------------------------------
In this section, we will extend our analysis conditions to mode-dependent state feedback synthesis for the supervisory control structure introduced in Section \[sec:sysmodel\]. More precisely, we consider a linear time-invariant plant $$\begin{aligned}
\dot{x}(t) &= Ax(t)+Bu(t)\end{aligned}$$ where the control input is a mode-dependent linear feedback of the delayed state vector, *i.e.*, $$\begin{aligned}
u(t) &= K_i x(t-\tau_i(t)) \label{eq:SwitchedSF}\end{aligned}$$ when $\sigma(t)=i$ (and hence, $\tau_i(t)\in [h_i, h_{i+1})$), $i\in {\mathcal M}$. The design problem is to find feedback gain matrices $K_i$ that ensure closed-loop stability for all switching signals in ${\mathcal S}[\tau_a]$. Clearly, this problem is closely related to the stability analysis problem considered in Section \[subsec:DeterStability\], since the supervisory control structure induces a switched linear system of the form with $A_{\sigma(t)}=BK_{\sigma(t)}$. We have the following result:
\[thm:Switched\_System\_Stabilization\] For a given decay rate $\alpha>0$, there exists a state-feedback control of the form which exponentially stabilizes the system over $\mathcal{S}[\tau_{a}]$ for time-varying delays $\tau_{i}(t)\in[h_{i},h_{i+1}),~\forall i\in\{1,2\}$ if there exist real constant matrices $\tilde{P}_{i}, \tilde{Q}_{ik}, \tilde{R}_{ik}, \tilde{S}_{ik}, \tilde{T}_{ik}\in\mathbb{S}^{n}_{++}~\forall i,k\in\{1,2\}$ and $\tilde{X}_{i}, \tilde{Z}_{ik}\in\mathbb{R}^{n\times n}~\forall i,k\in\{1,2\}$, and a constant scalar $\mu>1$ such that the LMIs $$\begin{aligned}
\renewcommand{\arraystretch}{1.20}
\setlength{\arraycolsep}{1.6pt}
\left[\begin{array}{c:c:cc:ccc:c:cccc}
-\tilde{X}_{1}^{\mathcal{S}} & A_{\alpha}\tilde{X}_{1}+\tilde{P}_{1} & \varrho_{1j}B\tilde{Y}_{1} & 0 & 0 & 0 & 0 & \tilde{X}_{1} & h_{1}\tilde{S}_{11} & \delta_{1}\tilde{T}_{11} & h_{2}\tilde{S}_{12} & \delta_{2}\tilde{T}_{12} \\ \hdashline
\star & \sum_{k=1}^{2}\big(\tilde{Q}_{1k}+\tilde{R}_{1k}-\tilde{S}_{1k}\big)-\tilde{P}_{1} & 0 & 0 & \tilde{S}_{11} & \tilde{S}_{12} & 0 & 0 & 0 & 0 & 0 & 0 \\ \hdashline
\star & \star & -\tilde{\Upsilon}_{11}^{\mathcal{S}} & 0 & \tilde{\Upsilon}_{11}^{\intercal} & \tilde{\Upsilon}_{11} & 0 & 0 & 0 & 0 & 0 & 0 \\
\star & \star & \star & -\tilde{\Upsilon}_{12}^{\mathcal{S}} & 0 & \tilde{\Upsilon}_{12}^{\intercal} & \tilde{\Upsilon}_{12} & 0 & 0 & 0 & 0 & 0 \\ \hdashline
\star & \star & \star & \star & -\tilde{\Xi}_{11} & \tilde{Z}_{11} & 0 & 0 & 0 & 0 & 0 & 0 \\
\star & \star & \star & \star & \star & -\tilde{\Xi}_{12} & \tilde{Z}_{12} & 0 & 0 & 0 & 0 & 0 \\
\star & \star & \star & \star & \star & \star & -\tilde{\Xi}_{13} & 0 & 0 & 0 & 0 & 0 \\ \hdashline
\star & \star & \star & \star & \star &\star & \star & -\tilde{P}_{1} & -h_{1}\tilde{S}_{11} & -\delta_{1}\tilde{T}_{11} & -h_{2}\tilde{S}_{12} & -\delta_{2}\tilde{T}_{12} \\ \hdashline
\star & \star & \star & \star & \star &\star & \star & \star & -\tilde{S}_{11} & 0 & 0 & 0\\
\star & \star & \star & \star & \star &\star & \star & \star & \star & -\tilde{T}_{11} & 0 & 0\\
\star & \star & \star & \star & \star &\star & \star & \star & \star & \star & -\tilde{S}_{12} & 0 \\
\star & \star & \star & \star & \star &\star & \star & \star & \star & \star & \star & -\tilde{T}_{12} \\
\end{array}\right] < 0 \label{eq:thm2c} \\
%
\renewcommand{\arraystretch}{1.20}
\setlength{\arraycolsep}{1.6pt}
\left[\begin{array}{c:c:cc:ccc:c:cccc}
-\tilde{X}_{2}^{\mathcal{S}} & A_{\alpha}\tilde{X}_{2}+\tilde{P}_{2} & 0 & \varrho_{2j}B\tilde{Y}_{2} & 0 & 0 & 0 & \tilde{X}_{2} & h_{1}\tilde{S}_{21} & \delta_{1}\tilde{T}_{21} & h_{2}\tilde{S}_{22} & \delta_{2}\tilde{T}_{22} \\ \hdashline
\star & \sum_{k=1}^{2}\big(\tilde{Q}_{2k}+\tilde{R}_{2k}-\tilde{S}_{2k}\big)-\tilde{P}_{2} & 0 & 0 & \tilde{S}_{21} & \tilde{S}_{22} & 0 & 0 & 0 & 0 & 0 & 0 \\ \hdashline
\star & \star & -\tilde{\Upsilon}_{21}^{\mathcal{S}} & 0 & \tilde{\Upsilon}_{21}^{\intercal} & \tilde{\Upsilon}_{21} & 0 & 0 & 0 & 0 & 0 & 0 \\
\star & \star & \star & -\tilde{\Upsilon}_{22}^{\mathcal{S}} & 0 & \tilde{\Upsilon}_{22}^{\intercal} & \tilde{\Upsilon}_{22} & 0 & 0 & 0 & 0 & 0 \\ \hdashline
\star & \star & \star & \star & -\tilde{\Xi}_{21} & \tilde{Z}_{21} & 0 & 0 & 0 & 0 & 0 & 0 \\
\star & \star & \star & \star & \star & -\tilde{\Xi}_{22} & \tilde{Z}_{22} & 0 & 0 & 0 & 0 & 0 \\
\star & \star & \star & \star & \star & \star & -\tilde{\Xi}_{23} & 0 & 0 & 0 & 0 & 0 \\ \hdashline
\star & \star & \star & \star & \star &\star & \star & -\tilde{P}_{2} & -h_{1}\tilde{S}_{21} & -\delta_{1}\tilde{T}_{21} & -h_{2}\tilde{S}_{22} & -\delta_{2}\tilde{T}_{22} \\ \hdashline
\star & \star & \star & \star & \star &\star & \star & \star & -\tilde{S}_{21} & 0 & 0 & 0\\
\star & \star & \star & \star & \star &\star & \star & \star & \star & -\tilde{T}_{21} & 0 & 0\\
\star & \star & \star & \star & \star &\star & \star & \star & \star & \star & -\tilde{S}_{22} & 0 \\
\star & \star & \star & \star & \star &\star & \star & \star & \star & \star & \star & -\tilde{T}_{22} \\
\end{array}\right] < 0 \label{eq:thm2d} \end{aligned}$$ $$\begin{aligned}
\begin{bmatrix}
\tilde{T}_{ik} & \tilde{Z}_{ik} \\ \star & \tilde{T}_{ik}
\end{bmatrix} \geq 0 \label{eq:thm2e} \end{aligned}$$ where $\tilde{\Xi}_{i1}=\tilde{Q}_{i1}+\tilde{T}_{i1}+\tilde{S}_{i1}$, $\tilde{\Xi}_{i2}=\tilde{Q}_{i2}+\tilde{S}_{i2}+\sum_{k=1}^{2}\tilde{T}_{ik}+\tilde{R}_{i1}$ $\tilde{\Xi}_{i3}=\tilde{R}_{i2}+\tilde{T}_{i2}$, $\tilde{\Upsilon}_{ik}=\tilde{T}_{ik}-\tilde{Z}_{ik}$, and $\tilde{P}_{i}\leq\mu\tilde{P}_{j}$, $\tilde{Q}_{ik}\leq\mu\tilde{Q}_{jk}$, $\tilde{R}_{ik}\leq\mu\tilde{R}_{jk}$, $\tilde{S}_{ik}\leq\mu\tilde{S}_{jk}$ and $\tilde{T}_{ik}\leq\mu\tilde{T}_{jk}~\forall i,j,k\in\{1,2\}$ are feasible. A stabilizing control law is given by with gain $K_{i}=\tilde{Y}_{i}\tilde{X}_{i}^{-1}$ for all $i\in\{1,2\}$.
**Proof:** The structure of and is not suitable for the synthesis of a state-feedback controller due to the presence of multiple product terms $A_{\alpha}S_{ik}$, $A_{\alpha}T_{ik}$, $A_{\alpha_{ij}}S_{ik}$ and $A_{\alpha_{ij}}T_{ik}$. These product terms prevent finding a linearizing change of variables even after congruence transformation. Instead, we will use the relaxation term introduced in Briat *et al.* [@BSL:10] to decouple the products at the expense of increased conservatism. Denote and by $\Theta_{1j}$ and $\Theta_{2j}$, respectively. Then we prove that $\Theta_{ij}< 0~\forall i\in\{1,2\}$ implies the feasibility of and . Note that $\Theta_{ij}$ can be decomposed as $$\begin{aligned}
\Theta_{ij}=\Theta_{ij}\vert_{X_{i}=0}+U_{i}^{\intercal}X_{i}V_{i}+V_{i}^{\intercal}X_{i}^{\intercal}U_{i}< 0, \quad \forall i\in\{1,2\}\end{aligned}$$\[eq:Project\] where $U_{1}=\big[-I_{n}~A_{\alpha}~A_{\alpha_{1j}}~0_{n\times 4n}~I_{n}~0_{n\times 4n}\big]$, $V_{1}=\big[I_{n}~0_{n\times 11n}\big]$, $U_{2}=\big[-I_{n}~A_{\alpha}~0_{n}~A_{\alpha_{2j}}~0_{n\times 3n}~I_{n}~0_{n\times 4n}\big]$ and $V_{2}=\big[I_{n}~0_{n\times 11n}\big]$. Then invoking the projection lemma [@GaA:94], the feasibility of $\Theta_{ij}< 0$ implies the feasibility of the LMIs $$\begin{aligned}
\mathcal{N}_{U_{i}}^{T}\Theta_{ij}|_{X_{i}=0}\mathcal{N}_{U_{i}}< 0 \label{eq:adjLMIa} \\
\mathcal{N}_{V_{i}}^{T}\Theta_{ij}|_{X_{i}=0}\mathcal{N}_{V_{i}}< 0 \label{eq:adjLMIb}\end{aligned}$$ where $\mathcal{N}_{U_{i}}$ and $\mathcal{N}_{V_{i}}$ are bases of the null spaces of $U_{i}$ and $V_{i}$, respectively. After some tedious calculations, we can show that LMIs and are equivalent to and , showing that $\Theta_{ij}< 0~\forall i\in\{1,2\}$ implies the feasibility of and . Moreover, LMI characterizes the conservatism of the relaxation.
$$\begin{aligned}
\renewcommand{\arraystretch}{1.2}
\setlength{\arraycolsep}{1.6pt}
\left[\begin{array}{c:c:cc:ccc:c:cccc}
-X_{1}^{\mathcal{S}} & X_{1}^{\intercal}A_{\alpha}+P_{1} & X_{1}^{\intercal}A_{\alpha_{1j}} & 0 & 0 & 0 & 0 & X_{1}^{\intercal} & h_{1}S_{11} & \delta_{1}T_{11} & h_{2}S_{12} & \delta_{2}T_{12} \\ \hdashline
\star & \sum_{k=1}^{2}\big(Q_{1k}+R_{1k}-S_{1k}\big)-P_{1} & 0 & 0 & S_{11} & S_{12} & 0 & 0 & 0 & 0 & 0 & 0 \\ \hdashline
\star & \star & -\Upsilon_{11}^{\mathcal{S}} & 0 & \Upsilon_{11}^{\intercal} & \Upsilon_{11} & 0 & 0 & 0 & 0 & 0 & 0 \\
\star & \star & \star & -\Upsilon_{12}^{\mathcal{S}} & 0 & \Upsilon_{12}^{\intercal} & \Upsilon_{12} & 0 & 0 & 0 & 0 & 0 \\ \hdashline
\star & \star & \star & \star & -\Xi_{11} & Z_{11} & 0 & 0 & 0 & 0 & 0 & 0 \\
\star & \star & \star & \star & \star & -\Xi_{12} & Z_{12} & 0 & 0 & 0 & 0 & 0 \\
\star & \star & \star & \star & \star & \star & -\Xi_{13} & 0 & 0 & 0 & 0 & 0 \\ \hdashline
\star & \star & \star & \star & \star &\star & \star & -P_{1} & -h_{1}S_{11} & -\delta_{1}T_{11} & -h_{2}S_{12} & -\delta_{2}T_{12} \\ \hdashline
\star & \star & \star & \star & \star &\star & \star & \star & -S_{11} & 0 & 0 & 0\\
\star & \star & \star & \star & \star &\star & \star & \star & \star & -T_{11} & 0 & 0\\
\star & \star & \star & \star & \star &\star & \star & \star & \star & \star & -S_{12} & 0 \\
\star & \star & \star & \star & \star &\star & \star & \star & \star & \star & \star & -T_{12} \\
\end{array}\right] < 0\label{eq:thm2a} \\
\renewcommand{\arraystretch}{1.2}
\setlength{\arraycolsep}{1.6pt}
\left[\begin{array}{c:c:cc:ccc:c:cccc}
-X_{2}^{\mathcal{S}} & X_{2}^{\intercal}A_{\alpha}+P_{2} & 0 & X_{2}^{\intercal}A_{\alpha_{2j}} & 0 & 0 & 0 & X_{2}^{\intercal} & h_{1}S_{21} & \delta_{1}T_{21} & h_{2}S_{22} & \delta_{2}T_{22} \\ \hdashline
\star & \sum_{k=1}^{2}\big(Q_{2k}+R_{2k}-S_{2k}\big)-P_{2} & 0 & 0 & S_{21} & S_{22} & 0 & 0 & 0 & 0 & 0 & 0 \\ \hdashline
\star & \star & -\Upsilon_{21}^{\mathcal{S}} & 0 & \Upsilon_{21}^{\intercal} & \Upsilon_{21} & 0 & 0 & 0 & 0 & 0 & 0 \\
\star & \star & \star & -\Upsilon_{22}^{\mathcal{S}} & 0 & \Upsilon_{22}^{\intercal} & \Upsilon_{22} & 0 & 0 & 0 & 0 & 0 \\ \hdashline
\star & \star & \star & \star & -\Xi_{21} & Z_{21} & 0 & 0 & 0 & 0 & 0 & 0 \\
\star & \star & \star & \star & \star & -\Xi_{22} & Z_{22} & 0 & 0 & 0 & 0 & 0 \\
\star & \star & \star & \star & \star & \star & -\Xi_{23} & 0 & 0 & 0 & 0 & 0 \\ \hdashline
\star & \star & \star & \star & \star &\star & \star & -P_{2} & -h_{1}S_{21} & -\delta_{1}T_{21} & -h_{2}S_{22} & -\delta_{2}T_{22} \\ \hdashline
\star & \star & \star & \star & \star &\star & \star & \star & -S_{21} & 0 & 0 & 0\\
\star & \star & \star & \star & \star &\star & \star & \star & \star & -T_{21} & 0 & 0\\
\star & \star & \star & \star & \star &\star & \star & \star & \star & \star & -S_{22} & 0 \\
\star & \star & \star & \star & \star &\star & \star & \star & \star & \star & \star & -T_{22} \\
\end{array}\right] < 0\label{eq:thm2b} \end{aligned}$$
Since LMIs and do not include any multiple product, they can easily be used for controller design. Hence, it is possible to use congruence transformations and a change of variables so as to design the state-feedback controller. Performing a congruence transformation with respect to the matrix $I_{12}\otimes X_{i}^{-1}$ and applying the following linearizing change of variables $\tilde{X}_{i}\triangleq X_{i}^{-1},~\tilde{P}_{i}\triangleq \tilde{X}_{i}^{\intercal}P_{i}\tilde{X}_{i},~\tilde{Q}_{ik}\triangleq \tilde{X}_{i}^{\intercal}Q_{ik}\tilde{X}_{i},~\tilde{R}_{ik}\triangleq \tilde{X}_{i}^{\intercal}R_{ik}\tilde{X}_{i},~\tilde{S}_{ik}\triangleq \tilde{X}_{i}^{\intercal}S_{ik}\tilde{X}_{i},~\tilde{T}_{ik}\triangleq \tilde{X}_{i}^{\intercal}T_{ik}\tilde{X}_{i},~\tilde{\Xi}_{i1}\triangleq\tilde{X}_{i}^{\intercal}\Xi_{i1}\tilde{X}_{i},~\tilde{\Xi}_{i2}\triangleq\tilde{X}_{i}^{\intercal} \Xi_{i2}\tilde{X}_{i},~\tilde{\Xi}_{i3}\triangleq \tilde{X}_{i}^{\intercal}\Xi_{i3}\tilde{X}_{i},~\tilde{Z}_{ik}\triangleq \tilde{X}_{i}^{\intercal}Z_{ik}\tilde{X}_{i}$ and $\tilde{Y}_{i}=K_{i}\tilde{X}_{i},~\forall i,k\in\{1,2\}$ yields LMIs and . $\square$
The synthesis procedure readily extends to systems with $M$ modes, yet both the size and the number of LMIs increase. Specifically, we need to check $2M$ LMIs (extensions of , ) whose dimensions are $2(2M+2)n\times 2(2M+2)n$ and $M(5M^{2}-3M-1)$ additional LMIs (*e.g.*, , $\tilde{P}_{i}\leq\mu \tilde{P}_{j}$). In total, these LMIs comprise $M(5M+3)$ matrix variables, each of which has $n(n+1)/2$ decision variables.
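After the synthesis LMIs have been solved, recovering the mode-dependent gains $K_i=\tilde{Y}_i\tilde{X}_i^{-1}$ is a plain linear-algebra step; the snippet below assumes $\tilde{Y}_i$ and $\tilde{X}_i$ have already been returned by an SDP solver and uses placeholder values purely for illustration.

```python
import numpy as np

def recover_gains(Y_tilde, X_tilde):
    """K_i = Y_i X_i^{-1}, computed by solving K_i X_i = Y_i for each mode."""
    return [np.linalg.solve(X.T, Y.T).T for Y, X in zip(Y_tilde, X_tilde)]

# hypothetical solver output for a two-state, single-input plant with two delay modes
Yt = [np.array([[0.4, -1.1]]), np.array([[0.1, -0.7]])]
Xt = [np.eye(2), 2.0 * np.eye(2)]
K1, K2 = recover_gains(Yt, Xt)
print(K1, K2)
```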
Stochastic Switched Systems {#sec:StocSwitchedSys}
===========================
Our deterministic modeling framework has several advantages: it allows us to model long time-delays, accounts for mode-dependent delay bounds and admits a convex formulation of the (mode-dependent) state feedback synthesis problem. However, it also has a disadvantage in that it cannot exploit more detailed knowledge about the evolution of the delay mode beyond the average dwell-time. It is therefore interesting to derive similar results when the delay mode varies according to a Markov chain, cf. [@BLL:10; @WCW:06; @BeB:98; @BoL:02]. Such results will be developed next.
System Model {#subsec:StochSystemModel}
------------
Let us consider a dynamical system in a probability space $(\Omega,\mathcal{F},\mathbf{P})$, where $\Omega$ is the sample space, $\mathcal{F}$ is the $\sigma$-algebra of subsets of the sample space and $\mathbf{P}$ is the probability measure on $\mathcal{F}$. Over this probability space, we consider the following class of linear stochastic systems with Markovian jump parameters and mode-dependent time delays $$\begin{aligned}
\begin{array}{rll}
\Sigma_{2}: & \dot{x}(t) = Ax(t) + A_{r(t)}x\big(t-\tau_{r(t)}(t)\big)\;, & \forall t\in \mathbb{R}_{\geq 0}\;, \\
& x(t) = \varphi(t)\;, & \forall t\in [-h_{M+1},0]\;, \label{eq:MJLS}
\end{array}\end{aligned}$$ Here, $x(t)\in\mathbb{R}^{n}$ is the state, $A\in\mathbb{R}^{n\times n}$ and $A_{r(t)}\in\mathbb{R}^{n\times n}$ are the known system matrices while $\big\{ r_{t}, t\in\mathbb{R}_{\geq 0} \big\}$ is a homogeneous, finite-state Markovian process with right continuous trajectories and taking values in the finite set $\mathcal{M}=\{1,\cdots,M\}$. The Markov process describes the switching between the different modes and its evolution is governed by the following transition probabilities $$\mathbf{P}\big[ r_{t+\Delta}=j\;\vert \; r_{t}=i \big] =
\begin{cases}
\pi_{ij}\Delta + \mathit{o}(\Delta) & \text{if}~i\neq j\;, \\
1 + \pi_{ii}\Delta + \mathit{o}(\Delta), & \text{if}~i=j\;,
\end{cases}$$ where $\pi_{ij}$ is the transition rate from mode $i$ to $j$ with $\pi_{ij}\geq 0$ when $i\neq j$ and $\pi_{ii}=-\sum_{j=1,j\neq i}^{N}\pi_{ij}$ and $\mathit{o}(\Delta)$ is such that $\lim_{\Delta\rightarrow 0}\frac{\mathit{o}(\Delta)}{\Delta}=0$. Furthermore, $\tau_{r(t)}(t)$ is the time-varying stochastic delay function satisfying $$h_{1}\leq \underline{h}_{r(t)} \leq \tau_{r(t)}(t) \leq \overline{h}_{r(t)} \leq h_{M+1}\;.$$ Finally, $\varphi(t)$ is a vector-valued initial continuous function defined on the interval $[-h_{M+1} 0]$, and $r_{0}\in\mathcal{M}$ are the initial conditions of the continuous state and the mode.
The Markovian jump system is exponentially mean-square stable if there exist positive constants $\alpha$ and $\gamma$ such that $$\mathbb{E}\Big[ \parallel x(t) \parallel^{2} \big\vert\; \varphi(t_{0}), r_{t_{0}} \Big] \leq\; \gamma\parallel x(t_{0},r_{t_{0}}) \parallel^{2}e^{-\alpha (t-t_{0})}$$ holds for any finite $\varphi(t_{0})\in\mathbb{R}^{n}$ defined on $[-h_{M+1},0]$ and any initial mode $r_{t_{0}}\in\mathcal{M}$.
Exponential Stability Analysis Using Stochastic Lyapunov-Krasovskii Functionals {#subsec:StocStability}
-------------------------------------------------------------------------------
In this subsection, we analyze the exponential stability of the Markovian jump linear system using an approach similar to the one developed for the switched delay case. To characterize the convergence rate of the system , we again consider the change of variables $\xi(t)\triangleq e^{\alpha t}x(t)$ and find $$\begin{aligned}
\dot{\xi}(t) = &\; (\alpha I_{n}+A)\xi(t) +e^{\alpha\tau_{r(t)}(t)}A_{r(t)}\xi(t-\tau_{r(t)}(t)) \;,
\label{eq:MJLS_exp}\end{aligned}$$ where $\tau_{r(t)}\in\big[\underline{h}_{r(t)}, \overline{h}_{r(t)}\big)$. For each $r(t)=i, \forall i\in\mathcal{M}$, we rewrite as $$\dot{\xi}(t) = (\alpha I_{n}+A)\xi(t) +e^{\alpha\tau_{i}(t)}A_{i}\xi(t-\tau_{i}(t)) \;.
\label{eq:MJLS_exp2}$$ Using the same polytopic approach as in Section 2, we express $e^{\alpha\tau_{i}(t)}$ as a convex combination of its mode-dependent bounds: $$e^{\alpha\tau_{i}(t)}=\lambda_{1}(t)e^{\alpha h_{i}}+\lambda_{2}(t)e^{\alpha h_{i+1}}\,,\quad \forall i\in\mathcal{M}$$ where $\lambda_{1}(t),\lambda_{2}(t)\in\mathbb{R}_{\geq 0}$ and $\lambda_{1}(t)+\lambda_{2}(t)=1,~\forall t\in\mathbb{R}_{\geq 0}$. Thus, the stochastic switched system can be defined, for each $r(t)=i, \forall i\in\mathcal{M}$, as $$\begin{aligned}
\dot{\xi}(t)=A_{\alpha}\xi(t)+\sum_{j=1}^{2}\lambda_{j}(t)A_{\alpha_{ij}}\xi\big(t-\tau_{i}(t)\big)\label{SwitchedSysTV2}\end{aligned}$$ where $A_{\alpha}\triangleq (\alpha I_{n}+A)$ and $A_{\alpha_{ij}}\triangleq\varrho_{ij}A_{i}$ with $\varrho_{ij}\triangleq e^{\alpha h_{i+j-1}}$ when $\tau_{i}(t)\in [h_{i},h_{i+1})\;,~\forall i\in\mathcal{M}$.
The Markovian jump linear system is exponentially mean-square stable with a given decay rate $\alpha >0$ for randomly varying delays $\tau_{i}\in [h_{i},h_{i+1}),~\forall i\in\{1,2\}$ if there exist matrices $P_{i}, Q_{i}, R_{i}\in\mathbb{S}_{++}^{n},~\forall i\in\{1,2\}$, $S, T, \mathcal{Q}, \mathcal{R}\in\mathbb{S}_{++}^{n}$ and $Z\in\mathbb{R}^{n\times n}$ such that the following LMIs hold for $i, j \in \{1,2\}$ $$\begin{aligned}
\begin{bmatrix} T & Z \\ Z^{\intercal} & T\end{bmatrix} \geq 0\;, \qquad \sum_{j=1}^{2}\pi_{ij}Q_{j}\leq\mathcal{Q}\;, \qquad \sum_{j=1}^{2}\pi_{ij}R_{j}\leq\mathcal{R}\;, \label{eq:MJLS_Stability_add_LMI} \\
\renewcommand{\arraystretch}{1.4}
\setlength{\arraycolsep}{1.6pt}
\left[\begin{array}{c:c:cc:cc}
\Phi_{i} & P_{i}A_{\alpha_{ij}} & S & 0 & \sqrt{\epsilon_{1,i}}A_{\alpha}^{\intercal}S & \sqrt{\epsilon_{2,i}}A_{\alpha}^{\intercal}T \\ \hdashline
\star & -\Upsilon^{\mathcal{S}} & \Upsilon^{\intercal} & \Upsilon & \sqrt{\epsilon_{1,i}}A_{\alpha_{ij}}^{\intercal}S & \sqrt{\epsilon_{2,i}}A_{\alpha_{ij}}^{\intercal}T \\ \hdashline
\star & \star & -(Q_{i}+S+T) & Z & 0 & 0 \\
\star & \star & \star & -(R_{i}+T) & 0 & 0 \\ \hdashline
\star & \star & \star & \star & -S & 0 \\
\star & \star & \star & \star & \star & -T
\end{array}\right] < 0\;, \label{eq:MJLS_Stability_LMI}\end{aligned}$$ where $\Phi_{i}\triangleq A_{\alpha}^{\intercal}P_{i}+P_{i}A_{\alpha}+\sum_{j=1}^{2}\pi_{ij}P_{j} + \big(Q_{i} + h_{2}\mathcal{Q} + \delta_{1}Q_{\kappa}\big) + \big(R_{i} + h_{3}\mathcal{R} + \delta_{2}R_{\kappa}\big) - S$, $\epsilon_{1,i}\triangleq h_{i}^{2} + \eta\frac{h_{2}^{3}-h_{1}^{3}}{2}$ and $\epsilon_{2,i}\triangleq \delta_{i}^{2} + \eta\delta_{\max}\frac{h_{3}^{2}-h_{1}^{2}}{2}$ with $\eta\triangleq \max\vert\pi_{ii}\vert$, $\kappa\triangleq\mathrm{argmax}\vert\pi_{ii}\vert$ and $\delta_{\max}=\max\vert h_{i+1}-h_{i} \vert,~\forall i\in\{1,2\}$, and $\Upsilon = T - Z$.
**Proof.** We define a stochastic Lyapunov-Krasovskii functional $V:\mathbb{R}^{n}\times\mathcal{M}\rightarrow\mathbb{R}_{\geq 0}$ candidate for the system as $$V(\xi_{t},r_{t}) = \underbrace{\xi^{\intercal}(t)P_{r(t)}\xi(t)}_{V_{1}(\xi_{t},r_{t})} + \sum_{k=2}^{5}V_{k}(\xi_{t},r_{t}) \;,$$ where $$\begin{aligned}
V_{2}(\xi_{t},r_{t}) = & \int_{t-\underline{h}_{r(t)}}^{t} \xi^{\intercal}(s)Q_{r(t)}\xi(s)ds + \int_{-h_{2}}^{0} \int_{t+s}^{t} \xi^{\intercal}(\theta)\mathcal{Q}\xi(\theta)d\theta ds + \eta\int_{-h_{2}}^{-h_{1}} \int_{t+s}^{t} \xi^{\intercal}(\theta)Q_{\kappa}\xi(\theta)d\theta ds \\
V_{3}(\xi_{t},r_{t}) = & \int_{t-\overline{h}_{r(t)}}^{t}\xi^{\intercal}(s)R_{r(t)}\xi(s)ds + \int_{-h_{3}}^{0} \int_{t+s}^{t} \xi^{\intercal}(\theta)\mathcal{R}\xi(\theta)d\theta ds + \eta\int_{-h_{3}}^{-h_{2}} \int_{t+s}^{t} \xi^{\intercal}(\theta)R_{\kappa}\xi(\theta)d\theta ds \\
V_{4}(\xi_{t},r_{t}) = &\; \underline{h}_{r(t)} \int_{-\underline{h}_{r(t)}}^{0} \int_{t+s}^{t} \dot{\xi}^{\intercal}(\theta) S \dot{\xi}(\theta)d\theta ds + \eta h_{2}\int_{-h_{2}}^{-h_{1}} \int_{s}^{0} \int_{t+\theta}^{t} \dot{\xi}^{\intercal}(\upsilon) S\dot{\xi}(\upsilon) d\upsilon d\theta ds \\
& \; + \eta\delta_{1}\int_{-h_{1}}^{0} \int_{s}^{0} \int_{t+\theta}^{t} \dot{\xi}^{\intercal}(\upsilon) S\dot{\xi}(\upsilon) d\upsilon d\theta ds\\
V_{5}(\xi_{t},r_{t}) = &\; \underbrace{\big(\overline{h}_{r(t)}-\underline{h}_{r(t)}\big)}_{\delta_{r(t)}}\int_{-\overline{h}_{r(t)}}^{-\underline{h}_{r(t)}} \int_{t+s}^{t} \dot{\xi}^{\intercal}(\theta) T\dot{\xi}(\theta)d\theta ds + \eta\delta_{\max}\int_{-h_{3}}^{-h_{1}}\int_{s}^{0}\int_{t+\theta}^{t}\dot{\xi}^{\intercal}(\upsilon)T\dot{\xi}(\upsilon)d\upsilon d\theta ds\end{aligned}$$ with $\eta\triangleq\max{\vert\pi_{ii}\vert}$, $\kappa\triangleq\mathrm{argmax}\vert\pi_{ii}\vert$, and $\delta_{\max}\triangleq\max\vert h_{i+1}-h_{i}\vert,~\forall i\in\mathcal{M}$.
The weak infinitesimal operator $\mathcal{A}$ of the Markovian process $\big\{ \big(x(t),r_{t}\big),t\geq 0 \big\}$ is defined by $$\begin{aligned}
\mathcal{A}V\big(x(t),r_{t}\big) = \lim_{\Delta\rightarrow 0} \frac{\mathbb{E}\Big[V\big(x(t+\Delta),r_{t+\Delta}\big)\big\vert \mathcal{F}_{t}\Big] - V\big(x(t),r_{t}\big)}{\Delta} \;,\end{aligned}$$ where $\mathcal{F}_{t}=\sigma\big((x(t),r_{t}), t\geq 0\big)$.
Straightforward but tedious calculations yield that, for each $r_{t}=i$, $i\in\mathcal{M}$, along solutions of , we have $$\begin{aligned}
\mathcal{A}V_{1} =&\; \xi^{\intercal}(t)\Bigg[A_{\alpha}^{\intercal}P_{i}+P_{i}A_{\alpha}+\sum_{j=1}^{2}\pi_{ij}P_{j}\Bigg]\xi(t) + 2\xi^{\intercal}(t)P_{i}\sum_{j=1}^{2}(\lambda_{j}(t)A_{\alpha_{ij}})\xi\big(t-\tau_{i}(t)\big) \label{eq:Vdot1} \\
\mathcal{A}V_{2} =&\; \xi^{\intercal}(t)Q_{i}\xi(t) - \xi^{\intercal}(t-h_{i})Q_{i}\xi(t-h_{i}) + \sum_{j=1}^{2}\pi_{ij}\int_{t-h_{j}}^{t} \xi^{\intercal}(s)Q_{j}\xi(s)ds + h_{2} \xi^{\intercal}(t)\mathcal{Q}\xi(t) + \delta_{1}\eta\xi^{\intercal}(t)Q_{\kappa}\xi(t) \nonumber\\
&\; - \Bigg[ \int_{t-h_{2}}^{t} \xi^{\intercal}(s)\mathcal{Q}\xi(s)ds + \eta\int_{t-h_{2}}^{t-h_{1}} \xi^{\intercal}(s)Q_{\kappa}\xi(s)ds \Bigg] \label{eq:Vdot2} \\
\mathcal{A}V_{3} =&\; \xi^{\intercal}(t)R_{i}\xi(t) - \xi^{\intercal}(t-h_{i+1})R_{i}\xi(t-h_{i+1}) + \sum_{j=1}^{2}\pi_{ij}\int_{t-h_{j+1}}^{t} \xi^{\intercal}(s)R_{j}\xi(s)ds + h_{3}\xi^{\intercal}(t)\mathcal{R}\xi(t) + \delta_{2}\eta\xi^{\intercal}(t)R_{\kappa}\xi(t) \nonumber\\
&\; - \Bigg[ \int_{t-h_{3}}^{t} \xi^{\intercal}(s)\mathcal{R}\xi(s)ds + \eta\int_{t-h_{3}}^{t-h_{2}} \xi^{\intercal}(s)R_{\kappa}\xi(s)ds \Bigg] \label{eq:Vdot3} \\
\mathcal{A}V_{4} =&\; h_{i}^{2}~\dot{\xi}^{\intercal}(t)S\dot{\xi}(t) - h_{i}\int_{t-h_{i}}^{t}\dot{\xi}^{\intercal}(s)S\dot{\xi}(s)ds + \sum_{j=1}^{2}\pi_{ij}~h_{j} \int_{-h_{j}}^{0} \int_{t+s}^{t} \dot{\xi}^{\intercal}(\theta) S \dot{\xi}(\theta)d\theta ds \nonumber\\
&\; + \eta\frac{h_{2}^{3}-h_{1}^{3}}{2}\dot{\xi}^{\intercal}(t)S\dot{\xi}(t) - \eta\Bigg[ h_{2}\int_{-h_{2}}^{-h_{1}}\int_{t+s}^{t}\dot{\xi}^{\intercal}(\theta)S\dot{\xi}(\theta)d\theta ds + (h_{2}-h_{1})\int_{-h_{1}}^{0}\int_{t+s}^{t}\dot{\xi}^{\intercal}(\theta)S\dot{\xi}(\theta)d\theta ds \Bigg] \label{eq:Vdot4} \\
\mathcal{A}V_{5} = &\; \delta_{i}^{2}\dot{\xi}^{\intercal}(t)T\dot{\xi}(t) - \delta_{i}\int_{t-h_{i+1}}^{t-h_{i}}\dot{\xi}^{\intercal}(s)T\dot{\xi}(s)ds + \sum_{j=1}^{2}\pi_{ij}\;\delta_{j}\int_{-h_{j+1}}^{-h_{j}}\int_{t+s}^{t}\dot{\xi}^{\intercal}(\theta)T\dot{\xi}(\theta)d\theta ds \nonumber\\
&\; + \eta\;\delta_{\max}\frac{h_{3}^{2}-h_{1}^{2}}{2}\dot{\xi}^{\intercal}(t)T\dot{\xi}(t) - \eta\;\delta_{\max}\int_{-h_{3}}^{-h_{1}}\int_{t+s}^{t}\dot{\xi}^{\intercal}(\theta)T\dot{\xi}(\theta)d\theta ds \label{eq:Vdot5} \;.\end{aligned}$$
Similar to Section \[subsec:DeterStability\], we bound the integral terms $\int_{t-h_{i}}^{t}h_{i}\dot{\xi}^{\intercal}(s)S\dot{\xi}(s)ds$ and $\int_{t-h_{i+1}}^{t-h_{i}}\delta_{i}\dot{\xi}^{\intercal}(s)T\dot{\xi}(s)ds$ that appear in the preceding equalities as follows: $$\begin{aligned}
-\int_{t-h_{i}}^{t}h_{i}\dot{\xi}^{\intercal}(s)S\dot{\xi}(s)ds &\leq \; -\big[ \xi(t) - \xi(t-h_{i}) \big]^{\intercal} S \big[ \xi(t) - \xi(t-h_{i}) \big]\;, \label{eq:Jensen2}\\
%
-\int_{t-h_{i+1}}^{t-h_{i}}\delta_{i}\dot{\xi}^{\intercal}(s) T\dot{\xi}(s)ds &\leq \; -
\begin{bmatrix} \xi(t-h_{i})-\xi(t-\tau_{i}(t)) \\ \xi(t-\tau_{i}(t))-\xi(t-h_{i+1}) \end{bmatrix}^{\intercal}
\begin{bmatrix} T & Z \\ \star & T \end{bmatrix}
\begin{bmatrix} \xi(t-h_{i})-\xi(t-\tau_{i}(t)) \\ \xi(t-\tau_{i}(t))-\xi(t-h_{i+1}) \end{bmatrix} \;, \label{eq:Reciprocal2}\end{aligned}$$ where $\Bigl[\begin{smallmatrix} T & Z \\ \star & T \end{smallmatrix} \Bigr]\geq 0$ holds.
In addition, for the stochastic formulation, we need to upper bound a number of additional integrals. We do so by noting $\pi_{ij}\geq 0$ for $j\neq i$ and $\pi_{ii}\leq 0$, and that $$\begin{aligned}
\renewcommand{\arraystretch}{2}
\setlength{\arraycolsep}{1.5pt}
\begin{array}{rll}
\displaystyle{\sum_{j=1}^{M}\pi_{ij}\int_{t-h_{j}}^{t}\xi^{\intercal}(s)Q_{j}\xi(s)ds} & = \displaystyle{\sum_{j\neq i}^{M}\pi_{ij}\int_{t-h_{j}}^{t}\xi^{\intercal}(s)Q_{j}\xi(s)ds} & +\; \displaystyle{\pi_{ii}\int_{t-h_{i}}^{t}\xi^{\intercal}(s)Q_{i}\xi(s)ds} \\
& \leq \displaystyle{\int_{t-h_{M}}^{t}\xi^{\intercal}(s)\Bigg(\sum_{j\neq i}^{M}\pi_{ij}Q_{j}\Bigg)\xi(s)ds} & +\; \displaystyle{\pi_{ii}\int_{t-h_{1}}^{t}\xi^{\intercal}(s)Q_{i}\xi(s)ds} \\
& = \displaystyle{\int_{t-h_{M}}^{t}\xi^{\intercal}(s)\Big[\mathcal{Q}-\pi_{ii}Q_{i}\Big]\xi(s)ds} & +\; \displaystyle{\pi_{ii}\int_{t-h_{1}}^{t}\xi^{\intercal}(s)Q_{i}\xi(s)ds} \\
& \leq \displaystyle{\int_{t-h_{M}}^{t}\xi^{\intercal}(s)\mathcal{Q}\xi(s)ds} & +\; \displaystyle{\eta\int_{t-h_{M}}^{t-h_{1}}\xi^{\intercal}(s)Q_{\kappa}\xi(s)ds}
\end{array} \label{eq:UppBound1}\end{aligned}$$ where $\sum_{j=1}^{M}\pi_{ij}Q_{j}\leq\mathcal{Q}$. A similar upper bound is readily established for $\sum_{j=1}^{M}\pi_{ij}\int_{t-h_{j+1}}^{t}\xi^{\intercal}(s)R_{j}\xi(s)ds$. We also bound
$$\begin{aligned}
\renewcommand{\arraystretch}{2}
\setlength{\arraycolsep}{1.5pt}
\begin{array}{rll}
\displaystyle{\sum_{j=1}^{M}\pi_{ij}h_{j} \int_{-h_{j}}^{0} \int_{t+s}^{t} \dot{\xi}^{\intercal}(\theta) S\dot{\xi}(\theta)d\theta ds} & =\displaystyle{\sum_{j\neq i}^{M}\pi_{ij}h_{j}\int_{-h_{j}}^{0} \int_{t+s}^{t} \dot{\xi}^{\intercal}(\theta) S \dot{\xi}(\theta)d\theta ds} & +\;\displaystyle{\pi_{ii}h_{i}\int_{-h_{i}}^{0} \int_{t+s}^{t} \dot{\xi}^{\intercal}(\theta) S \dot{\xi}(\theta)d\theta ds} \\
& \leq \displaystyle{-\pi_{ii}h_{M}\int_{-h_{M}}^{0} \int_{t+s}^{t} \dot{\xi}^{\intercal}(\theta) S \dot{\xi}(\theta)d\theta ds} & +\;\displaystyle{\pi_{ii}h_{1}\int_{-h_{1}}^{0} \int_{t+s}^{t} \dot{\xi}^{\intercal}(\theta) S \dot{\xi}(\theta)d\theta ds} \\
& = \displaystyle{-\pi_{ii}h_{M}\int_{-h_{M}}^{-h_{1}} \int_{t+s}^{t} \dot{\xi}^{\intercal}(\theta) S \dot{\xi}(\theta)d\theta ds} & -\;\displaystyle{\pi_{ii}\big(h_{M}-h_{1}\big)\int_{-h_{1}}^{0} \int_{t+s}^{t} \dot{\xi}^{\intercal}(\theta) S \dot{\xi}(\theta)d\theta ds} \\
& \leq\displaystyle{\eta h_{M}\int_{-h_{M}}^{-h_{1}} \int_{t+s}^{t} \dot{\xi}^{\intercal}(\theta) S \dot{\xi}(\theta)d\theta ds} &+\;\displaystyle{\eta\big(h_{M}-h_{1}\big)\int_{-h_{1}}^{0} \int_{t+s}^{t} \dot{\xi}^{\intercal}(\theta) S \dot{\xi}(\theta)d\theta ds}
\end{array} \label{eq:UppBound2}\end{aligned}$$
and $$\begin{aligned}
\sum_{j=1}^{M}\pi_{ij}(h_{j+1}-h_{j})\int_{-h_{j+1}}^{-h_{j}} \int_{t+s}^{t} \dot{\xi}^{\intercal}(\theta) T \dot{\xi}(\theta)d\theta ds \leq &\; \eta\delta_{\max}\int_{-h_{M+1}}^{-h_{1}}\int_{t+s}^{t} \dot{\xi}^{\intercal}(\theta) T \dot{\xi}(\theta)d\theta ds \;.
\label{eq:UppBound3}\end{aligned}$$ Now, substituting – into – , we get the following inequality $$\begin{aligned}
\mathcal{A}V(\xi_{t},i) \leq &\; \xi^{\intercal}(t)\Bigg[ A_{\alpha}^{\intercal}P_{i}+P_{i}A_{\alpha}+\sum_{j=1}^{2}\pi_{ij}P_{j} + \big(Q_{i} + h_{2}\mathcal{Q} + \delta_{1}Q_{\kappa}\big) + \big(R_{i} + h_{3}\mathcal{R} + \delta_{2}R_{\kappa}\big) - S + A_{\alpha}^{\intercal}\big(\epsilon_{1,i}S+\epsilon_{2,i}T\big)A_{\alpha} \Bigg]\xi(t) \nonumber\\
&\; + 2\xi^{\intercal}(t)\Bigg[ P_{i} + A_{\alpha}^{\intercal}\big(\epsilon_{1,i}S+\epsilon_{2,i}T\big)\Bigg]\sum_{j=1}^{2}(\lambda_{j}(t)A_{\alpha_{ij}})\xi\big(t-\tau_{i}(t)\big) - \xi^{\intercal}(t-h_{i})\big(Q_{i}+S+T\big)\xi(t-h_{i}) +2\xi^{\intercal}(t)S\xi\big(t-h_{i}\big) \nonumber\\
&\; - \xi^{\intercal}(t-h_{i+1})\big(R_{i}+T\big)\xi(t-h_{i+1}) +\xi^{\intercal}\big(t-\tau_{i}(t)\big)\big(-2T+Z+Z^{\intercal}\big)\xi\big(t-\tau_{i}(t)\big) + 2\xi^{\intercal}\big(t-\tau_{i}(t)\big)\big(T-Z^{\intercal}\big)\xi\big(t-h_{i}\big) \nonumber\\
&\; + 2\xi^{\intercal}\big(t-\tau_{i}(t)\big)\big(T-Z\big)\xi\big(t-h_{i+1}\big) +2\xi^{\intercal}\big(t-h_{i}\big)Z\xi\big(t-h_{i+1}\big)\nonumber\\
&\; + \xi^{\intercal}(t-\tau_{i}(t))\sum_{j=1}^{2}(\lambda_{j}(t)A_{\alpha_{ij}})^{\intercal}\big(\epsilon_{1,i}S+\epsilon_{2,i}T\big)\sum_{j=1}^{2}(\lambda_{j}(t)A_{\alpha_{ij}})\xi(t-\tau_{i}(t)) \nonumber\\
= &\; \psi^{\intercal}(t)\widetilde{\Gamma}_{i}(t)\psi(t) \;,\end{aligned}$$ where $\psi(t)=\mbox{col}\big\{ \xi(t),\xi(t-\tau_{i}(t)),\xi(t-h_{i}),\xi(t-h_{i+1}) \big\}$. Note that $\mathcal{A}V(\xi_{t},r_{t})$ is thus bounded by a quadratic form in $\psi(t)$: $$\mathcal{A}V(\xi_{t},i) \leq \psi^{\intercal}(t)\widetilde{\Gamma}_{i}(t)\psi(t) \;,$$ where $$\widetilde{\Gamma}_{i}(t) = \lambda_{1}(t)\widetilde{\Gamma}_{i1} + \lambda_{2}(t)\widetilde{\Gamma}_{i2}$$ for all $i\in\{1,2\}$ and $$\renewcommand{\arraystretch}{1.2}
\setlength{\arraycolsep}{1.25pt}
\widetilde{\Gamma}_{ij} =
\left[\begin{array}{c:c:cc}
\Phi_{i} & P_{i}A_{\alpha_{ij}} & S & 0 \\ \hdashline
\star & -2T+Z+Z^{\intercal} & T-Z^{\intercal} & T-Z \\ \hdashline
\star & \star & -(Q_{i}+S+T) & Z \\
\star & \star & \star & -(R_{i}+T)
\end{array}\right] + \phi_{ij}^{\intercal}\Big( \epsilon_{3,i}S + \epsilon_{4,i}T \Big)\phi_{ij} \;,$$ where $\phi_{ij}\triangleq\big[~A~A_{\alpha_{ij}}~0_{n\times 2n}~\big]$ for all $i,j\in\{1,2\}$. As in Section 2, applying the Schur complement lemma to $\widetilde{\Gamma}_{ij}$ to form $\Gamma_{ij}$, we arrive at the equivalent condition $$\Gamma_{i}(t)=\lambda_{1}(t)\Gamma_{i1} + \lambda_{2}(t)\Gamma_{i2}< 0\;, \quad \forall i\in\{1,2\}\,.$$ This condition is satisfied for all admissible $\lambda_{1}(t),\lambda_{2}(t)$ if $\Gamma_{i1}$ and $\Gamma_{i2}$ are both negative definite. Hence, if $\Gamma_{ij}<0$ holds for all $i,j\in\{1,2\}$, the dynamics are exponentially mean-square stable with decay rate $\alpha$. This concludes the proof. $\square$
In order to investigate the stability of the Markovian jump system with $M$ modes, we must examine $2M$ LMIs (*e.g.*, ) of dimension $6n\times 6n$ and, additionally, $2M+1$ small-size LMIs (*e.g.*, ). In total, we use $3M+5$ matrix variables, each with $n(n+1)/2$ decision variables.
State-Feedback Controller Design {#subsec:MJSControllerDesign}
--------------------------------
Hereafter we focus on extending the analysis conditions to state-feedback synthesis for exponential mean-square stability of the Markovian jump linear system introduced in Section \[subsec:StochSystemModel\]. To this end, we consider a linear time-invariant plant $$\dot{x}(t) = Ax(t) + Bu(t) \label{eq:Closed_Loop_System}$$ where $x(t)\in\mathbb{R}^{n}$ is the state and $u(t)\in\mathbb{R}^{m}$ is the control input, chosen as a mode-dependent linear feedback of the delayed state with the control law $$u(t) = K_{i}x(t-\tau_{i}(t)) \label{eq:StochControl}$$ when $r(t) = i$ (and hence $\tau_{i}(t)\in[h_{i},h_{i+1})$), $i\in\mathcal{M}$. The design problem is to determine a set of state-feedback gain matrices $K_{i}$ that guarantees exponential mean-square stability of the closed loop for the given transition rates $\Pi = [\pi_{ij}]_{i,j=1,\cdots,M}$. This problem is closely related to the stability analysis problem discussed in Section \[subsec:StocStability\] because the closed-loop system can again be represented as a Markovian jump linear system of the form of with $A_{r(t)}=BK_{r(t)}$. We have the following result.
\[thm:MJLS\_Stabilization\] For a given decay rate $\alpha >0$, there exists a state-feedback control of the form that stabilizes system for randomly varying delays $\tau_{i}\in[h_{i},h_{i+1}),~\forall i\in\{1,2\}$ in the exponential mean-square sense if there exist real constant matrices $\tilde{P}_{i},\tilde{Q}_{i},\tilde{R}_{i}\in\mathbb{S}_{++}^{n},~\forall i\in\{1,2\}$, $\tilde{S},\tilde{T},\tilde{\mathcal{Q}},\tilde{\mathcal{R}}\in\mathbb{S}_{++}^{n}$, $\tilde{X},\tilde{Z}\in\mathbb{R}^{n\times n}$ and $\tilde{Y}_{i}\in\mathbb{R}^{m\times n},~\forall i\in\{1,2\}$, such that the following LMIs hold for $i,j\in\{1,2\}$ $$\begin{aligned}
\begin{bmatrix}\tilde{T} & \tilde{Z} \\ \tilde{Z}^{\intercal} & \tilde{T}\end{bmatrix} \geq 0\;, \qquad \sum_{j=1}^{2}\pi_{ij}\tilde{Q}_{j}\leq\tilde{\mathcal{Q}}\;, \qquad \sum_{j=1}^{2}\pi_{ij}\tilde{R}_{j}\leq\tilde{\mathcal{R}}\;, \label{eq:MJLS_Stabilization_add_LMI}\end{aligned}$$ and
$$\begin{aligned}
\renewcommand{\arraystretch}{1.2}
\setlength{\arraycolsep}{1.25pt}
\left[\begin{array}{c:c:c:cc:c:cc}
-\tilde{X}^{\mathcal{S}} & A_{\alpha}\tilde{X}+\tilde{P}_{i} & \varrho_{ij}B\tilde{Y}_{i} & 0 & 0 & \tilde{X} & \sqrt{\epsilon_{1,i}}\tilde{S} & \sqrt{\epsilon_{2,i}}\tilde{T} \\ \hdashline
\star & \tilde{\beth}_{i} & 0 & \tilde{S} & 0 & 0 & 0 & 0 \\ \hdashline
\star & \star & -\tilde{\Upsilon}^{\mathcal{S}} & \tilde{\Upsilon}^{\intercal} & \tilde{\Upsilon} & 0 & 0 & 0 \\ \hdashline
\star & \star & \star & -(\tilde{Q}_{i}+\tilde{S}+\tilde{T}) & \tilde{Z} & 0 & 0 & 0 \\
\star & \star & \star & \star & -(\tilde{R}_{i}+\tilde{T}) & 0 & 0 & 0 \\ \hdashline
\star & \star & \star & \star & \star & -\tilde{P}_{i} & -\sqrt{\epsilon_{1,i}}\tilde{S} & -\sqrt{\epsilon_{2,i}}\tilde{T} \\ \hdashline
\star & \star & \star & \star & \star & \star & -\tilde{S} & 0 \\
\star & \star & \star & \star & \star & \star & \star & -\tilde{T}
\end{array}\right] < 0 \;,
\label{eq:MJLS_Stabilization_LMI}\end{aligned}$$
where $\tilde{\beth}_{i}=\sum_{j=1}^{2} \pi_{ij}\tilde{P}_{j}+\big(\tilde{Q}_{i} + h_{2}\tilde{\mathcal{Q}} + \delta_{1}\tilde{Q}_{\kappa}\big) + \big(\tilde{R}_{i} + h_{3}\tilde{\mathcal{R}} + \delta_{2}\tilde{R}_{\kappa}\big)-\tilde{P}_{i}-\tilde{S}$, $\epsilon_{1,i}\triangleq h_{i}^{2} + \eta\frac{h_{2}^{3}-h_{1}^{3}}{2}$ and $\epsilon_{2,i}\triangleq \delta_{i}^{2} + \eta\delta_{\max}\frac{h_{3}^{2}-h_{1}^{2}}{2}$ with $\eta\triangleq \max\vert\pi_{ii}\vert$, $\kappa\triangleq\mathrm{argmax}\vert\pi_{ii}\vert$ and $\delta_{\max}=\max\vert h_{i+1}-h_{i} \vert,~\forall i\in\{1,2\}$, and $\tilde{\Upsilon}=\tilde{T}-\tilde{Z}$. A stabilizing control law is given by with gain $K_{i}=\tilde{Y}_{i}\tilde{X}^{-1}$ for all $i\in\{1,2\}$.
**Proof.** The proof of Theorem \[thm:MJLS\_Stabilization\] is similar to that of Theorem \[thm:Switched\_System\_Stabilization\]; only an outline is included here for completeness. Again, the structure of is not suited to controller design because of the existence of the multiple product terms $A_{\alpha}S$, $A_{\alpha}T$, $A_{\alpha_{ij}}S$ and $A_{\alpha_{ij}}T$, which prevent finding a linearizing change of variables even after congruence transformations. As a result, a relaxation approach is applied (as in § \[sec:SwitchedSyssSynthesis\]) to remove the multiple product terms that obstruct the change of variables. We denote by $\Psi_{ij}$ and prove the condition $\Psi_{ij}<0$. Similarly to $\Theta_{ij}$ (but with $X$ instead of $X_{i}$), $\Psi_{ij}$ can be decomposed as follows: $$\begin{aligned}
\Psi_{ij}=\Psi_{ij}\vert_{X=0}+U_{i}^{\intercal}XV_{i}+V_{i}^{\intercal}X^{\intercal}U_{i}< 0, \quad \forall i\in\{1,2\}\end{aligned}$$ where $U_{i}=\big[-I_{n}~A_{\alpha}~A_{\alpha_{ij}}~0_{n\times 2n}~I_{n}~0_{n\times 2n}\big]$ and $V_{i}=\big[I_{n}~0_{n\times 7n}\big]$. Then invoking the projection lemma [@GaA:94], the feasibility of $\Psi_{ij}< 0$ implies the feasibility of the LMIs $$\begin{aligned}
\mathcal{N}_{U_{i}}^{T}\Psi_{ij}|_{X=0}\mathcal{N}_{U_{i}}< 0 \;, \\
\mathcal{N}_{V_{i}}^{T}\Psi_{ij}|_{X=0}\mathcal{N}_{V_{i}}< 0 \;,\end{aligned}$$ where $\mathcal{N}_{U_{i}}$ and $\mathcal{N}_{V_{i}}$ are bases of the null spaces of $U_{i}$ and $V_{i}$, respectively. Subsequently, the inequality is obtained as $$\renewcommand{\arraystretch}{1.2}
\setlength{\arraycolsep}{1.25pt}
\left[\begin{array}{c:c:c:cc:c:cc}
-X^{\mathcal{S}} & X^{\intercal}A_{\alpha}+P_{i} &X^{\intercal}A_{\alpha_{ij}} & 0 & 0 & X^{\intercal} & \sqrt{\epsilon_{1,i}}S & \sqrt{\epsilon_{2,i}}T \\ \hdashline
\star & \beth_{i} & 0 & S & 0 & 0 & 0 & 0 \\ \hdashline
\star & \star & -\Upsilon^{\mathcal{S}} & \Upsilon^{\intercal} & \Upsilon & 0 & 0 & 0 \\ \hdashline
\star & \star & \star & -(Q_{i}+S+T) & Z & 0 & 0 & 0 \\
\star & \star & \star & \star & -(R_{i}+T) & 0 & 0 & 0 \\ \hdashline
\star & \star & \star & \star & \star & -P_{i} & -\sqrt{\epsilon_{1,i}}S & -\sqrt{\epsilon_{2,i}}T\\ \hdashline
\star & \star & \star & \star & \star & \star & -S & 0 \\
\star & \star & \star & \star & \star & \star & \star & -T
\end{array}\right] < 0 \;.
\label{eq:MJLS_Application_of_Projection_Lemma_LMI}$$
We substitute the closed-loop system into the inequality and restrict $X\in\mathbb{R}^{n\times n}$ to be a single (mode-independent) constant matrix. We then perform a congruence transformation with respect to the matrix $I_{8n}\otimes X^{-1}$ and apply the following linearizing change of variables $\tilde{X}\triangleq X^{-1},~\tilde{P}_{i}\triangleq \tilde{X}^{\intercal}P_{i}\tilde{X},~\tilde{Q}_{i}\triangleq \tilde{X}^{\intercal}Q_{i}\tilde{X},~\tilde{\mathcal{Q}}\triangleq \tilde{X}^{\intercal}\mathcal{Q}\tilde{X},~\tilde{R}_{i}\triangleq \tilde{X}^{\intercal}R_{i}\tilde{X},~\tilde{\mathcal{R}}\triangleq \tilde{X}^{\intercal}\mathcal{R}\tilde{X},~\tilde{S}\triangleq \tilde{X}^{\intercal}S\tilde{X},~\tilde{T}\triangleq \tilde{X}^{\intercal}T\tilde{X},~\tilde{\Xi}_{i1}\triangleq\tilde{X}^{\intercal}\Xi_{i1}\tilde{X},~\tilde{\Xi}_{i2}\triangleq\tilde{X}^{\intercal} \Xi_{i2}\tilde{X},~\tilde{\Xi}_{i3}\triangleq \tilde{X}^{\intercal}\Xi_{i3}\tilde{X},~\tilde{Z}\triangleq \tilde{X}^{\intercal}Z\tilde{X}$ and $\tilde{Y}_{i}=K_{i}\tilde{X},~\forall i\in\{1,2\}$ in ; LMI is then derived. $\square$
To design a set of stabilizing controllers for a Markovian jump system with $M$ modes (as discussed in § \[subsec:MJSControllerDesign\]), one must check $2M$ LMIs (*e.g.*, ) of dimension $8n\times 8n$ together with $2M+1$ small-size LMIs (*e.g.*, ). In total, these LMIs involve $3M+6$ matrix variables, each with $n(n+1)/2$ decision variables.
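To give a concrete sense of how such conditions are posed in software, the following minimal sketch (not part of the paper's toolchain) illustrates how the auxiliary coupling LMIs of Theorem \[thm:MJLS\_Stabilization\] could be declared as a semidefinite feasibility problem in Python with CVXPY. The state dimension $n=2$, the transition-rate matrix, and the omission of the full $8n\times 8n$ synthesis LMI are illustrative assumptions.

```python
import cvxpy as cp
import numpy as np

n, M = 2, 2
Pi = np.array([[-3.5, 3.5],
               [ 0.5, -0.5]])          # assumed transition-rate matrix, for illustration only

T  = cp.Variable((n, n), symmetric=True)
Z  = cp.Variable((n, n))
Qt = [cp.Variable((n, n), symmetric=True) for _ in range(M)]   # the matrices Q~_i
Qc = cp.Variable((n, n), symmetric=True)                       # the matrix Qcal~

constraints = [T >> 0, Qc >> 0] + [Q >> 0 for Q in Qt]
# reciprocally convex coupling condition [[T, Z], [Z', T]] >= 0
constraints += [cp.bmat([[T, Z], [Z.T, T]]) >> 0]
# sum_j pi_ij Q_j <= Qcal for every mode i
for i in range(M):
    constraints += [sum(Pi[i, j] * Qt[j] for j in range(M)) << Qc]

prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)
print(prob.status)
```

The full synthesis LMI would be added to `constraints` in the same way; the sketch only shows the structure of the small coupling conditions.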
Numerical Examples {#sec:design_example}
==================
We are now ready to demonstrate the proposed technique on numerical examples. Our first example, a simple DC-motor model, is included to demonstrate the flexibility of our control structure compared to a single robust controller. The second example is taken from wide-area power systems, and demonstrates that the numerical techniques scale to non-trivial system dimensions. Finally, we return to the small-scale example to illustrate the analysis procedures for the random Markovian delay model.
![The left figures illustrate a sample evolution of the time delay and the delay bounds that define the controller modes; the associated mode evolution is shown in the bottom left figure. The right figure shows a representative state trajectory of the closed-loop system under supervisory control (dashed line) and mode-independent state-feedback (solid). Both simulations are performed from the same initial value.[]{data-label="fig:DC_Motor_Example"}](elsarticle-template-1-num-figure1)
Small-scale Example: DC Motor
-----------------------------
Consider the following linear system $$\begin{aligned}
\dot{x}(t) = \begin{bmatrix} 0 & 1 \\ 0 & -10 \end{bmatrix}x(t) + \begin{bmatrix} 0 \\ 0.024 \end{bmatrix}u(t) \label{eq:DC_Motor}\end{aligned}$$ with a time-varying communication delay between sensor and controller that behaves as shown in Figure \[fig:DC\_Motor\_Example\]. The supervisor generates the switching signals shown in Figure \[fig:DC\_Motor\_Example\] and triggers the most appropriate controller according to $$\begin{aligned}
u(t) = \begin{cases}
K_{L}x\big(t-\tau_{L}(t)\big) & \mbox{if}~\tau_{L}(t)\in[20,70)\,\mathrm{ ms}\;, \\
K_{M}x\big(t-\tau_{M}(t)\big) & \mbox{if}~\tau_{M}(t)\in[70,200)\,\mathrm{ ms}\;, \\
K_{H}x\big(t-\tau_{H}(t)\big) & \mbox{if}~\tau_{H}(t)\in[200,300)\,\mathrm{ ms}\;.
\end{cases}\end{aligned}$$
By solving the optimization problem for $\mu=1.4$, we find the lower bound on the average dwell-time $\tau_{a}^{\circ}=0.12\mathrm{s}$ and the corresponding convergence rate $\alpha_{\circ}=2.78$ guaranteed for the gains $$\begin{aligned}
K_{L} =&\; \begin{bmatrix} -1421.0 & -138.9 \end{bmatrix}\;, \\
K_{M} =&\; \begin{bmatrix} -1035.9 & -101.5 \end{bmatrix}\;, \\
K_{H} =&\; \begin{bmatrix} -757.09 & -72.71 \end{bmatrix}\;.\end{aligned}$$ Using the same class of Lyapunov-Krasovskii functionals, we design a classical state-feedback controller to compare the non-switching and switching control performance. Specifically, we use the Lyapunov-Krasovskii functional with $Q_{k}=0$, $R_{k}=0$, $S_{k}=0$, $T_{k}=0,~\forall k>1$ and $\tau(t)\in[20,300)$ ms, and find $\alpha=1.72$ for the state feedback gain $$K = \begin{bmatrix} -765.74 & -75.74 \end{bmatrix}\;.$$ We observe that the switching controller performs better than the non-switching one, $\alpha_{\circ}=2.78>\alpha=1.72$, and that the controller in the low-delay mode can be made much more aggressive when we use a mode-dependent controller. The improved convergence rate is confirmed by the simulations shown in Figure \[fig:DC\_Motor\_Example\].
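As a rough illustration of how the closed loop behaves under delayed state feedback, the sketch below integrates the DC-motor model with the mode-independent gain $K$ found above, using a forward-Euler scheme and a delay buffer. The constant 100 ms delay, the initial condition and the step size are assumptions made for illustration; the simulations reported in Figure \[fig:DC\_Motor\_Example\] use the time-varying delay trace shown there.

```python
import numpy as np

# DC-motor model (eq:DC_Motor) and the mode-independent gain found above
A = np.array([[0.0, 1.0], [0.0, -10.0]])
B = np.array([[0.0], [0.024]])
K = np.array([[-765.74, -75.74]])

dt, t_end = 1e-3, 3.0          # Euler step of 1 ms, 3 s horizon (assumed)
tau = 0.1                      # constant 100 ms delay, assumed for illustration
d = int(round(tau / dt))       # delay expressed in samples

steps = int(t_end / dt)
x = np.zeros((steps + 1, 2))
x[0] = [1.0, 0.0]              # assumed initial condition
buf = [x[0].copy()] * (d + 1)  # buffer of past states; buf[0] approximates x(t - tau)

for k in range(steps):
    u = K @ buf[0]                      # delayed state feedback u(t) = K x(t - tau)
    dx = A @ x[k] + (B @ u).ravel()
    x[k + 1] = x[k] + dt * dx
    buf.pop(0)
    buf.append(x[k + 1].copy())

print("||x(t_end)|| =", np.linalg.norm(x[-1]))
```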
For different values of $h_{2}$ and $h_{3}$, which define the boundaries between the low-, medium- and high-delay modes, we compute the exponential decay rate $\alpha$ of the supervisory controller, and then try to find the largest $h_{\max}$ for which a single mode-independent controller can guarantee the same decay rate. The results are depicted in Table \[tab:table\_example\]. As can be seen, increasing $h_{2}$ and decreasing $h_{3}$ allows us to guarantee an improved decay rate for the switching controller. To guarantee the same decay rate under a mode-independent controller, the maximum delay must be reduced, and sometimes significantly so.
\begin{table}[t]
\centering
\begin{tabular}{ccccccc}
\hline
 & \multicolumn{4}{c}{Switching controller} & \multicolumn{2}{c}{Mode-independent controller} \\
$\alpha$ & $h_{1}$ & $h_{2}$ & $h_{3}$ & $h_{4}$ & $h_{\min}$ & $h_{\max}$ \\
\hline
3.00 & 20 & 100 & 200 & 300 & 20 & 158 \\
2.42 & 20 & 100 & 250 & 300 & 20 & 204 \\
2.78 & 20 & 70  & 200 & 300 & 20 & 173 \\
2.27 & 20 & 70  & 250 & 300 & 20 & 219 \\
\hline
\end{tabular}
\caption{Guaranteed decay rate $\alpha$ of the switching controller for different delay bounds $h_{1},\dots,h_{4}$ (in ms), and the delay range $[h_{\min},h_{\max})$ (in ms) for which a single mode-independent controller guarantees the same decay rate.[]{data-label="tab:table_example"}}
\end{table}
Large Scale Example: Wide-Area Power Networks
---------------------------------------------
![IEEE nine-bus power system.[]{data-label="fig:ninebus"}](nine_bus_system "fig:"){width="0.4\hsize"}\
To demonstrate the applicability of our methods to systems of higher dimension, we consider the IEEE nine-bus system [@AnF:03] shown in Figure \[fig:ninebus\]. We adopt a second-order (swing) model with phase and frequency $(\delta_{i},\omega_{i})$ for all generators and use the Power System Analysis Toolbox [@Mil:10] to obtain the following numerical model $$\begin{aligned}
\renewcommand{\arraystretch}{1.2}
\frac{d}{dt}
\left[\begin{array}{c}
\delta_{1} \\ \omega_{1} \\ \delta_{2} \\ \omega_{2} \\ \delta_{3} \\ \omega_{3}
\end{array}\right] =
\left[\begin{array}{cccccc}
0 & 1 & 0 & 0 & 0 & 0 \\
-0.0432 & -0.0702 & 0.0209 & 0 & 0.0223 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 \\
0.1248 & 0 & -0.2372 & -0.2594 & 0.1124 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 \\
0.3761 & 0 & 0.3554 & 0 & -0.7315 & -0.5515
\end{array}\right]
\left[\begin{array}{c}
\delta_{1} \\ \omega_{1} \\ \delta_{2} \\ \omega_{2} \\ \delta_{3} \\ \omega_{3}
\end{array}\right] +
\left[\begin{array}{c}
0 \\ 0.1471 \\ 0 \\ 0 \\ 0 \\ 0
\end{array}\right]u(t) \;. \label{eq:ninebus}\end{aligned}$$
We assume that the phase and frequency of each bus can be measured and communicated to a central controller. In wide-area power systems, the communication delays vary depending on communication technologies, protocols and network load. In this example, we assume that the delay varies between 20 and 110 ms (see Figure \[fig:Delay\_Evolution\]) and that mode changes are such that the average dwell-time is guaranteed to be at least 0.35 seconds.
![The left figures show a sample delay evolution and the delay bounds that define the supervisory control modes (top) along with the associated mode evolution (bottom). The right figure illustrates a representative state trajectory of the closed-loop system under supervisory control (dashed line) and a single mode-independent state-feedback (solid). Both simulations are performed from the same initial value.[]{data-label="fig:Delay_Evolution"}](elsarticle-template-1-num-figure2)
To begin, the supervisory controller is designed for the delay intervals \[20, 50) and \[50,110) ms. We design supervisory control gains targeting a decay rate of $\alpha=0.9$, and use $\mu=1.4$ to guarantee $\tau_a=0.3739$. As a result, we compute the following control gain matrices $$\begin{aligned}
K_{L} = \begin{bmatrix} -181.60 & -53.94 & -83.97 & -2090.02 & -647.79 & -85.90 \end{bmatrix}\;, \\
K_{H} = \begin{bmatrix} -159.82 & -49.22 & -48.09 & -1714.38 & -547.06 & -94.92 \end{bmatrix}\;.\end{aligned}$$ Solving the LMIs for this larger system takes a few hours, but the computation appears to be numerically stable.
It turns out that an exponential decay rate of $\alpha = 0.9$ can also be achieved by the mode-independent feedback gain $$K =
\begin{bmatrix}
-153.71 & -48.22 & -41.69 & -1605.33 & -514.67 & -96.07
\end{bmatrix} \;,$$ which is less aggressive than the controller used in the low-delay mode of the switching controller. Simulations are carried out for the delay trace and the corresponding switching signals shown in the left panels of Figure \[fig:Delay\_Evolution\]; as depicted in the right panel of the same figure, the switching controller damps the oscillations between generators $G_2$ and $G_3$ better than its mode-independent counterpart.
Markovian Jump Linear System Formulation
----------------------------------------
Finally, we return to the DC-motor example to illustrate the applicability of the analysis tools developed in Section \[sec:StocSwitchedSys\] for random delays. We consider the system with low traffic delays (*i.e.*, $\tau_{L}\in[20,70)$ms) and high traffic delays (*i.e.*, $\tau_{H}\in[70,300)$ms). The switching between these two modes is described by the following transition probability rate matrix: $$\Pi =
\begin{bmatrix}
-p & p \\ q & -q
\end{bmatrix} \;,$$ whose invariant distribution is $\pi_{1}^{\infty}=q/(p+q)$ and $\pi_{2}^{\infty}=p/(p+q)$. We vary the transition rates $p$ and $q$, compute mode-dependent controllers and the corresponding rate of exponential mean-square stability. The results are summarized in Table \[tab:Comparision\_of\_MJLS\]. As $\max\big\{p,q\big\}$ decreases, the decay rate increases and the controllers become more aggressive. Furthermore, when $\pi_{1}^{\infty}$ decreases, we also observe a slight decrease in the decay rate.
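The invariant distribution quoted above is easy to check by simulating the mode process $r(t)$ directly: with generator $\Pi$, the chain stays in mode 1 for an exponential time of rate $p$ and in mode 2 for an exponential time of rate $q$. The sketch below is a minimal illustration, assuming the pair $p=3.5$, $q=0.5$ from the first row of Table \[tab:Comparision\_of\_MJLS\].

```python
import numpy as np

rng = np.random.default_rng(0)
p, q = 3.5, 0.5                      # assumed (p, q) pair, taken from the table
rates = {0: p, 1: q}                 # rate of leaving each mode (0 = mode 1, 1 = mode 2)
T_sim, t, mode = 1e4, 0.0, 0
time_in_mode = np.zeros(2)

while t < T_sim:
    dwell = rng.exponential(1.0 / rates[mode])   # exponential holding time in current mode
    time_in_mode[mode] += dwell
    t += dwell
    mode = 1 - mode                              # two-mode chain: always jump to the other mode

print("empirical :", time_in_mode / time_in_mode.sum())
print("theory    :", np.array([q, p]) / (p + q))
```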
\begin{table}[t]
\centering
\begin{tabular}{ccccccc}
\hline
$\alpha$ & $\pi_{1}^{\infty}$ (\%) & $\pi_{2}^{\infty}$ (\%) & $p$ & $q$ & $K_{L}$ & $K_{H}$ \\
\hline
1.07 & 12.50 & 87.50 & 3.5 & 0.5 & $\big[-651.93~-63.80\big]$ & $\big[-542.15~-53.04\big]$ \\
1.08 & 36.36 & 63.64 & 3.5 & 2.0 & $\big[-652.71~-63.87\big]$ & $\big[-537.72~-52.60\big]$ \\
1.09 & 50.00 & 50.00 & 3.5 & 3.5 & $\big[-654.28~-64.50\big]$ & $\big[-536.68~-52.89\big]$ \\
1.10 & 63.64 & 36.36 & 2.0 & 3.5 & $\big[-652.41~-65.18\big]$ & $\big[-539.49~-53.90\big]$ \\
1.10 & 87.50 & 12.50 & 0.5 & 3.5 & $\big[-656.22~-64.04\big]$ & $\big[-503.26~-49.07\big]$ \\
1.23 & 16.67 & 83.33 & 2.5 & 0.5 & $\big[-725.08~-72.40\big]$ & $\big[-592.00~-59.11\big]$ \\
1.25 & 37.50 & 62.50 & 2.5 & 1.5 & $\big[-726.46~-72.12\big]$ & $\big[-586.40~-58.21\big]$ \\
1.25 & 50.00 & 50.00 & 2.5 & 2.5 & $\big[-728.26~-72.33\big]$ & $\big[-583.34~-57.93\big]$ \\
1.25 & 62.50 & 37.50 & 1.5 & 2.5 & $\big[-721.48~-71.11\big]$ & $\big[-579.53~-57.09\big]$ \\
1.26 & 83.33 & 16.67 & 0.5 & 2.5 & $\big[-715.62~-71.20\big]$ & $\big[-581.17~-57.81\big]$ \\
1.43 & 25.00 & 75.00 & 1.5 & 0.5 & $\big[-804.16~-80.05\big]$ & $\big[-645.74~-64.27\big]$ \\
1.46 & 75.00 & 25.00 & 0.5 & 1.5 & $\big[-798.94~-79.58\big]$ & $\big[-639.16~-63.66\big]$ \\
1.45 & 50.00 & 50.00 & 1.5 & 1.5 & $\big[-810.53~-80.11\big]$ & $\big[-637.13~-62.95\big]$ \\
\hline
\end{tabular}
\caption{Exponential mean-square decay rate $\alpha$ and mode-dependent gains $K_{L}$, $K_{H}$ for different transition rates $p$ and $q$; $\pi_{1}^{\infty}$ and $\pi_{2}^{\infty}$ denote the invariant distribution of the mode process.[]{data-label="tab:Comparision_of_MJLS"}}
\end{table}
Conclusion {#sec:conc}
==========
This paper has been dedicated to supervisory control of networked control systems with time-varying delays. The main contribution is a stability analysis and a state-feedback synthesis technique for a supervisory control system that switches among the controllers of a multi-controller unit based on the current network state. A novel class of Lyapunov-Krasovskii functionals was introduced that, somewhat remarkably, allows both the analysis and the state-feedback synthesis problems to be solved via convex optimization over linear matrix inequalities. In addition, we investigated the corresponding problem for a class of stochastic systems with interval-bounded time-varying delay. Sufficient conditions were established without ignoring any terms in the weak infinitesimal operator of the Lyapunov-Krasovskii functional, by considering the relationship among the time-varying delay, its upper bound, and their difference. Finally, examples were given to show the effectiveness of the proposed analysis and synthesis techniques.
|
---
abstract:
-
-
author:
- Wesam Elshamy
bibliography:
- 'bibliography.bib'
title: 'Continuous-time Infinite Dynamic Topic Models'
---
|
---
abstract: 'We consider a one dimensional Lévy bridge $x_B$ of length $n$ and index $0 < \alpha < 2$, [*i.e.*]{} a Lévy random walk constrained to start and end at the origin after $n$ time steps, $x_B(0) = x_B(n)=0$. We compute the distribution $P_B(A,n)$ of the area $A = \sum_{m=1}^n x_B(m)$ under such a Lévy bridge and show that, for large $n$, it has the scaling form $P_B(A,n) \sim n^{-1-1/\alpha} F_\alpha(A/n^{1+1/\alpha})$, with the asymptotic behavior $F_\alpha(Y) \sim Y^{-2(1+\alpha)}$ for large $Y$. For $\alpha=1$, we obtain an explicit expression of $F_1(Y)$ in terms of elementary functions. We also compute the average profile $\langle \tilde x_B (m) \rangle$ at time $m$ of a Lévy bridge with fixed area $A$. For large $n$ and large $m$ and $A$, one finds the scaling form $\langle \tilde x_B(m) \rangle = n^{1/\alpha} H_\alpha\left({m}/{n},{A}/{n^{1+1/\alpha}} \right)$, where at variance with Brownian bridge, $H_\alpha(X,Y)$ is a non trivial function of the rescaled time $m/n$ and rescaled area $Y = A/n^{1+1/\alpha}$. Our analytical results are verified by numerical simulations.'
address:
- 'Laboratoire de Physique Théorique, CNRS-UMR 8627, Université Paris-Sud, 91405 Orsay Cedex, France '
- 'Laboratoire de Physique Théorique et Modèles Statistiques, CNRS-UMR 8626, Université Paris-Sud, Bât. 100, 91405 Orsay Cedex, France'
author:
- 'Gr[é]{}gory Schehr'
- 'Satya N. Majumdar'
date: 'Received: / Accepted: / Published '
title: Area distribution and the average shape of a Lévy bridge
---
Introduction
============
Random walks, and the associated continuous-time Brownian motion (BM), are ubiquitous in nature. As such, they are not only the cornerstones of statistical physics [@chandrasekhar; @feller; @hughes] but have also found many applications in a variety of areas such as biology [@koshland], computer science [@asmussen; @satya_functionals] and finance [@williams]. Continuous time Brownian motion is simply defined by the equation of motion $$\begin{aligned}
\label{def_BM}
x(0) = 0 \;, \; \frac{\rmd x(t)}{\rmd t} = \eta(t) \;,\end{aligned}$$ where $\eta(t)$ is a Gaussian white noise of zero mean $\langle
\eta(t) \rangle = 0$ and short range correlations $\langle \eta(t)
\eta(t')\rangle = 2 D \delta(t-t')$ where $D$ is the diffusion constant (in the following we set $D = 1$). An interesting variant of Brownian motion in a given time interval $[0,T]$ is the so called Brownian bridge $x_B(t)$ which is a Brownian motion conditioned to start and end at zero, [*i.e.*]{} $x_B(T) = x_B(0) = 0$. Here we focus on two interesting observables associated with this bridge, namely
- [the distribution of the area $A$ under the bridge (see Fig. \[fig\_1\] a)) $$\begin{aligned}
\label{def_A}
A = \int_0^T x_B(t)\, \rmd t \;,\end{aligned}$$ which is obviously a random variable, being the sum of (strongly correlated) random variables. For the Brownian bridge, the distribution of $A$ can easily be computed using the fact that $x_B(t)$ is a Gaussian random variable. This can be seen from the well known identity in law [@feller] $$\begin{aligned}
\label{id_bb}
x_B(t) := x(t) - \frac{t}{T} x(T) \;,\end{aligned}$$ where $x(t)$ is a standard Brownian motion (\[def\_BM\]). For the Brownian bridge, $A$ is thus also a centered Gaussian random variable. A direct computation of the second moment $\langle A^2 \rangle$ yields straightforwardly $$\begin{aligned}
\label{dist_A_bb}
P_B(A,T) = \sqrt{\frac{3}{\pi T^3}} \exp{\left(-\frac{3 A^2}{T^3}
\right)} \;. \end{aligned}$$ ]{}
- [the average shape of the bridge, $\langle \tilde x_B(t)\rangle$ for a fixed area $A$ (see Fig. \[fig\_1\] b)). For the Brownian bridge, it takes a simple form [@rivasseau] $$\begin{aligned}
\label{shape_bb}
\langle \tilde x_B(t)\rangle = \frac{A}{T} f\left(\frac{t}{T} \right)
\;, \; f(x) = 6x(1-x) \;.\end{aligned}$$ ]{}
The distribution of the area under a Brownian bridge (\[dist\_A\_bb\]) is a standard result and its extension to various constrained Brownian motions has recently attracted much attention [@satya_airy; @kearney; @schehr_airy; @welinder; @janson_review; @rajabpour; @rambeau_airy]. For instance, the distribution of the area under a Brownian excursion ([*i.e.*]{} a Brownian motion conditioned to start and end at $0$ and constrained to stay positive in-between), the so called Airy-distribution, describes the statistics of the maximal relative height of one-dimensional elastic interfaces [@satya_airy; @schehr_airy; @rambeau_airy]. Another example is the area $A$ under a Brownian motion till its first-passage time $t_f$ [@kearney], which has an interesting application to the description of the avalanches in the directed Abelian sandpile model proposed in Ref. [@dhar], such that $t_f$ relates to the avalanche duration and $A$ to the size of the avalanche cluster. Related quantities were recently studied in the statistics of avalanches near the depinning transition of elastic manifolds in random media [@pld]. On the other hand, the average shape of random walk bridges with a fixed area $A$ (\[shape\_bb\]) has been studied some time ago in the context of wetting [@rivasseau] to prove the validity of the Wulff construction in $1+1$ dimensions and more recently in the context of mass transport models [@waclaw]. In these models, where the transport rules depend on the environment of the departure site, the steady state has a pair-factorized form [@satya_condensation], which generalizes the factorized steady states found in simpler system like the zero range process [@evans_review; @godreche_review; @satya_review_condensation]. As the mass density crosses some critical value, the system exhibits a condensation transition which is governed by interactions, which in turn give rise to a spatially extended condensate. It was shown in Ref. [@waclaw] that the shape of this condensate can be described by the average shape of a random walk bridge with fixed area, and the results of Ref. [@rivasseau] were recovered.
While these quantities are well understood for a Brownian bridge, much less is known for the case of a Lévy bridge. The aim of the present paper is to compute the distribution of the area and the average shape for a fixed area $A$ in that case. To this purpose, it is convenient to consider a random walk $x(m)$, in discrete time (see Fig. \[fig\_rw\] a)), starting at $x(0)=x_0$ at time $0$ and evolving according to $$\begin{aligned}
\label{rw}
x(m) = x(m-1) + \eta(m) \;, \\end{aligned}$$ where $\eta(m)$ are independent and identically distributed (i.i.d.) random variables distributed according to a common distribution $\phi(\eta)$. Here we focus on the case where $\phi(\eta) = {\cal S}_{\alpha}(\eta)$ where ${\cal S}_{\alpha}(\eta)$ is a symmetric $\alpha$-stable (Lévy) distribution. Its characteristic function is given by $\int_{-\infty}^\infty {\cal
S}_\alpha(\eta) e^{i k \eta} \rmd \eta = e^{- |k|^\alpha}$. In particular, for large $\eta$, the distribution of $\eta$ has a power law tail $\phi(\eta) \sim \eta^{-(1+\alpha)}$, with $0< \alpha < 2$. The Lévy bridge $x_B(m)$, on the interval $[0,n]$, is a Lévy random walk conditioned to start and end at $0$, [*i.e.*]{} $x_B(n)=x_B(0)=0$. In the following we will compute the distribution $P_B(A,n)$ of the area $A$ under the bridge of length $n$, [*i.e.*]{} $A = \sum_{m=0}^n x_B(m)$ and the average shape of a bridge $\langle \tilde x_B(m) \rangle$, with $0 \leq m \leq n$, for fixed $A$. Here we consider the natural scaling limit where $x_B \sim n^{1/\alpha}$, while $A \sim n^{1+1/\alpha}$, whereas the aforementioned previous works [@rivasseau; @waclaw] focused on a different scaling limit which, for Brownian motion, corresponds to $x_B \sim \sqrt{n}$ and $A \sim n^2$. Note that, even in this natural scaling limit, the identity in law valid for the Brownian bridge (\[id\_bb\]) does not hold for a Lévy bridge [@knight; @bertoin] and one thus expects the distribution of the area $A$ to be non trivial. Our results can be summarized as follows:
- for a free Lévy random walk starting at $x_0 = 0$, one finds that the distribution of the area $P(A,n)$ takes the form
$$P(A,n) = \frac{1}{\gamma_n}\, {\cal S}_\alpha\!\left(\frac{A}{\gamma_n}\right) \;, \quad \gamma_n = \Big(\sum_{m=1}^n m^\alpha\Big)^{1/\alpha} \sim \frac{n^{1+1/\alpha}}{(\alpha+1)^{1/\alpha}} \;,$$
where ${\cal S}_\alpha(x)$ is a symmetric $\alpha$-stable distribution (\[def\_stable\]). For a Lévy bridge, one finds, in the scaling limit $A \to \infty$, $n \to \infty$, keeping $A/n^{1+1/\alpha}$ fixed, that the distribution of the area $P_B(A,n)$ takes the scaling form
$$P_B(A,n) \sim \frac{1}{n^{1+1/\alpha}}\, F_\alpha\!\left(\frac{A}{n^{1+1/\alpha}}\right) \;,$$
where $F_\alpha(y)$ is a monotonically decreasing function, with asymptotic behaviors
$$F_\alpha(Y) \sim
\cases{
F_\alpha(0) \;, \; Y \to 0 \;, \\
a_\alpha \, Y^{-2(1+\alpha)} \;, \; Y \to \infty \;,
}$$
where $F_\alpha(0)$, see Eq. (\[expr\_constant\]), and $a_\alpha$, see Eq. (\[large\_y\]), are computable constants. For $\alpha=1$, we obtain an explicit expression for $F_1(Y)$ in terms of elementary functions (\[elem\]).
- on the other hand, in the aforementioned scaling limit, one obtains the average profile $\langle \tilde x_B(m) \rangle$ for a Lévy bridge as well as the average profile $\langle \tilde x(m) \rangle$ for a free Lévy walk with a fixed area $A$. For $\langle \tilde x(m) \rangle$ one obtains a fairly simple expression
$$\frac{\langle \tilde x(m) \rangle}{n^{1/\alpha}} = h_\alpha\!\left(\frac{m}{n},\frac{A}{n^{1+1/\alpha}}\right) \;, \quad h_\alpha(\tau,Y) = Y\,\frac{\alpha+1}{\alpha}\left(1-(1-\tau)^\alpha\right) \;.$$
For a Lévy bridge, the expression is more involved. For generic $\alpha$ one finds the scaling form
$$\langle \tilde x_B(m) \rangle = n^{1/\alpha}\, H_\alpha\!\left(\frac{m}{n},\frac{A}{n^{1+1/\alpha}}\right) \;,$$
which, in general, has a non-trivial dependence on $A$. One recovers a linear dependence in $A$ (as for the Brownian bridge in Eq. (\[shape\_bb\])) only in the limits $A \to 0$ (\[small\_y\_shape\]) and $A \to \infty$ (\[large\_y\_shape\]).
The paper is organized as follows. In section 2, we compute the joint distribution of the position and the area under a Lévy random walk. In section 3, we use these results to compute the distribution of the area $A$ under a Lévy bridge of size $n$ while in section 4, we use them to compute the average profile of a Lévy walk with a fixed area. Finally, in section 5 we present a numerical method, based on a Monte-Carlo algorithm, to compute numerically $P_B(A,n)$ and $\langle \tilde x_B(m)\rangle$ before we conclude in section 6. Some technical (and useful) details have been left in Appendices A,B and C.
Free Lévy walk : joint distribution of the position and the area
================================================================
We start with the computation of the joint distribution $P(x,A,m|x_0,x_0,0)$ of the position and the area after $m$ steps given that $x(0)=x_0$ (see Fig. \[fig\_rw\] a)). If we denote by $A(m)$ the area under the random walk after $m$ time steps, this random variable evolves according to the equation $$\begin{aligned}
\label{area}
&&A(0) = x_0 \;, \\
&&A(m) = A(m-1) + x(m) \;.\end{aligned}$$ Therefore $P(x,A,m|x_0,x_0,0)$ satisfies the following recursion relation: $$\begin{aligned}
\label{recurrence}
&&P(x,A,0|x_0,x_0,0) = \delta(x-x_0) \delta(A-x_0) \;, \nonumber \\
&&P(x,A,m|x_0,x_0,0) = \int_{-\infty}^\infty P(x-\eta, A-x,m-1|x_0,x_0,0) \phi(\eta) \rmd \eta \;.\end{aligned}$$ Introducing $\hat \phi(k) = \int_{-\infty}^\infty \phi(\eta) e^{i k \eta} \rmd \eta$ the Fourier transform of $\phi(\eta)$, and thus $\hat \phi(k) = e^{-|k|^\alpha}$ for a Lévy random walk, and $\hat P(k_1, k_2,m|x_0,x_0,0)$ the double Fourier transform of $P(x,A,m|x_0,x_0,0)$ with respect to both $x$ and $A$, [*i.e.*]{} $\hat P(k_1, k_2,m|x_0,x_0,0) = \int_{-\infty}^\infty \rmd x \int_{-\infty}^\infty \rmd A P(x,A,m|x_0,x_0,0) e^{i k_1 x + i k_2 A}$ the recursion relation (\[recurrence\]) reads $$\begin{aligned}
\label{recurrence_fourier}
&& \hat P(k_1,k_2,0|x_0,x_0,0) = e^{i (k_1+k_2) x_0 } \;, \\
&& \hat P(k_1, k_2,m|x_0,x_0,0) = \hat \phi(k_1 + k_2) \hat P(k_1+k_2,k_2,m-1|x_0,x_0,0) \;,\end{aligned}$$ which can be solved, yielding $$\begin{aligned}
\label{expr_Fourier_joint}
\hat P(k_1, k_2,n|x_0,x_0,0) = \prod_{m=1}^n \hat \phi(k_1 + m k_2) e^{i (k_1 + (n+1)k_2) x_0} \;.\end{aligned}$$ Hence for a Lévy walk of index $\alpha$ one has simply $$\begin{aligned}
\label{start_expr}
\fl P(x,A,n|x_0,x_0,0) = \int_{-\infty}^\infty \frac{\rmd k_1}{2 \pi} \int_{-\infty}^\infty \frac{\rmd k_2}{2 \pi} e^{- \sum_{m=1}^n |k_1 + m k_2|^\alpha}e^{- ik_1 (x-x_0)} e^{-ik_2 (A-(n+1)x_0)} \;.\end{aligned}$$ Note that this expression (\[start\_expr\]) can also be obtained directly by noticing that $x(n) = x_0 + \sum_{m=1}^n \eta(i)$ and thus $A(n) - (n+1)x_0 = \sum_{m=1}^n x(m) = \sum_{m=1}^n \sum_{l=1}^m \eta(l) = \sum_{m=1}^n m \eta(n+1-m)$ such that $$\begin{aligned}
\label{direct_joint}
\fl && P(x,A,n|x_0,x_0,0) = \prod_{m=1}^n \int_{-\infty}^\infty \rmd
\eta(m) \prod_{m=1}^n \phi\left[\eta(m) \right]\delta\left(x-x_0-\sum_{m=1}^n
\eta(m)\right) \nonumber \\
&& \times \delta\left(A - (n+1)x_0 - \sum_{m=1}^n m \eta(n+1-m)\right) \;.\end{aligned}$$ After a double Fourier transform with respect to $x$ and $A$, this Eq. (\[direct\_joint\]) yields immediately the expression $\hat
P(k_1, k_2,n|x_0,x_0,0)$ in Eq. (\[expr\_Fourier\_joint\]). Of course the marginal distribution of the position $P(x,n|x_0,0)$ and of the area $P(A,n|x_0,0)$ are also stable laws. Indeed one has $$\begin{aligned}
\label{marginals}
\fl && P(x,n|x_0,0) = \int_{-\infty}^\infty P(x,A,n|x_0,x_0,0) {\rm d} A = \frac{1}{n^{1/\alpha}} {\cal S}_\alpha \left( \frac{x-x_0}{n^{1/\alpha}} \right) \;, \\
\fl && P(A,n|x_0,0) = \int_{-\infty}^\infty P(x,A,n|x_0,x_0,0) {\rm d} x = \frac{1}{\gamma_n} {\cal S}_\alpha \left( \frac{A-(n+1)x_0}{\gamma_n} \right) \;, \nonumber \\
\fl && \hspace*{1.6cm}\gamma_n = \left( \sum_{m=1}^n m^\alpha \right)^{1/\alpha} \sim \frac{n^{1+1/\alpha}}{(\alpha+1)^{1/\alpha}} \;, \; n \gg 1 \;, \nonumber\end{aligned}$$ where $$\begin{aligned}
\label{def_stable}
{\cal S}_\alpha(x) = \int_{-\infty}^\infty \, e^{-|k|^\alpha - i k x} \frac{\rmd k}{2 \pi} \;.\end{aligned}$$ For example, ${\cal S}_1(x)$ is the Cauchy distribution while ${\cal S}_2(x)$ is a Gaussian distribution : $$\begin{aligned}
\label{stable_explicit}
{\cal S}_1(x) = \frac{1}{\pi} \frac{1}{1+x^2} \;, \; {\cal S}_2(x) = \frac{1}{2 \sqrt{\pi}} e^{-\frac{x^2}{4}} \;.\end{aligned}$$ Note also the explicit expression $$\begin{aligned}
\label{salpha_zero}
{\cal S}_\alpha(0) = \int_{-\infty}^\infty \, e^{-|k|^\alpha} \frac{\rmd k}{2 \pi} = \frac{\Gamma(1+\alpha^{-1})}{\pi} \;,\end{aligned}$$ which will be useful in the following.
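The identities above are easy to probe numerically. The sketch below is a minimal illustration, assuming $\alpha=1.5$, $n=100$ and SciPy's `levy_stable` (whose symmetric, unit-scale law has characteristic function $e^{-|k|^\alpha}$); it checks ${\cal S}_\alpha(0)=\Gamma(1+\alpha^{-1})/\pi$ and the fact that the rescaled area $A/\gamma_n$ of a free walk is again ${\cal S}_\alpha$-distributed, cf. Eq. (\[marginals\]).

```python
import numpy as np
from scipy.stats import levy_stable
from scipy.special import gamma

alpha, n, N = 1.5, 100, 5000          # assumed parameters for illustration
rng = np.random.default_rng(1)

# S_alpha(0) = Gamma(1 + 1/alpha)/pi, Eq. (salpha_zero)
print(levy_stable.pdf(0.0, alpha, 0.0), gamma(1 + 1/alpha) / np.pi)

# N free Levy walks of n steps, x(0) = 0, and their areas A = sum_m x(m)
eta = levy_stable.rvs(alpha, 0.0, size=(N, n), random_state=rng)
x = np.cumsum(eta, axis=1)
A = x.sum(axis=1)

# A / gamma_n should again be S_alpha distributed, Eq. (marginals)
gamma_n = (np.arange(1, n + 1) ** alpha).sum() ** (1 / alpha)
u = A / gamma_n
p_emp = np.mean(np.abs(u) < 1.0)
p_th = levy_stable.cdf(1.0, alpha, 0.0) - levy_stable.cdf(-1.0, alpha, 0.0)
print(p_emp, p_th)
```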
We now want to study $P(x,A,n|x_0,x_0,0)$ in the limit of large $n$. The marginal distributions in Eq. (\[marginals\]) suggest the scaling $x \sim n^{1/\alpha}$ and $A \sim n^{1/\alpha + 1}$. From the expression in Eq. (\[start\_expr\]) one checks explicitly that in the limit $n \to \infty$, keeping $X = x/n^{1/\alpha}$ and $Y = A/n^{1/\alpha+1}$ fixed, the joint distribution takes the scaling form $$\begin{aligned}
\label{scaling_form}
P(x,A,n|x_0,x_0,0) = \frac{1}{n^{2/\alpha+1}} G\left(
\frac{x}{n^{1/\alpha}}, \frac{A}{n^{1/\alpha+1}} \bigg |
\frac{x_0}{n^{1/\alpha}} \right) \;,\end{aligned}$$ where the function $G(X,Y|X_0)$ is given by $$\begin{aligned}
\fl G(X,Y|X_0) &=& \int_{-\infty}^\infty \frac{\rmd k_1}{2 \pi} \int_{-\infty}^\infty \frac{\rmd k_2}{2 \pi} e^{-\int_0^1 |k_1 + k_2 z |^\alpha dz - i k_1 (X-X_0) - i k_2 (Y-X_0) } \;.\end{aligned}$$ After the change of variable $k_2 = k$ and $k_1 = k r$, we obtain $$\begin{aligned}
\label{expr_scaling}
\fl G(X,Y|X_0) &=& \int_{-\infty}^\infty \frac{\rmd r}{2\pi} \int_{-\infty}^\infty\frac{\rmd k}{2\pi} |k| e^{-|k|^\alpha \gamma(r) - i k r (X-X_0 ) - i k (Y - X_0)}
\;, \; \gamma(r) = \int_0^1 |r+z|^\alpha \rmd z \;,\end{aligned}$$ where the function $\gamma(r)$ is explicitly given by $$\begin{aligned}
\label{expr_gamma}
\gamma(r) =
\cases{
\frac{1}{\alpha+1} \left( (-r)^{\alpha+1} - (-1-r)^{\alpha+1} \right) \;, \; r < -1 \;, \\
\frac{1}{\alpha+1} \left((r+1)^{\alpha+1} + (-r)^{\alpha+1} \right) \;, \; - 1 \leq r \leq 0 \;, \\
\frac{1}{\alpha+1}\left((r+1)^{\alpha+1} - r^{\alpha+1} \right) \;, \; r > 0 \;.
}\end{aligned}$$ For $\alpha=2$, one has $\gamma(r) = r^2+r+1/3$ and one finds $$\begin{aligned}
G(X,Y|X_0) = \frac{\sqrt{3}}{2 \pi} \exp{\left[- 3(Y-X_0)(Y-X) - (X-X_0)^2 \right]} \;,\end{aligned}$$ which yields back the propagator of the so-called random acceleration process [@rap].
Lévy bridge
===========
In the absence of any constraint for the walker, the area under a Lévy random walk is the sum of Lévy random variables and is thus again a Lévy random variable. However, if one considers constrained Lévy walks, this is not true anymore and the area may become different from a simple Lévy random variable. In the first subsection, we compute the distribution of the position for a Lévy bridge while the second subsection is devoted to the distribution of the area under this bridge.
Distribution of the position
----------------------------
Here we study the Lévy bridge $\{ x_B(m) \}_{0\leq m \leq n}$ which starts at $0$ at time $0$, $x_B(0) =0$, and is constrained to come back to $0$ after $n$ time steps, [*i.e.*]{} $x_B(n) =0$. In that case, one can compute the distribution of the position $P_B(x,m)$ after $m$ time steps for such a bridge as $P_B(x,m) = P(x,m|0,0) P(x,n-m|0,0)/P(0,n|0,0)$, such that in the scaling limit one has $$\begin{aligned}
\label{marginal_bridge1}
&& P_B(x,m) = \frac{1}{n^{1/\alpha}} G_B\left(\frac{x}{n^{1/\alpha}},\frac{m}{n} \right) \;, \; \nonumber \\
&& G_B(X,\tau) = \frac{\pi}{\Gamma(1+\alpha^{-1})}\frac{1}{(\tau(1-\tau))^{1/\alpha}} {\cal S}_\alpha \left(\frac{X}{\tau^{1/\alpha}} \right) {\cal S}_\alpha \left( \frac{X}{(1-\tau)^{1/\alpha}}\right) \;,\end{aligned}$$ where we have used ${\cal S}_\alpha(0) = \Gamma(1+\alpha^{-1})/\pi$, see Eq. (\[salpha\_zero\]). For $\alpha=2$ it is easy to see from Eq. (\[marginal\_bridge1\]) that $x_B(m)$ is a Gaussian variable. However, for $\alpha < 2$, the Lévy bridge is not any more a Lévy random variable. For instance, for $\alpha = 1$ one obtains a non trivial distribution $$\begin{aligned}
\label{marginal_bridge2}
G_B(X,\tau) = \frac{1}{\pi} \frac{\tau(1-\tau)}{(\tau^2 + X^2)((1-\tau)^2+X^2)} \;.\end{aligned}$$ It is also easy to see that, for any $\alpha < 2$ one has the asymptotic behavior $$\begin{aligned}
\label{asympt_bridge}
G_B(X,\tau) \sim c'_\alpha \tau(1-\tau) X^{-2(\alpha+1)} \;, \; X \gg 1 \;,\end{aligned}$$ where $c'_\alpha$ is independent of $\tau$, which implies that $\langle (x_B(m))^2 \rangle$ is well defined for $\alpha > 1/2$ (of course $\langle x_B(m) \rangle = 0$ by symmetry for all $\alpha$). A straightforward calculation shows that $$\begin{aligned}
\label{variance_bridge}
\frac{\langle x^2_B(m) \rangle}{n^{2/\alpha}} = \tilde a_\alpha \frac{m}{n}\left(1-\frac{m}{n} \right) \;, \; \tilde a_\alpha = \frac{\alpha \Gamma(2-\alpha^{-1})}{\Gamma(1+\alpha^{-1})} \;.\end{aligned}$$ It is interesting to notice that ${\langle x^2_B(m) \rangle}/{n^{2/\alpha}}$ depends on $\alpha$ only through the amplitude $\tilde a_\alpha$ but the parabolic shape in $(m/n)(1-m/n)$ holds for all values of $2 \geq \alpha > 1/2$. Besides $\tilde a_\alpha$ is diverging for $\alpha \to 1/2^+$ while one has ${\tilde a}(1) = 1$ and ${\tilde a}(2) = 2$. One finds, curiously, that it reaches a minimum for a non-trivial value $\alpha^* = 0.74122 \dots$ for which ${\tilde a}(\alpha^*) = 0.85264 \dots$. In view of these properties (\[marginal\_bridge1\], \[marginal\_bridge2\]) one expects that, for $\alpha < 2$, the area under such a Lévy bridge has a non trivial distribution, which we now focus on.
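The amplitude $\tilde a_\alpha$ and its minimum are straightforward to reproduce numerically; the following short sketch evaluates Eq. (\[variance\_bridge\]) with SciPy and locates the minimum (the bracketing interval is chosen as an assumption, slightly inside $(1/2,2]$ to avoid the divergence at $\alpha\to 1/2^+$).

```python
import numpy as np
from scipy.special import gamma
from scipy.optimize import minimize_scalar

def a_tilde(alpha):
    # Eq. (variance_bridge): amplitude of <x_B^2(m)>/n^{2/alpha}
    return alpha * gamma(2 - 1/alpha) / gamma(1 + 1/alpha)

print(a_tilde(1.0), a_tilde(2.0))                         # 1.0 and 2.0
res = minimize_scalar(a_tilde, bounds=(0.55, 2.0), method="bounded")
print(res.x, res.fun)                                     # about 0.74122 and 0.85264
```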
Distribution of the area
------------------------
In this subsection, we consider a Lévy bridge, [*i.e.*]{} a Lévy walk which starts at the origin $x_0=0$ and is conditioned to come back to the origin after $n$ steps and we ask : what is the distribution ${P}_B(A,n)$ of the area $A$ under this [*Lévy*]{} bridge ? One can obtain ${P}_B(A,n)$ from Eqs (\[start\_expr\], \[marginals\]) as $$\begin{aligned}
{P}_B (A,n) = \frac{P(0,A,n|0,0,0)}{P(x=0,n|0,0)} \;.\end{aligned}$$ Therefore, using $P(x=0,n|0,0) = n^{-1/\alpha} {\cal S}_\alpha(0) = n^{-1/\alpha} \Gamma(1+\alpha^{-1})/\pi$, see Eq. (\[salpha\_zero\]), together with the scaling form (\[scaling\_form\], \[expr\_scaling\]) one obtains, in the limit $n \to \infty$, keeping $A/n^{1+1/\alpha}$ fixed: $$\begin{aligned}
\label{expr_scaling_genalpha}
{P}_B (A,n) = \frac{1}{n^{1+1/\alpha}}
{F}_\alpha\left(\frac{A}{n^{1+1/\alpha}} \right) \;, \nonumber \\
{F}_\alpha(Y) =\frac{1}{2\Gamma(1+\alpha^{-1})} \int_{-\infty}^\infty \, \rmd r \int_{-\infty}^\infty \frac{\rmd k}{2 \pi} |k| e^{-|k|^\alpha \gamma(r) - i Y k} \;.\end{aligned}$$ Using the explicit expression of $\gamma(r)$ above (\[expr\_gamma\]), one computes the Fourier transform $\hat {F}_\alpha(k)$ of ${F}_\alpha(Y)$ as $$\begin{aligned}
\label{expr_fourier_gen_alpha}
&&\hat {F}_\alpha(k) = \int_{-\infty}^\infty {F}_\alpha(Y) e^{i k Y} \rmd Y \;, \\
&& = \frac{|k|}{\Gamma(1+\frac{1}{\alpha})} \left[ \int_0^\infty e^{-\frac{|k|^\alpha}{\alpha+1} \left[(r+1)^{\alpha+1}-r^{\alpha+1}\right]} \rmd r + \int_0^{1/2} e^{-\frac{|k|^\alpha}{\alpha+1} \left[(1/2+r)^{\alpha+1}+(1/2-r)^{\alpha+1}\right]} \rmd r
\right] \;. \nonumber \end{aligned}$$ For generic $\alpha$, it seems quite difficult to perform explicitly the integrals over $r$ and $k$ in the expression for the distribution ${F}_\alpha(Y)$ in Eq. (\[expr\_scaling\_genalpha\]). One can however extract from this expression the asymptotic behaviors both for $Y \to 0$ and $Y \to \infty$.
[**Asymptotic behavior for small argument.**]{} For small argument, it is straightforward to see on the expression (\[expr\_scaling\_genalpha\]) above that the leading behavior of $F_\alpha(Y)$ when $Y \to 0$ is given by $$\begin{aligned}
\label{expr_constant}
F_\alpha(Y) \sim F_\alpha(0) \;, \; Y \to 0 \;, {\rm with} \; \nonumber \\
\fl F_\alpha(0) = \frac{(\alpha+1)^{\frac{2}{\alpha}}}{2 \pi} \frac{\Gamma(1+\frac{2}{\alpha})}{\Gamma(1+\frac{1}{\alpha})} \left( \int_0^\infty \frac{\rmd r}{((r+1)^{\alpha+1} - r^{\alpha+1})^{\frac{2}{\alpha}}} + \int_0^{1/2} \frac{\rmd r}{((1/2+r)^{\alpha+1} + (1/2-r)^{\alpha+1} )^{\frac{2}{\alpha}}}\right) \nonumber \;.\end{aligned}$$ A study of this function $F_\alpha(0)$ shows that it is a decreasing function of $\alpha$ on the interval $]0,2]$, which is diverging when $\alpha \to 0$. For $\alpha = 1$ and $\alpha =2 $, $F_\alpha(0)$ assumes simple values $$\begin{aligned}
F_1(0) = 1 + \frac{4}{\pi} = 2.27324... \;, \; F_2(0) = \sqrt{\frac{3}{\pi}} = 0.977205...\end{aligned}$$
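As a consistency check of Eq. (\[expr\_constant\]), the small sketch below evaluates $F_\alpha(0)$ by direct numerical integration (SciPy's `quad`) and recovers the two special values quoted above.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def F0(alpha):
    # prefactor and the two r-integrals of Eq. (expr_constant)
    pref = (alpha + 1)**(2/alpha) / (2*np.pi) * gamma(1 + 2/alpha) / gamma(1 + 1/alpha)
    i1, _ = quad(lambda r: ((r + 1)**(alpha + 1) - r**(alpha + 1))**(-2/alpha), 0, np.inf)
    i2, _ = quad(lambda r: ((0.5 + r)**(alpha + 1) + (0.5 - r)**(alpha + 1))**(-2/alpha), 0, 0.5)
    return pref * (i1 + i2)

print(F0(1.0), 1 + 4/np.pi)            # 2.27324...
print(F0(2.0), np.sqrt(3/np.pi))       # 0.977205...
```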
[**Asymptotic behavior for large argument.**]{} The analysis of the large argument behavior of $F_\alpha(Y)$ is more involved. A careful analysis, left in \[app\_asympt\], shows that for large $Y$ one has: $$\begin{aligned}
\label{large_y}
&& F_\alpha(Y) \sim \frac{a_\alpha}{Y^{2(1+\alpha)}} \;, \; Y \gg 1 \;, \\
&& a_\alpha = \frac{2^{-2(2+\alpha)} \sqrt{\pi} \Gamma(2+2\alpha) \tan{(\alpha \pi/2)}}{\Gamma(2+\alpha^{-1}) \Gamma(1-\alpha) \Gamma(\frac{5}{2} + \alpha)} \;.\end{aligned}$$ When $\alpha \to 2$, one has from (\[large\_y\]), $a_{\alpha} \sim \frac{\sqrt{\pi}}{21}(\alpha-2)^2$. From Eq. (\[large\_y\]) one obtains that the second moment of the distribution $\langle Y^2 \rangle$ is defined only for $\alpha > 1/2$ where it takes the value (see Eq. (\[expr\_appendix\])) $$\begin{aligned}
\label{area_variance}
\langle Y^2 \rangle = \frac{{\tilde a}_\alpha}{12} = \frac{\alpha \Gamma(2-\alpha^{-1})}{12 \Gamma(1+\alpha^{-1})} \;, \; \alpha > 1/2 \;,\end{aligned}$$ where the amplitude ${\tilde a}_\alpha$ appears in the expression for $\langle x^2_B(m) \rangle$ computed above (\[variance\_bridge\]). This power law tail of the area distribution (\[large\_y\]) with an exponent $2(1+\alpha)$ is quite interesting. Indeed, the area itself is the sum of non-identical and strongly correlated variables $x_B(m)$ all having a similar power law tail also with exponent $2(1+\alpha)$ (\[asympt\_bridge\]). For $\alpha > 1/2$, their variance is finite and the non-Gaussianity of $A$ can a priori be due both to the correlations between the $x_B(m)$’s and to the fact that $A$ is the sum of non-identical random variables. To test which of these features is responsible for the non-Gaussianity of $A$, we study the sum of $n$ random variables $X_m$ which are [*independent*]{} and such that $X_m$ has the same distribution as $x_B(m)$. Defining $S_n = \sum_{m=1}^n X_m$ and $\Sigma_n^2 = \sum_{m=1}^n \langle X_m^2 \rangle$, it is known that $S_n/\Sigma_n$ converges to a centered Gaussian variable of unit variance if the following condition (known as the Lindeberg’s condition) is satisfied [@feller] $$\begin{aligned}
\label{lindeberg}
\lim_{n \to \infty} \frac{1}{\Sigma_n^2} \int_{|x| > \epsilon \Sigma_n} x^2 {\rm Proba}(X_m = x) \rmd x = 0 \;, \; \forall \epsilon > 0 \;. \end{aligned}$$ Intuitively, this Lindeberg condition (\[lindeberg\]) ensures that the probability that any term $X_m$ will be of the same order of magnitude as the sum $S_n$ must tend to zero (see \[appendix\_lindeberg\] for an example of non-identical independent random variables which do not satisfy the Lindeberg condition). In the present case of the Lévy bridge, one can check (see \[appendix\_lindeberg\]) that if $X_m$ is distributed like $x_B(m)$ then the above Lindeberg condition (\[lindeberg\]) is satisfied. Therefore the deviations from Gaussianity (\[large\_y\]) are purely due to the [*strong correlations*]{} between the positions of the walker $x_B(m)$’s.
[**The special case $\alpha = 1$**]{}. In the Cauchy case, $\alpha=1$, the integral over $k$ can be done in Eq. (\[expr\_scaling\]) to obtain $$\begin{aligned}
G(X,Y|0) = \int_{-\infty}^\infty \frac{\gamma^2(r) - (r X + Y)^2}{\left(\gamma^2(r) + (r X + Y)^2\right)^2} \frac{\rmd r}{2 \pi^2} \;,\end{aligned}$$ where the function $\gamma(r)$ (\[expr\_gamma\]) takes here a rather simple form $$\begin{aligned}
\gamma(r) =
\cases{ - r - \frac{1}{2} \;, \; r < -1 \\
r^2+r + \frac{1}{2} \;, \; - 1 \leq r \leq 0 \\
r + \frac{1}{2} \;, r > 0 \;.
}\end{aligned}$$ The distribution of the area under a Cauchy bridge is thus given by $$\begin{aligned}
\label{cauchy_integral}
&& {P}_B (A,n) = \frac{1}{n^{2}} F_1 \left(\frac{A}{n^{2}} \right) \;, \nonumber \\
&& F_1 (Y) = \frac{1}{\pi}\frac{2}{1+4Y^2} + \frac{1}{\pi} \int_0^{1/2} \frac{(u^2+1/4)^2 - Y^2}{((u^2+1/4)^2+Y^2)^2} \rmd u \;.\end{aligned}$$ Under this form (\[cauchy\_integral\]), one can easily obtain the asymptotic behaviors as $$\begin{aligned}
F_1(Y) \sim
\cases{
1 + \frac{4}{\pi} \;, \; Y \to 0 \\
\frac{1}{20 \pi Y^4} \;, \; Y \to \infty
}\end{aligned}$$ in agreement with the asymptotic behaviors obtained above (\[expr\_constant\], \[large\_y\]). In fact the integral over $u$ in the expression above (\[cauchy\_integral\]) can be done explicitly yielding the expression $$\begin{aligned}
\label{expr_arctan}
&& F_1(Y) = \frac{1}{\pi}\frac{2}{1+4Y^2} \\
&&+ \frac{1}{\pi} \left( \frac{2(1-8Y^2)}{(1+4Y^2)(1+16Y^2)} + \frac{4}{(1+16Y^2)^{\frac{3}{2}}} {\rm Re}\left[{(1- 4 i Y)^{\frac{3}{2}} \arctan{\left((1+4 i Y)^{-\frac{1}{2}}\right)}}\right] \right) \;, \nonumber\end{aligned}$$ where ${\rm Re}{(z)}$ denotes the real part of the complex number $z$. In \[appendix\_elementary\] we show how this expression (\[expr\_arctan\]) can be written explicitly in terms of elementary functions (\[def\_ab\], \[def\_lm\], \[elem\]).
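The integral representation (\[cauchy\_integral\]) is also convenient for a quick numerical check of both asymptotic regimes; the sketch below (SciPy `quad`, with a few illustrative values of $Y$) recovers $F_1(0)=1+4/\pi$ and the approach to the $1/(20\pi Y^4)$ tail.

```python
import numpy as np
from scipy.integrate import quad

def F1(Y):
    # Eq. (cauchy_integral): area distribution of the Cauchy (alpha = 1) bridge
    first = 2.0 / (np.pi * (1 + 4*Y**2))
    g = lambda u: ((u**2 + 0.25)**2 - Y**2) / ((u**2 + 0.25)**2 + Y**2)**2
    integral, _ = quad(g, 0.0, 0.5)
    return first + integral / np.pi

print(F1(0.0), 1 + 4/np.pi)                # small-Y limit
for Y in (5.0, 10.0, 20.0):
    print(Y, F1(Y) * 20 * np.pi * Y**4)    # tends to 1, i.e. the Y^{-4} tail
```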
Average profile for fixed area $A$.
===================================
The case of a free Lévy walk
----------------------------
We first consider the case of a free Lévy walk of constrained area. We compute the probability $\tilde P(x,m | A,n)$ that the position of the random walker, starting at $x(0)=x_0=0$ at time $0$, is $x$ after $m$ time steps given that the area, after $n$ time steps, is fixed to $A$. From this probability, one obtains the average profile as $\langle \tilde x(m)
\rangle = \int_{-\infty}^\infty x \tilde P(x,m|A,n) \rmd x$. To compute this probability $\tilde P(x,m|A,n)$, we divide the interval $[0,n]$ into two intervals $[0,m]$ and $[m,n]$. Over $[0,m]$ the process starts at $x_0 = 0$ with area $A_0=0$ and reaches $x$ with area $A_1$ (see the light area on Fig. \[fig\_rw\] b)). Over the interval $[m,n]$, the process starts at $x$ and reaches $x_F$ with area $A-A_1$ (see the shaded area on Fig. \[fig\_rw\] b)). Therefore this probability $\tilde P(x,m|A,n)$ can be simply expressed in terms of the propagator $P(x,A,n|x_0,x_0,0)$ computed above (\[start\_expr\]) as (see Fig. \[fig\_rw\] b)): $$\begin{aligned}
\label{start_free}
\fl \tilde P(x,m | A,n) = \frac{1}{P(A,n)} \int_{-\infty}^\infty \rmd x_F \int_{-\infty}^\infty \rmd A_1 P(x,A_1,m|0,0,0) P(x_F,A-A_1,n-m|x,x,0) \;,\end{aligned}$$ where we have used the Markov property of the Lévy random walk. In the above expression (\[start\_free\]), $x_F$ is the end point of the walk (see Fig. \[fig\_rw\] b)), which is free here. Hence $\tilde P(x,m | A,n)$ is obtained by integration over this end point $x_F$. Notice that it is normalized according to $\int_{-\infty}^\infty \tilde P(x,m | A,n) \rmd x = 1$ (and therefore we have divided by $P(A,n)$ in the expression above (\[start\_free\]) because the measure is restricted to random walks of fixed area $A$ after $n$ time steps). Using the explicit expressions computed above (\[start\_expr\]) one obtains after integration over $x_F$ and $A_1$: $$\begin{aligned}
\fl \tilde P(x,m | A,n) = \frac{1}{P(A,n|0,0)} \int_{-\infty}^\infty \frac{\rmd k_1}{2\pi} \int_{-\infty}^\infty \frac{\rmd k_2}{2\pi} e^{-\sum_{\nu=0}^m |k_1+\nu k_2|^\alpha - |k_2|^\alpha \sum_{\nu=0}^{n-m} |\nu|^\alpha} e^{- i k_1 x -ik_2 (A - (n-m) x)} \;. \nonumber \\\end{aligned}$$ In the large $n$ limit, keeping $X = x/n^{1/\alpha}$, $Y = A/n^{1+1/\alpha}$ and $\tau = m/n$ fixed one has $$\begin{aligned}
\fl && \tilde P(x,m | A,n) = \frac{1}{n^{1/\alpha}} \tilde G(X, \tau | Y) \;, \nonumber \\
\fl && \tilde G(X, \tau | Y) = \frac{(1+\alpha)^{-1/\alpha}}{{\cal S}_\alpha((\alpha + 1)^{1/\alpha} Y)} \int_{-\infty}^\infty \frac{\rmd r}{2\pi} \int_{-\infty}^\infty \frac{\rmd k}{2\pi} |k| e^{-|k|^\alpha\left( \tilde \gamma(r,\tau) + \tilde \gamma(0,1-\tau) \right)} e^{- i k r X - i k (Y - (1-\tau) X)} \;,\end{aligned}$$ where we have used the expression of $P(A,n|0,0)$ given in Eq. (\[marginals\]) and we have introduced $$\begin{aligned}
\tilde \gamma(r,\tau) = \int_0^\tau |r + z|^\alpha dz \;,\end{aligned}$$ which is a generalization of the function $\gamma(r) \equiv \tilde \gamma(r,1)$ in (\[expr\_scaling\]). It reads $$\begin{aligned}
\label{expr_gen_gamma}
\tilde \gamma(r , \tau) =
\cases{
\frac{1}{\alpha+1} \left( (-r)^{\alpha+1} - (-\tau-r)^{\alpha+1} \right) \;, \; r \leq -\tau \;, \\
\frac{1}{\alpha+1} \left( (r+\tau)^{\alpha+1} + (-r)^{\alpha+1} \right) \;, \; -\tau \leq r \leq 0 \;, \\
\frac{1}{\alpha+1} \left( (r+\tau)^{\alpha+1} - r^{\alpha+1} \right) \;, \; r \geq 0 \;.
}\end{aligned}$$ We can now compute $\langle \tilde x(m) \rangle$ which, in the large $n$ limit, takes the scaling form $$\begin{aligned}
\label{free_1}
\fl &&\frac{\langle \tilde x(m) \rangle}{n^{1/\alpha}} = h_\alpha\left(\frac{m}{n}, \frac{A}{n^{1+1/\alpha}}\right) \\
\fl &&h_\alpha(\tau, Y) = \frac{(1+\alpha)^{-1/\alpha}}{{\cal S}_\alpha((\alpha + 1)^{1/\alpha} Y)} \int_{-\infty}^\infty \rmd X X \int_{-\infty}^\infty \frac{\rmd r}{2\pi} \int_{-\infty}^\infty \frac{\rmd k}{2\pi} |k| e^{-|k|^\alpha g(r,\tau)} e^{- i k r X - i k (Y - (1-\tau) X)} \;. \nonumber \end{aligned}$$ with $g(r,\tau) = \tilde \gamma(r,\tau) + \tilde \gamma(0,1-\tau)$. This function $h_\alpha(\tau, Y)$ can be written as $$\begin{aligned}
\fl &&h_\alpha(\tau, Y) = \frac{(1+\alpha)^{-1/\alpha}}{{\cal S}_\alpha((\alpha + 1)^{1/\alpha} Y)} \int_{-\infty}^\infty \rmd X \int_{-\infty}^\infty \frac{\rmd r}{2\pi} \int_{-\infty}^\infty \frac{\rmd k}{2\pi} \frac{|k|}{(- i k)} e^{-|k|^\alpha g(r,\tau)} \frac{\partial}{\partial r} \left( e^{- i k r X - i k (Y - (1-\tau) X)}\right)\end{aligned}$$ which suggests to perform an integration by part in the integral over $r$, yielding (one can check that the boundary terms vanish) $$\begin{aligned}
\label{intermediaire}
\fl h_\alpha(\tau, Y) = \frac{(1+\alpha)^{-1/\alpha}}{{\cal S}_\alpha((\alpha + 1)^{1/\alpha} Y)} \int_{-\infty}^\infty \rmd X \int_{-\infty}^\infty \frac{\rmd r}{2\pi} \int_{-\infty}^\infty \frac{\rmd k}{2\pi} \frac{|k|^{1+\alpha}}{(- i k)}\frac{\partial g(r,\tau)}{\partial r} e^{-|k|^\alpha g(r,\tau)} e^{- i k r X - i k (Y - (1-\tau) X)} \nonumber \\\end{aligned}$$ On this expression (\[intermediaire\]), the integral over $X$ can be done yielding simply a delta function of $r$, namely $2 \pi |k|^{-1}\delta(r - (1-\tau))$. This allows us to perform then the integral over $r$ to obtain $$\begin{aligned}
\label{free_2}
&& h_\alpha(\tau, Y) = \frac{(1+\alpha)^{-1/\alpha}}{{\cal S}_\alpha((\alpha + 1)^{1/\alpha} Y)} i \left(\frac{\partial g(r,\tau)}{\partial r} \right)_{r = 1-\tau}
\int_{-\infty}^\infty |k|^\alpha k^{-1} e^{-\frac{|k|^\alpha}{1+\alpha}} e^{-ikY} \frac{\rmd k}{2\pi} \;,\end{aligned}$$ where we have used the relation $\tilde \gamma(1-\tau,\tau) + \tilde \gamma(0,1-\tau) = 1/(1+\alpha)$. It is then easy to check that $$\begin{aligned}
i \int_{-\infty}^\infty |k|^\alpha k^{-1} e^{-\frac{|k|^\alpha}{1+\alpha}} e^{-ikY} \frac{\rmd k}{2\pi} = \frac{\alpha+1}{\alpha} Y (1+\alpha)^{1/\alpha} {\cal S}_\alpha\left[(1+\alpha)^{1/\alpha} Y\right] \;,\end{aligned}$$ so that finally one obtains the simple result $$\begin{aligned}
\label{explicit_free}
h_\alpha(\tau, Y) = Y \frac{\alpha+1}{\alpha} (1- (1-\tau)^\alpha) \;,\end{aligned}$$ where we have used $\left(\frac{\partial g(r,\tau)}{\partial r} \right)_{r = 1-\tau} = 1-(1-\tau)^\alpha$. In Fig. (\[fig\_profiles\]) a), we show a plot of $h_\alpha(\tau, Y)/Y$ as a function of $\tau$.
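For completeness, the simple scaling function (\[explicit\_free\]) can be plotted directly; the short sketch below reproduces the kind of curves shown in Fig. \[fig\_profiles\] a) for a few illustrative values of $\alpha$.

```python
import numpy as np
import matplotlib.pyplot as plt

tau = np.linspace(0.0, 1.0, 200)
for alpha in (0.5, 1.0, 1.5, 2.0):
    # Eq. (explicit_free): h_alpha(tau, Y)/Y for a free Levy walk with fixed area
    h_over_Y = (alpha + 1) / alpha * (1 - (1 - tau)**alpha)
    plt.plot(tau, h_over_Y, label=rf"$\alpha = {alpha}$")
plt.xlabel(r"$\tau = m/n$")
plt.ylabel(r"$h_\alpha(\tau, Y)/Y$")
plt.legend()
plt.show()
```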
The case of a Lévy bridge
-------------------------
Here we consider a Lévy bridge, i.e. a Lévy random walker starting at $0$ at the initial time and constrained to come back to the origin after $n$ time steps. We compute the probability $\tilde P_B(x,m |
A,n)$ that the position of the random walker is $x$ after $m$ time steps given that the area, after $n$ time steps, is fixed to $A$. From this probability, one obtains the average profile as $\langle \tilde x_B(m)
\rangle = \int_{-\infty}^\infty x \tilde P_B(x,m | A,n) \rmd
x$. This probability $\tilde P_B(x,m | A,n)$ can be expressed, as in Eq. (\[start\_free\]) in terms of the propagator $P(x,A,n| x_0,x_0,0)$ computed above in Eq. (\[start\_expr\]) as: $$\begin{aligned}
\label{start_dist_profile}
\fl \tilde P_B(x,m | A,n) = \frac{1}{P(0,A,n|0,0,0)}\int_{-\infty}^\infty P(x,A_1,m|0,0,0) P(x,A-A_1,n-m|0,0,0) \rmd A_1 \;,\end{aligned}$$ where we have used the Markov property of the Lévy random walk. It is normalized according to $\int_{-\infty}^\infty \tilde P_B(x,m | A,n) \rmd
x = 1$ (and therefore we have divided by $P(0,A,n|0,0,0)$ because the measure is restricted to bridges of fixed area $A$). Using the explicit expressions obtained above (\[start\_expr\]) one has $$\begin{aligned}
\label{dist_profile}
\tilde P_B(x,m | A,n) = &&\frac{1}{P(0,A,n|0,0,0)}\int_{-\infty}^\infty \frac{\rmd k_1}{2\pi} \int_{-\infty}^\infty \frac{\rmd k'_1}{2\pi} \int_{-\infty}^\infty \frac{\rmd k_2}{2\pi} e^{-i(k_1 + k'_1) x - i k_2 A} \nonumber \\
&& \times
e^{- \sum_{\nu=1}^{m} |k_1 + \nu k_2|^\alpha - \sum_{\nu=1}^{n-m} |k'_1 + \nu k_2|^\alpha} \;.\end{aligned}$$ In the large $n$ limit, keeping $X = x/n^{1/\alpha}$, $Y = A/n^{1+1/\alpha}$ and $\tau = m/n$ fixed, one has $$\begin{aligned}
\label{start_expr_profile}
&& \tilde P_B(x,m | A,n) = \frac{1}{n^{1/\alpha}} \tilde G_B(X,\tau|Y) \;, \nonumber \\
&& \tilde G_B(X,\tau|Y) = \frac{\pi}{\Gamma(1+\alpha^{-1}) F_\alpha(Y)}\int_{-\infty}^\infty \frac{\rmd r}{2\pi} \int_{-\infty}^\infty \frac{\rmd r'}{2\pi}\int_{-\infty}^\infty \frac{\rmd k}{2\pi} k^2 e^{-i k (r + r') X - i k Y} \\
&& \times e^{ - |k|^\alpha [\tilde \gamma(r,\tau) + \tilde \gamma(r',1-\tau) ] }\end{aligned}$$
We can now compute $\langle \tilde x_B(m) \rangle$, which in the large $n$ limit takes the scaling form $$\begin{aligned}
\frac{\langle \tilde x_B(m) \rangle}{n^{1/\alpha}} = H_\alpha\left(\frac{m}{n}, \frac{A}{n^{1+1/\alpha}} \right) \;,\end{aligned}$$ where $$\begin{aligned}
\label{expr_bridge_stable}
\!\!\!\!\!H_\alpha(\tau, Y) &=& i \frac{1}{2\Gamma(1+\alpha^{-1}) F_\alpha(Y)} \int_{-\infty}^\infty \frac{\rmd k}{2 \pi} \int_{-\infty}^\infty {\rmd} r |k|^{\alpha-1} k \partial_r \tilde \gamma(r,\tau) e^{- |k|^\alpha \tilde \Gamma(r,\tau) - i k Y} \\
\!\!\!\!\!\! &=& - \frac{1}{2\Gamma(1+\alpha^{-1}) F_\alpha(Y)} \frac{\partial}{\partial Y} \left( \int_{-\infty}^\infty \frac{\rmd k}{2 \pi} \int_{-\infty}^\infty {\rmd} r |k|^{\alpha-1}
\partial_r \tilde \gamma(r,\tau) e^{- |k|^\alpha \tilde \Gamma(r,\tau) - i k Y} \right)\;, \nonumber\end{aligned}$$ where we have introduced the notation $\tilde \Gamma(r,\tau) = \tilde \gamma(r,\tau) + \tilde \gamma(-r,1-\tau)$ which we compute straightforwardly from Eq. (\[expr\_gen\_gamma\]) as: $$\begin{aligned}
\label{def_biggamma}
\fl \tilde \Gamma(r,\tau) = \tilde \gamma(r,\tau) + \tilde \gamma(-r,1-\tau) =
\cases{
\frac{1}{\alpha+1} \left((-r+1-\tau)^{\alpha+1} - (-r-\tau)^{\alpha+1} \right) \;, \; r \leq -\tau \\
\frac{1}{\alpha+1} \left( (r+\tau)^{\alpha+1} + (-r+1-\tau)^{\alpha+1} \right) \;, \; -\tau \leq r \leq 1-\tau \\
\frac{1}{\alpha+1} \left( (r+\tau)^{\alpha+1} - (r-1+\tau)^{\alpha+1} \right) \;, \; r \geq 1-\tau \;.
}\end{aligned}$$ Note that this function $\tilde \Gamma(r,\tau)$ satisfies the identity $$\begin{aligned}
\tilde \Gamma(1-\tau-r,\tau) = \int_0^{1} |z-r|^\alpha \rmd z = \gamma(-r) \;,\end{aligned}$$ independently of $\tau$.
For generic $\alpha$, the expression above (\[expr\_bridge\_stable\]) is quite difficult to handle. For $\alpha = 2$ (Brownian motion) and $\alpha = 1$, further analytical progress is however possible. For $\alpha = 2$, one has $\partial_r \tilde \gamma(r,\tau) =
\tau(2r+\tau)$ and $\tilde \Gamma(r,\tau) = r^2 + r (2\tau-1) -
\tau(1-\tau)+1/3$ and therefore one checks $$\begin{aligned}
\label{identity}
\int_{-\infty}^\infty \partial_r \tilde \gamma(r,\tau) e^{-k^2
\tilde \Gamma(r,\tau) } \rmd r = \frac{\sqrt{\pi}}{|k|} e^{-\frac{k^2}{12}} \tau(1-\tau) \;.\end{aligned}$$ Using this identity (\[identity\]) to perform the integral over $r$ in the expression above (\[expr\_bridge\_stable\]), and using $F_2(Y) = \sqrt{3/\pi}\exp{(-3Y^2)}$, one obtains for $\alpha=2$: $$\begin{aligned}
H_2(\tau, Y) = 6 Y \tau(1-\tau) \;.\end{aligned}$$
Another interesting case where analytical progress is possible is $\alpha =1$. Given the expression of $\tilde \gamma(r,\tau)$ in Eq. (\[expr\_gen\_gamma\]) and $\tilde \Gamma(r,\tau)$ in Eq. (\[def\_biggamma\]), one observes that the integral over $r$ in Eq. (\[expr\_bridge\_stable\]) gives rise to $4$ different terms, corresponding to $r \in ]-\infty, -\tau]$, $r \in [-\tau,0]$, $r \in [0, 1-\tau]$ and finally $r \in [1-\tau,+\infty[$. For $\alpha = 1$ it turns out that the first and fourth terms, corresponding to $r \in ]-\infty, -\tau]$ and $r \in [1-\tau, +\infty[$ do cancel each other (which is the case only for $\alpha=1$) resulting in the following expression: $$\begin{aligned}
\fl H_1(\tau, Y) = \frac{Y}{\pi \Gamma(1+\alpha) F_1(Y)} \left(
\int_{-\tau}^0 (2r+\tau) \frac{\tilde \Gamma(r,\tau)}{(\tilde \Gamma(r,\tau)^2 +
Y^2)^2} \rmd r + \tau \int_{0}^{1-\tau}
\frac{\tilde \Gamma(r,\tau)}{(\tilde \Gamma(r,\tau)^2+Y^2)^2} \rmd r
\right) \;,\end{aligned}$$ with $\tilde \Gamma(r,\tau) = \frac{1}{2} ((r+\tau)^2+(r+\tau-1)^2)$. In the asymptotic limit $Y \to 0$ one obtains $$\begin{aligned}
\label{small_y_shape}
H_1(\tau, Y) \sim \frac{Y}{4 + \pi} \left( 3\pi-2 + \frac{2}{1+2\tau(\tau-1)} + 12 (2\tau-1) \arctan{(1-2\tau)} \right) \;.\end{aligned}$$ In the opposite limit $Y \to \infty$ one obtains $$\begin{aligned}
\label{large_y_shape}
H_1(\tau, Y) \sim Y \frac{10}{3} \tau(1-\tau)(2 - \tau(1-\tau)) \;.\end{aligned}$$ Note that although the two functions of $\tau$ entering these asymptotic expansions in Eq. (\[small\_y\_shape\]) and Eq. (\[large\_y\_shape\]) have very different analytical expressions, they are actually quite close to each other on the interval $[0,1]$ (see Fig. (\[fig\_profiles\]) b)).
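Although Eq. (\[small\_y\_shape\]) and Eq. (\[large\_y\_shape\]) look rather different, their closeness on $[0,1]$ is easy to check numerically. The short Python sketch below (assuming NumPy; it is only an illustration, not part of the derivation) tabulates the two shape functions, i.e. the coefficients of $Y$ in the two limits.

    import numpy as np

    def shape_small_Y(tau):
        # coefficient of Y in H_1(tau, Y) for Y -> 0, Eq. (small_y_shape)
        return (3.0 * np.pi - 2.0
                + 2.0 / (1.0 + 2.0 * tau * (tau - 1.0))
                + 12.0 * (2.0 * tau - 1.0) * np.arctan(1.0 - 2.0 * tau)) / (4.0 + np.pi)

    def shape_large_Y(tau):
        # coefficient of Y in H_1(tau, Y) for Y -> infinity, Eq. (large_y_shape)
        return (10.0 / 3.0) * tau * (1.0 - tau) * (2.0 - tau * (1.0 - tau))

    tau = np.linspace(0.0, 1.0, 11)
    print(np.round(shape_small_Y(tau), 4))
    print(np.round(shape_large_Y(tau), 4))

Both functions vanish at the endpoints and reach, at $\tau = 1/2$, the values $(3\pi+2)/(4+\pi) \simeq 1.60$ and $35/24 \simeq 1.46$ respectively.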
Numerical results
=================
We now come to numerical simulations of Lévy bridges. As mentioned above, one cannot use the relation above (\[id\_bb\]), which is only valid for $\alpha=2$ [@knight], to simulate a Lévy bridge. Instead, we consider the joint probability distribution function (pdf) of the increments $\eta(m)$ for a Lévy bridge of size $n$. Indeed, these increments are independent random variables, distributed according to $\phi(\eta)$, with the global constraint that $x(n) = \sum_{m=1}^n \eta(m) = 0$. Therefore the joint pdf of the increments $P_B\left(\eta(1), \eta(2), \cdots, \eta(n)\right)$ is simply given by $$\begin{aligned}
\label{joint_increments}
P_{B}\left(\eta(1), \eta(2), \cdots, \eta(n) \right) &\propto& \prod_{m=1}^n \phi\left[\eta(m)\right] \delta\left(\sum_{m=1}^n \eta(m)\right) \\
&\propto& \exp{\left[ \sum_{m=1}^n \ln{\left [\phi\left[\eta(m)\right] \right]} \right ]} \delta \left(\sum_{m=1}^n \eta(m) \right) \;. \nonumber \end{aligned}$$ This joint distribution can thus be considered as a Boltzmann weight with an effective energy $E = - \sum_{m=1}^n \ln{\left [\phi\left[\eta(m)\right] \right] }$ and effective inverse temperature $\beta = 1$. This leads us to use a Monte Carlo algorithm, with a global constraint, to generate “configurations” of the increments distributed according to the distribution above (\[joint\_increments\]). We implement it in the following way. We start with a random initial configuration of the $\eta(m)$’s which satisfies the global constraint $\sum_{m=1}^n \eta(m)=0$ (it can also be $\eta(m) = 0$, for all $m$). At each time step we choose randomly two sites $i$ and $j$ among $1, 2, \cdots, n$ and the following simple moves are proposed $$\begin{aligned}
\label{move}
&&\eta(i) \to \eta'(i) = \eta(i) + \Delta \eta \;, \nonumber \\
&&\eta(j) \to \eta'(j) = \eta(j) - \Delta \eta \;,\end{aligned}$$ such that the global constraint of zero sum is automatically satisfied. This move is then accepted, in the Metropolis algorithm that we use here, with a probability $P_{ij}$ given by $$\begin{aligned}
P_{ij} &=& \min{\left(1, \frac{\phi \left[ \eta'(i) \right] \phi\left[\eta'(j)\right]}{\phi \left[\eta(i) \right] \phi\left[\eta(j) \right]} \right)} \\
&=& \min(1,\exp{(-\Delta E)}) \;, \; \Delta E = \log{\left(\frac{\phi \left[\eta(i) \right] \phi\left[\eta(j) \right]}{\phi \left[ \eta'(i) \right] \phi\left[\eta'(j)\right]}\right)}\end{aligned}$$ This Monte Carlo algorithm is thus very similar to the Kawasaki dynamics for ferromagnetic spin systems relaxing towards equilibrium with a conserved global magnetization [@kawasaki]. Once the increments $\eta(k)$’s are generated according to this joint probability (\[joint\_increments\]), we can generate the random walk bridge $x_B(m) = \sum_{k=1}^m \eta(k)$ and compute the distribution of the area $A = \sum_{m=1}^n x_B(m)$ under the Lévy bridge. In Fig. \[fig\_numerics\] a), we show a plot of this distribution $P_B(A,n)$ for $\alpha=1$ and $n=100$. To compute it, we first ran $10^7$ Monte Carlo steps to equilibrate the system, and the distribution was then computed as an average over $10^7$ samples generated in the time interval $[10^7, 2 \cdot 10^7]$. In Fig. \[fig\_numerics\], we also show a plot of the exact explicit expression for $F_1(Y)$ given in Eq. (\[elem\]), showing a very good agreement with our numerics. We have also computed this distribution numerically for other values of $\alpha \in ]0,2[$, showing a good agreement with the power law tail obtained in Eq. (\[large\_y\]). Note however that for small $\alpha$ it is quite hard to equilibrate the system, so that a precise estimate of the exponent characterizing the power law tail of $P_B(A,n)$ is difficult to obtain for $\alpha < 1$.
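For concreteness, the following Python sketch (assuming NumPy) implements the constrained Metropolis dynamics described above for $\alpha = 1$, for which $\phi(\eta) = 1/[\pi(1+\eta^2)]$. It is a minimal illustration rather than the code used for Fig. \[fig\_numerics\], and the run lengths are deliberately much shorter than the $10^7$ steps quoted above.

    import numpy as np

    rng = np.random.default_rng(0)

    def log_phi(eta):
        # log of the Cauchy (alpha = 1) increment density, up to an additive constant
        return -np.log1p(eta ** 2)

    def levy_bridge_areas(n=100, n_equil=10 ** 4, n_samples=10 ** 4, step=1.0):
        eta = np.zeros(n)                      # initial configuration, sum(eta) = 0
        areas = np.empty(n_samples)
        for sweep in range(n_equil + n_samples):
            for _ in range(n):                 # one sweep = n attempted moves of Eq. (move)
                i, j = rng.integers(n, size=2)
                if i == j:
                    continue
                d = step * rng.standard_cauchy()
                dE = (log_phi(eta[i]) + log_phi(eta[j])
                      - log_phi(eta[i] + d) - log_phi(eta[j] - d))
                if dE <= 0 or rng.random() < np.exp(-dE):
                    eta[i] += d                # the zero-sum constraint is preserved
                    eta[j] -= d
            if sweep >= n_equil:
                x = np.cumsum(eta)             # bridge positions x_B(m)
                areas[sweep - n_equil] = x.sum() / n ** 2   # rescaled area Y = A / n^2
        return areas

A histogram of the returned rescaled areas can then be compared with $F_1(Y)$ of Eq. (\[elem\]).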
We can use a similar Monte Carlo approach to generate a random walk bridge with a fixed area $A = \sum_{m=1}^n (n+1-m) \eta(m) $. In that case, the joint pdf of the increments $\tilde P_B\left(\eta(1), \eta(2), \cdots, \eta(n)\right)$ is simply given by $$\begin{aligned}
\label{joint_increments_area}
\fl \tilde P_{B}\left(\eta(1), \eta(2), \cdots, \eta(n) \right) &\propto& \prod_{m=1}^n \phi\left[\eta(m) \right ]\delta\left(\sum_{m=1}^n \eta(m)\right) \delta\left(\sum_{m=1}^n (n+1-m) \eta(m)-A\right) \;.\end{aligned}$$ We start with an initial configuration of the $\eta(m)$’s which satisfies both global constraints. In practice, we start with $\eta(m) = 6 A (N+1-2 m)/((N-1)N(N+1))$. Then, to preserve both constraints (\[joint\_increments\_area\]), at each time step we choose randomly three sites $i$, $j$ and $k$ among $1, 2, \cdots, n$ and the following simple moves are proposed $$\begin{aligned}
\label{move_area}
&&\eta(i) \to \eta'(i) = \eta(i) + \Delta \eta \;, \nonumber \\
&&\eta(j) \to \eta'(j) = \eta(j) + \frac{i-k}{k-j} \Delta \eta \;, \nonumber \\
&&\eta(k) \to \eta'(k) = \eta(k) + \frac{j-i}{k-j} \Delta \eta \;. \end{aligned}$$ Note that to converge to the correct probability measure (\[joint\_increments\_area\]) one has to choose $\Delta \eta$ either positive or negative with equal probability. This move (\[move\_area\]) is then accepted with a probability $P_{ijk}$ given by $$\begin{aligned}
P_{ijk} &=& \min{\left(1, \frac{\phi\left[\eta'(i)\right] \phi\left[\eta'(j)\right] \phi\left[\eta'(k)\right]}{\phi\left[ \eta(i)\right] \phi\left[\eta(j)\right] \phi\left[\eta(k) \right]} \right)} \\
&=& \min(1,\exp{(-\Delta E)}) \;, \; \Delta E = \log{\left(\frac{\phi \left[\eta(i) \right] \phi\left[\eta(j) \right] \phi\left[\eta(k) \right] }{\phi \left[ \eta'(i) \right] \phi\left[\eta'(j)\right] \phi\left[\eta'(k) \right] }\right)}\end{aligned}$$ Once the increments $\eta(k)$’s are generated according to this joint probability (\[joint\_increments\_area\]), we can generate the random walk bridge $\tilde x_B(m) = \sum_{k=1}^m \eta(k)$ with fixed area $A$ and compute the profile $\langle \tilde x_B(m) \rangle$. In Fig. \[fig\_numerics\] b), we show a plot of this average profile for $\alpha=1$, $n=100$ and $A/n^2 \sim 20$. To compute it, we first ran $10^7$ Monte Carlo steps to equilibrate the system, and the average was then computed over $10^7$ samples generated in the time interval $[10^7, 2 \cdot 10^7]$. In Fig. \[fig\_numerics\] b), we also plot, with a solid line, our asymptotic result in Eq. (\[large\_y\_shape\]), showing a relatively good agreement with our numerics (note that here $A/n^2 = 20$). On the same plot, Fig. \[fig\_numerics\] b), we also show, with a dotted line, the result for the Brownian bridge (\[shape\_bb\]), which is independent of $A$. It is quite remarkable that these two profiles are very similar, which shows that the global constraints that we impose here have strong consequences on the statistics of the Lévy random walk.
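A minimal Python sketch of this second algorithm is given below (again for $\alpha = 1$ and with NumPy; the parameters are illustrative and much smaller than in the actual simulations). The comments indicate why the three-site move (\[move\_area\]) leaves both the zero-sum and the fixed-area constraints unchanged.

    import numpy as np

    rng = np.random.default_rng(1)

    def log_phi(eta):
        # log of the Cauchy (alpha = 1) increment density, up to an additive constant
        return -np.log1p(eta ** 2)

    def fixed_area_profile(n=100, A=None, n_equil=10 ** 4, n_samples=10 ** 4, step=1.0):
        if A is None:
            A = 20.0 * n ** 2                 # rescaled area A / n^2 = 20, as in Fig. b)
        m = np.arange(1, n + 1)
        # initial configuration satisfying sum(eta) = 0 and sum((n+1-m) eta) = A
        eta = 6.0 * A * (n + 1.0 - 2.0 * m) / ((n - 1.0) * n * (n + 1.0))
        profile = np.zeros(n)
        for sweep in range(n_equil + n_samples):
            for _ in range(n):
                i, j, k = rng.choice(n, size=3, replace=False)
                d = step * rng.standard_cauchy()   # symmetric proposal for Delta eta
                # move of Eq. (move_area): the displacement is orthogonal to any affine
                # function of the site index, so both sum(eta) and the weighted area
                # sum((n+1-m) eta) are left unchanged
                di, dj, dk = d, d * (i - k) / (k - j), d * (j - i) / (k - j)
                dE = (log_phi(eta[i]) + log_phi(eta[j]) + log_phi(eta[k])
                      - log_phi(eta[i] + di) - log_phi(eta[j] + dj) - log_phi(eta[k] + dk))
                if dE <= 0 or rng.random() < np.exp(-dE):
                    eta[i] += di
                    eta[j] += dj
                    eta[k] += dk
            if sweep >= n_equil:
                profile += np.cumsum(eta)
        return profile / n_samples            # estimate of <x_B(m)> at fixed area A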
Conclusion
==========
To conclude, we have studied two main properties of a Lévy bridge $x_B(m)$ of length $n$ : (i) the distribution $P_B(A,n)$ of the area under a Lévy bridge and (ii) the average profile $\langle \tilde x_B(m) \rangle$ of a Lévy bridge with fixed area $A$.
- [ For $P_B(A,n)$ we have found the scaling form, valid for large $n$, $P_B(A,n) \sim n^{-1-1/\alpha} F_\alpha(Y)$ with an interesting power law behavior $F_\alpha(Y) \sim Y^{-2(1+\alpha)}$. For $\alpha=1$, we have obtained an explicit expression for $F_1(Y)$ in terms of elementary functions (\[elem\]). We have also shown, using the Lindeberg condition, that the non-Gaussianity of $P_B(A,n)$, for $\alpha > 1/2$, is due only to the correlations between the positions $x_B(m)$ of the walker.]{}
- [For the average profile, $\langle \tilde x_B(m) \rangle$, we have found the scaling form $\langle \tilde x_B(m) \rangle \sim n^{1/\alpha} H_{\alpha}(m/n,A/n^{1+1/\alpha})$ where, at variance with Brownian motion, $H_\alpha(\tau,Y)$ is a non-trivial function of the rescaled area $Y$. For $\alpha=1$, we have obtained simple analytical expressions for $H_1(\tau,Y)$ in both limits $Y \to 0$ and $Y \to \infty$. In particular, we have shown that the average profile of the Lévy random walk with a fixed area is not very far from the profile of a Brownian bridge with fixed area.]{}
- [We have finally compared our analytical results with Monte Carlo simulations of these Lévy random walks with global constraints.]{}
In view of recent developments in the study of area distributions for variants of Brownian motions [@janson_review; @satya_airy; @schehr_airy; @rambeau_airy], it would be very interesting to extend the results presented here to other constrained Lévy walks, including in particular Lévy random walks conditioned to stay positive (the Lévy excursion), which is a challenging open problem.
On the use of the Lindeberg condition {#appendix_lindeberg}
=====================================
Let us consider [*independent*]{} and [*non-identical*]{} random variables $X_1, \cdots, X_n$ which have the same distribution as the Lévy bridge $x_B(m)$ (\[marginal\_bridge1\]), [*i.e.*]{} $$\begin{aligned}
\!\!\!\! {\rm Proba}(X_m = x) = \frac{\pi}{\Gamma(1+\alpha^{-1})} \frac{n^{1/\alpha}}{m^{1/\alpha}(n-m)^{1/\alpha}} {\cal S}_\alpha \left(\frac{x}{m^{1/\alpha}} \right) {\cal S}_\alpha \left(\frac{x}{(n-m)^{1/\alpha}} \right) \;.\end{aligned}$$ For $\alpha > 1/2$, $\sigma_m^2 = \langle X_m^2 \rangle$ is well defined and one has (\[variance\_bridge\]) $$\begin{aligned}
\sigma_m^2 = \langle X_m^2 \rangle = \langle x^2_B(m) \rangle = \tilde a_\alpha n^{2/\alpha - 2} m (n-m) \;.\end{aligned}$$ Given that the variables $X_i$ are not identical, one cannot directly apply the Central Limit Theorem. However, one can show that these random variables $X_m$ do satisfy the Lindeberg condition, which guarantees that their sum $A_n = \sum_{m=0}^n X_m$ is distributed according to a Gaussian distribution in the large $n$ limit. Let us first introduce $\Sigma_n^2$ $$\begin{aligned}
\Sigma_n^2 = \sum_{m=1}^n \sigma_m^2 = \frac{\tilde a_\alpha}{6} n^{2/\alpha-2} (n-1) n (n+1) \sim \frac{\tilde a_\alpha}{6} n^{2/\alpha +1} \;, \; n \gg 1 \;,\end{aligned}$$ which implies $\Sigma_n \sim n^{1/2+1/\alpha}$ for large $n$. To apply the Lindeberg condition, we need to estimate for any $\epsilon > 0$ $$\begin{aligned}
&& \langle X_m^2 \rangle_\epsilon = \int_{|x| > \epsilon \Sigma_n} x^2 {\rm Proba}(X_m = x) \rmd x \\
&& = 2 \frac{\pi}{\Gamma(1+\alpha^{-1})} \frac{n^{1/\alpha}}{m^{1/\alpha}(n-m)^{1/\alpha}} \int_{\epsilon \Sigma_n}^\infty x^2 {\cal S}_\alpha \left(\frac{x}{m^{1/\alpha}} \right) {\cal S}_\alpha \left(\frac{x}{(n-m)^{1/\alpha}} \right) \rmd x \nonumber \\
&& \sim c'_\alpha n^{2/\alpha} n^{-(3/2+\alpha)} m (n-m) \;,\end{aligned}$$ where $c'_\alpha$ is independent of $m$ and $n$. Therefore one has $$\begin{aligned}
\label{last_lindeberg}
\frac{\sum_{m=0}^n \langle X_m^2 \rangle_\epsilon}{\Sigma_n^2} \sim n^{-(\alpha-1/2)} \;.\end{aligned}$$ For random variables for which the above ratio (\[last\_lindeberg\]) goes to zero in the limit $n \to \infty$ (which is the case here for $\alpha > 1/2$), a theorem due to Lindeberg (whence the name “Lindeberg condition”) [@feller] says that their sum $A_n/\Sigma_n = \Sigma_n^{-1}\sum_{m=0}^n X_m$ is distributed, in the limit $n \to \infty$, according to a Gaussian distribution of unit variance. The fact that, for a Lévy bridge, the area is not Gaussian distributed (\[large\_y\]) is thus, for $\alpha > 1/2$, a consequence of the correlations between the random variables $x_B(m)$.
To conclude this paragraph, we discuss a simple case where the Lindeberg condition does not hold. Consider the case where $X_1, \cdots, X_n$ are independent random variables distributed according to [@bertin_review] $$\begin{aligned}
{\rm Proba}(X_m=x) = m e^{-mx} \;,\end{aligned}$$ such that one has $$\begin{aligned}
\langle X_m \rangle = \frac{1}{m} \;, \; \langle (X_m - \langle X_m \rangle)^2 \rangle = \frac{1}{m^2} \;.\end{aligned}$$ Then in that case one has immediately $$\begin{aligned}
\Sigma_n^2 = \sum_{m=1}^n \langle (X_m - \langle X_m \rangle)^2 \rangle = \sum_{m=1}^n \frac{1}{m^2} \to \frac{\pi^2}{6} \;, \; n \to \infty \;. \end{aligned}$$ One computes straightforwardly, for $\epsilon > 0$ $$\begin{aligned}
\int_{\epsilon \Sigma_n}^\infty (x - \langle X_m \rangle)^2 {\rm Proba}(X_m=x) \rmd x = \frac{e^{-\epsilon \Sigma_n m}}{m^2} \left( 1 + m^2 (\epsilon \Sigma_n)^2 \right) \;,\end{aligned}$$ such that here one has $$\begin{aligned}
\hspace*{-1cm} \frac{1}{\Sigma_n^2} \sum_{m=1}^\infty \int_{\epsilon \Sigma_n}^\infty (x - \langle X_m \rangle)^2 {\rm Proba}(X_m=x) \rmd x \to \frac{6}{\pi^2} \sum_{m=1}^\infty \frac{e^{-\epsilon' m}}{m^2} (1+ (\epsilon' m)^2) > 0 \;, \end{aligned}$$ with $\epsilon' = \epsilon \pi/\sqrt{6}$. Therefore the Lindeberg condition (\[lindeberg\]) does not hold here. In fact, it can be shown that the distribution of the variable $(S_n- \sum_{k=1}^n k^{-1})/\Sigma_n$ converges to a Gumbel distribution [@bertin_review].
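The Gumbel limit quoted above is easy to observe numerically. One standard way to see it is Rényi’s representation: $S_n$ has the same law as the maximum of $n$ independent standard exponential variables, and this maximum minus $\ln n$ converges to a standard Gumbel, so $S_n - \sum_{k=1}^n k^{-1}$ converges to a Gumbel shifted by Euler’s constant. The following Python sketch (assuming NumPy; not taken from Ref. [@bertin_review]) samples the centred sum and the corresponding limiting density.

    import numpy as np

    rng = np.random.default_rng(2)

    n, n_samples = 1000, 20000
    m = np.arange(1, n + 1)
    H_n = (1.0 / m).sum()                      # harmonic number, H_n = ln n + gamma + o(1)

    # X_m is exponential with rate m (mean 1/m); draw S_n = sum_m X_m repeatedly
    S = np.array([rng.exponential(scale=1.0 / m).sum() for _ in range(n_samples)])
    S_centred = S - H_n

    # limiting density of S_n - H_n: a standard Gumbel shifted by Euler's constant
    y = np.linspace(-3.0, 8.0, 400)
    z = y + np.euler_gamma
    gumbel_pdf = np.exp(-(z + np.exp(-z)))
    # a histogram of S_centred can now be compared with gumbel_pdf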
Asymptotic behavior of $F_\alpha(Y)$ for large $Y$ {#app_asympt}
==================================================
To analyse the large argument behavior of $F_\alpha(Y)$, we study the small $k$ behavior of its Fourier transform $\hat F_\alpha(k)$ given in the text in Eq. (\[expr\_fourier\_gen\_alpha\]): $$\begin{aligned}
\hat F_\alpha(k) &=& \int_{-\infty}^\infty \rmd Y F_\alpha(Y) e^{i k Y} = \hat F_{\alpha,1}(k) +\hat F_{\alpha,2}(k) \;, \\
\hat F_{\alpha,1}(k) &=& \frac{|k|}{\Gamma(1+\alpha^{-1})} \int_0^\infty e^{-\frac{|k|^\alpha}{\alpha+1} \left[(r+1)^{\alpha+1}-r^{\alpha+1}\right]} \rmd r \;, \\
\hat F_{\alpha,2}(k) &=& \frac{|k|}{\Gamma(1+\alpha^{-1})} \int_0^{1/2} e^{-\frac{|k|^\alpha}{\alpha+1} \left[(1/2+r)^{\alpha+1}+(1/2-r)^{\alpha+1}\right]} \rmd r \;. \end{aligned}$$ The analysis of the small $k$ behavior of $\hat F_{\alpha,2}(k)$ is simply obtained by expanding the exponential under the integral. It yields straightforwardly: $$\begin{aligned}
\label{f2_small_k}
\fl \hat F_{\alpha,2}(k) = \frac{1}{\Gamma(1+\alpha^{-1})} \left[\frac{|k|}{2} - \frac{|k|^{1+\alpha}}{(\alpha+1)(\alpha+2)} + \frac{|k|^{1+2\alpha}}{2 (\alpha+1)^2} \left( \frac{1}{3+2 \alpha} + \frac{\sqrt{2 \pi} \Gamma(2+\alpha)}{\Gamma(\frac{5}{2}+\alpha) 2^{3 + 2\alpha}} \right)
\right] + {\cal O}(|k|^{1+3\alpha}) \nonumber \\\end{aligned}$$
The asymptotic expansion of $\hat F_{\alpha,1}(k)$ is a bit more subtle. To get the first two terms of the expansion, one performs the change of variable $z = |k| r$ and then expands $(r+1)^{\alpha+1}$ using the binomial formula, $$\begin{aligned}
\frac{1}{\alpha+1} ((r+1)^{\alpha+1}-r^{\alpha+1} ) = r^{\alpha} + \frac{\alpha}{2} r^{\alpha-1} + ... \;.\end{aligned}$$ This yields $$\begin{aligned}
\label{intermediate_f1}
\hat F_{\alpha,1}(k) \sim \frac{1}{\Gamma(1+\alpha^{-1})} \int_0^{\infty} e^{-r^{\alpha} - \frac{\alpha}{2} |k| r^{\alpha-1}} \rmd r + {\cal O}(k^{1+\eta}) \;,\end{aligned}$$ where $\eta > 0$ is yet unknown (see below). From this expression (\[intermediate\_f1\]), one immediately obtains the first two terms of the expansion of $\hat F_{\alpha,1}(k)$ as $$\begin{aligned}
\label{f1_small_k_first}
\hat F_{\alpha,1}(k) = 1 - \frac{|k|}{2\Gamma(1+\alpha^{-1})} + {\cal O}(k^{1+\eta}) \;.\end{aligned}$$ Combining Eq. (\[f2\_small\_k\]) and Eq. (\[f1\_small\_k\_first\]), one sees that the first non-trivial term, proportional to $|k|$, cancels in $\hat F_\alpha(k)$. Therefore, one needs to expand $\hat F_{\alpha,1}(k)$ beyond the first terms (\[f1\_small\_k\_first\]). For this purpose, we need to study separately the cases $0 < \alpha \leq 1/2$, $1/2 < \alpha \leq 1$ and $1 < \alpha \leq 2$.
The case $0 < \alpha \leq 1/2$
------------------------------
Let us first analyse the term $\hat F_{\alpha,1}(k)$ which we decompose as $$\begin{aligned}
&& \hat F_{\alpha,1}(k) = B_1 (k) + B_2 (k) + B_3(k) \label{decompose_app} \;, \\
&& B_1(k) = \frac{|k|}{\Gamma(1+\alpha^{-1})} \int_0^1 \rmd r e^{-\frac{|k|^\alpha}{\alpha+1} \left[(r+1)^{\alpha+1}-r^{\alpha+1}\right]} \label{def_b1}\;, \nonumber \\
&& B_2(k) = \frac{|k|}{\Gamma(1+\alpha^{-1})} \int_1^\infty \rmd r \left[ e^{-\frac{|k|^\alpha}{\alpha+1} \left[(r+1)^{\alpha+1}-r^{\alpha+1}\right]}
- e^{-|k|^\alpha[r^{\alpha} + \frac{\alpha}{2} r^{\alpha-1}]} \right] \label{def_b2} \;, \\
&& B_3(k) = \frac{|k|}{\Gamma(1+\alpha^{-1})}\int_1^\infty \rmd r e^{-|k|^\alpha (r^{\alpha} + \frac{\alpha}{2} r^{\alpha-1})} \label{def_b3} \;.\end{aligned}$$ It is easy to expand $B_1(k)$ for small $k$ as $$\begin{aligned}
\fl B_1(k) =\frac{|k|^{1+\alpha}}{\Gamma(1+\alpha^{-1})} \frac{2-2^{\alpha+2}}{(\alpha+1)(\alpha+2)} + \frac{|k|^{1+2\alpha}}{2 \Gamma(1+\alpha^{-1})} \int_0^1 \rmd r \left[\frac{1}{\alpha+1} \left( (r+1)^{\alpha+1} - r^{\alpha+1}\right) \right]^2 + {\cal O}(k^{1+3 \alpha}) \nonumber \\
&& \frac{1}{\alpha+1} [ (r+1)^{\alpha+1}-r^{\alpha+1} ] - (r^\alpha + \frac{\alpha}{2} r^{\alpha-1}) = {\cal O} (r^{\alpha - 2}) \;, \\
&& \left( \frac{1}{\alpha+1} [ (r+1)^{\alpha+1}-r^{\alpha+1} ]\right)^2 - (r^\alpha + \frac{\alpha}{2} r^{\alpha-1})^2 = {\cal O} (r^{2\alpha - 2}) \;,\end{aligned}$$ so that, for $\alpha < 1/2$ one can safely expand the exponentials in the integrand of $B_2(k)$ (\[def\_b2\]) up to second order to obtain $$\begin{aligned}
\label{exp_b2_1}
\fl B_2(k) = &-& \frac{|k|^{1+\alpha}}{\Gamma(1+\alpha^{-1})} \int_1^\infty \rmd r \left[ \frac{1}{\alpha+1} [ (r+1)^{\alpha+1}-r^{\alpha+1} ] - (r^\alpha + \frac{\alpha}{2} r^{\alpha-1}) \right] \\
\fl &+& \frac{|k|^{1+2\alpha}}{2 \Gamma(1+\alpha^{-1})} \int_1^\infty \rmd r \left( \left( \frac{1}{\alpha+1} [ (r+1)^{\alpha+1}-r^{\alpha+1} ]\right)^2 - (r^\alpha + \frac{\alpha}{2} r^{\alpha-1})^2 \right) \\
\fl &&+ {\cal O}(|k|^{\min(2, 1+3\alpha)}) \;.\end{aligned}$$ Summing up the contributions from $B_1(k)$ and $B_2(k)$ and performing the integrals yields $$\begin{aligned}
\label{comb_b1_b2}
\fl B_1(k) + B_2(k) = &&\frac{|k|^{1+\alpha}}{\Gamma(1+\alpha^{-1})} \left(\frac{1}{(\alpha+1)(\alpha+2)} - \frac{1}{\alpha+1} - \frac{1}{2}\right) \\
\fl && +\frac{|k|^{1+2\alpha}}{2\Gamma(1+\alpha^{-1})} \left[ \frac{1}{2} + \frac{1}{1+2\alpha} - \frac{1}{(1+\alpha^2) (3 + 2\alpha)} + \frac{\alpha^2}{8\alpha-4} + 2\frac{\Gamma(-3-2\alpha) \Gamma(1+\alpha)}{\Gamma(-\alpha)} \right] \nonumber \\
\fl && + {\cal O}(|k|^{\min(2, 1+3\alpha)}) \;,\end{aligned}$$ which we have carefully checked using Mathematica.
Let us now expand $B_3(k)$ (\[def\_b3\]) for small $k$. It is easily seen from Eq. (\[f1\_small\_k\_first\]) that the first terms of this expansion are indeed given by $$\begin{aligned}
\label{eq_b3_1}
B_3(k) = 1 - \frac{|k|}{2\Gamma(1+\alpha^{-1})} + {\cal O}(|k|^{1+\mu}) \;,\end{aligned}$$ with $\mu > 0$. To go beyond the lowest orders, we first perform a change of variable $x = |k| r$ and then compute $B_3''(k)$ and finally expand it for small $k$. This yields $$\begin{aligned}
\label{b3_seconde}
&& B_3''(k) = \frac{1}{\Gamma(1+\alpha^{-1})} \left(\alpha (\frac{1}{2} + (1 + \frac{\alpha}{2})) k^{\alpha-1} e^{-(1+\frac{\alpha}{2}) k^{\alpha}} + I_3(k)\right) \;, \\
&& I_3(k) = \left(\frac{\alpha}{2} \right)^2 \int_k^\infty e^{-x^\alpha - \frac{\alpha}{2} k x^{\alpha-1}} x^{2(\alpha-1)}\, \rmd x \;,\end{aligned}$$ where $2(\alpha - 1)<-1$ for $\alpha < 1/2$. One then obtains the small $k$ behavior of $I_3(k)$ by simply expanding the term $e^{- \frac{\alpha}{2} k x^{\alpha-1}}$ in the integrand. This yields, to lowest order $$\begin{aligned}
\label{eq_i3}
I_3(k) = \frac{1}{1-2\alpha} \left(\frac{\alpha}{2} \right)^2 k^{2\alpha-1} + {\cal O}(k^{3\alpha-1}) \;.\end{aligned}$$ From Eq. (\[b3\_seconde\]) and Eq. (\[eq\_i3\]), one obtains straightforwardly $$\begin{aligned}
\label{eq_b3_asympt}
&& B_3(k) = 1 - \frac{|k|}{\Gamma(1+\alpha^{-1})} + \frac{|k|^{1+\alpha}}{\Gamma(1+\alpha^{-1})} \left(\frac{1}{2} + \frac{1}{\alpha+1} \right) \\
&& + \frac{|k|^{1+2\alpha}}{\Gamma(1+\alpha^{-1})} \left(-\frac{1+\alpha/2}{2(1+2\alpha)} \left(\frac{1}{2} + (1+\alpha/2) \right) + \frac{1}{2\alpha(1+2\alpha)} \frac{1}{1-2\alpha} \left(\frac{\alpha}{2}\right)^2 \right) \;.\end{aligned}$$ Finally, combining Eq. (\[comb\_b1\_b2\]) and Eq. (\[eq\_b3\_asympt\]) together with the small $k$ expansion of ${\hat F_{\alpha,2}(k)}$ above (\[f2\_small\_k\]), one sees that the term proportional to $|k|^{1+\alpha}$ actually cancels, yielding $$\begin{aligned}
\label{f_small_k_app}
&& \hat F_\alpha(k) = 1 + c_\alpha |k|^{1+2\alpha} + {\cal O}(k^{1+3\alpha}) \;, \\
&& c_\alpha = \frac{1}{\Gamma(1+\alpha^{-1})} \frac{2^{-2(2+\alpha)} \pi^{3/2} \tan{(\alpha \pi/2)} }{\cos{(\alpha \pi)} (1+\alpha) \Gamma(-\alpha) \Gamma(5/2+\alpha)} \;.\end{aligned}$$ This singular behavior of $\hat F_\alpha(k)$ for small $k$ (\[f\_small\_k\_app\]) yields the power law behavior of $F_\alpha(Y)$ for large $Y$ $$\begin{aligned}
\label{expr_appendix}
&& F_\alpha(Y) \propto \frac{a_\alpha}{Y^{2(1+\alpha)}} \;, \; Y \gg 1 \;, \\
&& a_\alpha = - \frac{1}{\pi} \Gamma(2+2\alpha) \cos{(\alpha \pi)} c_\alpha = \frac{2^{-2(2+\alpha)} \sqrt{\pi} \Gamma(2+2\alpha) \tan{(\alpha \pi/2)}}{\Gamma(2+\alpha^{-1}) \Gamma(1-\alpha) \Gamma(\frac{5}{2} + \alpha)} \;.\end{aligned}$$
The case $1/2 < \alpha \leq 1$
------------------------------
This case can be studied along the same lines as above, except that now $1 + 2\alpha > 2$, and therefore one has to handle with care the terms which are proportional to $k^2$, while the coefficient of the term proportional to $k^{1+2\alpha}$ has the same form (\[f\_small\_k\_app\]). We will not repeat the analysis and simply give the result. One finds that $\hat F_\alpha(k)$ behaves for small $k$ as $$\begin{aligned}
\label{expr_bc}
\hat F_\alpha(k) = 1 - \frac{b_\alpha}{2} k^2 + c_\alpha k^{1+2\alpha} + {\cal O}(k^{\min{(3,1+3\alpha)}})
\;, \; b_{\alpha} = \frac{\alpha \Gamma(2-\alpha^{-1})}{12 \Gamma(1+\alpha^{-1})} \;.\end{aligned}$$ The expression for $b_\alpha$ given above (\[expr\_bc\]) yields the expression for $\langle Y^2 \rangle$ given in the text in Eq. (\[area\_variance\]).
The case $1 < \alpha \leq 2$
----------------------------
One can again perform a similar analysis, but in this case one has $1+2 \alpha > 3$, and therefore one has to handle carefully the term proportional to $|k|^3$. A rather lengthy calculation shows that this term actually vanishes for $\alpha > 1$, while the coefficients of the terms proportional to $k^2$ and $|k|^{1+2\alpha}$ are still given by the expressions above (\[expr\_bc\]). This yields, again as above (\[expr\_bc\]), $$\begin{aligned}
\hat F_\alpha(k) = 1 - \frac{b_\alpha}{2} k^2 + c_\alpha k^{1+2\alpha} + {\cal O}(k^5) \;.\end{aligned}$$
Explicit expression of $F_\alpha(Y)$ for $\alpha=1$ {#appendix_elementary}
===================================================
In this appendix, we give an explicit expression of $F_1(Y)$ for $\alpha=1$. The starting point of our analysis is the expression (\[expr\_arctan\]) given in the text: $$\begin{aligned}
&& F_1(Y) = \frac{1}{\pi}\frac{2}{1+4Y^2} \\
&&+ \frac{1}{\pi} \left( \frac{2(1-8Y^2)}{(1+4Y^2)(1+16Y^2)} + \frac{4}{(1+16Y^2)^{\frac{3}{2}}} {\rm Re} \left[{(1- 4 i Y)^{\frac{3}{2}} \arctan{\left((1+4 i Y)^{-\frac{1}{2}}\right)}}\right]
\right) \nonumber \end{aligned}$$ For $z$ a complex number, the following elementary relations are useful : $$\begin{aligned}
&& \arctan{z} = \frac{1}{2i} \left(\log{(1+i z)} - \log{(1-iz)} \right) \;, \\
&& \log{(x+ i y)} = \log{(\sqrt{x^2+y^2})} + 2 i \arctan{\left(\frac{y}{x + \sqrt{x^2+y^2}} \right)} \;.\end{aligned}$$ On the other hand, one has $$\begin{aligned}
\label{def_ab}
\frac{1}{\sqrt{1+ 4 i Y}} = a + i b \; , \; && a = \frac{1}{(1+16Y^2)^{1/4}} \cos{\left(\frac{\theta}{2}\right)} \;, \\
&& b = \frac{-1}{(1+16Y^2)^{1/4}} \sin{\left(\frac{\theta}{2}\right)} \;, \nonumber \\
&& \theta = \arctan{(4 Y)} \;. \nonumber \end{aligned}$$ Defining $\lambda$ and $\mu$ as $$\begin{aligned}
\label{def_lm}
\lambda &=& \arctan{\left[\frac{a^2+b^2-1+\sqrt{a^2+(1-b)^2}\sqrt{a^2+(1+b)^2}}{2a} \right]} \;, \\
\mu &=& -\frac{1}{4} \log{\left[\frac{(1-b)^2+a^2}{(1+b)^2+a^2} \right]} \;, \nonumber \end{aligned}$$ in terms of $a, b$ defined above (\[def\_ab\]), one obtains finally (after straightforward algebra) $$\begin{aligned}
\label{elem}
F_1(Y) = \frac{1}{\pi}\frac{2}{1+4Y^2} \\
+ \frac{1}{\pi} \left( \frac{2(1-8Y^2)}{(1+4Y^2)(1+16Y^2)} + \frac{4}{(1+ 16Y^2)^{3/4}} \left( \lambda \cos{\left(\frac{3\theta}{2} \right)} + \mu \sin{\left(\frac{3\theta}{2} \right)} \right) \right) \;. \nonumber\end{aligned}$$
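Since Eq. (\[elem\]) only involves elementary functions, it is straightforward to evaluate numerically. The sketch below (Python with NumPy, provided only for convenience) transcribes it term by term; a quick check of the normalization $\int F_1(Y)\, \rmd Y = 1$ can be used to validate the transcription.

    import numpy as np

    def F1(Y):
        # explicit expression (elem) for F_1(Y), transcribed term by term
        Y = np.asarray(Y, dtype=float)
        theta = np.arctan(4.0 * Y)
        rho = (1.0 + 16.0 * Y ** 2) ** 0.25    # (1 + 16 Y^2)^{1/4}
        a = np.cos(theta / 2.0) / rho
        b = -np.sin(theta / 2.0) / rho
        lam = np.arctan((a ** 2 + b ** 2 - 1.0
                         + np.sqrt(a ** 2 + (1.0 - b) ** 2)
                         * np.sqrt(a ** 2 + (1.0 + b) ** 2)) / (2.0 * a))
        mu = -0.25 * np.log(((1.0 - b) ** 2 + a ** 2) / ((1.0 + b) ** 2 + a ** 2))
        out = 2.0 / (1.0 + 4.0 * Y ** 2)
        out += (2.0 * (1.0 - 8.0 * Y ** 2) / ((1.0 + 4.0 * Y ** 2) * (1.0 + 16.0 * Y ** 2))
                + 4.0 / (1.0 + 16.0 * Y ** 2) ** 0.75
                * (lam * np.cos(1.5 * theta) + mu * np.sin(1.5 * theta)))
        return out / np.pi

    Y = np.linspace(-50.0, 50.0, 400001)
    print(np.trapz(F1(Y), Y))                  # should be close to 1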
References {#references .unnumbered}
==========
[100]{}
S. Chandrasekhar, Rev. Mod. Phys. [**15**]{}, 1 (1943).
W. Feller, [*An introduction to Probability Theory and its Applications*]{}, (Wiley), New York (1968).
B. Hughes, [*Random walks and random environments*]{}, (Clarendon Press), Oxford (1968).
D.E. Koshland, [*Bacterial Chemotaxis as a Model Behavioral System*]{}, (Raven), New York (1980).
S. Asmussen, [*Applied Probability and Queues*]{}, (Springer), New York (2003); M.J. Kearney, J. Phys. A [**37**]{}, 8421 (2004).
S.N. Majumdar, [*Brownian functionals in Physics and Computer Science*]{}, Current Science [**89**]{}, 2076 (2005); [*Universal First-passage Properties of Discrete-time Random Walks and Lévy Flights on a Line: Statistics of the Global Maximum and Records*]{}, Leuven Lectures FPSP-XII (2009), preprint arXiv:0912.2586 (to appear in Physica A).
R.J. Williams, [*Introduction to the Mathematics of Finance*]{}, (AMS), (2006); M. Yor, [*Exponential Functionals of Brownian Motion and Related Topics*]{}, (Springer), Berlin (2000).
J. de Coninck, F. Dunlop, V. Rivasseau, Commun. Math. Phys. [**121**]{}, 401 (1989).
S. N. Majumdar, A. Comtet, Phys. Rev. Lett. [**92**]{}, 225501 (2004); J. Stat. Phys. [**119**]{}, 777 (2005).
M. J. Kearney, S.N. Majumdar, J. Phys. A: Math. Gen. [**38**]{}, 4097 (2005); M. J. Kearney, S.N. Majumdar, R.J. Martin, J. Phys. A: Math. Theor. [**40**]{}, F863 (2007).
G. Schehr, S.N. Majumdar, Phys. Rev. E [**73**]{}, 056103 (2006).
P. Welinder, G. Pruessner, K. Christensen, New J. of Phys. [**9**]{}, 149 (2007).
S. Janson, Proba. Survey [**4**]{}, 80 (2007).
M. Rajabpour, J. Phys. A : Math. Theor. [**42**]{}, 485205 (2009).
J. Rambeau, G. Schehr, J. Stat. Mech., P09004 (2009).
D. Dhar, R. Ramaswamy, Phys. Rev. Lett. [**63**]{}, 1659 (1989).
P. Le Doussal, K. J. Wiese, Phys. Rev. E [**79**]{}, 051105 (2009).
B. Waclaw, J. Sopik, W. Janke, H. Meyer-Ortmanns, Phys. Rev. Lett. [**103**]{}, 080602 (2009); J. Stat. Mech. P10021 (2009).
M. R. Evans, T. Hanney, S. N. Majumdar, Phys. Rev. Lett. [**97**]{}, 010602 (2006)
M. R. Evans, T. Hanney, J. Phys. A: Math. Gen. [**38**]{}, R195, (2005).
C. Godrèche, Lect. Notes Phys. [**716**]{} 261 (2007), arXiv:cond-mat/0604276.
S. N. Majumdar, Les Houches lecture notes for the summer school [*Exact Methods in Low-dimensional Statistical Physics and Quantum Computing*]{}, (2008), preprint arXiv:0904.4097.
F.B. Knight, [*Hommage à P.A. Meyer et J. Neveu*]{}, [Astérisques]{}, 171 (1996); L. Chaumont, D.G. Hobson, M. Yor, Sém. de Prob. XXXV, 334, (2001).
J. Bertoin, [*Lévy processes*]{}, Camb. Univ. Press., Melbourne, NY, (1996).
For a short review see T.W. Burkhardt, J. Stat. Mech. P07004 (2007).
K. Kawasaki, Phys. Rev. [**145**]{}, 224 (1966).
M. Clusel, E. Bertin, Int. J. Mod. Phys. B [**22**]{}, 3311 (2008).
|
---
abstract: 'We show that the B-mode polarization signal detected at low multipoles by BICEP2 cannot be entirely due to topological defects. This would be incompatible with the high-multipole B-mode polarization data and also with existing temperature anisotropy data. Adding cosmic strings to a model with tensors, we find that B-modes *on their own* provide a comparable limit on the defects to that already coming from [[*Planck*]{}]{} satellite temperature data. We note that strings at this limit give a modest improvement to the best-fit of the B-mode data, at a somewhat lower tensor-to-scalar ratio of $r \simeq 0.15$.'
author:
- Joanes Lizarraga
- Jon Urrestilla
- David Daverio
- Mark Hindmarsh
- Martin Kunz
- 'Andrew R. Liddle'
bibliography:
- 'CosmicStrings.bib'
title: 'Can topological defects mimic the BICEP2 B-mode signal?'
---
Introduction
============
The detection of low-multipole B-mode polarization anisotropies by the BICEP2 project [@Ade:2014xna] opens a new observational window on models that generate the primordial perturbations leading to structure formation. The leading candidate to explain such a B-mode signal is primordial gravitational wave (tensor) perturbations generated by the inflationary cosmology. For a tensor-to-scalar ratio $r$ of around $0.2$, these give a good match to the spectral shape in the region $\ell \simeq 40$ – $150$, while falling some way short of the observed signal at higher multipoles for reasons yet to be uncovered.
An alternative mechanism of generating primordial B-modes is the presence of an admixture of topological defects (see e.g. Refs. [@VilShe94; @Hindmarsh:1994re; @Durrer:2001cg; @Copeland:2009ga; @Hindmarsh:2011qj] for reviews). Many inflation scenarios, particularly of hybrid inflation type, end with a phase transition. Defect production at such a transition is natural and plausibly a sub-dominant contributor to the total temperature anisotropy. Many papers have used recent data to impose constraints on the fraction of defects, typically obtaining limits of a few percent contribution to the large-angle temperature anisotropies [@Wyman:2005tu; @Bevis:2007gh; @Battye:2010xz; @Dunkley:2010ge; @Urrestilla:2011gr; @Avgoustidis:2011ax; @Ade:2013xla]. The tensor and defect spectra were previously compared in Refs. [@Urrestilla:2008jv; @Mukherjee:2010ve].
An important question then arises: does the observed B-mode polarization confirm the existence of a primordial gravitational wave background due to inflationary dynamics in the early Universe, or could it instead be entirely due to the presence of topological defects? In this *Letter* we show that topological defects alone cannot explain the BICEP2 data points.
B-mode constraints from BICEP2
==============================
As with inflationary tensors, a distinctive signature of topological defects lies in the B-mode polarization, where the signal is not masked by a dominant contribution from inflationary scalars. Figure \[pol\] shows a comparison of cosmic microwave background (CMB) spectra predicted from inflation with those of cosmic strings as computed via field theory simulations[^1] by Bevis et al. [@Bevis:2007qz; @Bevis:2010gj], for a particular value of $f_{10}$ near the [*Planck*]{} upper limit [@Ade:2013xla] (where $f_{10}$ is the fractional contribution of defects to the temperature anisotropies at $\ell = 10$). The scalar B-mode spectrum is the one inevitably produced by lensing of the scalar E-modes. In the B-mode channel the string spectrum has a quite different shape to the inflationary tensors, peaking towards smaller scales. Figure \[polar\] shows the B-mode polarization spectra for several classes of defects (textures, semilocal strings, and Abelian Higgs strings [@Urrestilla:2007sf]), showing that they share the same general shape in the multipole range of interest. We focus on cosmic strings (using the Abelian Higgs model) as a specific example for the remainder of this work.
We first attempt to match the cosmic string B-mode spectrum to the BICEP2 data, showing the result in the lower panel of Figure \[BBr0\]. It is clear that the defect spectrum has the wrong shape, and could only match the low-multipole data at $\ell < 100$ by substantially over-predicting the high multipole data ($\ell >100$). In detail, we see that we need $f_{10} \simeq 0.3$ to generate the necessary power at $\ell = 80$, which in turn leads to a B-mode amplitude which is a factor of about 5 too large at higher $\ell$.
In addition, matching the low-multipole data requires a fractional contribution to the total TT power spectrum at $\ell=10$ far larger than the maximum allowed by [[*Planck*]{}]{} [@Ade:2013xla], as shown in the upper panel of Figure \[BBr0\]. We show the defect contributions to the temperature spectrum as the blue-dotted curves, with the required contributions to match the B-mode polarization amplitude at $\ell = 80$ as the highest blue-dotted curve (which corresponds to $f_{10} = 0.3$). The solid black line is the best-fit [$\Lambda$CDM]{} model, while the grey dashed line shows the sum of the $f_{10} = 0.3$ string prediction with the [[*Planck*]{}]{} best-fit [$\Lambda$CDM]{} model [@Ade:2013ktc]. The model in which strings match the B-mode polarization amplitude at $\ell = 80$ is clearly incompatible with the temperature data. Allowing the parameters of the [$\Lambda$CDM]{} model to vary does not help: the 95% upper limit from [*Planck*]{} is around 0.03 to 0.055 depending on the type of defect [@Ade:2013xla].
We can therefore immediately conclude that defects do not provide an alternative to inflationary tensors in explaining the observed data.
We can also use the B-mode data to constrain the contribution of defects to the total anisotropy in a scenario where both strings and inflationary gravitational waves contribute significantly, as anticipated in Refs. [@Seljak:2006hi; @Pogosian:2007gi]. In fact, because the strings contribute more substantially at higher multipoles than inflationary tensors do, a modest admixture of defects improves the fit to the BICEP2 data; as seen in Fig. \[BBr02\], a string fraction of around 0.04 would explain the excess signal at $\ell \simeq 200$ (as an alternative to the more prosaic possible explanations of a foreground contribution or undiscovered systematic), while a fraction above about 0.06 is disfavoured. It is noteworthy that the first detection of the B-modes already gives a limit on defects which is competitive with that from the temperature spectrum. This conclusion can of course only strengthen if some or all of the BICEP2 signal turns out not to be cosmological.
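To make the statement about an admixture quantitative, one would fit the two amplitudes simultaneously. The sketch below (Python with NumPy) is only meant to illustrate the structure of such a fit: it assumes that the tensor and string B-mode templates scale linearly with $r$ and $f_{10}$ respectively, and all input arrays (bandpowers, uncertainties, templates) are placeholders to be supplied by the user. It is not the analysis performed for this work, which would in any case require a more careful treatment of the bandpower likelihood.

    import numpy as np

    def fit_r_and_f10(bb_data, err, bb_tensor_r1, bb_string_f1, bb_lensing):
        # Weighted linear least squares for the two amplitudes (r, f10), assuming
        # C_ell^BB = C_ell^lensing + r * C_ell^tensor(r=1) + f10 * C_ell^string(f10=1).
        # All arrays are bandpowers on the same multipole bins (placeholders here).
        y = (bb_data - bb_lensing) / err
        A = np.column_stack([bb_tensor_r1 / err, bb_string_f1 / err])
        (r, f10), *_ = np.linalg.lstsq(A, y, rcond=None)
        model = bb_lensing + r * bb_tensor_r1 + f10 * bb_string_f1
        chi2 = np.sum(((bb_data - model) / err) ** 2)
        return r, f10, chi2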
Conclusions
===========
If this detection of B-mode polarization is confirmed, then primordial gravitational waves appear to be a necessary addition to the standard cosmological model. However, the BICEP2 data points do not agree well with expectations at higher $\ell$. It is intriguing that an admixture of topological defects appears able to improve the fit, while reducing the tensor-to-scalar ratio to $r \simeq 0.15$. But precise quantitative statements for such a model, which would simultaneously include primordial tensors, defects, and perhaps also a running of the scalar spectral index, require a more careful numerical analysis.
In conclusion, we have shown that topological defects alone cannot explain the BICEP2 data points, and that B-modes already give a constraint on defects competitive with that from temperature anisotropies.
JL and JU acknowledge support from the University of the Basque Country UPV/EHU (EHUA 12/11), the Basque Government (IT-559-10), the Spanish Ministry (FPA2012-34456) and the Consolider-Ingenio Programme CPAN (CSD2007-00042), EPI (CSD2010-00064). DD and MK acknowledge financial support from the Swiss NSF. MH and ARL acknowledge support from the Science and Technology Facilities Council (grant numbers ST/J000477/1 and ST/K006606/1).
[*Shortly after our article was posted on arxiv.org, a related paper [@Moss:2014cra] was posted investigating similar ideas.*]{}
[^1]: Strings can also be studied in the Nambu–Goto approximation, most recently in Ref. [@Blanco-Pillado:2013qja]. However, the shapes of the cosmic string CMB spectra are reasonably generic and can be understood from simple modelling [@Pogosian:1999np; @Martins:2003vd; @Battye:2010xz]. There are significant differences in other observational constraints: for a review see Ref. [@Hindmarsh:2011qj].
|
[Small systems of Diophantine equations which have]{}
0.2truecm
[only very large integer solutions]{}
1.1truecm
[Apoloniusz Tyszka]{}
1.1truecm [**Abstract.**]{} Let . There is an algorithm that for every computable function returns a positive integer $m(f)$, for which a second algorithm accepts on the input $f$ and any integer , and returns a system such that $S$ has infinitely many integer solutions and each integer tuple $(x_1,\ldots,x_n)$ that solves $S$ satisfies $x_1=f(n)$. For each integer we construct a system such that $S$ has infinitely many integer solutions and they all belong to . 1.1truecm
[**Key words and phrases:**]{} computable function, computable upper bound for the heights of integer (rational) solutions of a Diophantine equation, Davis-Putnam-Robinson-Matiyasevich theorem, Diophantine equation with a finite number of integer (rational) solutions, system of Diophantine equations. 1.3truecm
[**2010 Mathematics Subject Classification:**]{} 03D20, 11D99, 11U99. 1.1truecm
We present a general method for constructing small systems of Diophantine equations which have only very large integer solutions. Let $\Phi_n$ denote the following statement $$\forall x_1,\ldots,x_n \in {\mathbb Z}~\exists y_1,\ldots,y_n \in {\mathbb Z}$$ $$\Bigl(2^{\textstyle 2^{n-1}}<|x_1| \Longrightarrow \bigl(|x_1|<|y_1| \vee \ldots \vee |x_1|<|y_n|\bigr)\Bigr) ~\wedge$$ $$\Bigl(\forall i,j,k \in \{1,\ldots,n\}~(x_i+x_j=x_k \Longrightarrow y_i+y_j=y_k)\Bigr) ~\wedge$$ $$\forall i,j,k \in \{1,\ldots,n\}~(x_i \cdot x_j=x_k \Longrightarrow y_i \cdot y_j=y_k)$$
For $n \geq 2$, the bound $2^{\textstyle 2^{n-1}}$ cannot be decreased because for $$(x_1,\ldots,x_n)=\Bigl(2^{\textstyle 2^{n-1}},2^{\textstyle 2^{n-2}},2^{\textstyle 2^{n-3}},\ldots,256,16,4,2\Bigr)$$ the conjunction of statements (1) and (2) guarantees that $$(y_1,\ldots,y_n)=(0,\ldots,0) \vee (y_1,\ldots,y_n)=\Bigl(2^{\textstyle 2^{n-1}},2^{\textstyle 2^{n-2}},2^{\textstyle 2^{n-3}},\ldots,256,16,4,2\Bigr)$$
The statement $\forall n \Phi_n$ has powerful consequences for Diophantine equations, but is still unproven, see [@Tyszka]. In particular, it implies that if a Diophantine equation has only finitely many solutions in integers (non-negative integers, rationals), then their heights are bounded from above by a computable function of the degree and the coefficients of the equation. For integer solutions, this conjectural upper bound can be computed by applying equation (3) and Lemmas \[lem2\] and \[lem7\]. 0.2truecm
[**Observation.**]{} [*For all positive integers $n$, $m$ with , if the statement $\Phi_n$ fails for and , then the statement $\Phi_m$ fails for .*]{}
0.2truecm
By the Observation, the statement $\forall n \Phi_n$ is equivalent to the statement $\forall n \Psi_n$, where $\Psi_n$ denote the statement $$\forall x_1,\ldots,x_n \in {\mathbb Z}~\exists y_1,\ldots,y_n \in {\mathbb Z}$$ $$\Bigl(2^{\textstyle 2^{n-1}}<|x_1|={\rm max}\bigl(|x_1|,\ldots,|x_n|\bigr) \leq 2^{\textstyle 2^n} \Longrightarrow \bigl(|x_1|<|y_1| \vee \ldots \vee |x_1|<|y_n|\bigr)\Bigr) ~\wedge$$ $$\Bigl(\forall i,j,k \in \{1,\ldots,n\}~(x_i+x_j=x_k \Longrightarrow y_i+y_j=y_k)\Bigr) ~\wedge$$ $$\forall i,j,k \in \{1,\ldots,n\}~(x_i \cdot x_j=x_k \Longrightarrow y_i \cdot y_j=y_k)$$ In contradistinction to the statements $\Phi_n$, each true statement $\Psi_n$ can be confirmed by a brute-force search in a finite amount of time. 0.2truecm
The statement $$\forall n ~\forall x_1,\ldots,x_n \in {\mathbb Z}~\exists y_1,\ldots,y_n \in {\mathbb Z}$$ $$\bigl(2^{\textstyle 2^{n-1}}<|x_1| \Longrightarrow |x_1|<|y_1|\bigr) ~\wedge$$ $$\bigl(\forall i,j,k \in \{1,\ldots,n\}~(x_i+x_j=x_k \Longrightarrow y_i+y_j=y_k)\bigr) ~\wedge$$ $$\forall i,j,k \in \{1,\ldots,n\}~(x_i \cdot x_j=x_k \Longrightarrow y_i \cdot y_j=y_k)$$ strengthens the statement $\forall n \Phi_n$ but is false, as we will show in the Corollary. 0.2truecm
Let $$E_n=\{x_i=1,~x_i+x_j=x_k,~x_i \cdot x_j=x_k: i,j,k \in \{1,\ldots,n\}\}$$
To each system $S \subseteq E_n$ we assign the system $\widetilde{S}$ defined by 0.2truecm
$\left(S \setminus \{x_i=1:~i \in \{1,\ldots,n\}\}\right) \cup$
$\{x_i \cdot x_j=x_j:~i,j \in \{1,\ldots,n\} {\rm ~and~the~equation~} x_i=1 {\rm ~belongs~to~} S\}$
0.2truecm
In other words, in order to obtain $\widetilde{S}$ we remove from $S$ each equation $x_i=1$ and replace it by the following $n$ equations: 0.2truecm
$\begin{array}{rcl}
x_i \cdot x_1 &=& x_1\\
&\ldots& \\
x_i \cdot x_n &=& x_n
\end{array}$
0.2truecm
\[lem1\] For each system $S \subseteq E_n$ $$\begin{aligned}
\{(x_1,\ldots,x_n) \in {{\mathbb Z}}^n:~(x_1,\ldots,x_n) {\rm ~solves~} \widetilde{S}\} &=& \\
\{(x_1,\ldots,x_n) \in {{\mathbb Z}}^n:~(x_1,\ldots,x_n) {\rm ~solves~} S\} \cup
\{(0,\ldots,0)\}&\end{aligned}$$
\[lem2\] The statement $\Phi_n$ can be equivalently stated thus: if a has only finitely many solutions in integers , then each such satisfies .
It follows from Lemma \[lem1\].
Nevertheless, for each integer there exists a system which has infinitely many integer solutions and they all belong to . We will prove it in Theorem \[the1\]. First we need a few lemmas.
\[lem3\] If a positive integer $n$ is odd and a pair $(x,y)$ of positive integers solves the negative Pell equation , then the pair $$\left(\frac{\left(x+y\sqrt{d}\right)^n+\left(x-y\sqrt{d}\right)^n}{2},~
\frac{\left(x+y\sqrt{d}\right)^n-\left(x-y\sqrt{d}\right)^n}{2\sqrt{d}}\right)$$ consists of positive integers and solves the equation .
\[lem4\] In the domain of positive integers, all solutions to are given by $$\left(2+\sqrt{5}\right)^{2k+1}=x+y\sqrt{5}$$ where $k$ is a non-negative integer.
\[lem5\] The pair $(2,1)$ solves the equation $x^2-5y^2=-1$. If a pair solves the equation , then the pair solves this equation too.
\[lem6\]
Lemma \[lem5\] allows us to compute all positive integer solutions to .
It follows from Lemma \[lem4\]. Indeed, if $\left(2+\sqrt{5}\right)^{2k+1}=x+y\sqrt{5}$, then $$\left(2+\sqrt{5}\right)^{2k+3}=\left(2+\sqrt{5}\right)^2 \cdot \left(2+\sqrt{5}\right)^{2k+1}=$$ $$\left(9+4\sqrt{5}\right) \cdot \left(x+y\sqrt{5}\right)=\left(9x+20y\right)+\left(4x+9y\right)\sqrt{5}$$
\[the1\] For each integer $n \geq 12$ there exists a system such that $S$ has infinitely many integer solutions and they all belong to .
By Lemmas \[lem4\]–\[lem6\], the equation has infinitely many solutions in positive integers and all these solutions can be simply computed. For a positive integer $n$, let denote the solution to . We define $S$ as 0.2truecm
$x_1=1$ $x_1+x_1=x_2$ $x_2+x_2=x_3$ $x_1+x_3=x_4$
0.2truecm
$x_4 \cdot x_4=x_5$ $x_5 \cdot x_5=x_6$ $x_6 \cdot x_7=x_8$ $x_8 \cdot x_8=x_9$
0.2truecm
$x_{10} \cdot x_{10}=x_{11}$ $x_{11}+x_1=x_{12}$ $x_4 \cdot x_9=x_{12}$
0.2truecm
$x_{12} \cdot x_{12}=x_{13}$ $x_{13} \cdot x_{13}=x_{14}$ … $x_{n-1} \cdot x_{n-1}=x_n$
0.2truecm The first $11$ equations of $S$ equivalently express that and 625 divides $x_8$. The equation [$x_{10}^2-5^9 \cdot x_7^2=-1$]{} expresses the same fact. Execution of the following [*MuPAD*]{} code
> x:=2:
> y:=1:
> for n from 2 to 313 do
> u:=9*x+20*y:
> v:=4*x+9*y:
> if igcd(v,625)=625 then print(n) end_if:
> x:=u:
> y:=v:
> end_for:
> float(u^2+1);
> float(2^(2^(12-1)));
returns only $n=313$. Therefore, in the domain of positive integers, the solution to is given by the pair . Hence, if an integer tuple solves $S$, then and $$x_{12}=x_{10}^2+1 \geq u(313)^2+1>2^{\textstyle 2^{12-1}}$$ The final inequality comes from the execution of the last two lines of the code, as they display the numbers and . Applying induction, we get . By Lemma \[lem3\] (or by , the equation has infinitely many solutions. This conclusion transfers to the .
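For readers without access to [*MuPAD*]{}, the same search can be reproduced with the following short Python script (a straightforward translation of the code above, using exact integer arithmetic instead of floating point; it is offered only as a convenience).

    # iterate (x, y) -> (9x + 20y, 4x + 9y) on solutions of x^2 - 5y^2 = -1,
    # printing the indices n for which 625 divides the y-component
    x, y = 2, 1
    for n in range(2, 314):
        u, v = 9 * x + 20 * y, 4 * x + 9 * y
        if v % 625 == 0:
            print(n)
        x, y = u, v
    print(u * u + 1)          # the quantity u(313)^2 + 1
    print(2 ** (2 ** 11))     # the bound 2^(2^(12-1))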
J. C. Lagarias studied the equation for , where . His theorem says that for these values , the least integer solution grows exponentially , . 0.2truecm
The next theorem generalizes Theorem \[the1\]. But first we need Lemma \[lem7\] together with introductory matter. 0.2truecm
Let . For the Diophantine equation , let $M$ denote the maximum of the absolute values of its coefficients. Let ${\cal T}$ denote the family of all polynomials $W(x_1,\ldots,x_p) \in {{\mathbb Z}}[x_1,\ldots,x_p]$ whose all coefficients belong to the interval $[-M,M]$ and ${\rm deg}(W,x_i) \leq d_i={\rm deg}(D,x_i)$ for each $i \in \{1,\ldots,p\}$. Here we consider the degrees of $W(x_1,\ldots,x_p)$ and $D(x_1,\ldots,x_p)$ with respect to the variable $x_i$. It is easy to check that $${\rm card}({\cal T})=(2M+1)^{\textstyle (d_1+1) \cdot \ldots \cdot (d_p+1)}$$
We choose any bijection . Let ${\cal H}$ denote the family of all equations of the form 0.2truecm
$x_i=1$, $x_i+x_j=x_k$, $x_i \cdot x_j=x_k$ ($i,j,k \in \{1,\ldots,{\rm card}({\cal T})\})$
0.2truecm which are polynomial identities in if $$\forall s \in \{p+1,\ldots,{\rm card}({\cal T})\} ~~x_s=\tau(s)$$ There is a unique such that . For each ring ${\textbf{\textit{K}}}$ extending ${\mathbb Z}$ the system ${\cal H}$ implies . To see this, we observe that there exist pairwise distinct such that $m>p$ and $$t_0=1~ \wedge ~t_1=x_1~ \wedge ~\ldots~ \wedge ~t_p=x_p~ \wedge ~t_m=2 \cdot D(x_1,\ldots,x_p)~ \wedge$$ $$\forall i \in \{p+1,\ldots,m\}~ \exists j,k \in \{0,\ldots,i-1\} ~~(t_j+t_k=t_i \vee t_i+t_k=t_j \vee t_j \cdot t_k=t_i)$$ For each ring ${\textbf{\textit{K}}}$ extending ${\mathbb Z}$ and for each there exists a unique tuple such that the tuple solves the system . The sought elements are given by the formula $$\forall s \in \{p+1,\ldots,{\rm card}({\cal T})\} ~~x_s=\tau(s)(x_1,\ldots,x_p)$$
\[lem7\] The system ${\cal H} \cup \{x_q+x_q=x_q\}$ can be simply computed. For each ring ${\textbf{\textit{K}}}$ extending ${\mathbb Z}$, the equation $D(x_1,\ldots,x_p)=0$ is equivalent to the system ${\cal H} \cup \{x_q+x_q=x_q\} \subseteq E_{{\rm card}({\cal T})}$. Formally, this equivalence can be written as $$\forall x_1,\ldots,x_p \in {\textbf{\textit{K}}}~\Bigl(D(x_1,\ldots,x_p)=0 \Longleftrightarrow
\exists x_{p+1},\ldots,x_{{\rm card}({\cal T})} \in {\textbf{\textit{K}}}$$ $$(x_1,\ldots,x_p,x_{p+1},\ldots,x_{{\rm card}({\cal T})}) {\rm ~solves~the~system~}
{\cal H} \cup \{x_q+x_q=x_q\} \Bigr)$$ For each ring ${\textbf{\textit{K}}}$ extending ${\mathbb Z}$ and for each with there exists a unique tuple such that the tuple solves the system . Hence, for each ring ${\textbf{\textit{K}}}$ extending ${\mathbb Z}$ the equation has the same number of solutions as the system .
Putting $M=M/2$ we obtain new families ${\cal T}$ and ${\cal H}$. There is a unique $q \in \{1,\ldots,{\rm card}({\cal T})\}$ such that $$\Bigl(q \in \{1,\ldots,p\}~ \wedge ~x_q=D(x_1,\ldots,x_p)\Bigr)~ \vee$$ $$\Bigl(q \in \{p+1,\ldots,{\rm card}({\cal T})\}~ \wedge ~\tau(q)=D(x_1,\ldots,x_p)\Bigr)$$ The new system is equivalent to and can be simply computed.
The Davis-Putnam-Robinson-Matiyasevich theorem states that every recursively enumerable set has a Diophantine representation, that is $$(a_1,\ldots,a_n) \in {\cal M} \Longleftrightarrow
\exists x_1, \ldots, x_m \in {\mathbb N}~~W(a_1,\ldots,a_n,x_1,\ldots,x_m)=0$$
for some polynomial $W$ with integer coefficients, see [@Matiyasevich] and [@Kuijer]. The polynomial $W$ can be computed, if we know a Turing machine $M$ such that, for all , $M$ halts on if and only if , see [@Matiyasevich] and [@Kuijer].
\[the2\] There is an algorithm that for every computable function returns a positive integer $m(f)$, for which a second algorithm accepts on the and any integer , and returns a system such that $S$ has infinitely many integer solutions and each integer tuple $(x_1,\ldots,x_n)$ that solves $S$ satisfies $x_1=f(n)$.
By the Davis-Putnam-Robinson-Matiyasevich theorem, the function $f$ has a Diophantine representation. This means that there is a polynomial $W(x_1,x_2,x_3,\ldots,x_r)$ with integer coefficients such that for all non-negative integers $x_1$, $x_2$, $$\tag*{\tt (E1)}
x_1=f(x_2) \Longleftrightarrow \exists x_3, \ldots, x_r \in {\mathbb N}~~W(x_1,x_2,x_3,\ldots,x_r)=0$$
By the equivalence [(E1)]{} and Lagrange’s four-square theorem, for all integers $x_1$, $x_2$, the conjunction holds true if and only if there exist integers $a,b,c,d,\alpha,\beta,\gamma,\delta,x_3,x_{3,1},x_{3,2},x_{3,3},x_{3,4},\ldots,x_r,x_{r,1},x_{r,2},x_{r,3},x_{r,4}$ such that $$W^2(x_1,x_2,x_3,\ldots,x_r)+\bigl(x_1-a^2-b^2-c^2-d^2\bigr)^2+\bigl(x_2-\alpha^2-\beta^2-\gamma^2-\delta^2\bigr)^2+$$ $$\bigl(x_3-x^2_{3,1}-x^2_{3,2}-x^2_{3,3}-x^2_{3,4}\bigr)^2+\ldots+\bigl(x_r-x^2_{r,1}-x^2_{r,2}-x^2_{r,3}-x^2_{r,4}\bigr)^2=0$$ By Lemma \[lem7\], there is an integer such that for all integers $x_1$, $x_2$, $$\tag*{\tt (E2)}
\Bigl(x_2 \geq 0 \wedge x_1=f(x_2)\Bigr) \Longleftrightarrow \exists x_3,\ldots,x_s \in {\mathbb Z}~~\Psi(x_1,x_2,x_3,\ldots,x_s)$$ where the formula $\Psi(x_1,x_2,x_3,\ldots,x_s)$ is algorithmically determined as a conjunction of formulae of the form , , . Let $m(f)=8+2s$, and let $[\cdot]$ denote the integer part function. For each integer , $$n-\left[\frac{n}{2}\right]-4-s \geq m(f)-\left[\frac{m(f)}{2}\right]-4-s \geq m(f)-\frac{m(f)}{2}-4-s=0$$ Let $S$ denote the following system $$\left\{
\begin{array}{rcl}
{\rm all~equations~occurring~in~}\Psi(x_1,x_2,\ldots,x_s) \\
n-\left[\frac{n}{2}\right]-4-s {\rm ~equations~of~the~form~} z_i=1 \\
t_1 &=& 1 \\
t_1+t_1 &=& t_2 \\
t_2+t_1 &=& t_3 \\
&\ldots& \\
t_{\left[\frac{n}{2}\right]-1}+t_1 &=& t_{\left[\frac{n}{2}\right]} \\
t_{\left[\frac{n}{2}\right]}+t_{\left[\frac{n}{2}\right]} &=& w \\
w+y &=& x_2 \\
y+y &=& y {\rm ~(if~}n{\rm ~is~even)} \\
y &=& 1 {\rm ~(if~}n{\rm ~is~odd)} \\
u+u &=& v
\end{array}
\right.$$ with $n$ variables. By the equivalence [(E2)]{}, the system $S$ is consistent over ${\mathbb Z}$. The equation guarantees that $S$ has infinitely many integer solutions. If an integer $n$-tuple $(x_1,x_2,\ldots,x_s,\ldots,w,y,u,v)$ solves $S$, then by the equivalence [(E2)]{}, $$x_1=f(x_2)=f(w+y)=f\left(2 \cdot \left[\frac{n}{2}\right]+y\right)=f(n)$$
0.2truecm [**Corollary.**]{}
*There is an algorithm that for every computable function returns a positive integer $m(f)$, for which a second algorithm accepts on the and any integer , and returns an integer tuple for which $x_1=f(n)$ and 0.2truecm*
(4) for each integers $y_1,\ldots,y_n$ the conjunction $$\Bigl(\forall i \in \{1,\ldots,n\}~(x_i=1 \Longrightarrow y_i=1)\Bigr) ~\wedge$$ $$\Bigl(\forall i,j,k \in \{1,\ldots,n\}~(x_i+x_j=x_k \Longrightarrow y_i+y_j=y_k)\Bigr) ~\wedge$$ $$\forall i,j,k \in \{1,\ldots,n\}~(x_i \cdot x_j=x_k \Longrightarrow y_i \cdot y_j=y_k)$$
implies that $x_1=y_1$.
0.2truecm
[*Proof.*]{} Let denote the order on which ranks the tuples first to and then lexicographically. The ordered set is isomorphic to . To find an integer tuple , we solve the system $S$ by performing the brute-force search in the order $\leq_n$.
0.2truecm
If $n \geq 2$, then the tuple $$\left(x_1,\ldots,x_n\right)=\left(2^{\textstyle 2^{n-2}},2^{\textstyle 2^{n-3}},\ldots,256,16,4,2,1\right)$$ has property [*(4)*]{}. Unfortunately, we do not know any explicitly given integers with property [*(4)*]{} and .
Apoloniusz Tyszka\
Technical Faculty\
Hugo Kołłtaj University\
Balicka 116B, 30-149 Kraków, Poland\
E-mail address: [rttyszka@cyf-kr.edu.pl](rttyszka@cyf-kr.edu.pl)
|
---
abstract: 'The role of coherent population oscillations is evidenced in the noise spectrum of an ultra-low noise laser. This effect is isolated in the intensity noise spectrum of an optimized single-frequency vertical external cavity surface emitting laser. The coherent population oscillations induced by the lasing mode manifest themselves through their associated dispersion that leads to slow light effects probed by the spontaneous emission present in the non-lasing side modes.'
author:
- 'A. El Amili$^1$'
- 'B.-X. Miranda$^{1,2}$'
- 'F. Goldfarb$^1$'
- 'G. Baili$^3$'
- 'G. Beaudoin$^4$'
- 'I. Sagnes$^4$'
- 'F. Bretenaker$^1$'
- 'M. Alouini$^{2,3}$'
title: Observation of slow light in the noise spectrum of a vertical external cavity surface emitting laser
---
Since the early works of Sommerfeld [@Sommerfeld1] and Brillouin [@Brillouin1; @Brillouin2] on light propagation through resonant atomic systems, slow and fast light (SFL) have been the subject of considerable research efforts. To control the group velocity of light, various approaches have been proposed and demonstrated, such as, e. g., electromagnetically induced transparency [@Harris1990; @Hau], coherent population oscillations (CPO) [@Bigelow1; @Bigelow2], and stimulated Brillouin scattering [@Thevenaz]. All these approaches are based on the well known Kramers-Krönig relations stating that a narrow resonance in a given absorption profile gives rise to very strong index dispersion in the medium. Consequently, a pulse of light can propagate through a material slower or faster than the velocity of light in vacuum without violating Einstein’s causality [@Milonni]. In this framework, the major part of the studies reported in the literature is devoted to single-pass propagation in the considered dispersive medium: the pulse shape or the amplitude modulation of the light is fixed at the entrance of the SFL system. The point is then to investigate how these characteristics evolve during propagation through the medium.
Systems, such as lasers, in which the light is self organized, have not attracted so much attention in this context. Yet, CPO, an ubiquitous mechanism inducing SFL, is present in any active medium provided that a strong optical beam saturates this medium. Thus, CPO must be present in any single frequency laser since the oscillating beam acts as a strong pump which, by definition, saturates the active medium. This effect could be observed using an external probe whose angular frequency is detuned with respect to the oscillating mode, by less than the inverse of the population inversion lifetime $1/\tau_{\mathrm{c}}$. Besides, it has been shown in semiconductor optical amplifiers (SOAs) that CPO induced SFL leads to a significant modification of the spectral noise characteristics at the output of the SOA [@Gadi; @Perrine]. Consequently, this effect should be also visible in the laser excess noise, using the spontaneous emission present in the non-lasing side longitudinal modes of a single-frequency laser as probe of the CPO effect. To reach this situation, the free spectral range (FSR) of the laser must not be larger than $1/\tau_{\mathrm{c}}$. This is seldom fulfilled in most common lasers. For instance, in ion-doped solid-state lasers, $\tau_{\mathrm{c}}$ is in the range of $1\ \mu$s - 10 ms [@Siegman]. Thus, the FSR of the laser should be smaller than 1 MHz, forbidding single-frequency operation. On the other hand, $\tau_{\mathrm{c}}$ in semiconductor lasers is in the ns range. Consequently, CPO effects are efficient at offset frequencies below a few GHz from the lasing mode [@Agrawal]. The FSR of edge emitting semiconductor lasers being around 100 GHz makes them unsuitable for this experiment. However, class-A vertical external cavity surface emitting semiconductor lasers (VECSELs) [@Ghaya1] recently developed for their low noise characteristics exhibit i) single-frequency operation, ii) ultra-narrow linewidth [@Garnache], iii) shot-noise limited intensity noise [@Ghaya2], and iv) a FSR in the GHz range. All these characteristics make them perfectly suited for the observation of CPO induced SFL in their noise spectrum.
The laser used in our experiment is a VECSEL which operates at $\sim1\;\mu$m (Fig. \[fig-set-up\]). The 1/2-VCSEL gain chip is a multi-layered stack, over $L_{m}\approx10\,\mu\mathrm{m}$ length, of semiconductors materials. Gain is produced by six InGaAs/GaAsP strained quantum wells grown on a high reflectivity Bragg mirror. The Bragg mirror side is bonded onto a SiC substrate to dissipate the heat towards a Peltier cooler. The top of the gain structure is covered by an anti-reflection coating. The gain is broad ($\sim$ 6 THz bandwidth) and spectrally flat and has been optimized to reach a low threshold [@Garnache]. The output mirror (10-cm radius of curvature, 99 % reflectivity) is placed at $L \lesssim 10\;\mathrm{cm}$ from the gain structure. In these conditions, $1/2\pi\tau_{\mathrm{c}}$ is not negligible compared with the FSR ($\Delta\gtrsim 1.5\;\mathrm{GHz}$). The laser is optically pumped at 808 nm. The pump is focused to an elliptical spot on the structure with the ellipse aligned with the \[110\] crystal axis to avoid polarization flips. A $200-\mu$m thick glass étalon is inserted inside the cavity to make the laser single mode. Its spectrum is continuously analyzed with a Fabry-Perot interferometer to ensure that the laser remains monomode and that there is no mode hop during spectra acquisitions.
\
The noise spectrum is measured using a setup similar to the one described by Baili *et al.* [@Ghaya1]. We use a wide bandwidth photodiode and a low noise radio-frequency amplifier in order to reveal the excess noise due to the beatnotes between the laser line and the spontaneous emission noise at neighboring longitudinal mode frequencies [@Ghaya2]. Indeed, the laser output field reads $E\left(t\right)=\sum_{p}\mathcal{A}_{p}e^{-2i\pi\nu_{p} t}+\mathrm{c.c.}$, where $p$ holds for the different mode orders of amplitudes $\mathcal{A}_{p}$ at the cold cavity frequencies $\nu_{p}=\nu_{0}+p~\Delta$. $p=0$ corresponds to the lasing mode, and $p=\pm 1$ to the two closest non-lasing modes, etc... This field leads to the following photocurrent at the output of the detector: $$i_{ph}\left(t\right)\propto \left|\mathcal{A}_{0}\right|^2+ \sum_{p\neq0}\left|\mathcal{A}_{p}\right|^2+\sum_{p\neq0}\left[\mathcal{A}_{0}\mathcal{A}^{\ast}_{p}
\exp\left(-2i\pi f_{p}t\right)+\mathrm{c.c.}\right],$$ where the side mode fields $\left|\mathcal{A}_{p}\right|$ (containing only spontaneous emission) are very small compared with the lasing mode field $\left|\mathcal{A}_{0}\right|$. Thus, the excess intensity noise, characterized by $\mathcal{A}_{0}\mathcal{A}^{\ast}_{p}$, consists of peaks located at $\left|f_{p}\right|=\left|\nu_{0}-\nu_{p}\right|$ in the Fourier space.
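As a toy numerical illustration of why the excess noise appears at harmonics of the FSR, and of the closely spaced double peak discussed below, one can synthesize a field made of a strong carrier and two weak side modes whose beat notes with the carrier are split by a small amount, and inspect the spectrum of the square-law photocurrent. All amplitudes and the splitting in this Python sketch are invented for illustration, not fitted values.

```python
import numpy as np

fsr = 1.5e9                  # free spectral range Delta [Hz]
df = 100e3                   # assumed splitting between the two beat notes [Hz]
fs = 8e9                     # sampling rate [Hz]
t = np.arange(2 ** 20) / fs  # ~131 us record -> ~7.6 kHz frequency resolution

# complex field in the rotating frame of the lasing mode nu_0
field = (1.0
         + 1e-3 * np.exp(-2j * np.pi * (fsr + df / 2) * t)    # p = +1 side mode
         + 1e-3 * np.exp(+2j * np.pi * (fsr - df / 2) * t))   # p = -1 side mode

photocurrent = np.abs(field) ** 2                             # square-law detector
spec = np.abs(np.fft.rfft(photocurrent - photocurrent.mean())) ** 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)
print(f"strongest excess-noise component near {freqs[np.argmax(spec)]/1e9:.4f} GHz")
# the intensity noise contains two lines near Delta, separated by df
```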
![Typical laser intensity noise spectrum. For a cavity length $L\approx$ 10 cm, the beat note frequency appears at the first harmonic of the resonator FSR $\Delta\approx$ 1.5 GHz. The inset is a zoom of the excess noise in the region around $\Delta$. The fact that this noise is composed of two Lorentzian peaks is the signature of a CPO-induced gain modulation that leads to a dispersion effect probed by the non-lasing modes located at $\pm\Delta$ from the lasing frequency $\nu_0$.[]{data-label="fig-2"}](Fig2.eps){width="0.8\linewidth"}
On that account, the beat frequencies $\left|f_{p}\right|$ occur at harmonics of the FSR in the noise spectrum (Fig. \[fig-2\]). Just above threshold ($\eta-1\ll 1$, where $\eta$ is the laser excitation ratio), the excess noise peak exhibits a Lorentzian shape with a width completely described by the excess of losses $\delta\gamma_{p}$ induced by the étalon on the $p^{\mathrm{th}}$ side mode [@Ghaya1]. At the $p^{\mathrm{th}}$ FSR frequency $p\Delta$, the noise spectrum is thus the sum of two Lorentzian peaks due to the beat notes of the lasing mode with the corresponding sidebands ($p^{\mathrm{th}}$ and $-p^{\mathrm{th}}$ modes). By contrast, when the pumping rate is increased, we found experimentally that the excess noise consists of two peaks separated by $\delta f=f_{p}-f_{-p}\sim$ 100 kHz (inset of Fig. \[fig-2\]). This frequency shift is given by $$\delta f\approx \nu_{0}\frac{L_{m}}{L+n_{0}L_{m}}(\delta n_p + \delta n_{-p}) , \label{eqdeltaf}$$ where $n_{0}$ is the bulk refractive index of the semiconductor structure. $\delta n_{\pm p}$ are the modifications of the refractive index of the structure experienced by the $\pm p$ side modes and induced by the dispersion associated with the CPO effect. In a semiconductor active medium, thanks to the Bogatov effect [@Bogatov], the dispersion is not an odd function of the frequency detuning with respect to $\nu_0$. Thus, $\delta n_p \neq -\delta n_{-p}$ and the two beat note frequencies $f_p$ and $f_{-p}$ corresponding to the $p$ and $-p$ modes occur at slightly different frequencies, as evidenced by the double peak of Fig. \[fig-2\].
![(a) Round-trip gain versus probe frequency detuning $\nu_0-\nu_p$. The thin line is the unsaturated gain. The dashed line is the saturated gain for the light at $\nu_0$. The full and dotted-dashed lines are the gains seen by the probe for $\alpha=0$ and $\alpha=5$, respectively. (b) Round-trip phase modification experienced by the side modes for $\alpha=0$ (full line) and $\alpha=5$ (dotted-dashed line). These profiles are plotted from Eqs. (\[eqn-gain\]) and (\[eqn-index\]) with $\tau_{\mathrm{c}}=2\ \mathrm{ns}$, $\mathcal{S}=0.5$, and $G_0=2g_0 L_m=0.07$, which correspond to our experimental conditions.[]{data-label="fig-3"}](Fig3a.eps "fig:"){width="0.8\linewidth"} ![(a) Round-trip gain versus probe frequency detuning $\nu_0-\nu_p$. The thin line is the unsaturated gain. The dashed line is the saturated gain for the light at $\nu_0$. The full and dotted-dashed lines are the gains seen by the probe for $\alpha=0$ and $\alpha=5$, respectively. (b) Round-trip phase modification experienced by the side modes for $\alpha=0$ (full line) and $\alpha=5$ (dotted-dashed line). These profiles are plotted from Eqs. (\[eqn-gain\]) and (\[eqn-index\]) with $\tau_{\mathrm{c}}=2\ \mathrm{ns}$, $\mathcal{S}=0.5$, and $G_0=2g_0 L_m=0.07$, which correspond to our experimental conditions.[]{data-label="fig-3"}](Fig3b.eps "fig:"){width="0.8\linewidth"}\
More precisely, this CPO induced index modification can be derived from the gain medium rate equation. We assume that this medium can be modeled by a two-level system driven by an intracavity light field $E\left(t\right)$ which is the sum of the lasing mode and the two closest side modes: $$E\left(t\right)=\mathcal{A}_{0}e^{-2i\pi\nu_{0}t}+\mathcal{A}_{-1}e^{-2i\pi\nu_{-1}t}+\mathcal{A}_{1}e^{-2i\pi\nu_{1} t}+\mathrm{c.c.}.$$ As the étalon forces the laser to operate in single mode regime, one has $\left|A_{0}\right|^2\gg\left|A_{-1}\right|^2\approx\left|A_{1}\right|^2$. Consequently, we consider only the beat notes between the lasing and the adjacent modes which create modulations of the population inversion at frequencies close to $\Delta$. Under these assumptions, the gain $g\left(\nu_{p}\right)$ and the refractive index variation $\delta n\left(\nu_{p}\right) = n\left(\nu_{p}\right)-n_0$ seen by the side modes, that can be considered as weak probes, are given by [@Agrawal2]: $$\begin{aligned}
g(\nu_{p}) &=& \frac{g_{0}}{1+\mathcal{S}}\left\{1-\frac{\mathcal{S}\left[(1+\mathcal{S})+\alpha 2\pi\left(\nu_{0}-\nu_{p}\right)\tau_{c}\right]}{(1+\mathcal{S})^2+\left[2\pi\left(\nu_{0}-\nu_{p}\right) \tau_{c}\right]^2}\right\}, \qquad \label{eqn-gain} \\
\delta n\left(\nu_{p}\right)
&=& \frac{c}{4\pi\nu_{0}}\frac{g_{0}\mathcal{S}}{1+\mathcal{S}}
\frac{2\pi\left(\nu_{0}-\nu_{p}\right)\tau_{c}+\alpha(1+\mathcal{S})}{\left(1+\mathcal{S}\right)^2+\left[2\pi\left(\nu_{0}-\nu_{p}\right)\tau_{c}\right]^2}\ . \label{eqn-index}\end{aligned}$$ Here $g_{0}$ is the unsaturated gain and $\mathcal{S}$ the saturation parameter. $\alpha$ is the phase-intensity coupling coefficient (Henry’s factor) that is responsible for the Bogatov effect. Eq. (\[eqn-gain\]) describes two phenomena: i) the self-saturation of the gain at $\nu_0$ by the field at $\nu_0$ \[dashed line in Fig. \[fig-3\](a)\] and ii) the modifications due to the CPO effect of the gains probed by the side modes at $\nu_{\pm p}$. The evolution of this gain versus probe frequency is plotted as a full line (resp. dotted-dashed line) in Fig. \[fig-3\](a) for $\alpha=0$ (resp. $\alpha=5$). This CPO effect is also responsible for the modification of the refractive index seen by the side modes which modifies the round-trip phase accumulated by each side mode \[see Fig. \[fig-3\](b)\]. With $\alpha\neq 0$, we notice that the phase shifts for two symmetric side modes are not opposite, restraining $\delta f$ from vanishing \[see eq. (\[eqdeltaf\])\].
![Experimental noise spectrum for different intracavity powers. The difference between the widths of the two peaks is clearly visible. The peak widths and the spacing increase with the intracavity power. Resolution Bandwidth=1 kHz.[]{data-label="fig:DoublePeaks"}](Fig4.eps){width="1.0\linewidth"}
Fig. \[fig:DoublePeaks\] shows the double peak for different intracavity powers $P_{\mathrm{circ}}$ (defined as the power of one of the two traveling waves creating the intracavity standing wave). It should be noticed that the two excess noise peak profiles have different widths. This is explained by the fact that at the first order, the widths depend on the losses induced by the intracavity étalon. These extra losses lead to the following extra loss rates for the $p^{\mathrm{th}}$ side mode: $$\label{eqn-T}
\delta\gamma_{p}=2 \Delta \left[1-\mathcal{T}\left(\nu_{p}\right)\right] ,$$ where $\mathcal{T}\left(\nu_{p}\right)$ is the étalon intensity transmission for that mode.
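The following sketch evaluates Eq. (\[eqn-T\]) for an idealized étalon. The Airy transmission profile, the effective reflectivity and the detuning values are assumptions made purely for illustration (they are not specified in the text); the point is simply that detuning $\nu_0$ from the étalon resonance makes the extra loss rates of the $+1$ and $-1$ modes unequal, as discussed next.

```python
import numpy as np

c = 3e8
n_et, d_et = 1.45, 200e-6              # assumed etalon index and thickness
fsr_et = c / (2 * n_et * d_et)         # etalon free spectral range (~0.5 THz)
R = 0.3                                # assumed effective mirror reflectivity
F = 4 * R / (1 - R) ** 2               # coefficient of finesse

def T(nu_offset):
    """Ideal Airy transmission versus frequency offset from an etalon maximum."""
    return 1.0 / (1.0 + F * np.sin(np.pi * nu_offset / fsr_et) ** 2)

fsr = 1.5e9                            # laser cavity FSR Delta [Hz]
for dnu in (0.0, 5e9, 10e9):           # detuning of nu_0 from the etalon maximum
    dg_p = 2 * fsr * (1 - T(dnu + fsr))    # extra losses of the p = +1 mode, Eq. (eqn-T)
    dg_m = 2 * fsr * (1 - T(dnu - fsr))    # extra losses of the p = -1 mode
    print(f"dnu = {dnu/1e9:4.1f} GHz:  dgamma_+1 = {dg_p/1e6:5.2f} MHz, "
          f"dgamma_-1 = {dg_m/1e6:5.2f} MHz")
```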
![(a) Etalon transmission versus frequency. When the lasing mode frequency is shifted by $\delta\nu$ from the maximum of étalon transmission, the transmissions for the side modes at $\nu_{\pm 1}$ are no longer equal. (b) Extra loss rates $\delta\gamma_{\pm 1}$ versus $\delta\nu$.[]{data-label="fig-5"}](Fig5.eps "fig:"){width="0.8\linewidth"}\
When the lasing mode frequency $\nu_{0}$ coincides with a maximum of the transmission spectrum, both side modes transmissions are equal: $\mathcal{T}\left(\nu_{1}\right)=\mathcal{T}\left(\nu_{-1}\right)<1$ and the peak widths are also equal: $\delta\gamma_{1}=\delta\gamma_{-1}$. But if $\nu_{0}$ is shifted by $\delta\nu >0$ from the étalon resonance frequency \[see Fig. \[fig-5\](a)\], the étalon transmission for mode $p=+1$ (resp. $p=-1$) decreases (resp. increases) with $\delta\nu$. Figure \[fig-5\](b) shows the effect of such a detuning on the extra loss rates $\delta\gamma_{\pm 1}$.
![(a) Peak widths $\delta\gamma_{\pm 1}$ versus intracavity power. (b) Peak spacing $\delta f$ versus intracavity power. Squares: measurements. Full line: prediction obtained from eqs. (\[eqdeltaf\]) and (\[eqn-index\]) with the same parameters as in Fig. \[fig-3\][]{data-label="fig:WidthAndSpacing"}](Fig6a.eps "fig:"){width="0.8\linewidth"} ![(a) Peak widths $\delta\gamma_{\pm 1}$ versus intracavity power. (b) Peak spacing $\delta f$ versus intracavity power. Squares: measurements. Full line: prediction obtained from eqs. (\[eqdeltaf\]) and (\[eqn-index\]) with the same parameters as in Fig. \[fig-3\][]{data-label="fig:WidthAndSpacing"}](Fig6b.eps "fig:"){width="0.8\linewidth"}
We check from the experiment whether this evolution of the extra losses experienced by the side modes correctly explains the widths of the two peaks, like in the simple model of Ref. [@Ghaya2]. The two peaks of Fig. \[fig:DoublePeaks\] are fitted by two Lorentzians in which some asymmetry is included to take into account the Bogatov effect induced by the Henry factor [@Bogatov]. Fig. \[fig:WidthAndSpacing\](a) reproduces the evolution of the peak widths versus intracavity power. This intracavity power is varied by introducing controlled diffraction losses inside the cavity using a knife edge for a constant pump power, in order to keep $g_0$ constant. The variation of the intracavity power modifies the laser frequency shift $\delta\nu$, leading to different evolutions of $\delta\gamma_{1}$ and $\delta\gamma_{-1}$, as expected from Fig. \[fig-5\](b). However, the magnitudes of the experimentally observed variations of these widths are significantly larger than those calculated in the simple linear model of Eq. (\[eqn-T\]), suggesting the enhancement of this effect by nonlinear contributions. Moreover, it is expected that increasing the intracavity power, and thus the gain saturation, leads to an increase of $\delta f$. Fig. \[fig:WidthAndSpacing\](b) clearly shows that the frequency shift $\delta f$ between the two peaks increases with the intracavity power, evidencing the nonlinear origin of the double peak noise spectrum expected from Eq. (\[eqn-index\]). The full line in Fig. \[fig:WidthAndSpacing\](b) is obtained from eqs. (\[eqdeltaf\]) and (\[eqn-index\]) with our experimental parameters. It shows that our simple model based on a two-level system including Henry’s factor gives the good order of magnitude for $\delta f$ and the correct sign for its evolution versus intracavity power. One should not be surprised by the fact that the agreement with the measurements is not perfect: the model of eqs. (\[eqn-gain\]) and (\[eqn-index\]) is too crude to fully describe the gain and index saturation in strained quantum wells. Moreover, we overlooked many effects that may lead to a discrepancy with respect to our simple approach such as i) the variation of $\alpha$ with the carrier density, ii) the thermally induced variations of the index and of the laser mode diameter, iii) the variations of $\tau$ with the carrier density, iv) the possible existence of an offset in $\delta f$ due to the linear dispersion of the gain medium and the étalon. Notice also that since the cavity FSR $\Delta$ is larger than the width of the CPO dip of Fig. \[fig-3\](a), we are probe the wings of the dispersion profile of Fig. \[fig-3\](b), i. e., in the slow light regime. Moreover, we have checked that this phenomenon is not related to a coupled cavity effect since we observed exactly the same behavior of the noise spectrum with another 1/2-VCSEL without any anti-reflection coating. If the splitting between the two peaks were due to a coupled cavity effect, it should be completely different in the absence of the anti-reflection coating, contrary to our observations.
In conclusion, we experimentally evidenced the existence of intracavity slow light effects in a laser induced by the CPO mechanism. These effects are probed by the laser spontaneous emission noise present in the non lasing modes. We have shown that this noise is a very efficient probe to explore the intracavity CPO effects and their evolution with the laser parameters such as the intracavity power. Moreover, we have predicted that this first observation of slow light inside a laser cavity should be able to lead to intracavity fast light if the side mode frequencies are closer to the lasing mode frequencies, i. e., for a longer cavity. This opens interesting perspectives on the study of intracavity fast light [@Gadi; @Perrine] which raises numerous interests for applications to sensors [@Shariar1; @Shariar2]. Moreover, the study of the phase noise of the light present in the side modes of such a laser should lead to interesting features including the noise correlations induced by the laser nonlinear effects.
The authors acknowledge partial support from the Agence Nationale de la Recherche, the Triangle de la Physique, and the Région Bretagne.
[10]{}
A. Sommerfeld, Ann. Physik **44**, 177 (1914).
L. Brillouin, Ann. Physik **44**, 203 (1914).
L. Brillouin, *Wave propagation and group velocity* (Academic Press, New York, 1960).
S. E. Harris, J. E. Field, and A. Imamoğlu, Phys. Rev. Lett. **64**, 1107 (1990).
L. V. Hau, S. E. Harris, Z. Dutton, and T. Behroozi, Nature **397**, 594 (1999).
M. S. Bigelow, N. N. Lepeshkin, and R. W. Boyd, Phys. Rev. Lett. **90**, 113903 (2003).
M. S. Bigelow, N. N. Lepeshkin, and R. W. Boyd, Science **301**, 200 (2003).
L. Thevenaz, Nature Photonics **2**, 474 (2008).
P. W. Milonni, *Fast Light, Slow light, and Left-Handed Light* (Taylor and Francis, New York, 2005).
E. Shumakher, S. Ó Dúill, and G. Eisenstein, Opt. Lett. **34**, 1940 (2009).
P. Berger *et al.*, C. R. Physique **10**, 991 (2009).
A. E. Siegman, *Lasers* (University Science Books, Mill Valley, 1986).
G. P. Agrawal and N. K. Dutta, *Semiconductor Lasers, 2nd edition* (Springer, Berlin, 1993).
G. Baili *et al.*, Opt. Lett. **31**, 62-64 (2006).
A. Laurain *et al.*, Opt. Expr. **17**, 9503-9508 (2009).
G. Baili *et al.*, J. Lightwave Technol. **26**, 8 (2008).
A. P. Bogatov, P. G. Eliseev, and B. N. Sverdlov, IEEE J. Quantum Electron. **11**, 510-515 (1975).
G. P. Agrawal, J. Opt. Soc. Am. B **5**, 147 (1988).
G. S. Pati, M. Salit, K. Salit, and M. S. Shahriar, Phys. Rev. Lett. **99**, 133601 (2007).
M. S. Shahriar *et al.*, Phys. Rev. A **75**, 053807 (2007).
|
---
abstract: 'The class of antiperovskite compounds $A_3B$O ($A$ = Ca, Sr, Ba; $B$ = Sn, Pb) has attracted interest as a candidate 3D Dirac system with topological surface states protected by crystal symmetry. A key factor underlying the rich electronic structure of $A_3B$O is the unusual valence state of $B$, i.e., a formal oxidation state of $-4$. Practically, it is not obvious whether anionic $B$ can be stabilized in thin films, due to its unusual chemistry, as well as the polar surface of $A_3B$O, which may render the growth-front surface unstable. We report X-ray photoelectron spectroscopy (XPS) measurements of single-crystalline films of Sr$_3$SnO and Sr$_3$PbO grown by molecular beam epitaxy (MBE). We observe shifts in the core-level binding energies that originate from anionic Sn and Pb, consistent with density functional theory (DFT) calculations. Near the surface, we observe additional signatures of neutral or cationic Sn and Pb, which may point to an electronic or atomic reconstruction with possible impact on putative topological surface states.'
author:
- 'D. Huang'
- 'H. Nakamura'
- 'K. Küster'
- 'A. Yaresko'
- 'D. Samal'
- 'N. B. M. Schröter'
- 'V. N. Strocov'
- 'U. Starke'
- 'H. Takagi'
title: 'Unusual Valence State in the Antiperovskites Sr$_3$SnO and Sr$_3$PbO Revealed by X-ray Photoelectron Spectroscopy'
---
Introduction
============
Complex oxides have long provided a rich platform to explore exotic electronic phases that emerge from the interplay of charge, spin and orbital degrees of freedom [@Imada_RMP_1998]. In recent years, efforts to engineer Dirac, Weyl and other topological semimetallic phases in these compounds have intensified [@Uchida_JPD_2018]. The effects of strong electronic correlations [@Fujioka_NatCommun_2019], magnetism [@Wan_PRB_2011] and interface reconstructions [@Hwang_NatMat_2012] in complex oxides are expected to enrich the topological phases that can be realized. Such investigations are facilitated by the ability to synthesize these compounds in thin-film heterostructures.
A pertinent example is the class of antiperovskites (or inverse perovskites) with chemical formula $A_3B$O, where $A$ is an alkaline earth metal (Ca, Sr or Ba) and $B$ is Sn or Pb. These compounds crystallize into the archetypal perovskite structure, but with the usual positions of the cations and anions exchanged \[Fig. \[Fig1\](a)\]. These antiperovskites have been predicted to host a unique set of electronic properties. According to rigorous classification, several members of this family are topological crystalline insulators [@Hsieh_PRB_2014] with type-I and type-II Dirac surface states [@Chiu_PRB_2017]. However, the actual band gap, which lies along the $\Gamma$-$X$ line at six equivalent points in the Brillouin zone (BZ), is only a few tens of millielectronvolts, such that in the vicinity of these points, there is a quasilinear 3D Dirac dispersion [@Kariyado_JPSJ_2011; @Kariyado_JPSJ_2012; @Kariyado_PRM_2017]. Experimentally, angle-resolved photoemission spectroscopy [@Obata_PRB_2017], magnetotransport [@Suetsugu_PRB_2018; @Obata_PRB_2019] and nuclear magnetic resonance [@Kitagawa_PRB_2018] measurements have probed the possible 3D Dirac nature of the electrons in these compounds. Experiments have also revealed signatures of ferromagnetism arising from oxygen vacancies [@Lee_APL_2013; @Lee_MRS_2014], high thermoelectric performance [@Okamoto_JAP_2016], superconductivity arising from Sr vacancies [@Oudah_NatComm_2016; @Oudah_SciRep_2019] and weak antilocalization due to spin-orbital entanglement [@Nakamura_arXiv_2018].
![(a) Crystal structure of the antiperovskite Sr$_3$(Sn, Pb)O. The horizontal bars (red and blue) illustrate polar (001) planes. (b), (c) Band structure plots of Sr$_3$SnO and Sr$_3$PbO. The thickness of the orange (green/purple) line denotes the weight of the projection of the given state onto the Sr 4$d$ (Sn 5$p$/Pb 6$p$) orbitals.[]{data-label="Fig1"}](FIG1.pdf)
The rich electronic properties of the antiperovskites take as their fundamental origin the unusual valence state of $B$ (= Sn, Pb). In the ionic limit, the constituent elements of $A_3B$O would exist in the following oxidation states: $A^{2+}$, $B^{4-}$ and O$^{2-}$. We note that Bader analysis reveals that the effective charge of $B$ lies closer to $-$2 (see Section \[secDFT\]). Nevertheless, such a highly anionic state of $B$ implies that a large fraction of its outermost $p$-orbitals are occupied. This configuration produces an unusual situation in $A_3B$O, wherein the valence bands near the Fermi energy are dominated by $B$ $p$-orbitals and the conduction bands near the Fermi energy are dominated by $A$ $d$-orbitals (Figs. \[Fig1\](b), (c); refer to Section \[SecMet\] for details of the band structure calculations). Around $\Gamma$, there is a moderate inversion between the $B$ $p$-bands and $A$ $d$-bands. When interorbital hybridization and spin-orbit coupling are taken into account, the six equivalent band crossings at the Fermi energy are only slightly gapped, resulting in the approximate 3D Dirac semimetallic phase [@Kariyado_JPSJ_2011; @Kariyado_JPSJ_2012; @Kariyado_PRM_2017], as well as the topological crystalline insulating phase in some cases [@Hsieh_PRB_2014].
It is natural to ask whether anionic Sn or Pb can be actually stabilized in a thin film. Not only do these anionic states represent unusual chemistry, resulting in extreme air sensitivity, but the antiperovskites may also be prone to surface reconstruction. As illustrated in Fig. \[Fig1\](a), the (001) planes alternate between an overall oxidation state of $+2$ and $-2$, leading to a polar catastrophe at a surface [@Ohtomo_Nature_2004]. To alleviate a divergence in the electrostatic potential, it is possible that Sr vacancies form at the surface (analogous to O vacancies in oxide perovskites), and/or Sn and Pb shift to a more stable valence state (neutral or cationic) via electronic reconstruction. If so, this has profound implications on surface states [@Chiu_PRB_2017], similar to the case of the Kondo insulator SmB$_6$, whose polar surface has complicated the elucidation of its topological properties [@Zhu_PRL_2013].
Previous measurements of bulk Sr$_{3-x}$SnO crystals uncovered signatures of anionic Sn [@Oudah_SciRep_2019]. Using $^{119}$Sn Mössbauer spectroscopy, Oudah *et al.* observed an isomer shift of the main peak by $+$1.88 mm/s, matching that of Mg$_2$Sn, another compound in which Sn is formally $-$4. The situation in thin films, however, is less clear. Minohara *et al.* performed X-ray photoelectron spectroscopy (XPS) measurements of Ca$_3$SnO films under ultra high vacuum (UHV) [@Minohara_JCG_2018]. The reported Sn 3$d_{5/2}$ spectrum showed a surface component corresponding to Sn$^{4+}$ or Sn$^{2+}$, as well as a bulk component, which they attributed to the antiperovskite phase. However, the bulk component had a binding energy of 484.8 eV, lying within the range expected for neutral Sn: 484.3-485.2 eV [@NIST]. Further investigation is needed to clarify the anionic state of Sn in thin films.
Here, we performed XPS measurements of Sr$_3$SnO and Sr$_3$PbO films grown by molecular beam epitaxy (MBE) and kept in UHV conditions. In the bulk, we observe peaks in the Sn 3$d$ and Pb 4$f$ core levels that lie at lower binding energies than those of cationic or neutral Sn and Pb. DFT calculations confirm that these shifts match predictions for anionic Sn (Pb) in Sr$_3$SnO (Sr$_3$PbO). At the surface, we find signatures of cationic and neutral Sn and Pb, consistent with the scenario of an atomic or electronic reconstruction at the surface.
Methods {#SecMet}
=======
Films of Sr$_3$SnO and Sr$_3$PbO with thickness $\sim$100 nm were grown in an Eiko MBE chamber with base pressure in the low $10^{-9}$ mbar range. The films were deposited on (001)-cut substrates of yttria-stabilized zirconia (YSZ), which were pre-coated at two opposite edges with Au or Nb for electrical grounding in XPS measurements. Elemental sources of Sr (99.9% purity from vendor, further refined in house by sublimation), Sn (99.999% purity) and Pb (99.999%) were thermally sublimated from effusion cells. A mixture of 2% O$_2$ in Ar gas was supplied through a leak valve (pressure range: $10^{-6}$ to $10^{-5}$ mbar). Since the samples reported in this work were grown at different times spanning a two-year period, different growth parameters were used in the course of optimizing film quality (example parameters can be found in Refs. [@Samal_APLM_2016; @Nakamura_arXiv_2018]). We will not focus on these systematic differences, but instead on the ubiquitous observation of an antiperovskite phase via XPS.
Following growth, the films were examined *in situ* using reflection high-energy electron diffraction (RHEED), then transferred in vacuum suitcases (Ferrovac GmbH; pressure range: low $10^{-10}$ mbar) for XPS measurements. We note that other films grown with identical conditions to these ones were capped with Au or Apiezon-N grease in an Ar glove box then characterized by X-ray diffraction (XRD) and/or transport [@Samal_APLM_2016; @Nakamura_arXiv_2018].
XPS data were acquired at the Max Planck Institute for Solid State Research (MPI-FKF) in a system equipped with a commercial Kratos AXIS Ultra spectrometer and a monochromatized Al K$_{\alpha}$ source (photon energy: 1486.6 eV). The base pressure was in the low $10^{-10}$ mbar range. An analyzer pass energy of 20 eV was used to collected detailed spectra. In addition to the antiperovskite films, we measured reference spectra from a Sn film (grown using MBE, transported in a vacuum suitcase) and a Pb foil (cleaned *in situ* using Ar sputtering). The Sn film \[Fig. \[Fig3\](b)\] showed charging due to poor electrical grounding; in this instance, we used the Fermi edge to recalibrate the binding energy. We also performed low-energy electron diffraction (LEED) on our films in an adjoining chamber equipped with a commercial SPECS ErLEED 150 system.
XPS spectra were analyzed using the CasaXPS software. To fit the various peaks, we used multiple Gaussian-Lorentzian mixture functions on top of a Shirley background. To constrain our fitting parameters, we fixed the doublet spacing energy of Sn 3$d$, Sr 3$d$ and Pb 4$f$ to their literature values of 8.41 eV, 1.79 eV and 4.86 eV, respectively [@Moulder_1992]. We also constrained the area ratio of the doublets to 2:3 for $d$-core levels and 3:4 for $f$-core levels.
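For readers who wish to reproduce this type of fit outside CasaXPS, a minimal Python sketch of the ingredients is given below: a Gaussian-Lorentzian (pseudo-Voigt) line shape, a doublet with the spacing and area-ratio constraints imposed (Sn 3$d$ as the example), and a simple iterative Shirley background. This is a generic illustration rather than the exact CasaXPS model used here; the Lorentzian fraction and all starting values would have to be chosen for each data set, and the free parameters would then be optimized against the data, e.g. with `scipy.optimize.curve_fit`.

```python
import numpy as np

def pseudo_voigt(E, E0, area, fwhm, m=0.3):
    """Gaussian-Lorentzian mixture peak with total area `area`;
    m is the Lorentzian fraction."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    gauss = np.exp(-0.5 * ((E - E0) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
    lorentz = (fwhm / (2.0 * np.pi)) / ((E - E0) ** 2 + (fwhm / 2.0) ** 2)
    return area * ((1.0 - m) * gauss + m * lorentz)

def sn_3d_doublet(E, E0_52, area_52, fwhm):
    """Sn 3d doublet: the 3d_{3/2} line is tied to the 3d_{5/2} line with the
    spacing fixed to 8.41 eV and the area ratio fixed to 2:3."""
    return (pseudo_voigt(E, E0_52, area_52, fwhm)
            + pseudo_voigt(E, E0_52 + 8.41, area_52 * 2.0 / 3.0, fwhm))

def shirley_background(E, I, n_iter=50):
    """Iterative Shirley background on an ascending binding-energy grid: the
    background at a point rises in proportion to the peak area accumulated
    at lower binding energies."""
    I_low, I_high = I[0], I[-1]
    bg = np.linspace(I_low, I_high, len(I))
    for _ in range(n_iter):
        peak = I - bg
        cum = np.concatenate(
            ([0.0], np.cumsum(0.5 * (peak[1:] + peak[:-1]) * np.diff(E))))
        if cum[-1] <= 0:          # no net peak area: keep the linear guess
            break
        bg = I_low + (I_high - I_low) * cum / cum[-1]
    return bg
```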
We also collected XPS data at grazing emission, which are more sensitive to the surface elemental composition. This allowed us to disentangle surface and bulk contributions in the XPS spectra. Similarly, for films measured at the ADRESS beamline of the Swiss Light Source (SLS) [@Strocov_JSR_2010; @Strocov_JSR_2014], we were able to control the surface sensitivity by tuning the photon energy (see Supplemental Material).
We performed DFT calculations using the Vienna *ab-initio* simulation package (`VASP`) [@Kresse_CMS_1996; @Kresse_PRB_1996], which implements the projector augmented-wave (PAW) method [@Bloch_PRB_1994; @Kresse_PRB_1999]. The following electrons were treated as valence: $3s3p4s$ in Ca, $4s4p5s$ in Sr, $5s5p6s$ in Ba, $5s4d5p$ in Sn, $6s5d6p$ in Pb and $2s2p$ in O. We used the generalized gradient approximation (GGA) as parameterized by Perdew, Burke and Ernzerhof (PBE) [@Perdew_PRL_1996]. An energy cutoff of 750 eV was used, along with a BZ sampling as dense as $28 \times 28 \times 28$ for the self-consistent calculation of the charge density. For the band structure calculations shown in Figs. \[Fig1\](b) and (c), spin-orbit coupling was included in an additional non-self-consistent cycle. We also performed Bader charge analysis. Estimates of core-level shifts, which we performed using both `VASP` and `PY LMTO`, will be discussed in Section \[secDFT\]. Atomic structures were visualized using `VESTA` [@Momma_JAC_2011].
Results and Discussion
======================
RHEED and LEED
--------------
Figures \[Fig2\](a), (b) show RHEED images acquired along the \[100\] direction of Sr$_3$SnO and Sr$_3$PbO, respectively. We note that the underlying YSZ substrate and a thin SrO buffer layer deposited prior to the antiperovskite are also cubic with similar lattice constants. However, they are forbidden by their crystal structure from exhibiting (0$l$) streaks with odd integer $l$. Hence, the appearance of the (01) streak establishes the existence of the target antiperovskite phase.
Figures \[Fig2\](c)-(e) and (f)-(h) show LEED images acquired at different energies for Sr$_3$SnO and Sr$_3$PbO, respectively. The square array of the diffraction spots is consistent with the antiperovskite crystal structure. In addition, the complex evolution of the structure factor as a function of electron energy is observed [@vanHove_1986]. In general, LEED images of Sr$_3$PbO exhibit brighter patterns than those of Sr$_3$SnO, and this is also reflected in the RHEED streaks. As discussed in the following two subsections, XPS measurements show that both the Sr$_3$SnO and Sr$_3$PbO films have a thin surface layer covering the bulk antiperovskite phase. The brighter RHEED/LEED images in Sr$_3$PbO may point to a thinner surface layer covering Sr$_3$PbO, or to the stronger scattering strength of Pb compared to Sn.
![(a), (b) RHEED images of Sr$_3$SnO (sample SS91) and Sr$_3$PbO (sample AP389) taken along the \[100\] direction. Electron energy: 15 keV. (c)-(e) LEED images of Sr$_3$SnO (sample SS60), acquired at 55 eV, 84 eV and 144 eV. (f)-(h) LEED images of Sr$_3$PbO (sample AP149), acquired at the same energies.[]{data-label="Fig2"}](FIG2.pdf)
Sr$_3$SnO XPS
-------------
Figure \[Fig3\](a) presents the Sn $3d$ spectrum of a Sr$_3$SnO film (SS91). As clearly seen, each of the spin-split levels ($3d_{3/2}$ and $3d_{5/2}$) exhibits two pronounced peaks, indicative of multiple Sn valence states. Using the fitting procedure described in Section \[SecMet\], we find that actually, a minimum of three Gaussian-Lorentzian mixture functions are required to fit each level. For the Sn 3$d_{5/2}$ level, the three peaks are centered at 483.87 eV, 484.79 eV and 486.03 eV \[Table \[T\_SS\]\]. We label these peaks as Sn$^A$, Sn$^B$ and Sn$^C$, respectively \[Fig. \[Fig3\](a)\].
![(a) XPS spectrum of the Sn 3$d_{3/2}$ and $3d_{5/2}$ doublet of Sr$_3$SnO (sample SS91), acquired at MPI-FKF with photon energy 1486.6 eV. The gray circles are the measured data, the black line is the overall fit and the green shaded areas are the individual peaks that constitute the fit. (b) A reference spectrum for a thin film of metallic Sn is shown for comparison with (a).[]{data-label="Fig3"}](FIG3.pdf)
[c|ccc]{} & Sn$^A$ \[eV\] & Sn$^B$ \[eV\] & Sn$^C$ \[eV\]\
SS60 \[Fig. \[Fig4\](b)\] & 483.82 & 484.72 & 486.02\
SS91 \[Fig. \[Fig3\](a)\] & 483.87 & 484.79 & 486.03\
\
& Sn$^0$ \[eV\] & &\
Sn film \[Fig. \[Fig3\](b)\] & 484.92\
\[T\_SS\]
To understand the origin of these peaks, we performed XPS measurements on a control sample, a thin film of Sn deposited on YSZ \[Fig. \[Fig3\](b)\]. The Sn 3$d_{5/2}$ level shows a sharp peak centered at 484.92 eV, closely matching the literature value for metallic Sn, 485.0 eV [@Moulder_1992]; we thus label this peak Sn$^0$. Comparing with the spectrum from Sr$_3$SnO \[Fig. \[Fig3\](a)\], we note that Sn$^0$ overlaps with Sn$^B$. Sn$^C$, with higher binding energy, matches literature values for SnO [@Moulder_1992]. Sn$^A$, with lower binding energy, could be assigned to the antiperovskite phase. Intuitively, Sn states with higher binding energy than Sn$^0$ (red line in Fig. \[Fig3\]) are cationic (positively charged), such that core electrons are less readily removed, whereas Sn states with lower binding energy than Sn$^0$ are anionic (negatively charged), such that core electrons are more readily removed. Indeed, XPS measurements of Ni$_3$Sn$_4$ electrodes for Li-ion batteries showed that when Sn was lithiated and therefore negatively charged, the 3$d_{5/2}$ peak corresponding to Sn$^0$ shifted to lower binding energies [@Ehinon_ChemMater_2008].
The dependence of the Sn $3d$ spectra on the emission angle of the electrons is shown in Fig. \[Fig4\] for another Sr$_3$SnO film (SS60). The spectrum obtained at grazing emission is more surface sensitive than that at normal emission. We observe that near the surface, Sn$^B$ and Sn$^C$ occupy a greater fraction of the total intensity than Sn$^A$ \[Fig. \[Fig4\](a)\]. Deeper into the bulk, however, Sn$^A$ is enhanced relative to Sn$^B$ and Sn$^C$ \[Fig. \[Fig4\](b)\]. Thus, the bulk phase of our film is characterized by Sn$^A$, consistent with anionic Sn in Sr$_3$SnO. Nevertheless, there is a surface layer in which Sn reverts to its neutral (Sn$^B$ $\sim$ Sn$^0$) and cationic (Sn$^C$ $\sim$ SnO) states, likely originating from the unstable polar (001) surface of Sr$_3$SnO. Since an electron with kinetic energy on the order of 1 keV has an inelastic mean free path on the order of 1 nm [@Powell_JVSTA_1999], we deduce the surface layer to have thickness less than 1 nm.
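The statement above can be made semi-quantitative with a standard uniform-overlayer attenuation model, sketched below in Python. The inelastic mean free path, the example intensity ratio and the assumption of a common attenuation length for the surface and bulk components are illustrative choices rather than measured values; the point is that a sub-nanometre layer already produces a strong enhancement of the surface components at grazing emission.

```python
import numpy as np

lam = 1.0e-9                 # assumed inelastic mean free path (~1 nm at ~1 keV)

def surface_to_bulk_ratio(d, theta_deg):
    """Surface/bulk intensity ratio for a uniform overlayer of thickness d on a
    semi-infinite bulk, at emission angle theta from the surface normal:
    [1 - exp(-d/(lam cos(theta)))] / exp(-d/(lam cos(theta))) = exp(x) - 1."""
    x = d / (lam * np.cos(np.radians(theta_deg)))
    return np.expm1(x)

# example: if the (Sn^B + Sn^C)/Sn^A area ratio were 1.0 at normal emission,
# the implied layer thickness and the ratio expected at 60 deg off normal are
d = lam * np.log(2.0)        # invert exp(d/lam) - 1 = 1  ->  d = lam ln 2
print(f"d ~ {d*1e9:.2f} nm, grazing (60 deg) ratio ~ {surface_to_bulk_ratio(d, 60):.1f}")
```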
![Angle dependence: Sn $3d$ spectra of sample SS60, taken at (a) grazing (60$^{\circ}$ off normal) and (b) normal emission. Data were acquired at MPI-FKF with photon energy $h\nu$ = 1486.6 eV. []{data-label="Fig4"}](FIG4.pdf)
Sr$_3$PbO XPS
-------------
In essence, the XPS results of Sr$_3$PbO are similar to the Sr$_3$SnO results. Figure \[Fig5\](a) presents the overlapping Sr $3d$ and Pb $4f$ spectra of a Sr$_3$PbO film (AP337). Two Gaussian-Lorentzian mixture functions were required to fit the Pb 4$f$ levels, with peaks Pb$^A$ = 136.10 eV and Pb$^B$ = 137.32 eV \[Table \[T\_AP\]\]. To identify these peaks, we again performed XPS measurements on a control sample, a Pb foil cleaned *in situ* by Ar sputtering \[Fig. \[Fig5\](b)\]. The Pb 4$f_{7/2}$ level shows a pronounced peak centered at 136.86 eV, close to the literature value for metallic Pb, 136.9 eV [@Moulder_1992]; we thus label this peak Pb$^0$. There is also a residual peak at higher binding energies, 137.43 eV, which agrees with literature values for PbO$_2$ [@Moulder_1992]. Comparing with the data from Sr$_3$PbO \[Fig. \[Fig5\](a)\], we observe that Pb$^B$ overlaps with PbO$_2$, whereas Pb$^A$ is exclusive to the antiperovskite film. Its lower binding energy relative to Pb$^0$ indicates that it is anionic.
![(a) XPS spectrum of the Sr 3$d_{3/2}$ and 3$d_{5/2}$ doublet and the Pb 4$f_{5/2}$ and 4$f_{7/2}$ doublet of Sr$_3$PbO (sample AP337), acquired at MPI-FKF with photon energy 1486.6 eV. The gray circles are the measured data, the black line is the overall fit and the orange (purple) shaded areas are the individual Sr (Pb) peaks that constitute the fit. (b) A reference spectrum for a metallic Pb foil is shown for comparison with (a).[]{data-label="Fig5"}](FIG5.pdf)
[c|cc]{} & Pb$^A$ \[eV\] & Pb$^B$ \[eV\]\
AP149 \[Fig. \[Fig6\](b)\] & 136.10 & 137.42\
AP337 \[Fig. \[Fig5\](a)\] & 136.10 & 137.32\
\
& Pb$^0$ \[eV\] & PbO$_2$ \[eV\]\
Pb foil \[Fig. \[Fig5\](b)\] & 136.86 & 137.43\
\[T\_AP\]
Figure \[Fig6\] presents the angle dependence of the overlapping Sr 3$d$ and Pb 4$f$ spectra, along with fits, for sample AP149. At grazing emission, Pb$^B$ dominates the spectrum, but at normal emission, the intensity of Pb$^A$ is enhanced relative to Pb$^B$. Again, we conclude that the bulk phase of our film is characterized by Pb$^A$, which we assign to anionic Pb in Sr$_3$PbO, but in a thin surface layer, Pb reverts to its cationic state (Pb$^B$ $\sim$ PbO$_2$).
![Angle dependence: Sr 3$d$ and Pb 4$f$ spectra of sample AP149, taken at (a) grazing (60$^{\circ}$ off normal) and (b) normal emission. Data were acquired at MPI-FKF with photon energy $h\nu$ = 1486.6 eV.[]{data-label="Fig6"}](FIG6.pdf)
Density functional theory {#secDFT}
=========================
In this section, we use DFT to demonstrate that in the antiperovskite compounds $A_3B$O, $B$ (= Sn, Pb) does indeed carry a negative effective charge, consistent with the heuristic concept of formal oxidation states. Then we show that the XPS binding energy of the Sn$^A$ (Pb$^A$) peak in Sr$_3$SnO (Sr$_3$PbO) relative to the Sn$^0$ (Pb$^0$) peak in metallic Sn (Pb) matches predictions by DFT calculations. This further confirms our assignment of the Sn$^A$ and Pb$^A$ peaks to the bulk antiperovskite phase at a quantitative level.
Effective charges
-----------------
While formal oxidation states are a useful construct when examining chemical bonding or electronic structure, they are not identical to the actual effective charges surrounding each atom. Generally, bonds in a crystal exhibit a greater degree of covalency than is expected in a pure ionic picture. To compute the effective charges, we used Bader’s method of partitioning the charge density via zero-flux surfaces [@Tang_JPCM_2009]. Table \[T\_Bader\] presents effective charges computed for various antiperovskites. The effective charge of $B$ (= Sn, Pb) averages around $-2$ across the compounds considered. Thus, while the effective charge is clearly smaller in magnitude than the value of $-4$ expected from formal oxidation states, it is still clear that $B$ is unusually anionic. We note a trend that as the size of $A$ increases from Ca to Sr to Ba, the effective charge of $B$ becomes less negative [@Kariyado_PRM_2017].
$A$ = Ca/Sr/Ba $B$ = Sn/Pb O
----------- ---------------- ------------- ---------
Ca$_3$SnO $+1.30$ $-2.38$ $-1.51$
Ca$_3$PbO $+1.29$ $-2.35$ $-1.52$
Sr$_3$SnO $+1.26$ $-2.30$ $-1.48$
Sr$_3$PbO $+1.25$ $-2.26$ $-1.48$
Ba$_3$SnO $+1.14$ $-1.98$ $-1.44$
Ba$_3$PbO $+1.12$ $-1.93$ $-1.44$
: Effective charges computed by Bader analysis for various antiperovskites $A_3B$O.
\[T\_Bader\]
Core-level shifts
-----------------
Experimentally, what XPS measures is neither the formal oxidation state nor the effective charge, but shifts in the core-level binding energies. We therefore used DFT to quantitatively confirm the shift towards lower binding energies for anionic $B$ relative to metallic $B$. To calculate the core-level binding energies ($E_c$), we worked within the initial state approximation, wherein a selected core electron is removed, but the remaining electrons are kept frozen [@Kohler_PRB_2004]. Then $E_c$ is simply given by the Kohn-Sham (KS) eigenvalue of the core electron ($\epsilon_c$), relative to the Fermi energy ($\epsilon_F$): $$E_c = -(\epsilon_c - \epsilon_F).$$ While final state effects, primarily the screening of the core hole, are neglected, the initial state approximation captures the chemical state of the atom as reflected in its valence charge configuration [@Bellafont_PCCP_2015].
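Only differences of such binding energies are compared with experiment below; the bookkeeping is trivial but worth stating explicitly. In the Python sketch that follows, the eigenvalues are invented numbers used purely to illustrate the sign convention.

```python
def binding_energy(eps_core, eps_fermi):
    """Initial-state core-level binding energy E_c = -(eps_core - eps_fermi)."""
    return -(eps_core - eps_fermi)

def core_level_shift(eps_core_cmpd, eps_fermi_cmpd, eps_core_ref, eps_fermi_ref):
    """Shift Delta E_c of a compound relative to an elemental reference; only
    this difference is meaningful, since absolute Kohn-Sham values are off by
    tens of eV (see below)."""
    return (binding_energy(eps_core_cmpd, eps_fermi_cmpd)
            - binding_energy(eps_core_ref, eps_fermi_ref))

# hypothetical illustration (eV): a core eigenvalue lying 0.9 eV shallower
# relative to E_F in the compound than in the reference gives Delta E_c ~ -0.9 eV
print(core_level_shift(-460.0, 5.0, -460.9, 5.0))
```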
In the PAW formalism of `VASP`, $\epsilon_c$ is computed in two steps [@Kohler_PRB_2004]: First, the core electrons are frozen and the valence charge density is computed via the normal, self-consistent electronic relaxation. Second, the KS eigenvalues for the core electrons are solved inside the PAW spheres while keeping the valence charge density fixed. As a check, we also performed all-electron PBE GGA calculations using the relativistic linear muffin-tin orbital (LMTO) method as implemented in the `PY LMTO` computer code. Here, spin-orbit coupling was included by solving the Dirac equations inside the atomic spheres. Some details of the implementation can be found in Ref. [@Antonov_2004].
The absolute values of $E_c$ as determined from the KS eigenvalues of DFT are typically 20-30 eV lower than the experimental values reported by XPS, due to a breakdown of Koopman’s theorem [@vanSetten_JCTC_2018]. However, DFT does provide meaningful values of $\Delta E_c$, the shift of the core-level binding energy between two systems. In a study, van Setten *et al.* demonstrated that for a set of molecules containing C, O, N or F, the mean absolute difference between $\Delta E_c$, as calculated from the KS eigenvalues, and the actual core-level shifts, as measured by XPS, was only 0.74 eV [@vanSetten_JCTC_2018].
To make a meaningful comparison with our data, we took metallic Sn ($\alpha$ allotrope, diamond structure) and metallic Pb (face-centered cubic) as our references. We calculated $\Delta E_{\textrm{Sn }3d}$ between Sr$_3$SnO and $\alpha$-Sn, and $\Delta E_{\textrm{Pb }4f}$ between Sr$_3$PbO and Pb. Dense BZ sampling and a high energy cutoff, as stated previously, were needed to converge the core-level binding energies within 1 meV. The results are shown in Table \[T\_CLS\], computed using the following experimental lattice constants: 5.139 Å for Sr$_3$SnO, 6.489 Å for $\alpha$-Sn, 5.151 Å for Sr$_3$PbO and 4.950 Å for Pb [@Nuss_ACSB_2015; @Thewlis_Nature_1954; @Bouad_JSSC_2003]. We note that in the case of $\Delta E_{\textrm{Sn }3d}$, there is a shift by $+0.12$ eV when the DFT-optimized lattice parameter is used, due to a discrepancy of 2.5% between the experimental and DFT-optimized lattice constants of $\alpha$-Sn. We also note that differences in $\Delta E_c$ arising from the use of the local density approximation (LDA) instead of GGA are within 0.1 eV.
DFT: $\Delta E_{\textrm{Sn }3d}$ \[eV\] DFT: $\Delta E_{\textrm{Pb }4f}$ \[eV\]
----------------------------------------- -----------------------------------------
`VASP`: $-0.95$ `VASP`: $-0.79$
`PY LMTO`: $-1.14$ `PY LMTO`: $-0.98$
XPS: Sn$^A$ $-$ Sn$^0$ \[eV\] XPS: Pb$^A$ $-$ Pb$^0$ \[eV\]
SS60: $-1.10$ AP149: $-0.76$
SS91: $-1.05$ AP337: $-0.76$
: Comparison between DFT and XPS. The core-level shifts, as predicted by DFT, are given by $\Delta E_{\textrm{Sn }3d} = E_{\textrm{Sn }3d}(\textrm{Sr}_3\textrm{SnO}) - E_{\textrm{Sn }3d}(\alpha\textrm{-Sn})$ and $\Delta E_{\textrm{Pb }4f} = E_{\textrm{Pb }4f}(\textrm{Sr}_3\textrm{PbO}) - E_{\textrm{Pb }4f}(\textrm{Pb})$. Results from two different codes (`VASP` and `PY LMTO`) are shown. The core-level shifts, as measured by XPS, are given by the difference between the Sn$^A$ peak in Sr$_3$SnO and the Sn$^0$ peak in metallic Sn, or between the Pb$^A$ peak in Sr$_3$PbO and the Pb$^0$ peak in metallic Pb.
\[T\_CLS\]
Shown in Table \[T\_CLS\] are also the core-level shifts as measured by XPS. For the Sr$_3$SnO films, we took the difference between the Sn$^A$ peak, which we ascribed to the bulk antiperovskite phase, and the Sn$^0$ peak in the reference Sn metal. Similarly, the difference between Pb$^A$ in Sr$_3$PbO and Pb$^0$ in Pb metal was used to derive the shift for Sr$_3$PbO. The DFT and XPS results for the core-level shifts show a very good agreement. Furthermore, in both theory and experiment, the magnitude of the shift is larger in Sn 3$d$ compared to Pb 4$f$. Hence, we conclude again with additional confirmation that the Sn$^A$ (Pb$^A$) peak corresponds to anionic Sn (Pb) in Sr$_3$SnO (Sr$_3$PbO).
Summary
=======
In this work, we have investigated the antiperovskites Sr$_3$SnO and Sr$_3$PbO, whose predicted topological crystalline insulating phase and approximate 3D Dirac semimetallic phase hinge upon the stabilization of Sn and Pb in an unusual anionic state ($\sim$$-2$ according to Bader charge analysis). Our XPS measurements, along with DFT calculations, confirm that anionic Sn and Pb do indeed exist in thin films of Sr$_3$SnO and Sr$_3$PbO. Interestingly, though, we observed signatures of cationic or neutral Sn and Pb distributed at the surface of the films. This suggests that the polar (001) surface of these antiperovskites is susceptible to a reconstruction wherein the valence states of Sn and Pb are altered. Such a modification is likely to have a drastic impact on the surface electronic structure. We suggest using scanning tunneling microscopy to elucidate the nature of potential surface reconstruction (electronic or atomic) and its effects on putative topological surface states.
We thank U. Wedig for helpful discussions. We also thank M. Konuma, C. Mühle, K. Pflaum, S. Prill-Diemer and S. Schmid for technical assistance at MPI-FKF. We acknowledge the Paul Scherrer Institut, Villigen, Switzerland for provision of synchrotron radiation beamtime at the ADRESS beamline of the SLS. D. H. acknowledges support from a Humboldt Research Fellowship for Postdoctoral Researchers. N. B. M. S acknowledges partial financial support from Microsoft.
[45]{}
<https://doi.org/10.1103/RevModPhys.70.1039>
<https://doi.org/10.1088/1361-6463/aaaf00>
<https://doi.org/10.1038/s41467-018-08149-y>
<https://doi.org/10.1103/PhysRevB.83.205101>
<https://doi.org/10.1038/nmat3223>
<https://doi.org/10.1103/PhysRevB.90.081112>
<https://doi.org/10.1103/PhysRevB.95.035151>
<https://doi.org/10.1143/JPSJ.80.083704>
<https://doi.org/10.1143/JPSJ.81.064701>
<https://doi.org/10.1103/PhysRevMaterials.1.061201>
<https://doi.org/10.1103/PhysRevB.96.155109>
<https://doi.org/10.1103/PhysRevB.98.115203>
<https://doi.org/10.1103/PhysRevB.99.115133>
<https://doi.org/10.1103/PhysRevB.98.100503>
<https://doi.org/10.1063/1.4820770>
<https://doi.org/10.1557/mrc.2014.4>
<https://doi.org/10.1063/1.4952393>
<https://doi.org/10.1038/ncomms13617>
<https://doi.org/10.1038/s41598-018-38403-8>
@noop [ ]{}
<https://doi.org/10.1038/nature02308>
<https://doi.org/10.1103/PhysRevLett.111.216402>
<http://www.sciencedirect.com/science/article/pii/S0022024818303877>
@noop ()
<https://doi.org/10.1063/1.4955213>
@noop [**]{} (, )
<https://doi.org/10.1107/S0909049510019862>
<https://doi.org/10.1107/S1600577513019085>
<https://doi.org/10.1016/0927-0256(96)00008-0>
<https://doi.org/10.1103/PhysRevB.54.11169>
<https://doi.org/10.1103/PhysRevB.50.17953>
<https://doi.org/10.1103/PhysRevB.59.1758>
<https://doi.org/10.1103/PhysRevLett.77.3865>
<https://doi.org/10.1107/S0021889811038970>
@noop [**]{} (, )
<https://doi.org/10.1021/cm8006099>
<https://doi.org/10.1116/1.581784>
<https://doi.org/10.1088/0953-8984/21/8/084204>
<https://doi.org/10.1103/PhysRevB.70.165405>
<https://doi.org/10.1039/C4CP05434B>
<https://doi.org/10.1007/1-4020-1906-8>
<https://doi.org/10.1021/acs.jctc.7b01192>
<https://doi.org/10.1107/S2052520615006150>
<https://doi.org/10.1038/1741011a0>
<http://www.sciencedirect.com/science/article/pii/S0022459603000173>
|
---
abstract: 'We propose a physically based, analytic model for intergalactic filaments during the first gigayear of the universe. The structure of a filament is based upon a gravitationally bound, isothermal cylinder of gas. The model successfully predicts for a cosmological simulation the total mass per unit length of a filament (dark matter plus gas) based solely upon the sound speed of the gas component, contrary to the expectation for collisionless dark matter aggregation. In the model, the gas, through its hydrodynamic properties, plays a key role in filament structure rather than being a passive passenger in a preformed dark matter potential. The dark matter of a galaxy follows the classic equation of collapse of a spherically symmetric overdensity in an expanding universe. In contrast, the gas usually collapses more slowly. The relative rates of collapse of these two components for individual galaxies can explain the varying baryon deficits of the galaxies under the assumption that matter moves along a single filament passing through the galaxy centre, rather than by spherical accretion. The difference in behaviour of the dark matter and gas can be simply and plausibly related to the model. The range of galaxies studied includes that of the so-called “too big to fail” galaxies, which are thought to be problematic for the standard $\Lambda$CDM model of the universe. The isothermal-cylinder model suggests a simple explanation for why these galaxies are, unaccountably, missing from the night sky.'
bibliography:
- 'harford.bib'
---
cosmology: theory – galaxies: formation – galaxies: intergalactic medium – cosmology: dark ages, reionization, first stars – galaxies: structure – galaxies: haloes –
Introduction {#intro}
============
In the currently popular $\Lambda$CDM model of the universe, collisionless dark matter dominates over baryons by a factor of nearly six (see @frieman08 for a review). A two-stage model for galaxy formation [@white78] has survived as a general paradigm, in which the overall architecture of the universe is set by the potential wells formed by the collisionless gravitation of the dark matter (for reviews see @frenk12 [@kravtsov12; @conselice14; @somerville15]). The luminous structures that might be observed would then result from dissipational processes of the baryons trapped within these wells. This paradigm has justified the extensive use of N-body simulations containing only dark matter to understand structure formation in the universe.
The present paper challenges this paradigm in the case of intergalactic filaments during the first gigayear. Using a cosmological simulation that includes gas hydrodynamics, radiative transfer, and chemistry in addition to dark matter, we show that the mass per unit length of intergalactic filaments depends upon the sound speed of the gas in the manner expected if the filaments were gravitationally bound, isothermal cylinders of gas [@stod63; @ostriker64] with dark matter mixed in.
An important implication of this model is that the filaments should have a preferred size as deduced from the simple analytic expression for their structure. This is because the mass per unit length of such a cylinder depends solely upon the temperature and ionization state of the gas, which in turn are constrained by the Lyman alpha cooling floor. A preferred size for intergalactic filaments is not expected from collisionless structure formation by dark matter alone[^1].
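For orientation, the analytic result behind this statement is that a self-gravitating isothermal cylinder of gas has a finite total mass per unit length fixed solely by its sound speed, $M/L = 2c_{\rm s}^2/G$ with $c_{\rm s}^2 = k_{\rm B}T/(\mu m_{\rm H})$ [@stod63; @ostriker64]. The short Python sketch below evaluates this for temperatures and mean molecular weights representative of gas near the Lyman alpha cooling floor; the specific values are illustrative only and do not include the dark matter contribution assumed in our model.

```python
G = 6.674e-11          # m^3 kg^-1 s^-2
k_B = 1.381e-23        # J/K
m_H = 1.673e-27        # kg
M_sun = 1.989e30       # kg
pc = 3.086e16          # m

def line_mass(T, mu):
    """Total mass per unit length of a self-gravitating isothermal cylinder,
    2 c_s^2 / G with c_s^2 = k_B T / (mu m_H)."""
    cs2 = k_B * T / (mu * m_H)
    return 2.0 * cs2 / G

# illustrative values: ~10^4 K neutral gas and ~2x10^4 K ionized gas
for T, mu in [(1e4, 1.22), (2e4, 0.59)]:
    print(f"T = {T:.0e} K, mu = {mu}:  M/L ~ {line_mass(T, mu)/(M_sun/pc):.1e} Msun/pc")
```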
![ \[baryrich\] [Visualization of structures of gas and dark matter.]{} Shown are the gas and dark matter surrounding the centre of one of the larger galaxies in the simulation at redshift 5.134. Gas and dark matter are shown in separate images, each of which is a projection on to the page of a sphere having a comoving radius of $266$ kpc centred on the centre of the galaxy. The normal to the page coincides with the normal to the principal plane of the gas. ](gasrich "fig:") ![ \[baryrich\] [Visualization of structures of gas and dark matter.]{} Shown are the gas and dark matter surrounding the centre of one of the larger galaxies in the simulation at redshift 5.134. Gas and dark matter are shown in separate images, each of which is a projection on to the page of a sphere having a comoving radius of $266$ kpc centred on the centre of the galaxy. The normal to the page coincides with the normal to the principal plane of the gas. ](darkrich "fig:")
The present study was motivated by our previous simulation studies [@harford08; @harford11], which showed that gas and dark matter may assume very different structures shortly after the end of the first gigayear at a redshift of $5.134$. For example, Figure \[baryrich\] shows separate images of gas and dark matter for the same projection of a sphere centred upon a galaxy. The gas appears as a relatively smooth filament while the dark matter is more irregular, with discrete clumps positioned along the filament. We showed that the gravitational potential of the gas resembles that of a gravitationally bound, isothermal cylinder in cases where the filament is highly enriched in baryons.
In investigations that led to the present paper, we began to explore the development of the filaments throughout the first gigayear of the simulation, in order to understand the origin of the isothermal structure. We found that baryon enrichment of filaments emerges late in the first gigayear. Earlier most of the filaments are not enriched, and yet, as we show in this paper, can be described as gravitationally bound, isothermal cylinders, provided that we make the simplifying assumption that the dark matter adds to the gravitational potential as if it were uniformly mixed with the gas.
Current thinking ascribes an important role to intergalactic filaments in the transport of gas into the small galaxies that form at high redshift. Two modes of gas entry into galaxies have been described in the literature. In the “hot mode” incoming gas is shock heated when it encounters the potential well of the halo [@rees77]. The shocked gas is elevated in temperature to where it can cool efficiently and then enter the halo. This mode is thought to predominate for galaxies larger than a few times $10^{11}{\, {\rm M}}_\odot$ and at low redshifts. It is thought that for smaller galaxies, like the ones we have studied, the gas is not heated because the smaller potential wells cannot sustain shocks [@birnboim03]. In this “cold mode” scenario, gas passes directly into the galaxy without supersonic heating, perhaps mediated by intergalactic filaments [@birnboim03; @binney04; @katz03; @keres05; @dekel06; @ocvirk08; @keres09b; @dekel09d; @brooks09].
A gravitationally bound, isothermal cylinder provides a physical framework for thinking about the motion of matter along an intergalactic filament. In the present paper, we show that the gas is generally retarded in its entry into the galaxy centre relative to the dark matter. We suggest that it is the hydrodynamic pressure of the gas, counter-balancing the inward force of gravity, that causes this behaviour.
We show that the different behaviours of dark matter and gas are consistent with a simple scheme for galaxy formation in which the dark matter collapses according to collisionless theory, while the gas tends to remain in extended, isothermal cylinders in which infall into the galaxy is retarded by the pressure of the gas.
In addition we show that the different rates of accretion of dark matter and gas predict roughly the baryon deficits of the final galaxies under the assumption that matter moves along a single filament passing through the galaxy centre rather than by spherical accretion.
Understanding the structure of intergalactic filaments may not only provide insights into the mechanisms and pace of early galaxy formation, but may also place limits upon the sizes of galaxies that can form from a filamentary precursor. In this way the model might be relevant to the so-called “missing satellite” problem [@kauffmann93; @klypin99; @moore99], which refers to the discrepancy between the observed number of satellite galaxies of the Milky Way and the predicted number of dark matter haloes from N-body simulations containing only dark matter. An additional, “too big to fail” problem has also arisen in which a class of dark matter haloes thought to be massive enough to support star formation fail to be observed [@boylankolchin09]. The model suggests a simple explanation for this anomaly. Resolution of these problems is critical to the viability of the cold dark matter hypothesis.
Most previous studies on intergalactic filaments have dealt with larger structures at later times in a more complex universe. Our study suggests that the intergalactic filaments associated with small galaxies during the first gigayear, and present during the critical period of reionization, may be a relatively simple model system to study.
The plan of the paper is as follows. In Section \[methods\] we describe the self-consistent simulation of the first gigayear that we have analyzed. Section \[model\] gives an overview of principal features of the model with illustrative diagrams. Section \[results\] compares the model to the simulation. Section \[galdefinition\] sets out exactly what we mean by a galaxy in the simulation and how we decide where it is at any given time.
Section \[filstructure\] begins with a brief summary of the algorithm for identifying intergalactic filaments associated with specific galaxies. The algorithm is described in more detail with diagrams in the Appendix. Then the basic structural equations of a gravitationally bound, isothermal cylinder are laid out as developed by @stod63 and @ostriker64. The agreement of the filaments with such structures is then tested by comparing the total mass per unit length to that predicted by the model.
Section \[darkmotion\] shows that the collapse of the dark matter can be described by the textbook collapse of a spherically symmetric overdensity. Section \[contrast\] describes how we compare and contrast the collapse of gas and dark matter onto the final galaxy. Three categories of galaxies are distinguished in this regard: plunging-gas, retreating-gas, and lingering-gas. Section \[baryondeficits\] relates the findings to the baryon deficits of the final galaxies. The rates of movement of the gas and dark matter in the lingering-gas galaxies argue for collapse along a single filament as opposed to spherical accretion.
Section \[retard\] relates the three categories of galaxies to the formation of isothermal cylinders and to the mass available to form filaments. Section \[sectiontbtf\] relates the model to the so-called “too big to fail” galaxies.
Section \[sectiondarkmodel\] explores an alternative model in which filament structure is determined primarily by dark matter.
Section \[sectionalign\] presents evidence that the filaments are aligned as predicted by the model.
Finally the results are discussed and summarized in Section \[discussion\] and Section \[sectionsummary\] respectively.
Cosmological simulation {#methods}
=======================
As in our previous work [@harford08; @harford11], the results reported in the present paper are based on a flat $\Lambda$CDM cosmological simulation that includes gas hydrodynamics, radiative transfer, and chemistry in addition to dark matter. The simulation followed a $8 h^{-1}$ comoving Mpc cube on a $256^3$ grid up to a redshift of $5.13$ using a “softened Lagrangian hydrodynamics” (SLH-P$^{3}$M) code [@gnedin95; @gnedin_bertschinger_96]. The cosmological parameters are $\Omega_{m} = 0.27$, $\Omega_{b} = 0.04$, $\sigma_{8} = 0.91$, and $h = 0.71$.
Dark matter is computed with collisionless particles of mass $2.73\times10^{6}{\, {\rm M}}_\odot$. Gas dynamics is computed on a quasi-Lagrangian mesh that deforms adaptively to provide higher resolution in higher density regions. The mass of a gas particle is initially $4.75\times10^{5}{\, {\rm M}}_\odot$, but each gas particle mass may adjust slightly as the hydrodynamic computation evolves.
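As a rough consistency check (not part of the original analysis), these particle masses follow from distributing the dark matter and baryon densities over the $256^3$ grid. The short sketch below, with the cosmological parameters quoted above hard-coded, recovers them to within rounding of the constants.

```python
# Illustrative check: recover the quoted particle masses from the
# cosmological parameters and the 256^3 grid of the simulation.
OMEGA_M, OMEGA_B, H = 0.27, 0.04, 0.71
RHO_CRIT = 2.775e11                 # critical density, h^2 Msun Mpc^-3
BOX = 8.0 / H                       # box side, comoving Mpc
N = 256 ** 3                        # resolution elements per species

rho_m = OMEGA_M * RHO_CRIT * H ** 2          # mean matter density, Msun Mpc^-3
m_total = rho_m * BOX ** 3 / N               # total (dark matter + gas) mass per cell

m_dm = m_total * (OMEGA_M - OMEGA_B) / OMEGA_M    # ~2.7e6 Msun
m_gas = m_total * OMEGA_B / OMEGA_M               # ~4.8e5 Msun
print(f"dark matter particle: {m_dm:.2e} Msun, gas particle: {m_gas:.2e} Msun")
```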
Overview of Model {#model}
=================
A simple paradigm for gravitational collapse begins with collapse in one dimension to produce a plane. Further collapse in a second dimension produces a rod, which then collapses in the third dimension to a quasi-spherical ball [@zeldovich70; @park90; @bertschinger91; @cen93]. Visual inspection of the simulation suggested that this paradigm would be a good starting point.
![ \[modeldiagram\] [Schematic diagram of the model.]{} These two images illustrate the model schematically in its purest form. The model is formulated as the collapse of a rod to form a quasi-spherical galaxy. Imagine that the centre of collapse is in the centre of each image. The upper image shows the rod at an early time with dark matter shown as red, filled circles and gas as green squares. The gas assumes the structure of a gravitationally bound, isothermal cylinder. The dark matter contributes gravitational effects but not hydrodynamic ones. To facilitate computation we assume that the dark matter and gas are uniformly mixed, at least initially. The lower image shows the rod as it might appear at a later time. Dark matter has begun to collapse toward the centre while the gas has remained extended because of the additional hydrodynamic effects that oppose the gravitational coalescence. In this paper, when the model is compared to the simulation, the two halves of the rod on either side of the centre of collapse are regarded as two separate filaments since both sides are not always present for every galaxy. The structure of such a filament is sampled at a position well separated from the centre of the forming galaxy. An example from the simulation that resembles this diagram is shown in Figure \[modelexample\]. ](model)
![ \[modelexample\] [Images from the simulation that resemble the schematic drawing in Figure \[modeldiagram\].]{} These are projections of a sphere centred on a single galaxy at an early time (left) and at a later time (right). The top two images show just the dark matter. The red, filled circles show particles that will actually end up in the galaxy at the end of the simulation. They are seen to coalesce in the centre at a later time in the image on the right. The black X’s are other dark matter particles. Some of these may contribute gravitationally to the structure of the filament. The bottom two images show just the gas particles as green, filled circles. They assume a relatively smooth filamentary structure throughout. ](flat_fig2_dark "fig:") ![ \[modelexample\] [Images from the simulation that resemble the schematic drawing in Figure \[modeldiagram\].]{} These are projections of a sphere centred on a single galaxy at an early time (left) and at a later time (right). The top two images show just the dark matter. The red, filled circles show particles that will actually end up in the galaxy at the end of the simulation. They are seen to coalesce in the centre at a later time in the image on the right. The black X’s are other dark matter particles. Some of these may contribute gravitationally to the structure of the filament. The bottom two images show just the gas particles as green, filled circles. They assume a relatively smooth filamentary structure throughout. ](flat_fig2_gas "fig:")
It is common to assume that galaxies form at the intersections of filaments. This is because large-scale views of simulations at late times show galaxies as nodes in a complex network of filamentous material. This paper takes a different point of view for the first gigayear. We consider an individual filament to be an intermediate structure in the formation of a galaxy. In the model an individual galaxy can be traced back in time to a single rod, in the centre of which a quasi-spherical galaxy will form as the contents of the rod collapse. After the first gigayear, multiple collisions may lead to the build up of complex networks, which we will not consider here.
In the model the rod is a gravitationally bound, isothermal cylinder of gas with comingled dark matter. Initially the gas and dark matter are uniformly mixed. The dark matter then proceeds to collapse toward the centre of the rod, where the future galaxy will be found. The gas, in general, collapses more slowly than the dark matter because the pressure of the gas retards its flow into the galaxy.
The rod, in its initial form and during differential collapse, is shown schematically in Figure \[modeldiagram\]. [In the upper image the red, filled circles representing the dark matter are uniformly mixed with the green squares representing the gas. In the lower image, showing a later time, the dark matter particles have begun to coalesce into the centre of the rod leaving behind the gas particles.]{} An actual example from the simulation is shown in Figure \[modelexample\].
Comparison of the model to the simulation {#results}
=========================================
The purpose of this section is to compare the model proposed in §\[model\] to the cosmological simulation described in §\[methods\].
Galaxies {#galdefinition}
--------
The objects referred to as “galaxies” in the present paper were identified at the end of the simulation, at a redshift of 5.134 (1.15 gigayears after the big bang), from the total mass distribution by DENMAX [@bertschinger91]. The positions of the individual dark matter particles associated with each of these galaxies can be followed throughout the simulation. For a given galaxy defined by DENMAX at the end of the simulation, we define it at earlier times as the arrangement of its constituent dark matter particles at that time. The centre of the galaxy at any time is defined as the centre of mass of its dark matter particles, and the formation of each galaxy is followed with time in a local coordinate system relative to that centre of mass. In this coordinate system the dark matter particles of each galaxy exhibit an initial expansion followed by a well defined turnaround. The transient, quasi-spherical sub-structures seen at intermediate redshifts are not treated as separate galaxies.
In order to follow the development of single galaxies in time since the big bang, we focus on just the $1{,}200$ largest galaxies identified at the end of the simulation just after the first gigayear. The galaxies considered range in total mass from about $10^{9}{\, {\rm M}}_\odot$ to several times $10^{11}{\, {\rm M}}_\odot$. The total mass of the galaxy is defined to include dark matter, gas, and stellar material.
Isothermal Filaments {#filstructure}
--------------------
To avoid subjective bias in identifying filaments, we adopt an objective algorithm which is detailed in Appendix \[filamentid\]. Shortly after turnaround of the dark matter, the single rod of the model appears embedded in a planar slab of material. We take advantage of this situation to select the filaments in the two dimensions of the principal plane. The filaments are selected using only the gas component because its structure is more regular than that of the dark matter. Although the model in its purest form is best understood as the collapse of a single rod, we will refer to the two halves of the rod as separate filaments because both halves are not always present in the structures surviving our selection algorithm. No more than two filaments are selected for a single galaxy. When two are present, the filaments are usually oriented end-to-end as if part of a single, continuous structure.
The mass per unit length, $\Upsilon$, predicted for a gravitationally bound, isothermal gas cylinder is $$\label{isoequation}
\Upsilon = \frac{2 c_s^{2}}{G}$$ where $c_s$ is the isothermal sound speed and $G$ is the gravitational constant [@stod63; @ostriker64]. The sound speed $c_s$ at temperature $T$ is $$\label{soundspeedequation}
c_s = \sqrtsign{\frac{kT}{1.22\mu m_H}}$$ where $k$ is the Boltzmann constant, $m_H$ the mass of the hydrogen atom and $\mu$ a mean particle mass to take into account the ionization of the hydrogen. The factor of $1.22$ corrects for the contribution of neutral helium to the mean atomic weight.
The important point here is that the mass per unit length of a filament depends only upon its temperature and ionization state, and not upon the concentration of the gas in the transverse direction.
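To make the numbers concrete, the sketch below (illustrative only, not the analysis code) evaluates Equation \[soundspeedequation\] and Equation \[isoequation\]. For fully ionized hydrogen ($\mu \simeq 0.5$) near the Lyman alpha cooling floor it gives a mass per unit length of order $10^{7}$ to $10^{8} {\, {\rm M}}_\odot {\, {\rm kpc}}^{-1}$, and a sound speed of $12.9 {\, {\rm km}} {\, {\rm s}}^{-1}$ maps to roughly $7.7\times10^{7}{\, {\rm M}}_\odot {\, {\rm kpc}}^{-1}$, consistent with the unit choices quoted later for Figure \[figcorr\].

```python
import math

# Illustrative evaluation of Eqs. [soundspeedequation] and [isoequation].
G    = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
K_B  = 1.381e-23      # Boltzmann constant, J K^-1
M_H  = 1.673e-27      # hydrogen atom mass, kg
MSUN = 1.989e30       # solar mass, kg
KPC  = 3.086e19       # kiloparsec, m

def sound_speed(T, mu):
    """Isothermal sound speed of Eq. [soundspeedequation]: mu is the mean
    particle mass in units of m_H set by the ionization state, and the
    factor 1.22 corrects for neutral helium."""
    return math.sqrt(K_B * T / (1.22 * mu * M_H))

def mass_per_length(c_s):
    """Gas mass per unit length of a bound isothermal cylinder,
    Eq. [isoequation], converted to Msun per kpc."""
    return 2.0 * c_s ** 2 / G * KPC / MSUN

# Illustrative case: fully ionized hydrogen (mu ~ 0.5) at ~1.2e4 K.
cs = sound_speed(T=1.2e4, mu=0.5)
print(f"c_s = {cs / 1e3:.1f} km/s, Upsilon = {mass_per_length(cs):.2e} Msun/kpc")

# A sound speed of 12.9 km/s corresponds to ~7.7e7 Msun/kpc.
print(f"Upsilon(12.9 km/s) = {mass_per_length(12.9e3):.2e} Msun/kpc")
```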
The presence of dark matter adds to the gravitational field without any additional pressure. The simplest way, and the way we adopt, to account for the effect of dark matter is to assume that the gas and dark matter are uniformly mixed. The result of this assumption is that Equation \[isoequation\] predicts, not the mass per unit length of the gas, but the total mass per unit length including the dark matter[^2]. It is interesting to note that this scheme means that if the relative amounts of gas and dark matter in the filament vary, then they must vary in opposite directions to keep the total in line with the sound speed. This situation contrasts with the conventional assumption that gas follows dark matter.
[Appendix A describes the algorithm for selecting regions of the filaments to study. For detailed analysis, a filament segment is defined as the gas and dark matter within a cylinder of length twelve proper kiloparsecs and a radius of six proper kiloparsecs centred on the filament region selected as described in Appendix A.]{} Even though the density profile of the cylinder formally extends to infinite radius, we find that most of the mass is within this proper radius. The orientation of the axis of the cylinder is determined by a principal component analysis.
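For concreteness, the measurement on a segment can be expressed in a few lines of code. The sketch below is schematic (the function name and inputs are hypothetical); it assumes particle positions and masses in proper kiloparsecs and solar masses, and a unit vector for the axis obtained from the principal component analysis.

```python
import numpy as np

def segment_mass_per_length(pos, mass, centre, axis, half_len=6.0, radius=6.0):
    """Mass per unit length (Msun per proper kpc) of the particles inside a
    cylinder of total length 2*half_len and the given radius, centred on
    `centre` and oriented along the unit vector `axis`."""
    d = pos - centre
    along = d @ axis                                    # signed distance along the axis
    trans = np.linalg.norm(d - np.outer(along, axis), axis=1)
    inside = (np.abs(along) <= half_len) & (trans <= radius)
    return mass[inside].sum() / (2.0 * half_len)
```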
We might expect the development of a filament to depend upon the local thermal history. Reionization occurs during the first gigayear. However, different regions of the simulation reionize at different times. For this reason we present the structure of the filaments using the square of the sound speed as a proxy for time. This scheme accommodates the possibility that different filaments are at different stages of development at the same time. The sound speed depends only upon the temperature and ionization state of the gas and is independent of the density. For each filament segment, the squared sound speed is computed as the [average of the squares of the sound speeds]{} of the individual gas particles in that segment. The sound speed of a gas particle is computed from its temperature and ionization state, which in the simulation are computed self-consistently from the radiation produced at specific loci of star formation. There is no superimposed, ionizing field as in many other simulations.
![image](reion_triplet_0_rh1b) ![image](reion_triplet_1_rh1b) ![image](reion_triplet_2_rh1b)
The course of reionization of the individual filament segments is shown in the scatter plot in Figure \[triplet\], in which each point is a filament segment at one of the six redshifts we studied. The abscissa is the average of the squares of the sound speeds of the gas particles in the segment. The ordinate shows the mean mass per particle (nuclei and electrons) in the gas of the segment, which decreases as a result of reionization. It is clear that the segments at any given redshift are present in a range of different stages of reionization. In this paper the square of the sound speed is always expressed in the same units chosen to facilitate comparisons with the predictions of the model in Figure \[figcorr\] and Figure \[figcorrb\].
![ \[figcorr\] [Proportionality of total mass per unit length to square of sound speed.]{} The symbols show the average total (gas plus dark matter) mass per unit length for the filaments in each bin of sound speed squared. The units have been chosen to illustrate the expectation of proportionality for the model, shown as the diagonal, dotted line with a slope of one. A value of one on the abscissa corresponds to a sound speed of $12.9 {\, {\rm km}} {\, {\rm s}}^{-1}$. A value of one on the ordinate corresponds to $7.71\times10^{7}{\, {\rm M}}_\odot {\, {\rm kpc}}^{-1}$ in proper units. The bin size is 0.035. The vertical line through each symbol represents one standard error for the average of the filaments in that bin. Standard errors are computed for each bin having more than two members. Each red square represents a bin containing at least fifteen filaments. The green triangles represent other bins having at least two filaments, for which a standard deviation can be calculated. The remaining orange, open circles represent bins with a single filament. The bins with at least fifteen filaments apiece have a linear regression line constrained to pass through the origin with a slope of $1.04 \pm 0.11$ with a P value of $1.7\times10^{-19}$ and RSquared of $0.96$. The probability that the residuals belong to a normal distribution is $0.21$. Regression analysis was performed using Mathematica (Wolfram Research). ](6gf)
Figure \[figcorr\] is at the crux of the argument for gravitationally bound, isothermal cylinders. It shows, over a range of sound speeds, that the total mass per unit length of the filament segments can be predicted from the sound speed of the gas using Equation \[isoequation\]. The gas mass per unit length computed from this equation is divided by the overall gas fraction of the segment to give the total mass per unit length plotted in the figure. The units of the axes have been chosen for convenience so that the model predicts a slope of one. The ordinate shows the average for filament segments in bins of sound speed squared on the abscissa. Bins having at least fifteen members, shown by red squares, were chosen for a linear regression analysis. The slope obtained, $1.04 \pm 0.12$, shows good agreement with prediction.
The prediction, shown by the dotted, diagonal line of unit slope in Figure \[figcorr\], assumes for simplicity that the dark matter is uniformly intermixed with the gas and contributes to the gravitational field in proportion to its abundance[^3]. An alternative scheme, in which the gas is concentrated in a relatively flat central part of a dark matter potential well, would predict for most segments a gas mass per unit length greater than observed by a factor of about five, since in the latter scheme the gas behaves as if it were a pure gas cylinder without dark matter. The measured mass per unit length is more consistent with the model where the gas and dark matter are uniformly mixed.
In Figure \[figcorr\] the sparser bins to the right (green triangles) show only a general upward trend consistent with prediction. Orange circles indicate bins with a single member.
The period of best agreement with the model corresponds to the period of active reionization when the gas in the segments is partially ionized. The extent of this region is indicated by the horizontal, two-headed arrow. During this period most of the change in sound speed is due to reionization. The temperature changes little.
One should not conclude from these results that the model applies only to the reionization period. During the limited period of the simulation only a few of the filaments have progressed significantly beyond reionization, and these are associated with the largest galaxies. We will argue in Section \[retard\] that excessive mass can lead to deviation from the model. However, the small numbers of segments in these sparse bins make it difficult to tell just how well the model works for the largest galaxies.
![ \[figcorrbden\] [Total mass per unit length as a function of proper density at turnaround.]{} Proper density at turnaround on the abscissa is in units of $10^{6}{\, {\rm M}}_\odot {\, {\rm kpc}}^{-3}$. Total mass per unit proper length of the filament segment on the ordinate is in the same units as for Figure \[figcorr\] and the symbols are as described for that figure. Single standard errors are shown by vertical lines. The vertical, dotted lines indicate the range of turnaround densities used to create Figure \[figcorrb\]. ](turn)
Another variable that might be expected to affect the filament structure is the proper density at turnaround. Figure \[figcorrbden\] shows that the total mass per unit length does indeed increase with density at turnaround as one might expect. To demonstrate that the correlation of Figure \[figcorr\] is not an artifact resulting from a correlation of sound speed with density at turnaround, we tested filaments from a restricted range of turnaround densities. Figure \[figcorrb\] shows a version of Figure \[figcorr\] in which only galaxies within a narrow range of turnaround densities, those between the two vertical, dotted lines in Figure \[figcorrbden\], are included. Figures \[figcorr\] and \[figcorrb\] are almost identical.
![ \[figcorrb\] [Proportionality of total mass per unit length to square of sound speed when controlled for density at turnaround.]{} This figure is the same as Figure \[figcorr\] except that the filaments have been drawn from a restricted range of proper density at turnaround, namely $3.0-5.0\times10^{5}{\, {\rm M}}_\odot {\, {\rm kpc}}^{-3}$. The symbols show the average total (gas plus dark matter) mass per unit length for the filament segments in each bin of sound speed squared. The units are the same as described for Figure \[figcorr\]. The vertical line through each symbol represents one standard error for the average of the filaments in that bin. Standard errors are computed for each bin having more than two members. Each red square represents a bin containing at least fifteen segments. The green triangles represent other bins having at least two segments, for which a standard deviation can be calculated. The remaining orange, open circles represent bins with a single segment. The bins with at least fifteen segments apiece have a linear regression line constrained to pass through the origin with a slope of $1.03\pm 0.13$ with a P value of $7.1\times10^{-18}$ and RSquared of $0.96$. The probability that the residuals belong to a normal distribution is $0.23$. The regression line is constrained to pass through the origin. ](b_6gf)
Collapse of Dark Matter {#darkmotion}
-----------------------
![ \[excycloid\] [Spherical collapse of dark matter in an example galaxy.]{} The average proper radius of galaxy dark matter particles as a function of time after the big bang is plotted as a green solid line with green filled circles. The corresponding cycloid is shown by the solid black line. ](cycloid)
![image](density) ![image](time) ![image](radius)
The simplest theory of galaxy formation in an expanding universe is that of the collapse of a spherically symmetric overdensity [@peacock99]. [@sugerman00] find that, despite its simplifying assumptions, the spherical model provides reasonable predictions for properties of dark matter haloes. The spherical model predicts that the radius $r$ of a mass $m$ evolves with time $t$ as a cycloid [@peacock99], $$\label{cycloideq}
r={r_{\rm turn} \over 2}(1-\cos\theta),
\quad
t=\sqrt{r_{\rm turn}^3 \over 8Gm}(\theta-\sin\theta).$$ The cycloid describes expansion from zero to a maximum radius, the turnaround radius $r_{\rm turn}$, as $\theta$ goes from $0$ to $\pi$, followed by contraction as $\theta$ goes from $\pi$ to $2\pi$. To compare this theory with a given galaxy in the simulation, the average radius of the dark matter particles is computed at a series of times before and after turnaround. To compare these radii to the cycloid prediction, the mass $m$ in equation \[cycloideq\] is computed from the actual turnaround sphere using the theoretically expected overdensity of $5.55$ [@peacock99].
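A minimal sketch of this comparison is given below; it simply tabulates the cycloid of Equation \[cycloideq\] for placeholder values of the turnaround radius and enclosed mass (the actual fits use the average dark matter radius and the mass of the turnaround sphere, and apply the small time offset described below).

```python
import numpy as np

# Cycloid of Eq. [cycloideq] with placeholder r_turn and m; not the analysis code.
G, MSUN, KPC = 6.674e-11, 1.989e30, 3.086e19      # SI units
MYR = 3.156e13                                    # seconds per megayear

def cycloid(r_turn_kpc, m_msun, n=200):
    """Return (t in Myr, r in kpc) along the cycloid for theta in [0, 2*pi]."""
    r_turn, m = r_turn_kpc * KPC, m_msun * MSUN
    theta = np.linspace(0.0, 2.0 * np.pi, n)
    r = 0.5 * r_turn * (1.0 - np.cos(theta))
    t = np.sqrt(r_turn ** 3 / (8.0 * G * m)) * (theta - np.sin(theta))
    return t / MYR, r / KPC

t, r = cycloid(r_turn_kpc=10.0, m_msun=1e10)
# The overdensity enclosed at turnaround is 9*pi^2/16 ~ 5.55, the value used
# to set m from the actual turnaround sphere.
print(f"turnaround overdensity: {9.0 * np.pi ** 2 / 16.0:.2f}")
```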
We find the spherical model to be an excellent starting point for understanding the collapse of the dark matter of each galaxy. Figure \[excycloid\] shows, for an example galaxy, the average radius of the galaxy’s dark matter particles from their centre of gravity and the corresponding cycloid. Virtually all the galaxies fit a cycloid at least up until turnaround.
The fitted cycloid is translated slightly in time so that the turnaround time coincides with that in the simulation. We find that the cycloid generally begins its ascent either at the beginning of the simulation or slightly before. We interpret this result to mean that the galaxies originate from fluctuations imposed at the start of the simulation, rather than from density variations resulting from subsequent events.
Figure \[overden\] shows histograms of the overdensity, time, and radius at turnaround. The approximate agreement of the overdensity with the theoretical value of $5.55$ [@peacock99] supports our interpretation for the dark matter collapse as the collapse of a spherically symmetric overdensity.
Contrasting Motion of Dark Matter and Gas {#contrast}
-----------------------------------------
Unlike the dark matter particles, the individual gas particles in the simulation cannot be traced throughout the simulation. Rather, they are generated anew after each hydrodynamic timestep. Since the structures are continually moving and changing shape, it is difficult to pin down gas movements on a small scale. What we can compute unambiguously, however, is an overall measure of collapse. We compute this separately for the gas and the dark matter so that we can compare their relative rates of motion into the centre of the galaxy.
![image](11873_solo) ![image](04191_solo) ![image](06729_solo)
At each redshift we compute the proper radius of a sphere centred on the galaxy that includes just that mass of gas or dark matter that was present inside the turnaround radius at the time of turnaround (the turnaround radius is the average radius of the dark matter particles of a galaxy at the time of turnaround).
Figure \[solo\] shows some individual galaxy histories that illustrate the types of results obtained. The abscissa shows the cosmic scale factor starting from the time of turnaround, which is slightly different for different galaxies. The ordinate is the radius as a fraction of the turnaround radius. Most of the time the gas collapses more slowly than the dark matter as shown in the first two graphs. In the left graph the gas does not collapse fast enough to outrun the expansion of the universe, and the gas radius increases with time. We call this the “retreating gas” type of galaxy. We will see that these are among the least massive of the galaxies we have studied. In the “lingering gas” type, shown in the middle graph, the gas collapses but more slowly than the dark matter. We will see in later sections that the filaments associated with these galaxies have the best fit to an isothermal cylinder. Finally, the right graph shows the gas collapsing more rapidly than the dark matter, the “plunging gas” case. There are $416$ retreating-gas galaxies, $722$ lingering-gas galaxies, and $62$ plunging-gas galaxies.
We will refer to the sphere containing the turnaround amount of mass as the *collapse sphere* for the gas or dark matter as the case may be for the time in question. The proper radius of the collapse sphere divided by the proper turnaround radius we will call the *collapse fraction*. When the radius is taken at the end of the simulation we will refer to this ratio as the *final collapse fraction* of the gas or dark matter.
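In code this bookkeeping is short. The following sketch, with synthetic particle data standing in for the simulation, computes the collapse radius and collapse fraction for one species; the inputs and numbers are hypothetical.

```python
import numpy as np

def collapse_radius(radii, masses, target_mass):
    """Radius of the sphere, centred on the galaxy, that encloses target_mass
    of the species (the mass inside the turnaround radius at turnaround)."""
    order = np.argsort(radii)
    enclosed = np.cumsum(masses[order])
    i = np.searchsorted(enclosed, target_mass)
    return radii[order][min(i, len(radii) - 1)]

def collapse_fraction(radii, masses, target_mass, r_turn):
    """Collapse-sphere radius divided by the turnaround radius."""
    return collapse_radius(radii, masses, target_mass) / r_turn

# Synthetic example: dark matter concentrated inside the turnaround radius,
# gas spread out to twice that radius.
rng = np.random.default_rng(0)
m = np.full(5000, 1.0)
r_dm, r_gas = rng.uniform(0, 1, 5000), rng.uniform(0, 2, 5000)
print(collapse_fraction(r_dm, m, 2500.0, 1.0),    # ~0.5 for the dark matter
      collapse_fraction(r_gas, m, 2500.0, 1.0))   # ~1.0 for the gas
```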
![ \[limitsdarkcoll\] [Comparison of the movement of dark matter and gas into the galaxy. ]{} Histograms show the final collapse fractions of dark matter (red squares) and gas (green triangles), and the ratio of the two for individual galaxies (black, filled circles). The graphs show that for $94.6$ per cent of the galaxies the dark matter collapses more rapidly than the gas. The final collapse fraction is defined in Section \[contrast\]. ](final)
Figure \[limitsdarkcoll\] shows histograms of the final collapse fractions of dark matter and gas for the $1{,}200$ galaxies studied in this paper. The histogram for the dark matter (the red curve with squares) peaks at a collapse fraction of about one half, in agreement with the virial expectation for collisionless particles subject only to gravitation. In contrast, the gas (green curve with triangles) generally collapses more slowly.
The black curve with filled circles in Figure \[limitsdarkcoll\] shows a histogram of the ratio of the collapse fraction of the gas to that of the dark matter for the same galaxy. The small fraction of the area under the curve to the left of a ratio of $1.0$ shows that most of the time the gas collapses more slowly than does the dark matter. We suggest that the pressure in the gas filament counter-balancing the force of gravity is responsible for this difference in behaviour.
![ \[timescattgfract\] [Gas fraction as a function of sound speed.]{} Each black circle represents an individual filament segment. The abscissa is the square of the sound speed and the ordinate is the gas fraction. The arrow shows the extent of the period of reionization of the filament segments. The cosmic baryon fraction is indicated by the red, dashed line. There is little stellar material in the filaments at these times, and its presence has been ignored. ](fraction)
The end of reionization marks the beginning of a period of increasing deviation of the gas fraction of the filament from the cosmic value as shown in the scatter plot in Figure \[timescattgfract\]. Here each open circle represents a filament segment.
![ \[collapsegf\] [Dark matter leaves gas behind to produce baryon enrichment of filaments.]{} The graph shows the average gas fraction of filament segments as a function of the collapse fraction of the dark matter of the galaxy at the redshift of the segment. Collapse fraction is defined and determined as specified in Section \[contrast\]. Note that in this figure maximal collapse is at the left and minimal collapse at the right. The horizontal, dotted, red line indicates the cosmic baryon fraction. ](collapse)
We suggest that this effect occurs when rapidly collapsing dark matter effectively leaves behind the gas in the filaments. Figure \[collapsegf\] shows the average gas fraction of filament segments as a function of the collapse fraction of the dark matter of the associated galaxy at the redshift of the filament. As the dark matter of a galaxy collapses, the filaments of that galaxy can become more baryon rich. Note that in this figure maximal collapse is at the left and minimal collapse at the right.
Filamentary as Opposed to Spherical Accretion. {#baryondeficits}
----------------------------------------------
Unlike many of the final galaxies, the turnaround sphere has a baryon fraction close to the cosmic value. The differential collapse of gas and dark matter can lead to significant baryon deficits in the final galaxies. Figure \[baryturncat\] compares histograms of the baryon fraction at turnaround and in the final galaxies.
![ \[baryturncat\] [Baryon fraction at turnaround and in final galaxy.]{} Histograms compare the baryon fraction at turnaround with that of the final galaxy for all of the galaxies in the present study. The green line with squares represents the baryon fraction of the matter within the turnaround radius. The black line with filled circles represents the baryon fraction of the final galaxies at the end of the simulation as determined by the galaxy finding algorithm. The vertical, red, dashed line indicates the cosmic baryon fraction. ](limits)
![ \[bfractpredict\] [Predicting baryon fraction assuming filament accretion. ]{} As described in Section \[contrast\], the baryon fraction of the final collapse sphere for the dark matter is predicted from the baryon fraction at turnaround and the final collapse fractions of the dark matter and gas. The computation assumes movement of the gas along a uniform filament. Each point represents a single galaxy. The abscissa shows the predicted value and the ordinate the actual one. The dashed, red line represents agreement of the prediction with the actual. The $5.4$ per cent of galaxies for which the dark matter does not collapse faster than the gas are excluded, as well as the galaxies where the gas, unlike the dark matter, does not turn around at all. ](predict)
![ \[bfractpredictcube\] [Predicting baryon fraction assuming spherical accretion. ]{} As described in Section \[contrast\], the baryon fraction of the final collapse sphere for the dark matter is predicted from the baryon fraction at turnaround and the final collapse fractions of the dark matter and gas. The computation assumes spherical accretion of the gas onto the galaxy. Each point represents a single galaxy. The abscissa shows the predicted value and the ordinate the actual one. The dashed, red line represents agreement of the prediction with the actual. The $5.4$ per cent of galaxies for which the dark matter does not collapse faster than the gas are excluded, as well as the galaxies where the gas, unlike the dark matter, does not turn around at all. ](predict_cube)
The differing motions of the dark matter and gas that we have just described in Section \[contrast\] can be related in a simple way to these baryon deficits. Consider, for simplicity, the lingering-gas situation in which the dark matter collapses faster than the gas. At the end of the simulation the final collapse sphere for the galaxy’s dark matter will have a baryon fraction less than that of the turnaround sphere and closer to the baryon fraction of the final galaxy. The gas in this sphere is some fraction of the gas at turnaround. If, for simplicity, we assume that the gas in both the initial turnaround sphere and the final dark matter collapse sphere is in the form of a uniform filament that runs along the diameter of both spheres, then the fraction of the turnaround gas that is in the dark matter collapse sphere can be estimated from the ratio of the radius of this sphere to that of the larger gas collapse sphere. Thus the baryon fraction of the final dark matter collapse sphere can then be computed knowing the baryon fraction at turnaround.
Figure \[bfractpredict\] shows that this simple filamentary scheme predicts the baryon fraction of the final dark matter collapse sphere quite well. Each point on the graph represents a single galaxy. The abscissa is the predicted baryon fraction and the ordinate is the actual one. The red, dashed line represents agreement between the two. Excluded from this plot are plunging-gas galaxies where the gas collapses as fast or faster than the dark matter. These are mostly the largest galaxies. Also excluded are the retreating-gas galaxies where the gas fails to turn around even though the dark matter does. These are the galaxies where the final collapse fraction of the gas is greater than or equal to one.
Figure \[bfractpredictcube\] shows the contrasting predictions if the relevant ratio of radii were instead cubed, as would be more appropriate for spherical accretion. The filamentary accretion model is a much better fit than is the spherical one.
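The two predictions can be written compactly. The sketch below follows the scheme described above under its stated assumptions (a uniform filament along a diameter for the filamentary case, the cube of the same ratio for spherical accretion); the input numbers are placeholders, with a turnaround baryon fraction near the cosmic value of the simulation.

```python
def predicted_baryon_fraction(f_turn, c_dm, c_gas, spherical=False):
    """Predicted baryon fraction of the final dark matter collapse sphere.

    f_turn        : baryon fraction of the turnaround sphere
    c_dm, c_gas   : final collapse fractions of the dark matter and gas
    The fraction of the turnaround gas retained inside the dark matter
    collapse sphere is (c_dm / c_gas) for a uniform filament along a
    diameter, and (c_dm / c_gas)**3 for spherical accretion.  All of the
    turnaround dark matter is enclosed by construction.
    """
    retained = (c_dm / c_gas) ** 3 if spherical else c_dm / c_gas
    gas = f_turn * retained
    dark = 1.0 - f_turn
    return gas / (gas + dark)

# A lingering-gas example: dark matter radius halves while the gas lags at 0.8.
print(predicted_baryon_fraction(0.15, c_dm=0.5, c_gas=0.8))                   # filamentary
print(predicted_baryon_fraction(0.15, c_dm=0.5, c_gas=0.8, spherical=True))   # spherical
```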
The baryon fractions shown in Figure \[bfractpredict\] and Figure \[bfractpredictcube\] are for the final dark matter collapse sphere and are somewhat lower than those of the final galaxies as identified by the galaxy finding algorithm DENMAX. Since the turnaround sphere is based upon the average radius of the dark matter particles at turnaround, we might expect the final dark matter collapse sphere to represent an inner sphere of the galaxy. DENMAX might be adding to this a more baryon rich region at the periphery of the galaxy.
From the results of this section we suggest that a major cause of baryon deficits in the galaxies we have studied is the retarded motion of the gas relative to the dark matter. This situation is in contrast to a mechanism whereby gas already in the halo is subsequently expelled.
Isothermal Cylinders Retard Gas {#retard}
-------------------------------
In Section \[contrast\] we distinguished three types of galaxies. The ones where the gas moves toward the galaxy as fast or faster than the dark matter we referred to as “plunging-gas” galaxies. Those where the gas shows no net movement toward the galaxy we referred to as “retreating-gas” galaxies. Those where the gas moves toward the galaxy but not as fast as the dark matter we referred to as “lingering-gas” galaxies.
The filaments of these three galaxy types have been tested separately for evidence of gravitationally bound, isothermal cylinders. Figure \[tbtfcssq\] shows the correlation of the mass per unit length with the square of the sound speed. The filaments from the lingering-gas galaxies (black, open circles) show the best match. The filaments of the retreating-gas galaxies (red triangles) fail to keep up with the expansion of the universe. The plunging-gas galaxies, shown by the green squares, have too much mass for isothermal cylinders.
![ \[tbtfcssq\] [Total mass per unit length as a function of sound speed.]{} The symbols show the average total mass (dark matter plus gas) for filaments associated with galaxies of the three types: black open circles for lingering-gas, green squares for plunging-gas, and red triangles for retreating-gas. As for Figure \[figcorr\] and Figure \[figcorrb\] the units on the axes have been chosen to illustrate the expectation of proportionality for the model, shown as the diagonal, dotted line. Only bins having at least five members are shown for each type. Vertical bars are single standard errors. The range of sound speeds during the process of reionization is indicated by the black arrow. ](sound)
![ \[excessgas\] [Available mass per unit length for the three galaxy types.]{} Histograms of the available mass per unit length for the three galaxy types. The abscissa is the available mass per unit length in the units used in Figure \[figcorr\], Figure \[figcorrb\], and Figure \[tbtfcssq\] to compare the mass per unit length to the sound speed. The ordinate is the fraction of that galaxy type. The black, solid line with filled circles represents lingering-gas galaxies. The green, dashed line with squares represents plunging-gas galaxies. The red, dotted line with triangles represents the retreating-gas galaxies. The vertical, dark green, dashed line at $1.0$ is the mass per unit length corresponding to one unit in the figures showing the correlation with the square of the sound speed. ](excess)
These contrasting results can be understood in terms of the key property of a gravitationally bound, isothermal cylinder, namely that the mass per unit length is limited by the sound speed of the gas. We might expect the cylinder to break down if the overall collapsing mass is overwhelmingly larger than can be accommodated at the current sound speed. This situation would correspond to the plunging-gas case. On the other hand, the filaments of the retreating-gas galaxies might have too little matter to withstand the expansion of the universe.
If an isothermal cylinder does form, one would expect the hydrodynamic forces that stabilize it to compete with the gravitational pull of the dark matter that is collapsing to form the halo of the galaxy. This effect would be expected to retard the flow of the gas relative to that of the dark matter. Only the excess gas that cannot be accommodated into the cylinder structure would be free to proceed unhindered. This situation might describe the filaments of the lingering-gas galaxies.
A measure of the mass per unit length available to form a filament can be obtained by dividing the total mass of the turnaround sphere by its proper diameter, a quantity we will call the *available mass per unit length*. Figure \[excessgas\] shows histograms of the available mass per unit length for the three galaxy types. The histogram for the lingering-gas galaxies peaks at the mass per unit length value corresponding to a sound speed squared of one in Figure \[figcorr\], Figure \[figcorrb\], and Figure \[tbtfcssq\]. This value, which marks the end of reionization, is indicated by the vertical, dark green, dashed line. The retreating-gas galaxies peak to the left and the plunging-gas galaxies to the right.
![ \[baryloss\] [Change in baryon fraction after turnaround.]{} The abscissa of these histograms shows the ratio of the baryon fraction of the final collapse sphere to that of the turnaround sphere. The ordinate shows the fraction of galaxies of each of the three types. The vertical, black, dashed line indicates no change during this period. ](loss)
The change in baryon fraction between turnaround and the final collapse sphere for the three galaxy types is shown in Figure \[baryloss\]. As expected the retreating-gas galaxies lose the most baryons and the lingering-gas galaxies lose fewer baryons. The plunging-gas galaxies actually gain baryons.
![ \[turnavail\] [Available mass per unit length at turnaround as a function of the total mass of the galaxy. ]{} Each symbol represents a single galaxy. The available mass per unit length is considered to be the ratio of the total mass within the turnaround radius to the diameter of the turnaround sphere. The left ordinate shows mass per unit length in the units used in Figure \[figcorr\], Figure \[figcorrb\], and Figure \[tbtfcssq\] to compare the mass per unit length to the sound speed. The right ordinate shows the mass per unit length in units of ${\, {\rm M}}_\odot {\, {\rm kpc}}^{-1}$ for comparison. If the collapse is self-similar the expected slope of this log-log plot is two-thirds. ](avail)
As expected, available mass per unit length increases with the total mass of the final galaxy. This relation is shown in Figure \[turnavail\]. Since the turnaround radius, and hence the mass within it as well, is determined by the average radius of the galaxy’s dark matter at turnaround, a simple hypothesis is that the log of the available mass per unit length should be proportional to the log of the total galaxy mass with a slope of two-thirds. The figure, in which each symbol represents a single galaxy, shows approximate agreement with this simple, self-similar picture.
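The two-thirds slope quoted above follows from a one-line argument, sketched here under the assumption that turnaround occurs at a roughly universal overdensity, so that the turnaround density $\rho_{\rm turn}$ is nearly the same for all galaxies: $$M \propto \rho_{\rm turn}\, r_{\rm turn}^{3}
\quad\Longrightarrow\quad
\frac{M}{2\, r_{\rm turn}} \propto \rho_{\rm turn}^{1/3}\, M^{2/3},$$ so that, at fixed turnaround density, the available mass per unit length scales as the two-thirds power of the total mass.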
Too Big To Fail Galaxies {#sectiontbtf}
------------------------
Considerable success has been seen in attempts to match observed galaxies to haloes seen in simulations based upon the $\Lambda$CDM model (for a review see @somerville15). A circular velocity for observed galaxies can be derived from the Doppler broadening of HI lines. A reasonable relation between this measurement and the circular velocity of simulated haloes can be adduced which leads to rough agreement between the number densities of observed and simulated galaxies as a function of circular velocity.
However, important discrepancies remain at low halo masses. The term “too big to fail” has been applied to haloes having a circular velocity between about $25$ and $45 {\, {\rm km}} {\, {\rm s}}^{-1}$ [@boylankolchin09; @boylankolchin11; @boylankolchin12; @ferrero12; @garrisonkimmel14; @tollerud14; @papastergis15; @klypin15; @papastergis16]. The frequency of these galaxies in observations is much lower than what would be predicted from the $\Lambda$CDM model. The choice of terminology comes from the paradox that these galaxies are not seen despite apparently being massive enough to retain much or all of their gas throughout reionization. This situation is in contrast to galaxies with lower circular velocities, whose relative absence from observation is more easily explained by reionization.
![ \[tbtf\] [Scatter plot of the final baryon fraction of galaxies as a function of the halo circular velocity. ]{} The “too big to fail” galaxies are considered to be those with circular velocity between $25$ and $45 {\, {\rm km}} {\, {\rm s}}^{-1}$ [@papastergis16] as indicated by the blue, dotted, vertical lines. Shown on the ordinate is the baryon fraction of the final dark matter collapse sphere (see Section \[contrast\] for definition). The red, dashed line indicates the cosmic baryon fraction. The circular velocity on the abscissa is computed from the dark matter in this sphere using Equation \[vcircequation\]. The green squares represent the plunging-gas galaxies, namely those where the gas collapses as fast or faster than the dark matter. The red triangles represent the retreating-gas galaxies, namely those where the gas fails to turn around unlike the dark matter. The black, filled circles are the galaxies between these two extremes, which we have called lingering-gas galaxies. These are the ones we suggest are the “too big to fail” galaxies and which show the best agreement with an isothermal cylinder. They are also the ones used to produce Figure \[bfractpredict\] and Figure \[bfractpredictcube\], which show that the baryon fraction can be predicted from the assumption that the matter collapse occurs along a filament rather than spherically. The reduction of the baryon fraction below the cosmic value for these galaxies might make them invisible. ](tbtf)
[The “too big to fail” concept is based on observations of present day galaxies, whose detailed histories are uncertain. However, the simulated galaxies we studied are expected to include the mass range of the “too big to fail” phenomenon, and so might be relevant models.]{} The scatter plot of Figure \[tbtf\] shows the baryon fraction of the final collapse sphere as a function of the circular velocity of its dark matter as estimated from Equation \[vcircequation\]. $$\label{vcircequation}
V_c = \sqrtsign{\frac{G M_d}{R}}$$ where $V_c$ is the circular velocity, $M_d$ the mass of dark matter, $R$ the radius of the sphere, and $G$ the gravitational constant. The approximate circular velocity range of the “too big to fail” galaxies is delimited by the vertical, light blue, dotted lines. These galaxies have significant baryon deficits that might make them difficult to observe. The green squares represent the galaxies where the gas collapses as fast or faster than the dark matter, i.e. the plunging-gas galaxies. The baryon fraction for these galaxies is close to the cosmic value. The red triangles represent the retreating-gas galaxies, where the gas fails to turn around. As might be expected, these are clustered at the low end of the circular velocity range and have the greatest baryon deficits. The black, filled circles are the lingering-gas galaxies. We suggest that the lingering-gas galaxies are the “too big to fail” galaxies.
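As an illustration of the scale involved, the snippet below evaluates Equation \[vcircequation\] for a placeholder halo; a few times $10^{9}{\, {\rm M}}_\odot$ of dark matter inside several kiloparsecs falls inside the $25$ to $45 {\, {\rm km}} {\, {\rm s}}^{-1}$ window. The numbers are illustrative and are not taken from the simulation.

```python
import math

G, MSUN, KPC = 6.674e-11, 1.989e30, 3.086e19   # SI units

def v_circ(m_dark_msun, r_kpc):
    """Circular velocity of Eq. [vcircequation], returned in km/s."""
    return math.sqrt(G * m_dark_msun * MSUN / (r_kpc * KPC)) / 1.0e3

# Placeholder halo: ~2e9 Msun of dark matter inside 6 kpc gives ~38 km/s,
# inside the 25-45 km/s "too big to fail" window of Figure [tbtf].
print(f"{v_circ(2.0e9, 6.0):.1f} km/s")
```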
We suppose that the formation of gravitationally bound, isothermal cylinders inhibits the collapse of gas into haloes otherwise thought to be massive enough to have plenty of gas. These galaxies are then missing from observations because they form too few stars, that is, they are “too big to fail” but fail because of the structure of the filaments.
An Alternative Dark Matter Model {#sectiondarkmodel}
--------------------------------
In this section we consider an alternative filament model in which the gas is passively trapped in an overwhelming dark matter potential determined by the aggregation properties of dark matter. Could such a model be consistent with the filaments we have studied?
We do not favor this model because the filaments are undergoing reionization during the time in question, presumably by ionizing radiation coming in from the outside. It is not clear theoretically how external ionizing radiation would affect the aggregation properties of the dark matter except through its effect on the gas. Even a collection of the smallest galaxies has filament gas with the full range of sound speed during reionization.
To evaluate such a model we computed the bulk velocities of the dark matter and gas as a function of transverse distance from the filament axis.
To improve the accuracy of a velocity profile, we refined the determination of the position and orientation of the filament axes. Anticipating that the gas velocities would be minimal along the axis, a principal component analysis was done using just the gas particles having a transverse velocity less than $5.38 {\, {\rm km}} {\, {\rm s}}^{-1}$ [^4]. The orientation of the newly determined axes was close to that of the original ones, and considerably greater directionality was achieved.
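A minimal version of this refinement is sketched below: restrict to the gas particles that are slowest in the transverse direction and take the principal eigenvector of the covariance of their positions. The velocity cut is the one quoted above; the function and its inputs are otherwise schematic, and in practice the transverse velocity is defined with respect to the previously determined axis.

```python
import numpy as np

def refined_axis(pos, v_trans, v_cut=5.38):
    """Principal direction (unit vector) of the gas particles whose transverse
    velocity, in km/s, is below v_cut, from the covariance of their positions."""
    slow = pos[v_trans < v_cut]
    vals, vecs = np.linalg.eigh(np.cov(slow.T))
    return vecs[:, -1]            # eigenvector of the largest eigenvalue
```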
Figure \[vprof\] compares the velocity dispersion profiles of the dark matter and the gas of filaments from lingering-gas galaxies whose average sound speed squared is less than or equal to one in our usual units (see Figure \[figcorr\]). These are the filaments we have found to best match the predictions of the isothermal model we have put forward in this paper. Despite the uncertainties inherent in this computation, it is striking that the gas velocities are reduced near the axis as would be expected from hydrodynamic effects. The dark matter velocities, by contrast, are greater and are roughly constant out to nearly the assumed extent of the filament.
![ \[vprof\] [Transverse velocity dispersion profiles.]{} The median transverse velocity dispersion of the dark matter and gas is plotted as a function of distance from the axis of the cylinder. Only filaments from lingering-gas galaxies that have an average sound speed squared of less than one are included. Dark matter is represented by the dashed, red line with squares and gas by a solid, green line with filled circles. The vertical lines at each point on the curves indicate the range of two-thirds of the values. The vertical, dotted, black line at the left shows the range of the sound speed squared of the filaments in our usual units (see Figure \[figcorr\]). The units for the sound speed squared are chosen to correspond to the expected mass per unit length (see Figure \[figcorr\] for a description of these units). The units for the bulk velocities squared of the gas and dark matter are chosen to be twice that for the sound speed in order to account for the difference between Equation \[isoequation\] and Equation \[velequation\]. The unit corresponds to a velocity of $25.7 {\, {\rm km}} {\, {\rm s}}^{-1}$. ](velocity_profiles_distr_rh1b)
@eisenstein97 have derived a relation between the transverse velocity dispersion and the mass per unit length, $\Upsilon$, of a collisionless cylinder at virial equilibrium. $$\label{velequation}
\Upsilon = \frac{v^{2}}{G}$$ where $v^{2}$ is the average of the square of the transverse velocity and $G$ is the gravitational constant [@eisenstein97][^5]. The relation is similar to that for a gravitationally bound, isothermal gas cylinder (Equation \[isoequation\]) with the bulk velocity substituted for the sound speed.
![ \[tally\] [Virial budget.]{} This figure compares the mass per unit length of a filament to that required to balance in a virial equilibrium both the sound speed of the gas and the bulk velocities of the dark matter and gas as explained in Section \[sectiondarkmodel\]. ](virial_tally3_rh1b)
Our strategy, to better understand the role of the dark matter, is to examine the total virial budget, taking into account both the sound speed of the gas and the bulk velocities of the dark matter and gas. In an hypothesized virial equilibrium each source of kinetic energy must be balanced by a mass per unit length for the filament. For simplicity, we have taken the mass per unit length of the gravitationally bound, isothermal cylinder as that necessary to balance the sound speed of the gas. For the bulk velocities we have taken the mass per unit length for the collisionless counterpart described above (Equation \[velequation\]).
Figure \[tally\] shows that the actual mass per unit length is consistently too small for both gas and dark matter to be present together in a virial equilibrium. This figure is a histogram for filaments from lingering-gas galaxies having a sound speed squared less than or equal to one, using our usual units. These are the filaments we have found to match our isothermal cylinder model the best, as shown in Figure \[tbtfcssq\]. The bin on the abscissa is computed by taking the total mass per unit length of the filament and dividing it by the mass per unit length required to balance both the sound speed of the gas and the bulk velocities of the dark matter and gas.
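One plausible reading of this budget is a single ratio per filament, sketched below with illustrative numbers; whether the velocity terms should be weighted by the mass fractions of the two components is left open here.

```python
G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2

def virial_budget_ratio(upsilon_actual, cs2, v2_dm, v2_gas):
    """Actual mass per unit length (kg/m) divided by the mass per unit length
    needed to balance, in a virial equilibrium, the gas sound speed
    (2*cs^2/G, Eq. [isoequation]) plus the bulk velocity dispersions of the
    dark matter and gas (v^2/G each, Eq. [velequation]); velocities in m/s."""
    required = (2.0 * cs2 + v2_dm + v2_gas) / G
    return upsilon_actual / required

# Illustrative numbers: a filament near the end of reionization (c_s ~ 12.9 km/s)
# carrying ~5e18 kg/m, with dark matter and gas dispersions of 25 and 10 km/s.
# The resulting ratio is well below one, the sense of the deficit in Figure [tally].
print(virial_budget_ratio(5.0e18, cs2=(12.9e3) ** 2,
                          v2_dm=(25.0e3) ** 2, v2_gas=(10.0e3) ** 2))
```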
These results make it hard to justify a model in which the gas is part of a dark matter structure at virial equilibrium. Taking the confines of the filament out to a larger radius may not solve the problem. The periphery typically has an overdensity of only a few compared to a central overdensity in the thousands. Furthermore, the otherwise mostly constant velocity of the dark matter increases at large radius.
We suggest that only the gas of these filaments is close to virial equilibrium. This supposition fits with the fact that the lingering-gas galaxies are the ones that are the best match to a gravitationally bound, isothermal gas cylinder. In these galaxies the dark matter collapses into the forming galaxy more rapidly than does the gas. The moving dark matter provides an environment in which the gravitational constant appearing in the isothermal gas cylinder equations is effectively altered because of the potential energy of the dark matter.
Number and Alignment of Filaments {#sectionalign}
---------------------------------
If the filaments in the simulation correspond to the rods of the model, we might expect to see for each galaxy two filaments protruding from the centre of the galaxy that are roughly aligned end-to-end.
![ \[numpeak\] [Number of rods per galaxy.]{} Data are shown for each of the six redshifts studied: $8.90$ (black, filled circles), $8.09$ (green X’s), $7.33$ (red squares), $6.69$ (dull green triangles), $5.85$ (orange diamonds), and $5.134$ (blue stars). ](peak)
Figure \[numpeak\] shows that the number of filaments identified for each galaxy is generally no more than two[^6]. The six different curves are histograms of the number of filaments for a galaxy at the six redshifts studied. In keeping with the model, in the few cases where more than two were found, only the two most massive were retained for further analysis.
![ \[peakalign\] [Angle between filament segments.]{} Black, solid line with squares shows a histogram of the angle between the two segment directions in cases where two are found. Red, dotted line with triangles shows the same results when the entries have been weighted by the reciprocal of the sine of the angle. ](align)
When two segments are present at the same time for a galaxy, the preferred angle between their directions is close to $180$ degrees, as if the segments were part of a single, straight rod passing through the centre of collapse. This finding dovetails nicely with the simple paradigm for successive collapse in three dimensions.
The black, solid line with squares in Figure \[peakalign\] shows a histogram of the angles between the directions of pairs of segments. Results for the six redshifts have been pooled, since the individual results are very similar. The red, dotted line with triangles shows the same histogram with entries weighted by the reciprocal of the sine of the angle. This latter plot is suited to a situation in which the two filaments are assumed to come together in three dimensions rather than forming within a predetermined plane.
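For reference, the weighting simply divides each histogram entry by $\sin\theta$, the measure that two independent, isotropically oriented directions in three dimensions would produce. A minimal sketch follows; the binning choices are ours, and angles of exactly $0$ or $180$ degrees would need special handling.

```python
import numpy as np

def angle_histogram(angles_deg, bins=18, weight_by_inverse_sine=False):
    """Histogram of the angle between paired filament directions, in degrees.
    With weight_by_inverse_sine=True each entry is weighted by 1/sin(angle),
    removing the sin(theta) factor expected for isotropic 3D directions."""
    angles_deg = np.asarray(angles_deg, dtype=float)
    weights = 1.0 / np.sin(np.radians(angles_deg)) if weight_by_inverse_sine else None
    return np.histogram(angles_deg, bins=bins, range=(0.0, 180.0), weights=weights)
```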
In this paper, for clarity, we have called each rod protruding from the centre of the galaxy a filament even though it is attractive theoretically to envision the collapse of a single filament passing through the centre of the galaxy.
Discussion
==========
The theme of this paper is that the gas in the intergalactic filaments during the first gigayear can be best understood in terms of simple hydrodynamic principles. In contrast, the dark matter can be understood in terms of the spherically symmetric collapse of an overdensity in an expanding universe.
[The filaments undergoing reionization fit the isothermal model best. These filaments tend to have dark matter spread roughly evenly over the entire length. Isothermal cylinders may also be present at later times, but be harder to demonstrate because large dark matter clumps may negate the assumption of uniform mixing.]{}
It is important to emphasize that the filaments we see should be distinguished from the generally larger ones others study at lower redshifts, which often contain within them multiple galaxies. Most papers on the cosmic web have dealt with these massive, later structures.
@eisenstein97 has described a method for determining the dynamical mass per unit length of a massive filament containing hundreds of galaxies such as might be observed in redshift surveys at low redshift. The method can be used to estimate the mass to light ratios of these filaments. The method is impressive and has theoretical antecedents in common with our model. However, the method is not easily applicable to the very small filaments associated with the galaxies of the first gigayear.
@danovich have reported that galaxies tend to be at the centre of three filaments contrary to our findings. We also often see more than two filaments extending from a single point. However, we believe that these are likely to be the result of later collisions among filaments. Most of the $1{,}200$ largest galaxies can be traced back to an early stage when no more than two roughly end-to-end filaments are present.
To study the numbers and properties of filaments, it is necessary to identify them in an algorithmic fashion free of subjective bias. We find that visual inspection can be deceptive. A plane viewed edge on or the intersection of two planes can appear as a spurious filament. In addition, some of the filament segments are difficult to pick out from the background by eye.
We believe that our criteria for selection of filament candidates are good, and that it is not unreasonable to discard additional faint filaments that may be seen by eye.
It should also be appreciated that, at early times, the centre of a future galaxy connecting two end-to-end filaments may not be apparent upon casual inspection. Rather it may appear to the eye simply as a point on a single continuous filament. This situation could lead to an apparent preference for more than two filaments emanating from a single galaxy.
We argue that, during the reionization process, the gas assumes a structure that can be understood in terms of a gravitationally bound, isothermal cylinder at equilibrium at about $10^{4}{\, {\rm K}}$. We find that this temperature, close to the Lyman alpha cooling floor, is tightly maintained during reionization. This temperature regulation may be important in the stability of the cylinders during this period.
The change in sound speed during reionization can be seen reflected in a change in the mass per unit length of the cylinder (Figure \[figcorr\]). We consider this finding together with the proportionality constant to be strong evidence for gravitationally bound, isothermal cylinders.
[To further justify our emphasis on the importance of the gas component, we explored the virial consequences of a more conventional model in which the structure is primarily determined by the aggregation properties of dark matter. There appeared to be too little mass per unit length for such a model.]{}
It is important to note that the simulation follows in detail the ionization of the gas by radiation from the actual sites of stellar formation. This is in contrast to many simulations that merely impose a uniform radiation field and thus might miss such structural nuances. [For such simulations we would not expect dramatic changes in the physics of the gas in the filaments because there is very little stellar material in the filaments. However, the time course of reionization would probably be altered with the result that there would be fewer filaments with intermediate ionization stages. Demonstration of isothermal cylinders might then be more difficult.]{}
We find remarkable uniformity in the behaviour of the dark matter of the $1{,}200$ largest galaxies in the simulation. These galaxies range in total mass, including dark matter, gas, and stars, over two orders of magnitude. The dark matter collapse follows closely the spherically symmetric cycloid model at least through turnaround. The overdensity at turnaround agrees roughly with theoretical expectation. The matter collapses initially into a rough plane and then into a one-dimensional structure within the plane, resulting in up to two filaments protruding from the centre of the galaxy oriented end to end. At any time, approximately three quarters of the galaxies have at least one filament emerging from them.
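For orientation, the theoretical expectation referred to here is the turnaround overdensity of a spherical top-hat perturbation in a matter-dominated background, $9\pi^{2}/16\simeq5.6$, which follows from the cycloid solution. The sketch below only evaluates that standard number; it is not part of the simulation analysis.

```python
import numpy as np

# Spherical top-hat (cycloid) collapse in a matter-dominated background:
#   r = A (1 - cos(theta)),   t = B (theta - sin(theta)),   A^3 = G M B^2.
# Relative to the background density 1/(6 pi G t^2), the overdensity is
#   9 (theta - sin(theta))^2 / (2 (1 - cos(theta))^3).

def overdensity(theta):
    return 9.0 * (theta - np.sin(theta)) ** 2 / (2.0 * (1.0 - np.cos(theta)) ** 3)

print("overdensity at turnaround: %.3f" % overdensity(np.pi))   # 9*pi^2/16 ~ 5.55
```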
An outstanding problem in cosmology is the discrepancy between the observed number of satellite galaxies of the Milky Way and the predicted number from simulations using dark matter only [@kauffmann93]. Our results suggest a mechanism that suppresses the entry of gas into dark matter haloes. This phenomenon would be in contrast to the expulsion of gas from haloes by heating and photoionization.
One might imagine that filaments would facilitate the entry of gas into the halo by restricting its angular momentum. However, our results argue that, on balance, the gas that can be accommodated into an isothermal cylinder is retarded in its motion.
A simulation with greater resolution would be desirable to confirm and expand upon these results. Higher resolution might reveal structures important in the formation of galaxies smaller than the ones studied here. It would be interesting to see if there is a minimum galaxy size that can form according to our filament model. Would very small galaxies form very early when the temperature is substantially lower than the Lyman alpha cooling floor?
A higher resolution would also help to establish whether the dark matter really is as clumpy compared to the gas as it appears. Sometimes increasing resolution can expose artifacts arising from the discreteness of the simulation elements. We would not expect a higher resolution to reveal the gas in the filaments as fragments rather than continuous. This is because, when discreteness is important, it is usually the lower resolution simulation that is fragmented rather than the higher one. For a discussion of discreteness effects see @power16.
[Reionization is regarded as a watershed event in the history of the universe. A galaxy that formed before reionization could retain an ancient population of stars that could not have formed afterward if the galaxy were too small. @brown14 have argued from an analysis of individual stars that some ultra-faint dwarf satellites of the Milky Way are indeed such galaxies that have survived to the present day. The galaxies in our study all began their formation prior to reionization. We do not know their likely fate, but it is plausible that they could share common features with some small galaxies observed at redshift zero. It is thus not unreasonable to compare our galaxies with galaxy populations observed today that exhibit the “too big to fail” effect.]{}
A comprehensive consideration of the “too big to fail” phenomenon is beyond the scope of this paper. We note, however, that our model differs from most of the other ones in focussing on the transport of gas into the galaxy, rather than on baryonic effects and star formation within the galaxy (see, for example, @governato12 [@zolotov12; @brooks13; @veraciro13; @arraki14; @brooks14; @madau14b; @jiang15; @pawlowski15; @agertz16; @read16; @wetzel16; @zhu16]).
A detailed analysis of the motion of gas into galaxies is beyond the scope of this paper. Our model is consistent with the “cold mode” of galaxy accretion. We see no evidence for a hot gas stage in the formation of our galaxies. We have argued from rates of collapse of the dark matter and gas that the gas moves into the galaxy along a single, linear filament.
An important open question involves the timing and mechanism of the reionization of the universe. To evaluate the role of ionizing radiation from stars one must know the fraction of stellar radiation that escapes from its point of origin. It is clear that the gas surrounding the centre of collapse of a galaxy is not spherically symmetric. Our results may provide a more realistic starting point for such calculations.
Finally, our model suggests the central presence of gas at a very early time in the formation of the galaxy halo. Perhaps this gas contributes to the formation of a “cored” halo profile as opposed to the “cuspy” one suggested from simulations containing only dark matter [@gilmore07; @evans09; @deblok10; @strigari10; @amorisco12; @martinez15].
Summary {#sectionsummary}
=======
We propose a model for the development of intergalactic filaments during the first gigayear of the universe. In this model the mass per unit length and structure of the filament are determined, not by the potential well of the enclosing dark matter, but by the hydrodynamic properties of the gas.
The model is described in the context of galaxy formation. Up to two extended filaments may protrude from the centre of the collapsing galaxy. They tend to be oriented end-to-end as if they comprised a single structure. The total mass per unit length of a filament segment is proportional to the square of the sound speed of the gas with a proportionality constant equal to that predicted for a gravitationally bound, isothermal cylinder. The sound speed of a gas filament depends only upon its temperature and ionization state and not upon the density. These structures generally contain gas and dark matter in roughly the cosmic ratio. The dark matter contributes to the gravitational field in proportion to its abundance as if it were uniformly mixed with the gas.
The dark matter of each galaxy collapses according to the simplest model for a spherically symmetric overdensity in an expanding universe. This cycloid profile is followed until some time after turnaround. The overdensity of the material at turnaround agrees with theoretical predictions.
After reionization the average gas fraction of a filament segment may increase as the collapse of dark matter progresses.
Overall the dark matter collapses to roughly the same extent for each galaxy. However, the gas collapse varies. Three types of galaxies are distinguished. In the “plunging gas” galaxies, the gas collapses as fast as or faster than the dark matter. These galaxies appear to collapse from overdensities having far too much matter to form a gravitationally bound, isothermal filament under the ambient thermal conditions. In the “retreating gas” galaxies, the gas fails to move toward the halo and may expand with the universe. These galaxies appear to collapse from overdensities having too little matter. Finally, in the “lingering gas” galaxies the gas collapses but more slowly than does the dark matter. These appear to have available masses in a suitable range to form gravitationally bound, isothermal cylinders. Presumably, the gas that can be accommodated in the isothermal structure is retarded in its motion, whereas excess gas can proceed unhindered.
For the lingering gas galaxies the different overall rates of migration of the dark matter and gas predict the final baryon fraction of the galaxy under a simple assumption of uniform, filamentary accretion, but are inconsistent with spherical accretion.
The model works for a galaxy mass range that includes that of the so-called “too big to fail” galaxies and may explain the peculiar absence of these galaxies from observational surveys.
The model may provide a mathematical framework for understanding a variety of open questions about structure formation in the early universe. The sizes of the structures in this model suggest a minimum simulation resolution that may be necessary to recreate the effects seen here. In addition, the model may contribute to our intuition about the roles of dark matter and gas in structure formation.
Acknowledgments {#acknowledgments .unnumbered}
===============
We are grateful to Nickolay Y. Gnedin for providing us with the output of his simulation along with his visualization software IFRIT.
Algorithm For Filament Identification {#filamentid}
=====================================
Each filament is selected in the context of a forming galaxy. Figure \[geom\] shows the geometry for this selection for an example galaxy. The selection takes place in a comoving coordinate system whose centre is the centre of mass of the collapsing dark matter particles of the galaxy. Unlike the gas particles, the individual dark matter particles of a galaxy can be followed throughout the simulation. The XY plane is set to the plane of the gas as determined by a principal component analysis of the gas particles within a sphere of radius $266$ kpc. The large black rectangle in Figure \[geom\] shows this XY plane. Annuli above (red) and below (green) the plane determine a volume that is divided into equal bins by the light blue rectangles. The intersection of the bins with the plane is shown by the black annulus. The annulus is defined by two radii of 88.8 and 133.2 kpc.
![ \[geom\] [Coordinate system for identification of intergalactic filaments.]{} Outlined in black is the XY plane with the origin of the coordinate system at its centre. One pair of annuli is shown with members above (red) and below (green) the central plane. The volume between the two annuli is divided into equal, azimuthal bins. The intersection of the volume with the plane is shown in black. Planes marking the boundaries of the bins are outlined in light blue. ](geom)
![ \[filselect\] [Identification of a filament extending from the centre of a galaxy.]{} The centre of the collapsing dark matter is at the centre of the coordinate system shown in Figure \[geom\], and the plane shown in that figure is oriented to the plane of the gas. The gas particles in a region surrounding the centre of a collapsing galaxy are shown in blue. For clarity, only some of the bins are shown. ](select)
A bin having more than twice the average amount of gas per bin is considered to contain a filament. In cases where neighboring bins of the annulus both meet this criterion, the gas in both bins is merged and analyzed together. The filament identification process is completely defined by this algorithm. There are no subjective elements.
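A schematic version of this selection rule is sketched below. Only the thresholding and merging are shown; the construction of the annular, azimuthal bins is not reproduced, and merging across the azimuthal wrap-around is our reading of the rule rather than something stated explicitly.

```python
import numpy as np

def select_filament_bins(gas_mass_per_bin, threshold=2.0):
    """Return candidate filaments as lists of azimuthal-bin indices: bins whose
    gas mass exceeds `threshold` times the average over the annulus, with runs
    of adjacent flagged bins merged into a single candidate."""
    m = np.asarray(gas_mass_per_bin, dtype=float)
    flagged = m > threshold * m.mean()
    candidates, run = [], []
    for i in range(len(m)):
        if flagged[i]:
            run.append(i)
        elif run:
            candidates.append(run)
            run = []
    if run:                                       # run touching the end of the array
        if candidates and candidates[0][0] == 0:
            candidates[0] = run + candidates[0]   # join across the azimuthal wrap
        else:
            candidates.append(run)
    return candidates
```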
The algorithm identifies visually clear segments that are usually part of longer filaments extending radially from the centre of collapse of the galaxy. Also identified, and therefore included in our analysis, are structures that may not be readily identified or interpreted by eye.
These filament segments are used in Section \[sectionalign\] to compare the relative orientations of the filaments with the prediction of the model. However, for the analyses in the remaining sections, the set of segments is further culled using a “range test”. In this test, the filament segment is divided into three parts. To pass the test, the mass of gas in each part can differ from the average of the three parts by no more than 20%. This test ensures that the identified structure grossly resembles a filament. This is important because we do not allow any subjective inspection of a segment to influence whether it is included.
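As we read it, the range test can be sketched as follows; the 20% tolerance is the value quoted above.

```python
def passes_range_test(mass_thirds, tolerance=0.20):
    """Divide the segment into three parts along its length and require the gas
    mass of each part to lie within `tolerance` of the mean of the three."""
    m1, m2, m3 = mass_thirds
    mean = (m1 + m2 + m3) / 3.0
    return all(abs(m - mean) <= tolerance * mean for m in (m1, m2, m3))
```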
In addition, we require that the sphere of influence of the galaxy cover at least half of the filament segment in question. This is done to ensure that nearby galaxies do not result in the selection of the same filament. The sphere of influence is considered to be a sphere with radius equal to the turnaround radius in comoving coordinates centred at the centre of the galaxy at the time in question.
A total of $2{,}300$ filament segments selected from the six redshifts considered survived all tests.
For structural analysis the orientation of each segment is determined more precisely using a principal component analysis.
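A sketch of such an orientation estimate is given below; the optional mass weighting is an assumption on our part, since the text does not specify whether the analysis is mass-weighted.

```python
import numpy as np

def segment_direction(positions, masses=None):
    """Principal axis of a filament segment: the eigenvector of the (optionally
    mass-weighted) covariance matrix of the particle positions (N, 3) with the
    largest eigenvalue, returned as a unit vector."""
    x = np.asarray(positions, dtype=float)
    centre = np.average(x, axis=0, weights=masses)
    d = x - centre
    if masses is None:
        cov = d.T @ d / len(d)
    else:
        w = np.asarray(masses, dtype=float)
        cov = (d * w[:, None]).T @ d / w.sum()
    eigvals, eigvecs = np.linalg.eigh(cov)
    return eigvecs[:, np.argmax(eigvals)]
```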
[^1]: Dark matter does have a preferred length scale arising from the sound horizon at recombination. However, this scale is vastly greater than that of galaxies and their attached filaments.
[^2]: The amount of stellar material in the filaments is negligible, and we neglect it.
[^3]: [It is impractical to subdivide the filament segments into regions with different amounts of dark matter even though the dark matter is clumpy. The gas tends to be smoother and not to clump with the dark matter.]{}
[^4]: All velocities are computed relative to a coordinate system moving with the average velocity of the gas particles within 311 comoving kiloparsecs of the centre of the galaxy.
[^5]: This differs from equation 13 of these authors by a factor of 2. Their equation refers to line-of-sight velocity.
[^6]: The filaments used for the graphs in this section are taken from the collection of filaments before they are culled by the range test and the sphere of influence.
|
---
abstract: 'We report new experimental studies to understand the physics of phonon sensors which utilize quasiparticle diffusion in thin aluminum films into tungsten transition-edge-sensors (TESs) operated at 35 mK. We show that basic TES physics and a simple physical model of the overlap region between the W and Al films in our devices enables us to accurately reproduce the experimentally observed pulse shapes from x-rays absorbed in the Al films. We further estimate quasiparticle loss in Al films using a simple diffusion equation approach.'
author:
- 'J.J. Yen'
- 'B. Shank'
- 'B.A. Young'
- 'B. Cabrera'
- 'P.L. Brink'
- 'M. Cherry'
- 'J.M. Kreikebaum'
- 'R. Moffatt'
- 'P. Redl'
- 'A. Tomada'
- 'E.C. Tortorici'
title: ' Measurement Of Quasiparticle Transport In Aluminum Films Using Tungsten Transition-Edge Sensors '
---
Introduction
============
Quasiparticle transport dynamics have been studied in the lab by many groups [@diff_study_1; @diff_study_2; @Trapping] using different materials, fabrication processes, and readout schemes. Quasiparticle transport in Al films plays an important role in the design specifications of Cryogenic Dark Matter Search (CDMS) detectors [@CDMS]. These detectors utilize photolithographically patterned films of sputtered Al and W on both sides of high-purity, kg-scale, Ge and Si crystals. The superconducting Al and W films perform two roles simultaneously: they absorb phonon energy and they serve as ionization collection electrodes.
When a particle interacts with a CDMS detector, electron-hole pairs and phonons are created. Under typical operating conditions, a $\sim$1 V/cm bias is used to drift the e$^-$/h$^+$ pairs through the bulk of the crystal so charge can be collected at the detector surfaces. At the same time, the athermal phonons produced by the event make their way to the detector surfaces where they can be absorbed in the Al film, breaking Cooper pairs and creating quasiparticles. Ideally, the quasiparticles diffuse randomly in the Al until they get trapped in the overlap region between the Al and W films, where the superconducting energy gap is smaller than in the Al film alone [@Booth]. This trapped energy gets absorbed by an attached W-TES, adding heat and providing the detector’s phonon signal for that event. We call these phonon sensors Quasiparticle-trap-assisted-Electrothermal-feedback Transition-edge-sensors (QETs) [@QET].
The quasiparticle (qp) trapping length in CDMS Al films impacts overall detector energy performance. Here we present results from a detailed study of energy collection and qp propagation in Al films coupled to W-TESs and describe an innovative model that explains QET pulse shapes and overall performance, and provides a way to measure qp trapping lengths in thin films and the efficiency of energy transport from the qp system to the TES electrons. Our measurements have benefited from a newly implemented signal analysis approach based on template matching rather than pulse integration, which improves our energy resolution by a factor of two and yields better event reconstruction overall [@Ben_APL].
Experimental Setup
==================
![(a) SEM image of Al/W test device. The W-TESs at the ends of the Al film are 250$\mu$m x 250$\mu$m. The racetrack-shaped outer channel acts as a veto for substrate events. (b) Schematic side view (not to scale) where each W-TES overlaps the Al film. (c) Sample mount with $^{55}$Fe / NaCl x-ray fluorescence source. The test device is hidden behind a collimator plate.[]{data-label="fig.fab"}](fab.png){width="2.8in"}
Test samples consisted of photolithographically patterned, 300 nm-thick Al and 40 nm-thick W films. Three Al film lengths were studied: 250$\mu$m, 350$\mu$m and 500$\mu$m. The metallization and process steps were identical to those used for CDMS detectors, including a 40 nm layer of amorphous Si (aSi) sputtered on each cleaned Si substrate just prior to metallization. Fig. \[fig.fab\]a shows an image of one test device with a central 250 $\mu$m-wide x 350 $\mu$m-long Al phonon absorption film coupled to 250 $\mu$m x 250 $\mu$m W TESs (W-TES1 and W-TES2) at either end. A distributed racetrack-like outer TES channel (W-TES3) served as a veto for substrate events. A schematic diagram of the film geometry at the overlap regions between the W-TESs and the Al energy collection film is shown in Fig. \[fig.fab\]b. Fig. \[fig.fab\]c shows the OFHC Cu structure used to both anchor devices to the mixing chamber of our dilution refrigerator and expose a single device (through collimators) to an $^{55}$Fe/NaCl fluorescence source (Cl K$\alpha$ at 2.62 keV). With this arrangement, low energy source x-rays reached our devices $\sim$ 20 times per second.
W-TES Energy Collection
=======================
Collimated x-ray absorption events were measured using a conventional voltage-biased TES circuitry setup [@QET], with the W-TES sensor biased in the steepest part of its resistive transition. The total change in internal energy of a TES under such conditions is well approximated by: $$\Delta U = \Delta U_{ext} + \Delta U_{Joule} + \Delta U_{e-ph} = 0,$$ where $\Delta U_{ext}$ represents the deposited x-ray energy, $\Delta U_{Joule}$ corresponds to the Joule heating $\sim$ [V$^2$/R]{} of the biased TES, and $\Delta U_{e-ph}$ is an energy loss term arising from electron-phonon coupling within the TES. This latter term accounts for the thermal relaxation of the TES. It is relatively small when the TES is operated in the linear, non-saturated region of its $R(T,I)$ curve and small energy inputs are considered. In general, event energy absorbed by a voltage-biased TES will increase sensor resistance and thus decrease the instantaneous energy loss from Joule heating. When in the linear, low energy regime, the first two terms in the energy balance equation dominate the physics, and essentially cancel each other. However, when the energy flux into a TES is sufficient to drive the TES fully normal, $\Delta U_{e-ph}$ can be significant. Below, we show that by consistently including the $\Delta U_{e-ph}$ term in our model we can more accurately reproduce the observed pulse shapes and energy distributions of W-TES events in both the non-saturated and saturated regimes [@Ben_APL].
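To make the interplay of the three terms concrete, the following is a minimal lumped-element sketch of a voltage-biased TES obeying this balance. The logistic $R(T)$, the $T^5$ electron-phonon law and every numerical value are illustrative assumptions, not measured parameters of our devices, and the strip-resolved behaviour discussed later is deliberately left out.

```python
import numpy as np

C      = 1.0e-12   # heat capacity of the TES electron system [J/K] (assumed)
T_bath = 0.035     # bath temperature [K]
Tc     = 0.080     # transition temperature [K] (assumed)
dTc    = 0.001     # transition width [K] (assumed)
Rn     = 1.0       # normal-state resistance [ohm] (assumed)
Vb     = 8.0e-7    # bias voltage [V], chosen to park the TES mid-transition
Sigma  = 4.0e-7    # electron-phonon coupling constant [W/K^5] (assumed)

def R(T):
    """Toy resistive transition: smooth step from 0 to Rn around Tc."""
    return Rn / (1.0 + np.exp(-(T - Tc) / dTc))

def step(T, E_ext, dt):
    """Advance the electron temperature by dt.  E_ext is the external energy
    (Delta U_ext) deposited during the step; the other two terms of the balance
    are the Joule heating V^2/R and the electron-phonon loss to the bath."""
    P_joule = Vb**2 / R(T)
    P_eph   = Sigma * (T**5 - T_bath**5)
    return T + (E_ext + (P_joule - P_eph) * dt) / C

dt = 1.0e-7
T  = Tc
for _ in range(100_000):          # relax to the quiescent operating point
    T = step(T, 0.0, dt)
R0 = R(T)

T = step(T, 1.6e-16, dt)          # deposit ~1 keV of event energy
pulse = []
for _ in range(30_000):           # resistance excursion = the measured pulse,
    pulse.append(R(T) - R0)       # decaying on the electrothermal-feedback time scale
    T = step(T, 0.0, dt)
print("peak dR = %.3f ohm, dR after 3 ms = %.4f ohm" % (max(pulse), pulse[-1]))
```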
Fig. \[fig.3d\_banana\] shows the energy detected by each of the three W-TESs on a single test device exposed for $\sim$48 hours to our NaCl fluorescence source using the set-up shown in Fig. \[fig.fab\]c. The data were obtained with a 250 $\mu m$-long Al film device similar to that shown in Fig. \[fig.fab\]a. Event energies were determined using a non-linear optimal filter template fitting approach [@Ben_APL]. As shown in Fig. \[fig.3d\_banana\], we observed four basic classes of events: (1) x-rays absorbed directly in W-TES1 or W-TES2, (2) x-rays absorbed in the central Al film, (3) x-rays absorbed in one of the four main W/Al overlap regions of the device (one at each end of both W-TES1 and W-TES2), and most commonly (4) x-rays absorbed in the Si substrate (large W-TES3 signal). The relative count rates observed for the various event types were consistent with the source-collimator geometry and the known penetration depths [@Mass_Attenuation] for 2.62 keV x-rays in Al (3.3 $\mu m$) and W (0.2 $\mu m$).
![X-ray event energy collected in each of the three W-TESs of a 250 $\mu m$-long central Al film device. Four distinct x-ray interaction locations are noted: W-TES, central Al, Al/W overlap regions, and the substrate. The color bar indicates the fraction of the total detected energy appearing in the substrate channel (W-TES3). The energy collected by W-TES1 and W-TES2 for x-ray hits along the central Al film (the banana-shaped cluster of points shown) is consistent with the known device geometry.[]{data-label="fig.3d_banana"}](APL_3D_banana.jpg){width="3.5in"}
We scaled event energy measurements to the initial energy stored in qps only after their number became constant, [*i.e.*]{} after the initial fast phonon decay modes were complete but before qps shed sub-gap phonons [@Goldie]. In our experiments, a maximum of only 1.42 keV of the incident 2.62 keV Cl K$\alpha$ x-ray energy was collected in W-TES1, even for a direct-hit in that sensor (see Fig. \[fig.3d\_banana\]). This large energy deficit can be explained using an energy down-conversion model recently published by Kozorezov [*et al.*]{} [@phonon_loss]. Their model defines three stages of the energy down-conversion process following the absorption of an x-ray in a thin metal film. The stage most relevant to our experiments with W-TESs is Stage II, where athermal phonon leakage dominates the film’s energy loss to the substrate. Stage II can be subdivided into two main parts. In the first part, the mean energy of electronic excitations, $\epsilon$, is below some threshold, $E_1^*$, but much higher than the Debye energy: $\Omega_D\ll\epsilon< E_1^*$. In this regime, energy loss to the substrate can be strongly dependent on event location in the film ([*i.e.*]{} proximity to the film-substrate boundary) and spectral peaks get broadened, but not typically shifted appreciably in energy.
The second part of Stage II is characterized by $\Omega_D > \epsilon > \Omega_1$, where $\Omega_1$ is a low-energy threshold above which electron and hole relaxation by phonon emission is still important, but below which the dynamics is again dominated by electronic interactions. This portion of the energy cascade process turns out to be more important than expected for explaining the observed energy loss in TESs and other film-based devices. Applying Eqs. 7, 9 and 10 of Ref. [@phonon_loss] to our experimental conditions yields a predicted fractional energy loss in our W films of 49% for direct-hit x-ray events. In our experiments we observe an actual energy loss of $\sim$ 43% for these direct-hit events. One effect that can reduce this small discrepancy is the reabsorption of high-energy escape phonons back into the W-TES from the substrate.
Pulse Shapes: Waterfall model
=============================
We have developed a simple physical model that accurately describes the pulse shapes observed with our Al/W devices. We show in Fig. \[fig.Pulses\]a one simulated pulse from this model superimposed on a raw pulse from a well-behaved device like the one shown in Fig. \[fig.fab\]a. We have also used this model to reproduce previously unexplained pulse shapes [@LTD11] obtained with a device of similar design that was studied first in 1997 and then again in 2014. The same, unusual pulse shapes were observed in both data sets. The remarkable double-peak structure for that device is shown in Fig. \[fig.Pulses\]b.
![Overlay of raw data and simulated pulses for: (a) a typical Al/W test device, (b) a similar Al/W device, first tested in 1997, with odd pulse shapes that we now understand.[]{data-label="fig.Pulses"}](Pulses.png){width="3.5in"}
![Physical model of our Al/W device that: (a) models imperfect interfaces between the Al and W regions as resistive ’bottlenecks’ that can affect the critical current and the TES response function, and (b) treats W-TES1 and W-TES2 each as a series of ten parallel strips with thermal conductance between the strips given approximately by the Wiedemann-Franz Law.[]{data-label="fig.waterfall_Betty"}](Waterfall_Betty.png){width="3.0in"}
The key elements of our physical model are shown in Fig. \[fig.waterfall\_Betty\]. In the model, weak links of W are used to mimic the step-coverage impedance where the 40 nm-thick W film overlaps the 300 nm-thick Al film below it, as the W transitions down to the substrate where it operates as a TES (see Fig. \[fig.fab\]b). We refer to these film transition regions as “waterfall” regions based on their appearance in SEM images [@LTD15_Jeff]. In our test devices, the W/Al overlap region is typically excellent along the top surface of the Al but more limited on the steep Al sidewalls. Our model treats the added impedance of an imperfect waterfall region as a weak W link that acts effectively as a small Joule heater providing constant power even when the W-TES itself is in its superconducting transition. This impedance alters the superconducting temperature and critical current of the TES in predictable ways. Additionally, in our model each TES square is divided into ten equal-width strips parallel to the W/Al overlap region. The Wiedemann-Franz Law is then used in a one-dimensional (1D) simulation of qp thermalization in the voltage-biased TES as energy flows through it laterally.
Our new model works well. For example, it yields the first decay-time in the raw data pulse shown in Fig. \[fig.Pulses\]a. It also correctly predicts the second distinct decay-time that corresponds to the time ($\tau_{etf}$) needed for the TES to cool back to its equilibrium state. Lastly, the model explains the double-peaked pulses observed with our older devices from 1997 - the odd pulse shapes we now know resulted from poor film connectivity between each W-TES and its corresponding Al bias line at the end away from the main Al absorber (see Fig. \[fig.waterfall\_Betty\]). A detailed description of this new model and its successful use in pulse shape simulations is discussed in Ref. [@Ben_APL].
Quasiparticle transport: Diffusion, Absorption and Energy Collection
====================================================================
![Overlay of raw energy collection distribution and Maximum Likelihood fit. The banana-shaped cluster of points corresponds to direct-hit x-rays in the main Al film. (Inset): Collected x-ray energy vs. event location along the Al film. The cluster of points near -55$\mu$m is consistent with x-rays absorbed in the ground line of the main Al film.[]{data-label="fig.banana_fit"}](APL_banana_fit.png){width="2.8in"}
After selecting Al direct-hit events (dark blue in Fig. \[fig.3d\_banana\]) using the method described in Ref. [@LTD15_Jeff], we modeled qp transport in the Al film using a 1D diffusion equation with a linear loss term: $$\frac{\partial{n}}{\partial{t}} = D_{\textrm{Al}} \space \frac{\partial^2{n}}{\partial{x^2}} - \frac{n}{\tau_{\textrm{Al}}} + s,
\label{diffusion_eqn}$$ where $n = n(x,t)$ is the linear number density of qps, $D_{\textrm{Al}}$ is the diffusivity of qps, and $\tau_{\textrm{Al}}$ is the qp trapping time. The source term $s= q\, \delta(x-x_0) \delta(t-t_0)$ represents the rate of qp density creation. The rates for qp absorption into W-TES1 and W-TES2, symbolized by $I_1$ and $I_2$ respectively, were modeled by the linear relations: $$I_1 = n_{1}\,v_{\textrm{1}}, \ \ I_2 = n_{2}\,v_{\textrm{2}},
\label{absorption_eqn}$$ where the coefficient $v_{\textrm{1}}(v_{\textrm{2}}) $ has units of length/time, and $n_{1}$ ($n_{2}$) is the qp number density at the W/Al boundary closest to W-TES1 (W-TES2). This 1D approach is sufficient because the qps are reflected at the edges of the Al, and the mean free path is smaller than the width of the film, making diffusion along the two axes independent.
Equation \[diffusion\_eqn\] can be solved analytically to find the fraction F$_{1}$(F$_{2}$) of qp generated by an event that is absorbed in W-TES1(W-TES2):
$$\begin{aligned}
F_1 &=& \frac{\Lambda_d\left( \lambda_{2}\, \textrm{cosh}\left(\frac{1+2\xi}{2\Lambda_d} \right) + \Lambda_d\, \textrm{sinh}\left( \frac{1+2\xi}{2\Lambda_d} \right) \right)}{\Lambda_d(\lambda_{1} + \lambda_{2})\, \textrm{cosh} \left( \frac{1}{\Lambda_d}\right) + (\Lambda_d^2 + \lambda_{1}\lambda_{2})\,\sinh \left(\frac{1}{\Lambda_d} \right) }
\label{fraction1}
\\
F_2 &=& \frac{\Lambda_\textrm{d}\left( \lambda_{1}\, \textrm{cosh}\left(\frac{1-2\xi}{2\Lambda_d} \right) + \Lambda_d\, \textrm{sinh}\left( \frac{1-2\xi}{2\Lambda_d} \right) \right)}{\Lambda_d(\lambda_{1} + \lambda_{2})\, \textrm{cosh} \left( \frac{1}{\Lambda_d}\right) + (\Lambda_d^2 + \lambda_{1}\lambda_{2})\,\textrm{sinh}\left(\frac{1}{\Lambda_d} \right) }
\label{fraction2}\end{aligned}$$ The dimensionless variable $\Lambda_\textrm{d} \equiv L_d / L$ depends on the characteristic diffusion length $L_d = \sqrt{D_\textrm{Al}\tau_\textrm{Al}}$ of the Al film, and the term $\xi \equiv x_0/L$ depends on the qp source location, $x_0$, measured from the center of the Al film. $L$ is the length of the Al film. The dimensionless parameters $\lambda_1$ and $\lambda_2$ are defined by the relation, $ \lambda_{i} \equiv L_{i}/ L$, where L$_i = D_\textrm{Al}/v_i$ $ (i =1,2 )$ is a characteristic qp absorption parameter with units of length that varies inversely with the efficiency for coupling qp into each W-TES. In general the W-TESs would have slightly different qp absorption capabilities, hence $\lambda_{1} \neq \lambda_{2}$. However, if one assumes the same absorption capability for the two TESs, Eq. \[fraction1\] and Eq. \[fraction2\] can be further simplified to the form shown in Eq. 1 of Ref. [@diff_study_1].
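For illustration, Eq. \[fraction1\] and Eq. \[fraction2\] can be evaluated directly. In the sketch below the input scales follow the values quoted later in the text ($L_d \sim 130\,\mu$m, $L_1 \approx L_2 \sim 100\,\mu$m, $L = 350\,\mu$m); treating them as exact is an assumption of the sketch.

```python
import numpy as np

L, Ld = 350.0, 130.0          # film length and diffusion length [um]
L1 = L2 = 100.0               # absorption lengths D/v_i [um]
Lam, lam1, lam2 = Ld / L, L1 / L, L2 / L

def fractions(x0):
    """F1, F2 for a qp burst created at x0 (um, measured from the film centre)."""
    xi = x0 / L
    den = (Lam * (lam1 + lam2) * np.cosh(1.0 / Lam)
           + (Lam**2 + lam1 * lam2) * np.sinh(1.0 / Lam))
    F1 = Lam * (lam2 * np.cosh((1 + 2 * xi) / (2 * Lam))
                + Lam * np.sinh((1 + 2 * xi) / (2 * Lam))) / den
    F2 = Lam * (lam1 * np.cosh((1 - 2 * xi) / (2 * Lam))
                + Lam * np.sinh((1 - 2 * xi) / (2 * Lam))) / den
    return F1, F2

# Sweeping the event position traces out the "banana"-shaped locus of
# (W-TES1, W-TES2) energies seen for Al direct hits.
for x0 in np.linspace(-L / 2, L / 2, 8):
    F1, F2 = fractions(x0)
    print("x0 = %+6.1f um  F1 = %.3f  F2 = %.3f  F1+F2 = %.3f" % (x0, F1, F2, F1 + F2))
```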
Fig. \[fig.banana\_fit\] shows a Maximum Likelihood fit of this diffusion model to x-ray data for a 350 $\mu$m-long Al film. The fit yields estimates for three important parameters: the characteristic qp diffusion length, $L_d$, the qp absorption into W-TESs, $L_{\textrm{1}}(L_\textrm{2})$, and an energy scaling factor, $\mathcal{E}_{\textrm{sf}}$. The scaling factor corresponds to the deposited energy before position dependent qp trapping and sub-gap phonon losses have occurred as energy is absorbed into the two W-TESs. Applying Eq. \[diffusion\_eqn\] to our data yields $L_{\textrm{d}} \sim 130 \mu$m for three Al film lengths studied: 250 $\mu$m, 350 $\mu$m, and 500 $\mu$m. For small values of $L_i$, the band of Al direct-hit events shown in Fig. \[fig.banana\_fit\] would extend towards the energy axes. In our data, $L_\textrm{1}$ $\approx$ $L_\textrm{2}$ $\sim$100 $\mu$m, and we observe gaps between the end points of the Al direct-hit band and the energy axes. Summing the two W-TES energies and reconstructing position yields the inset of Fig. \[fig.banana\_fit\].
![Reconstructed Cl K$_\alpha$ x-ray energy as a function of event position along Al film. This corresponds to the deposited energy before position dependent qp trapping and sub-gap phonon losses have occurred. []{data-label="fig.position"}](position.png){width="3.in"}
Fig. \[fig.position\] shows the reconstructed energy vs. position data of Fig. \[fig.banana\_fit\] using the parameters from our diffusion model fit. The scaling factor obtained from the model yields a total event energy of 2.3 keV rather than the expected 2.62 keV. This $\sim $ 10$\%$ discrepancy is consistent with known energy down-conversion mechanisms. The 5% variation in reconstructed energies shown in Fig. \[fig.position\] can be corrected using a model that includes the latter stages ($\epsilon < 3 \Delta$) of the energy down-conversion cascade and simulates qp trapping in terms of a percolation threshold (below which qps are trapped by local variations in the gap) [@percolation].
Conclusion
==========
Our new TES model accurately estimates the energy of direct-hit x-rays in W-TESs. The results are consistent with phonon and qp energy down-conversion physics and the model also provides a better understanding of the processes needed to improve the energy transport in CDMS Al films. In the simple diffusion model used in this work, losses to sub-gap phonons and qp trapping were combined into a single, generic term. A more detailed study that includes percolation threshold effects from spatial variations in the superconducting gap of our Al films will be reported soon. The 0.3 $\mu$m-thick Al films used in this study have a measured qp characteristic diffusion length $L_{\textrm{d}} \sim$130$\mu$m which is thickness-limited. We are presently using SEM and FIB imaging tools to appropriately modify detector fabrication recipes in order to improve connectivity at the Al/W interfaces, allowing detectors to be made in the future with Al films that are twice the current thickness.
Acknowledgements
================
We are grateful to the Stanford Physics Machine Shop for making source and sample holders, and collimators. We thank K. D. Irwin and S. Chaudhuri for useful discussions on TES physics. We also thank M. Pyle and K. Schneck for CDMS related conversations. The authors would also like to thank A. Kozorezov and S. Bandler for useful qp and phonon physics discussions. We acknowledge support from the Department of Energy, DOE grant DE-FG02-13ER41918, and the National Science Foundation, NSF grant PHY-1102842.
[99]{}
M. Loidl, S. Cooper, O. Meier, F. Pröbst, G. Sáfrán, W. Seidel, M. Sisti, L. Stodolsky, S. Uchaikin, [*Nucl. Instr. and Meth. A,*]{} **465**, 440-446, (2001).
J. Martin, S. Lemke, R. Gross, R.P. Huebener, P. Videler, N. Rando, T. Peacock, P. Verhoeve, F.A. Jansen, [*Nucl. Instrum. Methods A*]{} **370**, 88-90, (1996).
C. Bailey, J. Adams, S. Bandler, J. Chervenak, M. Eckart, A. Ewin, F. Finkbeiner, R. Kelley, C. Kilbourne, F. Porter, J. Sadleir, S. Smith, M. Sultana, [*J. Low Temp.*]{} **167**, 3-4, (2012).
Z. Ahmed et al., [*Phys. Rev. Lett.* ]{} **106**, 131302, (2011).
N. E. Booth, [*Appl. Phys. Lett*]{} **50**, 293, (1987).
K. D. Irwin, S. W. Nam, B. Cabrera, B. Chugg and B. A. Young, [*Rev. Sci. Instrum.*]{} **66**, 5322, (1995).
B. Shank et al., “Nonlinear Optimal Filter Technique for Analyzing Energy Depositions in TES Sensors Driven Into Saturation”, this publication.
S. M. Seltzer, [*Radiation Research.*]{} 136-147 (1993).
T. Guruswamy, D. J. Goldie and S. Withington, [*Supercond. Sci. Technol.*]{} **27**, 055012, (2014).
A. G. Kozorezov, C. J. Lambert, S. R. Bandler, M. A. Balvin, S. E. Busch, P. N. Nagler, J. P. Porst, S. J. Smith, T. R. Stevenson, and J. E. Sadleir, [*Phys. Rev. B*]{} **87**, 104504, (2013).
M. Pyle, P. L. Brink, B. Cabrera, J. P. Castle, P. Colling, C. L. Chang, J. Cooley, T. Lipus, R. W. Ogburn, B. A. Young, [*Nucl. Instrum. Methods A*]{} **559**, 405-407, (2006).
J. J. Yen, B. A. Young, B. Cabrera, P. L. Brink, M. Cherry, R. Moffatt, M. Pyle, P. Redl, A. Tomada and E. C. Tortorici [*Journal of Low Temp. Phys.*]{} **176**, 168-175, (2014).
B. Cabrera [*et al.*]{}, in preparation.
|
---
abstract: 'In this note we prove that any left-invariant almost Hermitian structure on a $2$-step nilmanifold is Ricci-flat with respect to the Chern connection and that it is Ricci-flat with respect to another canonical connection if and only if it is cosymplectic (i.e. $d^*\omega=0$).'
address: 'Dipartimento di Matematica, Università di Torino, Torino, Italy.'
author:
- Luigi Vezzoni
title: 'A note on Canonical Ricci forms on $2$-step nilmanifolds'
---
[^1]
Introduction
============
Let $(M,g,J,\omega)$ be an almost Hermitian manifold. Gauduchon introduced in [@gau] a $1$-parameter family ${\nabla}^t$ of canonical Hermitian connections which can be distinguished by the properties of the torsion tensor $T$. In this family ${\nabla}^1$ corresponds to the so-called Chern connection, which can be defined as the unique Hermitian connection whose torsion has vanishing $(1,1)$-part. In the *quasi-Kähler* case (i.e. when ${\overline{ \partial}} \omega=0$), the line $\{{\nabla}^t\}$ degenerates to a single point and the Chern connection is the unique canonical connection.
Any canonical connection ${\nabla}^t$ induces the so-called *Ricci form* $\rho^t(X,Y)=2{\operatorname{i}}{\rm tr}_{\omega}R^t(X,Y)$, where $R^t$ denotes the curvature of ${\nabla}^t$. It turns out that $\rho^t$ is always a closed form which can be locally written as the derivative of the $1$-form $\theta^t(X)=\sum_{r=1}^n g({\nabla}^t_{X}Z_r,Z_{{\overline{r}}})$, where $\{Z_r\}$ is a (local) unitary frame. Moreover, in the cosymplectic case (i.e. when $d\omega^{n-1}=0$) the line $\{\theta^t\}$ degenerates to a single point (see Corollary \[deg\]) and all the canonical connections have the same Ricci form.
The aim of this paper is to study the Ricci forms $\rho^t$ on $2$-step nilmanifolds equipped with a left-invariant almost Hermitian structure. We recall that by definition a *$k$-step nilmanifold* is a compact quotient of a $k$-step nilpotent Lie group $G$ by a lattice. Since we are considering *left-invariant* almost Hermitian structures, we can work on Lie algebras in an algebraic fashion. Our main result is the following
\[main\] Let $({\mathfrak{g}},g,J,\omega)$ be a $2$-step nilpotent Lie algebra with an almost Hermitian structure. Then $(g,J)$ is Ricci-flat with respect to the Chern connection and it is Ricci-flat with respect to another canonical connection if and only if it is cosymplectic $($i.e. $d^*\omega=0)$.
This theorem has the following immediate consequence:
Every left-invariant almost Hermitian structure on a nilmanifold associated to a $2$-step nilpotent Lie group is Ricci-flat with respect to the Chern connection.
[*Acknowledgments.*]{} The research of this paper has been motivated by a conversation with Simon Salamon. I’m very grateful to him. Furthermore, I’m grateful to Gueo Grantcharov for useful conversations and remarks and to Nicola Enrietti for an important observation on the presentation of the main result.
Preliminaries on canonical connections
======================================
Let $(M,g,J,\omega)$ be an almost Hermitian manifold, where $\omega$ is the fundamental form $\omega(\cdot,\cdot)=g(J\cdot,\cdot)\,.$ The almost complex structure $J$ extends to $r$-forms as $$J\alpha(X_1,\dots,X_n)=(-1)^r \alpha(JX_1,\dots,JX_n)$$ inducing the splittings $$TM\otimes {{\mathbb C}}=T^{1,0}M\oplus T^{0,1}M\,,\quad \Lambda^r(M,{{\mathbb C}})=\bigoplus_{p+q=r}\Lambda^{p,q}M\,,$$ where $\Lambda^{r}(M,{{\mathbb C}})$ is the vector bundle of complex $r$-forms on $M$. In particular $\Lambda^{3}M$ splits as $$\Lambda^{3}M=\Lambda^{+}M\oplus\Lambda^{-}M\,,$$ where $\Lambda^{+}M=(\Lambda^{2,1}M\oplus\Lambda^{1,2}M)\cap \Lambda^{3}M$ and $\Lambda^{-}M=(\Lambda^{3,0}M\oplus\Lambda^{0,3}M)\cap \Lambda^{3}M$. Given a $3$-form $\gamma$ we denote by $\gamma^+$ and $\gamma^-$ the projection onto $\Lambda^{+}M$ and $\Lambda^{-}M$, respectively. Moreover, denoting by $\Omega^2(TM)$ the vector space of smooth sections of $\Lambda^2M\otimes TM$, we have the splitting $$\Omega^2(TM)=\Omega^{2,0}(TM)\oplus \Omega^{1,1}(TM) \oplus \Omega^{0,2}(TM)$$ where $$\begin{aligned}
& \Omega^{2,0}(TM)=\{B\in\Omega^2(TM)\,\,:\,\,B(JX,Y)=JB(X,Y) \}\,;\\
\vspace{0.1cm}
& \Omega^{1,1}(TM)=\{B\in\Omega^2(TM)\,\,:\,\,B(JX,JY)=B(X,Y) \}\,;\\
\vspace{0.1cm}
& \Omega^{0,2}(TM)=\{B\in\Omega^2(TM)\,\,:\,\,B(JX,Y)=-JB(X,Y) \}\,.
\end{aligned}$$ Hence any $B\in\Omega^{2}(TM)$ can be written as $B=B^{2,0}+B^{1,1}+B^{0,2}$. Notice that in terms of complex vector fields of type $(1,0)$ we have $$B^{2,0}(Z_i,Z_j)=B(Z_i,Z_j)+B(Z_i,Z_j)-{\operatorname{i}}JB(Z_i,Z_j)-{\operatorname{i}}JB(Z_i,Z_j)=2 B(Z_i,Z_j)-2{\operatorname{i}}JB(Z_i,Z_j)\,.$$ In particular the condition $B^{2,0}=0$ can be written in terms of $(1,0)$ vector fields as $B(Z_i,Z_j)\in T^{0,1}M\,.$ Furthermore $\Omega^2(TM)$ splits as $$\Omega^2(TM)=\Omega^2_b(TM)\oplus \Omega^2_c(TM)$$ where $$\begin{aligned}
&g(B_b(X,Y),Z)=\frac12 (g(B(X,Y),Z)-g(B(Z,X),Y)-g(B(Y,Z),X))\,,\\
&g(B_c(X,Y),Z)=\frac12 (g(B(X,Y),Z)+g(B(Z,X),Y)+g(B(Y,Z),X))\,.
\end{aligned}$$ Now we consider connections on $M$. A connection ${\nabla}$ on $M$ is called *Hermitian* if ${\nabla}J=0$, ${\nabla}g=0$. It is well-known that every almost Hermitian manifold admits Hermitian connections. We denote by $\mathcal{C}$ the space of Hermitian connection on $M$. Gauduchon introduced in [@gau] the following special class of Hermitian connections:
A connection ${\nabla}\in\mathcal{C}$ is called *canonical* if its torsion $T$ satisfies $\,\,T_{b}^{1,1}=0\,.$
From [@gau] it follows that any canonical connection ${\nabla}$ can be written as $$\label{nablat}
\begin{aligned}
g({\nabla}_{X}Y,Z) = &\,g(D_{X}Y,Z)+\frac{t-1}{4}(d^c\omega)^+(X,Y,Z)+\frac{t+1}{4}(d^c\omega)^+(X, JY, JZ)-g(X,N(Y,Z))+\\
&\,\frac12 (d^c \omega)^- (X,Y,Z)\,.
\end{aligned}$$ for some $t\in{{\mathbb R}}$, where $d^{c}$ is the operator acting on $r$-forms as $d^c=(-1)^rJdJ$ and $N$ denotes the Nijenhuis tensor $N(X,Y)=[JX,JY]-[X,Y]-J([JX,Y]+[X,JY]).$
For $t\in{{\mathbb R}}$ we denote by ${\nabla}^t$ the corresponding canonical connection. In the special case of a quasi-Kähler structure (i.e. ${\overline{ \partial}} \omega=0$) the space of canonical connections reduces to a single point, while if $J$ is integrable (i.e. $N=0$) equation \[nablat\] reduces to $$g({\nabla}^t_{X}Y,Z) = g(D_{X}Y,Z)+\frac{t-1}{4}(d^c\omega)(X,Y,Z)+\frac{t+1}{4}(d^c\omega)(X, JY, JZ)\,.$$ For the parameters $t=1,0,-1$, the family $\{{\nabla}^t\}$ gives the following remarkable cases
- $t=1$. In this case $\nabla^1$ is called the *Chern connection*. This connection can be defined as the unique Hermitian connection satisfying $T^{1,1}=0$.
- $t=0$. In this case ${\nabla}^0$ is called the *first canonical connection*. This connection can be defined as the unique Hermitian connection whose torsion satisfies $T^{2,0}=0$.
- $t=-1$. In this case the connection ${\nabla}^{-1}$ is important in the complex case where it is known as the *Bismut connection*. Indeed, if $J$ is integrable, then ${\nabla}^{-1}$ can be defined as the unique Hermitian connection having totally skew-symmetric torsion (see [@Bismut]).
Canonical Ricci forms
=====================
Let $(M^{2n},g,J)$ be an almost Hermitian manifold and let $\mathcal{C}$ be the space of the associated Hermitian connections. For any ${\nabla}\in\mathcal{C}$ one defines the Ricci form $
\rho(X,Y)=2{\operatorname{i}}{\rm tr}_{\omega} R(X,Y)\,,
$ where $R$ is the curvature tensor $R(X,Y):=[{\nabla}_X,{\nabla}_Y]-{\nabla}_{[X,Y]}$. Such a form is always closed and it locally satisfies $\rho=d\theta$, where $
\theta(X)=\sum_{r=1}^n g({\nabla}_{X}Z_r,Z_{{\overline{r}}})
$ and $\{Z_r\}$ is a local unitary frame. In the case of a canonical connection ${\nabla}^t\in\mathcal{C}$ we use notation $\rho^t$ and $\theta^t$. We denote by $\natural$ the natural isomorphism between vector fields and $1$-forms induced by $g$. Namely, if $X$ a vector field, then we denote by $X^{\natural}$ the $1$-form $X^{\natural}(Y)=g(X,Y)$. We have the following
\[theta\] $\theta^t$ is locally defined by $$\label{thetat}
\theta^t(X)=\sum_{r=1}^n {\operatorname{i}}\Im\mathfrak{m}\left\{g([X +t{\operatorname{i}}JX,Z_r],Z_{{\overline{r}}})\right\}+\frac12{\operatorname{i}}(t-1) g(d^*\omega,X^{\natural})\,.$$ for any vector field $X$.
First of all we note that if $Z_r$ is a vector field of type $(1,0)$, then $$N(Z_r,Z_{{\overline{ r}}})=(d^c\omega)^-(X,Z_{r},Z_{{\overline{ r}}})=0\,,\quad (d^{c}\omega)^+(X,Z_{r},Z_{{\overline{ r}}})=d^{c}\omega(X,Z_{r},Z_{{\overline{ r}}})=-d\omega(JX,Z_{r},Z_{{\overline{ r}}})\,.$$ Hence if $\{Z_r\}$ is a local unitary frame, using equation \[nablat\] we get $$\begin{aligned}
\theta^t(X)=& \sum_{r=1}^n \Big\{g(D_{X}Z_r,Z_{{\overline{r}}})+\frac{t-1}{4}(d^c\omega)(X,Z_r,Z_{{\overline{r}}})+\frac{t+1}{4}(d^c\omega)(X,JZ_r,JZ_{{\overline{r}}})\big\}\\
=&\sum_{r=1}^n \Big\{g(D_{X}Z_r,Z_{{\overline{r}}})-\frac{t}{2}d\omega(JX,Z_r,Z_{{\overline{r}}})\Big\}\,.
\end{aligned}$$ Now $$\begin{aligned}
2g(D_{X}Z_r,Z_{{\overline{r}}})=&Xg(Z_r,Z_{{\overline{ r}}})-Z_{{\overline{r}}}g(X,Z_r)+Z_rg(X,Z_{{\overline{r}}})+g([X,Z_r],Z_{{\overline{r}}})+ g([Z_{{\overline{r}}},X],Z_r)-g([Z_r,Z_{{\overline{r}}}],X)\\
=&-Z_{{\overline{r}}}g(X,Z_r)+Z_rg(X,Z_{{\overline{r}}})+g([X,Z_r],Z_{{\overline{r}}})+ g([Z_{{\overline{r}}},X],Z_r)-g([Z_r,Z_{{\overline{r}}}],X)
\end{aligned}$$ and $$\begin{aligned}
d\omega(JX,Z_r,Z_{{\overline{r}}})=&(JX)\omega(Z_r,Z_{{\overline{ r}}})-Z_r\omega(JX,Z_{{\overline{ r}}})+Z_{{\overline{ r}}}\omega (JX,Z_r)-\omega([JX,Z_r],Z_{{\overline{r}}})\\
&-\omega([Z_{{\overline{r}}},JX],Z_{r})-\omega([Z_{r},Z_{{\overline{r}}}],JX)\\[5pt]
=&Z_r g(X,Z_{{\overline{ r}}})-Z_{{\overline{ r}}}g (X,Z_r)+{\operatorname{i}}g([JX,Z_r],Z_{{\overline{r}}})+{\operatorname{i}}g([Z_{{\overline{r}}},JX],Z_{r})-g([Z_{r},Z_{{\overline{r}}}],X)\,.
\end{aligned}$$ Then we have $$\begin{aligned}
\theta^t(X)=&\frac12 \sum_{r=1}^n\Big\{g([X +t{\operatorname{i}}JX,Z_r],Z_{{\overline{r}}})- g([X-t{\operatorname{i}}JX,Z_{{\overline{r}}}],Z_r)+g([Z_r,Z_{{\overline{r}}}],tX-X)\\
&+(1-t)Z_{r}g(X,Z_{{\overline{r}}})-(1-t)Z_{{\overline{r}}}g(X,Z_r)\Big\}\\
=&\sum_{r=1}^n\Big\{{\operatorname{i}}\Im\mathfrak{m}\left\{g([X +t{\operatorname{i}}JX,Z_r],Z_{{\overline{r}}})+(1-t)Z_{r}g(X,Z_{{\overline{r}}})\right\}-\frac12(1-t) g([Z_r,Z_{{\overline{r}}}],X)\Big\}\,.
\end{aligned}$$ So in order to prove the statement we have to show that $$\label{[Zr,Zbarr]}
\sum_{r=1}^n\Big\{\Im\mathfrak{m}\left\{Z_{r}g(X,Z_{{\overline{r}}})\right\}+{\operatorname{i}}\frac12g([Z_r,Z_{{\overline{r}}}],X)\Big\}=-\frac12\, g(\omega,dX^{\natural})\,.$$ We can write $X=\sum_{r=1}^{n}(X_r Z_r+X_{{\overline{ r}}}Z_{{\overline{r}}})$ and $X^{\natural}=\sum_{r=1}^{n}(X_r\zeta^r+X_{{\overline{r}}}\zeta^{{\overline{ r}}})$, where $\{\zeta^r\}$ is the coframe dual to of $\{Z_r\}$. Then we get $$\begin{aligned}
g(\omega,dX^{\natural})=&\,{\operatorname{i}}\sum_{k=1}^ng(\zeta^{k}\wedge\zeta^{{\overline{k}}},dX^{\natural})\\
=&\,{\operatorname{i}}\sum_{k,r=1}^n(Z_r(X_{{\overline{r}}})-Z_{{\overline{r}}}(X_{r})-X^{\natural}([Z_r,Z_{{\overline{ r}}}]))g(\zeta^{k}\wedge\zeta^{{\overline{k}}},\zeta^{r}\wedge\zeta^{{\overline{r}}})\\
=&\,{\operatorname{i}}\sum_{k=1}^n(Z_k(X_{{\overline{k}}})-Z_{{\overline{k}}}(X_{k})-X^{\natural}([Z_k,Z_{{\overline{ k}}}]))\\
=&\,-2\sum_{k=1}^n\Im\mathfrak{m}\{Z_k(X_{{\overline{k}}})\}-{\operatorname{i}}\sum_{k,s=1}^n(B_{k{\overline{k}}}^sX_s+B_{k{\overline{k}}}^{{\overline{s}}}X_{{\overline{s}}})\\
=&\,-\sum_{k=1}^n \,\left(2\,\Im\mathfrak{m}\{Z_k(X_{{\overline{k}}})\}+{\operatorname{i}}g([Z_k,Z_{{\overline{ k}}}],X)\right)\,,
\end{aligned}$$ where with $B$ we denote the components of the brackets.
The following formulae hold
- $\theta^1(X)=2{\operatorname{i}}\sum_{r=1}^n \Im\mathfrak{m}\,g([X^{0,1},Z_r],Z_{{\overline{r}}})\,;$
- $\theta^0(X)={\operatorname{i}}\sum_{r=1}^n \Im\mathfrak{m}\left\{g([X,Z_r],Z_{{\overline{r}}})\right\}-{\operatorname{i}}\frac12 g(d^*\omega,X^{\natural})\,;$
- $\theta^{-1}(X)=2 {\operatorname{i}}\sum_{r=1}^n \Im\mathfrak{m}\left\{g([X^{1,0},Z_r],Z_{{\overline{r}}})\right\}-{\operatorname{i}}g(d^*\omega,X^{\natural})\,.$
It is useful to write down formula \[thetat\] in real coordinates. In order to do this we write $Z_r=\frac{1}{\sqrt{2}}(e_r-{\operatorname{i}}Je_r)$ for a suitable orthonormal frame $\{e_1,\dots,e_n,Je_1,\dots,Je_n\}$. Then a direct computation gives $$\begin{aligned}
2\Im\mathfrak{m}\left\{ g([X +t{\operatorname{i}}JX,Z_r],Z_{{\overline{r}}})\right\}&= \Im\mathfrak{m}\left\{g([X +t{\operatorname{i}}JX,e_r-{\operatorname{i}}Je_r],e_r+{\operatorname{i}}Je_r)\right\}\\
&= g([X,e_r], Je_r)-g([X,Je_r],e_r)+tg([JX,e_r],e_r)+tg([JX,Je_r],Je_r)
\end{aligned}$$ and $$\begin{aligned}
\theta^t(X)=&\,\frac12 {\operatorname{i}}\sum_{r=1}^n \left\{g([X,e_r], Je_r)-g([X,Je_r],e_r)+tg([JX,e_r],e_r)+tg([JX,Je_r],Je_r)\right\}\\
&\,+\frac12 {\operatorname{i}}(t-1) g(d^*\omega,X^{\natural})\,.
\end{aligned}$$ A remarkable consequence of formula \[thetat\] is the following
\[deg\] All canonical connections of a cosymplectic structure have the same Ricci form.
It is enough to show that $\theta^1=\theta^{-1}$. Since the cosymplectic condition $d^*\omega=0$ implies $$\theta^{-1}(X)=\sum_{r=1}^n2 {\operatorname{i}}\Im\mathfrak{m}\left\{g([X^{1,0},Z_r],Z_{{\overline{r}}})\right\}$$ we have $$\begin{aligned}
\theta^{-1}(X)=\,&-\sum_{r=1}^n2 {\operatorname{i}}\Im\mathfrak{m}\left\{g([X^{0,1},Z_{{\overline{r}}}],Z_{r})\right\}=
-\sum_{r=1}^n2 {\operatorname{i}}\Im\mathfrak{m}\left\{g(D_{X^{0,1}}Z_{{\overline{r}}},Z_{r})-g(D_{Z_{{\overline{r}}}}X^{0,1},Z_{r})\right\}\\
=\,& \sum_{r=1}^n2 {\operatorname{i}}\Im\mathfrak{m}\left\{g(D_{X^{0,1}}Z_{r},Z_{{\overline{r}}})-g(D_{Z_{{\overline{r}}}}X^{0,1},Z_{r})\right\}\\
=\,&\theta^1(X)+\sum_{r=1}^n2 {\operatorname{i}}\Im\mathfrak{m}\left\{g(D_{Z_r}X^{0,1},Z_{{\overline{r}}})-g(D_{Z_{{\overline{r}}}}X^{0,1},Z_{r})\right\}\,.
\end{aligned}$$ Now we observe that $\sum g(D_{Z_r}X^{0,1},Z_{{\overline{r}}})=-\sum g(X^{0,1},D_{Z_r}Z_{{\overline{r}}})=0$, since the cosymplectic condition forces $\sum D_{Z_{{\overline{r}}}}Z_r$ to be of type $(1,0)$ (see e.g. [@Wood]). The last step consists to show that $\sum \Im\mathfrak{m}\left\{ g(D_{Z_{{\overline{r}}}}X^{0,1},Z_{r})\right\}=0$. Here it is enough to consider the identity $$\sum_{r=1}^n\Big\{\Im\mathfrak{m}\left\{Z_{r}g(X,Z_{{\overline{r}}})\right\}+{\operatorname{i}}\frac12g([Z_r,Z_{{\overline{r}}}],X)\Big\}=\sum_{r=1}^n \Im\mathfrak{m}\left\{ g(D_{Z_{{\overline{r}}}}X^{0,1},Z_{r})\right\}$$ which can be checked performing a direct computation. Then equation implies the statement.
[*In the Hermitian case this last result was already known. In fact, it can be deduced from formula (8) of [@graD]. Another proof of this fact can be found in [@lui].*]{}
Canonical Ricci forms on Lie algebras
=====================================
Now we restrict our attention to left-invariant almost Hermitian structures on Lie groups (or more generally on left-invariant almost Hermitian structures on quotients of Lie groups by lattices). Since here all the computations are purely algebraic, we may work directly on a Lie algebra $({\mathfrak{g}},{[\cdot,\cdot]})$ equipped with an almost Hermitian structure $(g,J)$. An almost Hermitian structure on a Lie algebra is a pair $(g,J)$, where $J$ is an endomorphism of ${\mathfrak{g}}$ satisfying $J^2=-{\rm Id}$ and $g$ is a $J$-Hermitian inner product. The bracket of ${\mathfrak{g}}$ has a priori no relation with $J$. The pair $(g,J)$ induces as usual the fundamental form $\omega(\cdot,\cdot)=g(J\cdot,\cdot)$.
Proposition \[theta\] implies the following
\[g\] Let $({\mathfrak{g}},{[\cdot,\cdot]},g,J)$ be a Lie algebra with an almost Hermitian structure. For any $t\in{{\mathbb R}}$ the following formula holds $$\label{thetatinv}
\theta^t(X)=\frac12 {\operatorname{i}}\left\{-{\rm tr}({\rm ad}_{X}\circ J)+ t\,{\rm tr}\,{\rm ad}_{JX}+(t-1)\, g(d^*\omega,X^{\natural}) \right\}\,.$$ Moreover if $({\mathfrak{g}},{[\cdot,\cdot]})$ is unimodular $($i.e. ${\rm tr}\,{\rm ad}_X=0$ for any $X\in {\mathfrak{g}});$ then $$\label{tethaunimodular}
\rho^t(X,Y)=\frac12 {\operatorname{i}}{\rm tr}({\rm ad}_{[X,Y]}\circ J)-\frac12{\operatorname{i}}(t-1)\, g(d^*\omega,[X,Y]^{\natural})$$ and $\rho^t$ is the same for any $t$ if and only if $(g,J)$ is cosymplectic.
The only non-trivial part of the statement is the last assertion. So we have just to show that condition $g(\omega,d[X,Y]^{\natural})=0$ is equivalent to $d^*\omega=0$. We can write ${\mathfrak{g}}=[{\mathfrak{g}},{\mathfrak{g}}]\oplus [{\mathfrak{g}},{\mathfrak{g}}]^{\perp}$. Let $X\in [{\mathfrak{g}},{\mathfrak{g}}]^{\perp}$, then $$dX^{\natural}(Z,W)=-g(X,[Z,W])=0\,.$$ Hence for every $X\in[{\mathfrak{g}},{\mathfrak{g}}]^{\perp}$, $dX^{\natural}=0$. This implies that $d{\mathfrak{g}}^*=d([{\mathfrak{g}},{\mathfrak{g}}]^{\natural})$ and the claim follows.
Now we can prove Theorem \[main\]:
Let $({\mathfrak{g}},{[\cdot,\cdot]},g,J)$ be a $2$-step nilpotent Lie algebra with an almost Hermitian structure. Then, taking into account that ${\mathfrak{g}}$ is unimodular, the [2-step]{} condition implies that $[{\mathfrak{g}},{\mathfrak{g}}]$ is contained in the center of ${\mathfrak{g}}$ and ${\rm tr}({\rm ad}_{[X,Y]}\circ J)=0$ for every $X,Y\in {\mathfrak{g}}$. Then formula reduces to $$\label{rhot}
\rho^t(X,Y)=\frac12{\operatorname{i}}(1-t)\,g(d^*\omega,[X,Y]^{\natural})$$ and the first claim follows.
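To make the two ingredients of this argument concrete, the following is a minimal numerical sanity check on the real Lie algebra $\mathfrak{h}_3\oplus\mathbb{R}$ with the almost complex structure $Je_1=e_2$, $Je_3=e_4$; the example, the basis and the code are illustrative assumptions added here, not part of the proof.

```python
import numpy as np

# Structure constants of h_3 (+) R: the only nonzero bracket is [e1, e2] = e3.
# c[i, j, k] is the e_k-component of [e_i, e_j].
n = 4
c = np.zeros((n, n, n))
c[0, 1, 2], c[1, 0, 2] = 1.0, -1.0

def ad(x):
    """Matrix of ad_x in the basis e_1,...,e_4 for a coefficient vector x."""
    return np.einsum('i,ijk->kj', x, c)   # (ad_x e_j)_k = sum_i x_i c[i,j,k]

# Almost complex structure: J e1 = e2, J e2 = -e1, J e3 = e4, J e4 = -e3.
J = np.array([[0., -1., 0., 0.],
              [1.,  0., 0., 0.],
              [0.,  0., 0., -1.],
              [0.,  0., 1.,  0.]])

basis = np.eye(n)
# Unimodularity: tr(ad_X) = 0 for every X.
assert all(abs(np.trace(ad(e))) < 1e-12 for e in basis)
# 2-step nilpotency puts [g, g] in the center, so tr(ad_{[X,Y]} o J) = 0.
for x in basis:
    for y in basis:
        bracket = np.einsum('i,j,ijk->k', x, y, c)   # [X, Y]
        assert abs(np.trace(ad(bracket) @ J)) < 1e-12
print("unimodularity and tr(ad_[X,Y] o J) = 0 verified")
```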
Proposition \[g\] allows us to describe the behavior of $\{ \rho^t\}$ for some special almost Hermitian structures:
Let $({\mathfrak{g}},{[\cdot,\cdot]},g,J)$ be an almost Hermitian Lie algebra.
- If $J$ is bi-invariant $($i.e. $[J\cdot,\cdot]=J[\cdot,\cdot])$, then $$\theta^{t}(X)=(t-1){\operatorname{i}}{\rm tr}({\rm ad}_{JX})\,,\quad \rho^t(X,Y)={\operatorname{i}}(1-t)\,{\rm tr}({\rm ad}_{[JX,Y]})\,.$$
- If $J$ is anti-bi-invariant $($i.e. $[J\cdot,\cdot]=-J[\cdot,\cdot])$, then $$\theta^{t}=0\,,\quad \rho^t=0\,.$$
- If $J$ is abelian $($i.e. $[J\cdot,J\cdot]=[\cdot,\cdot]$ $)$, then $$\begin{aligned}
& \theta^t(X)=\frac12{\operatorname{i}}\left\{ (1+t)\,{\rm tr}({\rm ad}_{JX})+(t-1)\, g(d^*\omega,X^\natural)\right\}\,,\\
& \rho^t(X,Y)=\frac12{\operatorname{i}}\left\{- (1+t)\,{\rm tr}({\rm ad}_{J[X,Y]})+ (1-t)\,g(d^*\omega,[X,Y]^\natural)\right\}\,.
\end{aligned}$$
- If $J$ is anti-abelian $($i.e. $[J\cdot,J\cdot]=-[\cdot,\cdot]$ $)$, then $$\theta^t(X)=\frac12 {\operatorname{i}}(1+t)\,{\rm tr}({\rm ad}_{JX})\,,\quad
\rho^t(X,Y)=-\frac12 {\operatorname{i}}(1+t)\,{\rm tr}({\rm ad}_{J[X,Y]})\,.$$
In particular, in the unimodular case, bi-invariant, anti-bi-invariant and anti-abelian almost Hermitian structures are Ricci-flat with respect to any canonical connection, while in the abelian case $\rho^t$ is given by the following formula $$\rho^t(X,Y)=\frac12{\operatorname{i}}(1-t)\,g(d^*\omega,[X,Y]^\natural)\,,$$ and $\rho^t=0$ for $t\neq 1$ if and only if $(g,J)$ is a cosymplectic structure.
*We remark the following facts:*
- The bi-invariant condition $[J\cdot,\cdot]=J[\cdot,\cdot]$ is equivalent to requiring that the simply-connected Lie group associated to $({\mathfrak{g}},J)$ is a complex Lie group. The fact that a bi-invariant almost Hermitian structure on an unimodular Lie algebra is Ricci-flat with respect to any canonical connection has already been proved by Grantcharov in [@Gueo].
- The anti-bi-invariant condition $[J\cdot,\cdot]=-J[\cdot,\cdot]$ is equivalent to requiring that any $J$-compatible inner product on ${\mathfrak{g}}$ is quasi-Kähler and flat with respect to the Chern connection ${\nabla}^1$ (see [@DV]).
- The abelian condition $[J\cdot,J\cdot]=[\cdot,\cdot]$ was introduced in [@Barberis] and was intensively studied in [@Adrian; @BarberisDottiVerbisky; @Dotti; @Fino; @sergiun; @Maclaughlin]. This condition is equivalent to requiring that ${\mathfrak{g}}^{1,0}$ is an abelian Lie algebra.
- Finally, the anti-abelian condition $[J\cdot,J\cdot]=-[\cdot,\cdot]$ was studied in [@DVL].
[*Theorem \[main\] can be applied to the Heisenberg Lie algebras $\mathfrak{h}_{n}({{\mathbb R}})$ and $\mathfrak{h}_n({{\mathbb C}})$. This accords with Theorem 4.1 of [@tosatti] and Propositions 4.10 and 4.11 of [@DVagag]. Moreover, things work differently in the $3$-step nilpotent case and in the $2$-step solvable case (see [@DVagag]).* ]{}
[12]{}: An example of an almost Kähler manifold which is not Kählerian. *Boll. Un. Mat. Ital. A* (6) [**3**]{} (1984), no. 3, 383–392.
: Almost Hermitian geometry on six dimensional nilmanifolds. *Ann. Scuola Norm. Sup. Pisa Cl. Sci.* (4) [**30**]{} (2001), no. 1, pp. 147–170.
: Classification of abelian complex structures on 6-dimensional Lie algebras. J. London Math. Soc. (2011) [**83**]{} (1): 232–255. [arXiv:0908.3213]{}.
: On certain locally homogeneous Clifford manifolds. *Ann. Global Anal. Geom.* [**13**]{} (1995), no. 3, 289–301.
: Canonical bundles of complex nilmanifolds, with applications to hypercomplex geometry, *Math. Research Letters* [**16**]{} (2) (2009), 33–347, [arXiv: 0712.3863]{}.
: A local index theorem for non-Kähler manifolds, [*Mathematische Annalen*]{} [**284**]{} (1989), no. 4, 681–699.
: Characteristic classes of Hermitian manifolds, *Ann. of Math.* [**47**]{} (1946), 85–121.
: Stability of abelian complex structures. *Internat. J. Math.* [**17**]{} (2006), no. 4, 401–416.
: Quasi-Kähler manifolds with trivial Chern holonomy (2007). [*Math. Z.*]{} [**271**]{} (2012), 95–108. [arXiv:0807.1664]{}.
Chern-flat and Ricci-flat invariant almost Hermitian structures. [*Annals of Global Analysis and Geometry*]{} [**40**]{}, n. 1, (2011), 21-45.
: Quasi-Kähler Chern-flat manifolds and complex $2$-step nilpotent Lie algebras. [*Ann. Sc. Norm. Super. Pisa Cl. Sci.*]{} (5) Vol. XI (2012), 41–60. [arXiv:0911.5655]{}
: Hypercomplex nilpotent Lie groups. *Global differential geometry: the mathematical legacy of Alfred Gray (Bilbao, 2000)*, 310–314, Contemp. Math., 288, Amer. Math. Soc., Providence, RI, 2001.
: Hermitian connections and Dirac operators. *Boll. Un. Mat. Ital. B* (7) [**11**]{} (1997), no. 2, suppl., 257–288.
[Grantcharov D., Grantcharov G., Poon, Y. S.:]{} Calabi-Yau connections with torsion on toric bundles. [*J. Differential Geom.*]{} [**78**]{} (2008), no. 1, 13–32.
Geometry of compact complex homogeneous spaces with vanishing first Chern class. [*Adv. in Math.*]{} [**226**]{}, n. 4 (2011), 3136–3159.
: Harmonic morphisms between almost Hermitian manifolds. [*Boll. Un. Mat. Ital. B*]{} (7) [**11**]{} (1997), no. 2, suppl., 185–197.
Geometry of Hermitian manifolds. [arXiv:1011.0207]{}.
: Deformation of 2-step nilmanifolds with abelian complex structures. [*J. London Math. Soc.*]{} (2) [**73**]{} (2006), no. 1, 173–193.
: *Riemannian geometry and holonomy groups.* Pitman Research Notes in Mathematics Series, 201. Longman Scientific & Technical, Harlow; copublished in the United States with John Wiley & Sons, Inc., New York (1989). viii+201 pp.
: The Calabi-Yau equation on the Kodaira-Thurston manifold, [*J. Inst. Math. Jussieu*]{} [**10**]{} (2011), no.2, 437–447.
[^1]: The author was supported by the Project M.I.U.R. “Riemannian Metrics and Differentiable Manifolds” and by G.N.S.A.G.A. of I.N.d.A.M.
|
---
abstract: 'It is proven that if $G$ is a finite group, then $G^\omega$ has $2^\cc$ dense nonmeasurable subgroups. Also, other examples of compact groups with dense nonmeasurable subgroups are presented.'
author:
- 'F. Javier Trigos-Arrieta'
title: |
Products of finite groups\
and nonmeasurable subgroups
---
Introduction
=============
In [@SaStr1985], the authors asked whether every infinite compact group has a (Haar) nonmeasurable (dense) subgroup. That every Abelian infinite compact group does is proven in [@HR] (16.13(d)). That every non-metric compact group bigger than $\cc$ does follows from the fact that every such group has a proper pseudocompact subgroup [@Itz-Shakh-97], which in turn is nonmeasurable [@Comfort84] (6.14). Thus, the problem remains open only for non-abelian metric and non-metric groups of cardinality $\cc$. In this short note we prove the result in the abstract, and using [@CRT] (2.2) show that the unitary groups $\fU(n)$ also have dense nonmeasurable subgroups.
Unitary groups
===============
The result [@CRT] (2.2) states that if $K$ and $M$ are compact groups and $\varphi:K \to M$ is a continuous homomorphism onto, then the preimage of any (dense) nonmeasurable subgroup of $M$ is a (dense) nonmeasurable subgroup of $K$. Since the torus $\TT$ has plenty of (dense) nonmeasurable subgroups, and the determinant is a continuous homomorphism from any unitary group $\fU(n)$ [@HR] (2.7(b)) onto $\TT$, it follows that the unitary groups do have dense nonmeasurable subgroups.
Countable products of finite groups
====================================
Let $\sU$ be a free ultrafilter. Consider $\sI:=2^\omega \setminus \sU$. The collection $\sI$ will be called an [*ideal*]{}. The following properties are dual to those of an ultrafilter (a finite illustration is sketched after the list):
1. $A \subset \omega \implies \omega\setminus A \in \sI,$ or $A \in \sI$,
2. $A \in \sI \implies \omega\setminus A \not\in \sI,$
3. $A \in \sI, C \subseteq A \implies C \in \sI,$ and
4. $A, B \in \sI \implies A \cup B \in \sI.$
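Free ultrafilters on $\omega$ exist only non-constructively, but the four dual properties above can at least be illustrated on a finite set. The sketch below uses the principal ultrafilter $\{A : 3\in A\}$ on $\{0,\dots,5\}$ purely as a stand-in; the choice of set and of the distinguished point are assumptions made for the illustration only.

```python
from itertools import combinations

X = frozenset(range(6))
def powerset(s):
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

# Principal ultrafilter U = {A : 3 in A} and its dual ideal I = powerset(X) \ U.
I = [A for A in powerset(X) if 3 not in A]

assert all((X - A) in I or A in I for A in powerset(X))   # property 1
assert all((X - A) not in I for A in I)                   # property 2
assert all(C in I for A in I for C in powerset(A))        # property 3
assert all((A | B) in I for A in I for B in I)            # property 4
print("dual-ideal properties 1-4 hold for the principal example")
```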
For each $n \in \omega$, let $G_n$ be a non-trivial finite group, with identity $e_n$. Consider $G:=\times_{n < \omega} G_n$. If $x=(x_n) \in G$, denote by $\gs(x):=\{n < \omega:x_n\neq e_n\}$. If $A \subseteq \omega$, let $G_A:=\{x\in G:\gs(x) \subseteq A\}$. Finally, denote by $G_\sI:=\cup_{A\in \sI} G_A$. Clearly, $G_\sI$ is a subgroup of $G$, and because $\sU$ is a free ultrafilter, $G_\sI$ is dense in $G$.
[**Question (3.1)**]{} [*Is $G_\sI$ a measurable subgroup of $G$?*]{} We can answer this question in the negative if all $G_n$ are equal, say to $\Gamma$. Denote by $e$ the identity of $\Gamma$. First of all, we will prove that, in this case, $G/G_\sI\simeq\Gamma$. Let $x \in G$. For each $a\in \Gamma\setminus \{e\}$, denote by $\gs(x,a)$ those $n\in\gs(x)$ such that $x_n=a$. Notice therefore that $\gs(x)$ is the disjoint union of the $\gs(x,a)$ as $a$ runs through every non-identity element in $\Gamma$.
If $x\not \in G_\sI$, then $\gs(x) \not\in \sI$. We claim that there is a unique $a\in\Gamma\setminus \{e\}$ with $\gs(x,a)\not\in \sI$. For, if for each $a\in \Gamma\setminus \{e\}$, we had that $\gs(x,a)\in \sI$, then we would have $\gs(x)\in \sI$, a contradiction. Thus there is $a_0\in \Gamma\setminus \{e\}$ with $\gs(x,a_0)\not\in \sI$. Hence $\omega \setminus \gs(x,a_0)\in \sI$, and since $a\in \Gamma\setminus \{e,a_0\} \implies \gs(x,a) \subseteq \omega \setminus \gs(x,a_0)$, the properties for ideals show that $\cup_{a\in \Gamma\setminus \{e,a_0\}}\gs(x,a) \in \sI$. Now, define $y=(y_n)$ by $$y_n:=\left\{ \begin{array}{lr}
a_0^{-1}x_n,\: \mbox{if}\: n\in \cup_{a\in \Gamma\setminus
\{e,a_0\}}\gs(x,a), \\
e,\: \mbox{if}\: n \in \gs(x,a_0), \\
a_0^{-1},\: \mbox{otherwise.}
\end{array} \right.$$
Because $\gs(y)=\omega \setminus \gs(x,a_0) \in \sI$, it follows that $y \in G_\sI$. Set $\overline{a_0}=(t_n)$ by $t_n:=a_0$ for all $n<\omega$, i.e., the constant sequence $a_0$. We now show that $$x=\overline{a_0}\cdot y.$$ For, if $n\in \cup_{a\in \Gamma\setminus
\{e,a_0\}}\gs(x,a)$, then $t_n a_0^{-1}x_n=a_0a_0^{-1}x_n=x_n$. If $n \in \gs(x,a_0)$, then $t_n e=a_0e=a_0=x_n$. And if $n \not\in \cup_{a\in \Gamma\setminus
\{e\}}\gs(x,a)$, then $t_n a_0^{-1}=a_0a_0^{-1}=e=x_n$, as required.
This shows the following:
[**Theorem (3.2)**]{} [*If $\Gamma$ is a finite group, and $G:=\Gamma^\omega$, then $G/G_\sI\simeq\Gamma$.*]{}
Thus $G_\sI$ has finite index and therefore cannot have zero measure.
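The coordinate-wise decomposition $x=\overline{a_0}\cdot y$ used in the proof above can be checked mechanically on finitely many coordinates. The following sketch does this for the illustrative choice $\Gamma=S_3$ (elements as permutation tuples); the group, the random sample and the choice of $a_0$ are assumptions made only for the check.

```python
import itertools
import random

# Gamma = S_3, with elements as permutation tuples and composition as the product.
def compose(p, q):                        # (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

e = (0, 1, 2)
S3 = list(itertools.permutations(range(3)))

random.seed(0)
N = 1000                                  # finite truncation of omega
x = [random.choice(S3) for _ in range(N)] # a point of Gamma^N
a0 = (1, 0, 2)                            # the distinguished a_0 != e

# Define y coordinate-wise, following the three cases in the text.
y = []
for xn in x:
    if xn == a0:                          # n in sigma(x, a_0)
        y.append(e)
    elif xn == e:                         # the "otherwise" case
        y.append(inverse(a0))
    else:                                 # n in sigma(x, a) for some a != e, a_0
        y.append(compose(inverse(a0), xn))

assert all(compose(a0, yn) == xn for xn, yn in zip(x, y))
print("x = a0bar * y holds on all", N, "coordinates")
```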
[**Theorem (3.3)**]{} [*(Steinhaus-Weil Theorem) If $F$ is a measurable subset of a (locally) compact group $G$ with strictly positive (left Haar) measure, then $F \cdot F^{-1}:=\{x y^{-1}: x, y \in F\}$ contains a neighborhood of the identity of $G$. Thus, if $F$ is in addition a dense subgroup of $G$, then $F=G$.* ]{}
This is proven in [@weil40]. See also [@steinhaus20] and [@strom72].
[**Corollary (3.4).**]{} [*$G_\sI$ is not measurable.*]{}
If $G_\sI$ were measurable, then it would have strictly positive measure. By the above theorem, it would have to be equal to the whole $G$, clearly a contradiction.
Now, assume that $\Gamma$ is a simple (finite) non-Abelian group (for example, the alternating group $\AA_m$ on $m$ elements, with $m\geq 5$). Robert Bassett and the author have proved that the only normal subgroups of $G$ are of the form $G_\sI$ for some ideal $\sI$. If we continue assuming that $\sI$ is the complement in $2^\omega$ of a free ultrafilter, then it follows that $G_\sI$ is a [*maximal normal*]{} subgroup. Let $\varphi: G \to G/G_\sI$ be the natural map. Identify, by Theorem 1, $G/G_\sI$ with $\Gamma$ and $G_\sI$ with $e$. Choose $g\in \Gamma$, $g \neq e$, and denote by $B$ the subgroup of $\Gamma$ generated by $g$. Because $\Gamma$ is simple and non-Abelian, $\{e\} \subset B \subset \Gamma$ and these inclusions are proper. Set $H:=\varphi^{\leftarrow}[B]$. Thus $G_\sI \subset H \subset G$, with the above inclusions proper. Since $G_\sI$ is a maximal [*normal*]{} subgroup properly contained in $H$, it follows that $H$ is a non-normal subgroup of $G$. And since the inclusion $H \subset G$ is proper, it follows that $H$ is a non-normal [*proper*]{} subgroup of $G$. Hence, another application of the Steinhaus-Weil Theorem (3.3) implies the following:
[**Corollary (3.5)**]{} [*$H$ is a non-normal nonmeasurable subgroup of $G$.*]{}
[**Example (3.6)**]{} The condition that all $G_n$ are equal in Corollary (3.4) is necessary, as this example shows. Let $ \< t_n \> _{n< \omega}$ be an increasing sequence of non-zero numbers converging to 1, such that $g_m:=t_0\cdot t_1\cdot t_2 \cdots t_{m-1}$ converges to, say, $t \in (0,1)$ (for example, if $\Sigma_{n=0}^\infty a_n$ converges with $1> a_n \downarrow 0$, then $t_n:=1-a_n$ satisfies the condition, see Stromberg’s book [@strom81]). Now, pick a strictly increasing sequence of integers $ \< k_n \> _{n< \omega} $ such that $t_n \leq \frac{k_n-1}{k_n} < 1$. If $\tau_n:=\frac{k_n-1}{k_n}$, then $\gamma_m:=\tau_0\cdot \tau_1 \cdots \tau_{m-1}$ converges to, say, $\gamma \in [t,1)$. Set $G_n:=\AA_{k_n}$, and of course $G:=\times_{n<\omega} G_n$. Denote by $m$ the (Haar) measure on $G$. We claim that $m(G_\sI )=0$. To see this, denote by $1_n$ the identity of $G_n$. Set $\omega(n):=\omega \setminus n=\{n, n+1,...\}$, and $B_n:=\{x\in G:\omega(n)\subseteq \gs(x)\}$. Basically, $B_n$ consists of those $x$ whose first $n$ coordinates can be anything, but every coordinate after that must be different from the identity. Notice then that $B_n=G_0 \times G_1 \times \cdots \times G_{n-1} \times (\times_{k\geq n} (G_k \setminus \{1_k\}))$, hence $B_0 \subseteq B_1 \subseteq \cdots $, and therefore, $G\setminus B_0 \supseteq G\setminus B_1 \supseteq \cdots $. Since the measure of $G_n \setminus \{1_n\}$, in $G_n$, is $\frac{k_n-1}{k_n}$, it follows that $m(B_n)=\lim_{m\to \infty} \Pi_{j=n}^{m-1}\frac{k_j-1}{k_j}=(\frac{\tau_0\cdot \tau_1 \cdots \tau_{n-1}}{\tau_0\cdot \tau_1 \cdots \tau_{n-1}})(\lim_{m\to \infty}(\tau_n\cdots \tau_{m-1}))=(\frac{1}{\gamma_n})(\lim_{m\to \infty}(\tau_0\cdot \tau_1 \cdots \tau_{n-1}\tau_n\cdots \tau_{m-1}))=\frac{\gamma}{\gamma_n}$. Thus $m(G\setminus B_n)=1-\frac{\gamma}{\gamma_n}$. Since $\omega(n) \not \in \sI$, for all $n< \omega$, we have that $G_\sI \subseteq \bigcap_{n<\omega} (G\setminus B_n)$, which, by Proposition 2, Chapter 11 in [@Royden1968], has measure $\lim_{n<\omega} m(G\setminus B_n)=\lim_{n<\omega} (1-\frac{\gamma}{\gamma_n})=1-\frac{\gamma}{\lim_{n<\omega}\gamma_n}=1-\frac{\gamma}{\gamma}=0$. Therefore $G_\sI$, in this case, has measure 0, as required.
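The infinite-product bookkeeping in Example (3.6) can also be checked numerically. The sketch below assumes the illustrative choice $a_n=2^{-(n+1)}$ (hence $k_n=2^{n+1}$), truncates the products at a finite depth, and verifies that $\prod_{j\geq n}\tau_j$ agrees with $\gamma/\gamma_n$ and that $m(G\setminus B_n)=1-\gamma/\gamma_n$ tends to 0.

```python
import numpy as np

M = 60                                           # truncation depth for the products
k = np.array([2 ** (j + 1) for j in range(M)])   # from a_n = 2^-(n+1): k_n = 2^(n+1)
tau = (k - 1) / k                                # tau_n = (k_n - 1)/k_n >= t_n

gamma = np.prod(tau)                             # ~ gamma (up to truncation)
for n in range(10):
    m_Bn_direct = np.prod(tau[n:])               # m(B_n) = prod_{j >= n} tau_j
    gamma_n = np.prod(tau[:n]) if n > 0 else 1.0
    assert abs(m_Bn_direct - gamma / gamma_n) < 1e-12
    print(f"n={n}: m(B_n)={m_Bn_direct:.6f}, m(G\\B_n)={1 - m_Bn_direct:.6f}")
# The printed m(G\B_n) = 1 - gamma/gamma_n decreases towards 0, so m(G_I) = 0.
```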
Nevertheless, Corollary (3.4) can be improved as follows, by using [@CRT] (2.2):
[**Corollary (3.7)**]{} [*For each $n \in \omega$, let $G_n$ be a non-trivial finite group, and suppose that $G_n=\Gamma$ for some fixed group $\Gamma$ and infinitely many $n \in \omega$. Then $G:=\times_{n < \omega} G_n$ has nonmeasurable subgroups.*]{}
Let $\omega_\Gamma:=\{n \in \omega: G_n =\Gamma\}$. By Corollary 1, $\Gamma^\omega$ has nonmeasurable subgroups, and since $G_\Gamma:=\times_{n \in \omega_\Gamma} G_n$ is topologically isomorphic to $\Gamma^\omega$, it too has nonmeasurable subgroups. Since $G=\times_{n < \omega} G_n = G_\Gamma \times (\times_{n \in \omega\setminus \omega_\Gamma} G_n)$, the projection of $G$ onto the first factor yields the result.
Final Remarks
==============
1. That unitary groups have nonmeasurable subgroups was obtained during a wonderful dinner in Middletown back in 2002, when the author met with his teachers and friends, Wis Comfort, Tony Hager and Lew Robertson.
2. Faculty in the Department of Mathematics at CSUB made the author aware of a mistake in an older version of Example 1.
3. S. Hernández has communicated to the author that he, K. Hofmann and S. Morris have independently generalized most of the results in this article, with quite different techniques.
[10]{}
E. Hewitt and K. A. Ross, Springer-Verlag, 1979.
G. Itzkowitz and D. Shakhmatov, (1997), 497–501.
[*Department of Mathematics*]{}\
[*California State University, Bakersfield*]{}\
[*Bakersfield, California, USA*]{}\
[*e-mail: jtrigos@csub.edu*]{}
|
---
abstract: 'We present a relativistic quantum calculation at first order in perturbation theory of the differential cross section for a Dirac particle scattered by the magnetic field of a solenoid. The resulting cross section is symmetric in the scattering angle as those obtained by Aharonov and Bohm (AB) in the string limit and by Landau and Lifshitz (LL) for the non relativistic case. We show that taking $pr_0\sin{(\theta/2)}/\hbar\ll 1$ in our expression of the differential cross section it reduces to that one reported by AB, and if additionally we assume $\theta \ll 1$ our result becomes the one obtained by LL. However, these limits are explicitly singular in $\hbar$ as opposed to our initial result. We analyse the singular behavior in $\hbar$ and show that the perturbative Planck limit ($\hbar \rightarrow 0$) is consistent, contrarily to those of the AB and LL expressions.'
address: |
Instituto de F[í]{}sica, Universidad Nacional Autónoma de México.\
Apartado Postal 20364, 01000, México, D.F. México.
author:
- 'Gabriela Murgu[í]{}a [^1] and Mat[í]{}as Moreno [^2]'
title: 'Quantum effects in the scattering by the magnetic field of a solenoid.'
---
We know that in the classical scattering of charged particles by magnetic fields the particles describe circular trajectories with fixed radii, so they have a preferential movement direction. In this Letter, the relativistic quantum version of the problem is studied in the lowest order of perturbation theory and a symmetric behavior in the scattering angle is found for a solenoidal magnetic field. We study some interesting limit cases of the differential cross section and compare them with previous non relativistic results reported by Aharonov and Bohm (AB) [@AB] and Landau and Lifshitz (LL) [@LL].
As is known, the Aharonov-Bohm effect [@AB] is considered one of the most important confirmed [@Chambers-Tonomura] predictions of quantum mechanics because it shows that the vector potential has a physical significance and can be viewed as more than a mathematical convenience. The interest in this effect has increased recently [@Varios], both because of basic reasons that have changed the understanding of gauge fields and forces in nature and because it has many connections with new physics, like the quantum Hall effect [@QHE], mesoscopic physics [@MesoscopicP] and the physics of anyons [@Anyons].
We want to point out that although the present calculation seems to be elementary, it is in fact a calculation that involves certain technical problems, as we will show. In this work, we will follow the Bjorken and Drell convention [@BD].
Let us consider the scattering of a Dirac particle by the magnetic field of a solenoid with a constant magnetic flux. Note that this is a problem in which free particle asymptotic states can be used. Also notice that the global phase $\exp{(ie\Phi\theta/hc)}$ in the free particle wave function (because of the presence of a pure gauge field in the exterior of the solenoid) does not contribute to the $S$ matrix.
Consider a long solenoid of length $L$ and radius $r_0 \ll L$ centered along the ${\bf \hat{\i}_3}$ axis. Inside the solenoid, where $r<r_0$, the magnetic field is uniform, ${\bf B}=B_0{\bf \hat{\i}_3}$, with $B_0$ being a constant, while outside the solenoid, where $r>r_0$, the magnetic field is null. Replacing the vector potential that describes this magnetic field in the lowest order in $\alpha$ of the $S$ matrix for Dirac particle solutions and multiplying by the phase space factors, we obtain the differential cross section per unit length of the solenoid: $$\frac{d\sigma}{dx_3 d\theta} =
\frac{1}{f} \frac{\hbar}{c^2} \left({\frac{e\Phi}{r_0}}\right)^2
\frac{{\left|
J_1(2\frac{p}{\hbar}r_0\left|\sin{\frac{\theta}{2}}\right|) \right|}^2}
{8\pi p^3 \sin^4{\frac{\theta}{2}}},
\label{dsigma}$$ which has the same form whether or not the final polarization of the beam is actually measured ($f=1$ or $f=2$), so this result does not depend on the final polarization. In Eq. (\[dsigma\]), $J_1$ are the first order Bessel functions [@Arfken], $\theta$ is the scattering angle, $p$ stands for the momentum of the incident particle and $\Phi = \pi r_{0}^{2} B_0$ is the total magnetic flux.
As can be observed, the differential cross section is symmetric in $\theta$. This is reminiscent of the Stern-Gerlach result, in which an unpolarized beam interacting with an inhomogeneous magnetic field is equally split into two parts, each one with opposite spin. But, as we have mentioned before, Eq. (\[dsigma\]) does not depend on the final polarization of the particles. Thus, this symmetric behavior in $\theta$ should be a consequence of the perturbation theory, although this symmetry is also present in non-perturbative results [@AB; @LL].
We want to study the limit case of small scattering angles. So, if we assume $pr_0\sin{(\theta/2)}/\hbar\ll 1$, then the cross section of Eq. (\[dsigma\]) reduces to $$\left.{\frac{d\sigma}{dx_3 d\theta}}
\right|_{\frac{p}{\hbar}r_0\sin{\frac{\theta}{2}} \ll 1} =
\frac{1}{f}
\frac{e^2\Phi^2}{8\pi c^2 \hbar p \sin^2{\frac{\theta}{2}}},
\label{secc.mm.r}$$ which agrees with the result reported by Aharonov and Bohm [@AB] when $e\Phi/2\hbar c \ll 1$. If we additionally impose the condition $\theta \ll 1$ we obtain $$\left.{\frac{d\sigma}{dx_3 d\theta}}
\right|_{\frac{p}{\hbar}r_0\sin{\frac{\theta}{2}} \ll 1, \theta \ll 1}
= \frac{1}{f} \frac{e^2\Phi^2}{2\pi c^2 \hbar p \theta^2},
\label{secc.mm.h.c}$$ which is precisely the result reported by Landau and Lifshitz [@LL].
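The reduction of Eq. (\[dsigma\]) to Eqs. (\[secc.mm.r\]) and (\[secc.mm.h.c\]) can be checked numerically. The sketch below works in arbitrary units with $\hbar=c=e=f=1$ and illustrative values of $p$, $r_0$ and $\Phi$ (assumptions made only for this check), using `scipy.special.j1` for the Bessel function.

```python
import numpy as np
from scipy.special import j1

hbar = c = e = f = 1.0                 # arbitrary units; f = 1 case
p, r0, Phi = 1.0, 1e-3, 2.5            # chosen so that p*r0*sin(theta/2)/hbar << 1

def dsigma_full(theta):
    """Eq. (dsigma): first-order cross section per unit length of the solenoid."""
    s = np.abs(np.sin(theta / 2.0))
    x = 2.0 * p * r0 * s / hbar
    return (hbar / (f * c**2)) * (e * Phi / r0)**2 * j1(x)**2 / (8 * np.pi * p**3 * s**4)

def dsigma_AB(theta):
    """Eq. (secc.mm.r): the small p*r0*sin(theta/2)/hbar limit (AB form)."""
    return e**2 * Phi**2 / (f * 8 * np.pi * c**2 * hbar * p * np.sin(theta / 2.0)**2)

def dsigma_LL(theta):
    """Eq. (secc.mm.h.c): the additional small-angle limit (LL form)."""
    return e**2 * Phi**2 / (f * 2 * np.pi * c**2 * hbar * p * theta**2)

print("theta    full/AB     AB/LL")
for theta in (0.01, 0.1, 0.5, 1.0):
    print(f"{theta:5.2f}  {dsigma_full(theta)/dsigma_AB(theta):9.6f}  {dsigma_AB(theta)/dsigma_LL(theta):9.6f}")
# full/AB -> 1 whenever p*r0*sin(theta/2)/hbar << 1; AB/LL -> 1 only for theta << 1.
```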
Note that the cross section is singular when $\hbar \rightarrow 0$ for both limiting cases resulting in a classical limit that apparently diverges. We want to point out that it does not make sense to take the Planck’s limit ($\hbar \rightarrow 0$) in Eq. (\[secc.mm.r\]) or in Eq. (\[secc.mm.h.c\]), because both expressions were obtained assuming the condition $pr_0\sin{(\theta/2)}/\hbar \ll 1$. Hence, we have to take the classical limit using the expression for the differential cross section given in Eq. (\[dsigma\]). Defining $x = 2pr_0{\left|\sin{(\theta/2)}\right|}/\hbar = r_0q$, we observe that the limit $\hbar \rightarrow 0$ implies $x \rightarrow \infty$ or $pr_0 \rightarrow \infty$ [@Zkarzhinsky-97] for fixed $\theta$. Using the asymptotic behavior of the Bessel function [@Arfken]
$$\lim_{x \rightarrow \infty}{J_1(x)} =
- \sqrt{\frac{2}{\pi x}}\cos{\left(x - \frac{3}{4}\pi\right)},
\hspace{.5cm} x \gg \frac{3}{8};$$ the resulting Planck classical limit of Eq.(\[dsigma\]) is identically zero for fixed $e, p, r_0, \Phi,$ and $\theta$: $$\lim_{\hbar \rightarrow 0}\frac{d\sigma}{dx_3 d\theta}
= \lim_{\hbar \rightarrow 0}
\frac{\hbar^2}{f} \left({\frac{e\Phi}{2\pi c}}\right)^2
\frac{\cos^2{\left({2\frac{p}{\hbar}r_0
\left|\sin{\frac{\theta}{2}}\right| -
\frac{3}{4}\pi}
\right)}
}
{2 r_{0}^{3} p^4 \left|{\sin^5{\frac{\theta}{2}}}\right|}
= 0
\label{climit-dsigma}$$ and does not show any singularity in $\hbar$ as compared to those of small scattering angles, Eq. (\[secc.mm.h.c\]), or small radii of the solenoid, Eq. (\[secc.mm.r\]). Notice that the perturbative result gives a consistent finite classical limit and reduces to the eikonal and the zero radius limits and it shows that taking the classical limit of such results is misleading. So, it is worthwhile to notice that the order in which the limits are taken is crucial.
The apparent difference in the classical limits comes from the fact that in taking the limit $\hbar \rightarrow 0$ of the perturbative result, Eq. (\[dsigma\]), the Bessel function decreases as $J_1(x)
\sim 1/\sqrt{x}$ and it does generate an $\hbar$ contribution to the cross section. On the other hand, if one begins by taking the small angle or the small radius limit, the Bessel function approximates to $J_1(x) \sim x$, and this behaves like $1/\hbar$. The overall difference between these two procedures is an $\hbar^3$ factor. It is important to notice that loop corrections to the perturbative expansion do not modify the $\hbar$ behavior of the physical amplitude, as can be proved with the use of the loop expansion.
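The $\hbar^3$ bookkeeping can be made explicit numerically: replacing $|J_1(x)|^2$ in Eq. (\[dsigma\]) by its large-$x$ mean $1/(\pi x)$ gives an envelope scaling as $\hbar^2$ when $\hbar\rightarrow 0$ at fixed $\theta$, while Eq. (\[secc.mm.r\]) scales as $1/\hbar$. The sketch below (same arbitrary units, with illustrative parameters) estimates both scaling exponents from a log-log fit.

```python
import numpy as np

hbar_vals = np.logspace(-3, -1, 20)       # hbar -> 0 (arbitrary units)
c = e = f = 1.0
p, r0, Phi, theta = 1.0, 1.0, 2.5, 0.7    # fixed, illustrative values
s = abs(np.sin(theta / 2.0))

# Envelope of Eq. (dsigma): |J_1(x)|^2 replaced by its large-x mean 1/(pi*x).
x = 2.0 * p * r0 * s / hbar_vals
envelope = (hbar_vals / (f * c**2)) * (e * Phi / r0)**2 * (1.0 / (np.pi * x)) / (8 * np.pi * p**3 * s**4)

# Small-x limit, Eq. (secc.mm.r):
ab = e**2 * Phi**2 / (f * 8 * np.pi * c**2 * hbar_vals * p * s**2)

slope_env = np.polyfit(np.log(hbar_vals), np.log(envelope), 1)[0]
slope_ab = np.polyfit(np.log(hbar_vals), np.log(ab), 1)[0]
print(f"envelope ~ hbar^{slope_env:+.2f}, AB limit ~ hbar^{slope_ab:+.2f}")
print(f"difference ~ hbar^{slope_env - slope_ab:+.2f}   (the hbar^3 factor)")
```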
One can obtain a non divergent expression for the Landau and Lifshitz and the Aharonov and Bohm results when $\hbar \rightarrow 0$ if one quantizes $\Phi$, the magnetic flux. Imposing the magnetic flux quantization condition, $\Phi = n\Phi_0$, where $\Phi_0 = hc/e = 4.318
\times 10^{-7}$ gauss cm$^2$, the cross section of Eq. (\[dsigma\]) takes the form $$\frac{d\sigma}{dx_3 d\theta} =
n^2 \hbar^3 \frac{\pi}{f}
\frac{{\left|
J_1(2\frac{p}{\hbar}r_0\left|\sin{\frac{\theta}{2}}\right|)
\right|}^2}
{2 r_0^2 p^3 \sin^4{\frac{\theta}{2}}},$$ which, apart from being independent of the charge of the particles, is the cross section of a purely quantum effect. Cast in this way there is no singular behavior in $\hbar$, in contradistinction to the form that Landau and Lifshitz report. In particular, for the case of small scattering angles, it takes the form $$\left.{\frac{d\sigma}{dx_3 d\theta}}\right|_{\theta \ll 1} =
\frac{8 \pi^2 \hbar n^2}{f p \theta^2},$$ which also has a null classical limit.
Finally, we want to point out that although our result is consistent in the sense that the Aharonov and Bohm and the Landau and Lifshitz results are recovered, there is no direct classical correspondence via the Planck limit (see Eq. (\[climit-dsigma\])), because in particular the cross section is symmetric in $\theta$. This problem is also shared by the AB and LL solutions and is possibly solved by higher order corrections in the [*external*]{} magnetic field.
Acknowledgments {#acknowledgments .unnumbered}
===============
We want to thank the helpful comments of A. Rosado. This work was partially supported by CONACyT (3097P-E), DGAPA-UNAM (IN118600) and DGEP-UNAM.
[10]{}
Y. Aharonov and D. Bohm, Phys. Rev. [**115**]{}, 485 (1959).
L. D. Landau and E. M. Lifshitz, [*Quantum Mechanics (non relativistic theory)*]{}, 2nd ed. (Pergamon Press, Oxford, 1977).
R. G. Chambers, Phys. Rev. Lett. [**5**]{}, 3 (1960); N. Osakabe [*et al.*]{}, Phys. Rev. [**A34**]{}, 815 (1986).
V.G. Bagrov, D.M. Gitman, and V.B. Tlyachev, Nucl. Phys. [**B605**]{}, 425 (2001); M. V. Berry, Eur. J. Phys. [**1**]{}, 240 (1980); M. Boz and N. K. Pak, Phys. Rev. [**D62**]{}, 045022 (2000); M. Gomes, J. M. Malbouisson and A. J. da Silva., Phys. Lett. [**A236**]{}, 373 (1997); C. R. Hagen, Phys. Rev. Lett. [**64**]{}, 503 (1990); C. R. Hagen, Phys. Rev. [**D52**]{}, 2466 (1995); S. Sakoda and M. Omote, J. Math. Phys. [**38**]{}, 716 (1997); J. Audretsch and V. D. Zkarzhinsky, Found. Phys. [**28**]{}, 777 (1998).
R. B. Laughlin, Phys. Rev. Lett. [**50**]{}, 1395 (1983); B. I. Halperin, Phys. Rev. Lett. [**52**]{}, 1583 (1984); P. Giacconi and R. Soldati, J. Phys. [**A33**]{}, 5193 (2000).
J. Liu [*et al.*]{}, Phys. Rev. [**B48**]{}, 15148 (1993); P. G. N. de Vegvar [*et al.*]{}, Phys. Rev. [**B40**]{}, 3491 (1989); G. Timp [*et al.*]{}, Phys. Rev. [**B39**]{}, 6227 (1989).
S. Ouvry, Phys. Rev. [**D50**]{}, 5296 (1994); C. Chou, L. Hua, and G. Amelino-Camelia, Phys. Rev. Lett. [**B286**]{}, 329 (1992) and references there in; C. Chou, Phys. Rev. [**D44**]{}, 2533 (1991). A. Comtet, S. Mashkevuciha and S. Ouvry, Phys. Rev. [**D52**]{}, 2594 (1995); M. Gomes and A. J. da Silva., Phys. Rev. [**D57**]{}, 3579 (1998); Q. Lin, J. Phys. [**A33**]{}, 5049 (2000).
J. D. Bjorken and S. D. Drell, [*Relativistic Quantum Mechanics*]{} (McGraw-Hill, New York, 1964).
G. Arfken, [*Mathematical Methods for Physicists*]{}, 3rd ed. (Academic Press, Oxford, 1970).
V. D. Zkarzhinsky and J. Audretsch, J. Phys. A [**30**]{}, 7603 (1997).
[^1]: E-mail: gabriela@ft.fisica.unam.mx
[^2]: E-mail: matias@fisica.unam.mx
|
---
abstract: 'The independent measurement of the Hubble constant with gravitational-wave standard sirens will potentially shed light on the tension between the local distance ladders and Planck’s experiments. Therefore, a thorough understanding of the sources of systematic uncertainty for the standard siren method is crucial. In this paper, we focus on two scenarios that will potentially dominate the systematic uncertainty of standard sirens. First, simulations of electromagnetic counterparts of binary neutron star mergers suggest aspherical emissions, so the binaries available for the standard siren method can be selected by their viewing angles. This selection effect can lead to $\gtrsim 2\%$ bias in the Hubble constant measurement even with mild selection, making it difficult for the standard siren method to resolve the tension in the Hubble constant. Second, if the binary viewing angles are constrained by the electromagnetic counterpart observations but the bias of the constraints is not controlled to under $\sim 10^{\circ}$, the resulting systematic uncertainty in the Hubble constant will be $>3\%$. In addition, we find that neither of the systematics can be fully disclosed by the viewing angle measurement from gravitational-wave observations. Compared to the known dominant systematic uncertainty for standard sirens, the gravitational-wave calibration uncertainty, the effects from the viewing angle can be more prominent.'
author:
- 'Hsin-Yu Chen'
bibliography:
- 'references.bib'
title: Systematic uncertainty of standard sirens from the viewing angle of binary neutron star inspirals
---
[***Introduction***–]{} Gravitational-wave (GW) standard sirens provide an independent way to measure the Hubble constant ($H_0$), which is crucial for our understanding of the evolution of the Universe [@1986Natur.323..310S; @Abbott:2017xzu]. Currently, the $H_0$ measurements from cosmic microwave background [@Aghanim:2018eyx] and some local distance ladders [@2019ApJ...876...85R; @2019arXiv190704869W; @2020ApJ...891L...1P] appear to be inconsistent at $>2\sigma$ level. Independent $H_0$ measurement with the standard siren method has shown its potential to resolve the inconsistency [@Abbott:2017xzu; @2018Natur.562..545C].
GW observations of compact binary mergers probe the luminosity distance ($D_L$) of the mergers directly. If the mergers also have electromagnetic (EM) counterparts [@GBM:2017lvd], e.g. short gamma-ray bursts (GRBs) or kilonova emissions that come with binary neutron star mergers (BNSs), the observation of the counterparts could allow for precise sky localization of the mergers and identification of the host galaxies [@2005ApJ...629...15H; @2006PhRvD..74f3006D]. With the luminosity distance of the GW source and the redshift of the host galaxy, cosmological parameters can be constrained. This is the so-called standard siren method with the use of EM counterparts. GW170817 was the first successful standard siren [@Abbott:2017xzu]. Several forecasts predict that a $2\%$ $H_0$ measurement can be achieved by combining $\sim$50 BNSs with identified hosts in a few years [@2013arXiv1307.2638N; @2018Natur.562..545C; @2019PhRvL.122f1105F].
In order to resolve the $H_0$ controversy, the systematic uncertainty in the standard siren method has to be well-understood. One dominant source of systematics comes from the calibration of the amplitude measurement of GW signals. The calibration uncertainty currently leads to $\leq 2\%$ systematics in the GW distance measurement, while this uncertainty is expected to be reduced in the future [@2016RScI...87k4503K; @2020arXiv200502531S]. Another source of systematics is the reconstruction of the peculiar velocity fields around the host galaxies [@2020MNRAS.492.3803H; @2019arXiv190908627M; @2020MNRAS.tmp.1270N]. However, most of the BNSs will be detected by Advanced LIGO-Virgo beyond 100 Mpc, where the effect of peculiar motions on the galaxy redshift measurement becomes less relevant. Other known sources of systematic uncertainty, e.g. the accuracy of GW waveforms [@Abbott:2018wiz], are expected to play a secondary role.
In this paper, we highlight two sources of systematic uncertainties for standard sirens that have not been thoroughly discussed before. Both of the systematics are related to the EM counterpart observations and the [viewing angle]{}of the binaries ($\zeta$) [^1]: First, simulations of BNSs suggest that their EM emissions are likely aspherical [@2011ApJ...736L..21R; @2017LRR....20....3M; @2019MNRAS.489.5037B; @2020arXiv200200299D]. For example, the brightness of kilonovae can have a factor of 2-3 angular dependent variation. The color of kilonovae can also change with the [viewing angle]{}. The variations lead to angular dependent EM observing probability for BNSs (e.g., [@2020arXiv200406137S]). If this EM [viewing angle]{}selection effect is not accounted for correctly, $H_0$ measurement will be biased after combining multiple standard sirens. Second, EM observations of BNSs provide constraint on the [viewing angle]{}. The [viewing angle]{}of BNS GW170817 [@TheLIGOScientific:2017qsa] has been reconstructed from the profiles of its EM emissions [@2018ApJ...860L...2F; @2020ApJ...888...67D] and from the observations of the jet motions [@2018Natur.561..355M]. These reconstructions help breaking the degeneracy between the luminosity distance and [inclination angle]{}of BNSs in GW parameter estimations [@2019PhRvX...9c1028C], improving the precision of distance measurement, and reducing the $H_0$ measurement uncertainty [@2017ApJ...851L..36G; @2019NatAs...3..940H]. However, if the EM constraints on the [viewing angle]{}are systematically biased, the distance and $H_0$ estimation will also be biased.
We find that both of the systematics can yield significant bias in $H_0$ measurement, undermining the standard siren’s potential to resolve the $H_0$ tension. Since both of the scenarios we discuss originate from the uncertainty of EM emissions, we also explore if it is possible to independently measure the systematics by analyzing the GW [viewing angle]{}estimations. Unfortunately, most of the events suffer from the large uncertainty of the estimations and the systematics can be difficult to disclose.
[***Simulations***–]{} We simulate 1.4[M$_\odot$]{}-1.4[M$_\odot$]{}BNS detections with the [`IMRPhenomPv2`]{}waveform and assumed a network signal-to-noise ratio of 12 GW detection threshold. The BNS astrophysical rate does not evolve, and the BNSs are uniformly distributed in comoving volume before detections. We use Advanced LIGO-Virgo O4 sensitivity [@Aasi:2013wya] for the simulations [^2]. Planck cosmology is assumed ($H_0=67.4$ km/s/Mpc, $\Omega_m=0.315$, $\Omega_k=0$) [@Aghanim:2018eyx]. We then use the simulated detections $\mathcal{D}_{\rm GW}$ to calculate the distance-inclination angle posteriors, $p(D_L,{\ensuremath{\theta_\mathrm{JN}}\xspace}|\mathcal{D}_{\rm GW})$, with the algorithms developed in [@2019PhRvX...9c1028C]. After marginalizing $p(D_L,{\ensuremath{\theta_\mathrm{JN}}\xspace}|\mathcal{D}_{\rm GW})$ over the [inclination angle]{}, we use the distance posteriors for the estimation of $H_0$. Following the methods in [@2018Natur.562..545C], we combine multiple $H_0$ posteriors from different detections to produce the final $H_0$ posterior. We repeat the simulations 100 times and report the average for the results throughout this paper. [***Systematics from EM viewing angle selection effect***–]{} If the EM counterpart emissions are aspherical, BNSs with some viewing angles could be easier to observe in EM than from other directions. The subset of BNSs with available EM counterparts for standard siren will then be selected. Suppose the data from GW and EM are denoted as $\mathcal{D}_{\rm GW}$ and $\mathcal{D}_{\rm EM}$ respectively, one can follow [@2018Natur.562..545C; @2019MNRAS.486.1086M] to write down the $H_0$ likelihood for an event as: $$\label{eq:likelihood}
p(\mathcal{D}_{\rm GW},\mathcal{D}_{\rm EM}|H_0)=\frac{\displaystyle\int p(\mathcal{D}_{\rm GW}|\vec{\Theta})p(\mathcal{D}_{\rm EM}|\vec{\Theta})p_{\rm pop}(\vec{\Theta}|H_0)d\vec{\Theta}}{\displaystyle\int p_{\rm det}^{\rm GW}(\vec{\Theta})p_{\rm det}^{\rm EM}(\vec{\Theta})p_{\rm pop}(\vec{\Theta}|H_0)d\vec{\Theta}},$$ where $\vec{\Theta}$ represents all the binary parameters, such as the mass, spin, luminosity distance, sky location, and [inclination angle]{}etc., $$\label{eq:det}
p_{\rm det}(\vec{\Theta})\equiv \displaystyle \int_{\mathcal{D}>{\rm Threshold}} p(\mathcal{D}|\vec{\Theta})d\mathcal{D},$$ and $p_{\rm pop}(\vec{\Theta}|H_0)$ is the population distribution of binaries with parameters $\vec{\Theta}$ in the Universe (also see [@2019MNRAS.486.1086M] for more details). If the EM observing probability depends on the binary parameters (such as the viewing angle), $p(\mathcal{D}_{\rm EM}|\vec{\Theta})$ has to change accordingly and Equation \[eq:likelihood\] has to be reevaluated. However, if such a dependency is unknown or ignored, Equation \[eq:likelihood\] and the combined $H_0$ posteriors from multiple events will be biased.
Here we explore two examples of EM observing probability dependency on the [viewing angle]{} [^3]: First, we assume only BNSs with [viewing angle]{}less than $\zeta_{\rm max}$ are observable in EM. Smaller $\zeta_{\rm max}$ represents stronger selection since the [viewing angle]{}is more limited. In Figure \[fig:selection\] we show the 1-$\sigma$ uncertainty in $H_0$ for different $\zeta_{\rm max}$ if 50 events are combined. Without knowing the selection on [viewing angle]{}, we find the $H_0$ measurement significantly biased even if $\zeta_{\rm max}$ is as large as $\sim 60^{\circ}$ (the band *W/o correction*). Only as a demonstration, we also show the $H_0$ uncertainty assuming the [viewing angle]{}selection $\zeta_{\rm max}$ is perfectly known (the band *With correction*). If $\zeta_{\rm max}$ is known, $p(\mathcal{D}_{\rm EM}|\vec{\Theta})$ in Equation \[eq:likelihood\] is 0 when $\zeta > \zeta_{\rm max}$.
On the other hand, not all EM emissions have a sharp decline beyond a [viewing angle]{}. Here we also consider a second example, in which the EM observing probability is a continuous function of [viewing angle]{}: $E (\zeta)=0.5 ({\rm cos}(\zeta)+1)$. With this assumption, the EM observing probability is 1 for face-on binaries, and 0.5 for edge-on binaries. Without correction, we find the 1-$\sigma$ uncertainty in $H_0$ for 50 events lying between $[67.5,70.2]$km/s/Mpc, equivalent to $\sim 2\%$ bias in $H_0$.
![\[fig:selection\] Hubble constant measurement uncertainty (1-$\sigma$) from 50 standard sirens as a function of the maximum viewing angle of the binaries. The Hubble constant used for the simulations is $67.4$ km/s/Mpc. If the maximum viewing angle is known, appropriate corrections can be applied (as described in the texts) and the uncertainty is the *With correction* band. In contrast, the *W/o correction* band shows the level of bias if the maximum viewing angle is unknown. For reference, the two horizontal bands denote the $H_0$ reported by Riess et al. [@2019ApJ...876...85R] ($74.03\pm 1.42$ km/s/Mpc) and Planck [@Aghanim:2018eyx] ($67.4 \pm 0.5$ km/s/Mpc). ](select_inc_h0_bias.pdf){width="1.0\columnwidth"}
Since the EM observing probability is unclear, a possible way to assess the [viewing angle]{}selection effect is to analyze the GW [viewing angle]{}estimation of the events with EM counterparts. We try to estimate $\zeta_{\rm max}$ from the first example above with data from $N$ events {${\mathcal{D}_1,\mathcal{D}_2...\mathcal{D}_N}$}: $$\begin{aligned}
\label{eq:selectionmeasure}
\nonumber &p(\zeta_{\rm max}|\mathcal{D}_1,\mathcal{D}_2...\mathcal{D}_N)=\frac{p(\zeta_{\rm max})\displaystyle \prod_{k=0}^{N} p(\mathcal{D}_k|\zeta_{\rm max})}{\displaystyle\prod_{k=0}^{N} p(\mathcal{D}_k)} \\
\nonumber &=p(\zeta_{\rm max})\displaystyle\prod_{k=0}^{N}\displaystyle \int_0^{\pi/2}\frac{p(\zeta|\mathcal{D}_k)p(\zeta_{\rm max}|\zeta,\mathcal{D}_k)}{p(\zeta_{\rm max})}d\zeta \\
\nonumber &=p(\zeta_{\rm max})\displaystyle\prod_{k=0}^{N}\displaystyle \int_0^{\pi/2}\frac{p(\zeta|\mathcal{D}_k)p(\zeta|\zeta_{\rm max})}{p(\zeta)}d\zeta \\
&=p(\zeta_{\rm max})\displaystyle\prod_{k=0}^{N}\frac{\displaystyle \int_0^{\zeta_{\rm max}}p(\zeta|\mathcal{D}_k)d\zeta}{\displaystyle \int_0^{\zeta_{\rm max}} p(\zeta) d\zeta}.\end{aligned}$$ The first line comes from the fact that the events are independent. The third line considers $p(\zeta_{\rm max}|\zeta,\mathcal{D}_k)=p(\zeta_{\rm max}|\zeta)$, and the last line takes $p(\zeta|\zeta_{\rm max})\propto p(\zeta)$ for $\zeta<\zeta_{\rm max}$. Equation \[eq:selectionmeasure\] can then be calculated from the prior on [viewing angle]{}$p(\zeta)$ [@2011CQGra..28l5023S] and the GW [viewing angle]{}posterior $p(\zeta|\mathcal{D}_k)$ for each event [@2019PhRvX...9c1028C]. Without any prior on $\zeta_{\rm max}$ (i.e. $p(\zeta_{\rm max})$ is taken as a constant), in Figure \[fig:selectionmeasure\] we show the 1-$\sigma$ uncertainty of the $\zeta_{\rm max}$ posteriors (Equation \[eq:selectionmeasure\]) as a function of the maximum EM [viewing angle]{}of 50 simulated BNSs.
![\[fig:selectionmeasure\] Maximum [viewing angle]{}$\zeta_{\rm max}$ estimated from 50 BNSs’ [viewing angle]{}GW posteriors. The band denotes the $1\sigma$ uncertainty of the estimations. Small simulated $\zeta_{\rm max}$ are not estimated accurately due to large uncertainty of the [viewing angle]{}posteriors. The grey dashed line is the equal-axis line to guide the eye. ](selectionmeasure.pdf){width="1.0\columnwidth"}
We find that $\zeta_{\rm max}$ can only be confined to $\sim 20^{\circ}$ 1-$\sigma$ uncertainty. In addition, the estimated $\zeta_{\rm max}$ is biased for small $\zeta_{\rm max}$ because GW [viewing angle]{}posteriors typically peak around $30^{\circ}$ with about $20^{\circ}$ uncertainty [@2019PhRvX...9c1028C]. Small $\zeta_{\rm max}$ is therefore difficult to estimate even if all of the events with EM counterparts are face-on/off.
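Equation \[eq:selectionmeasure\] is straightforward to evaluate on a grid once per-event posteriors are available. The following is a minimal sketch with synthetic Gaussian-like posteriors and a $\sin\zeta$ stand-in for the prior $p(\zeta)$; both are illustrative assumptions replacing the actual GW posteriors and detected-binary prior used in the text.

```python
import numpy as np

rng = np.random.default_rng(1)
zeta = np.linspace(1e-3, np.pi / 2, 500)       # viewing-angle grid
dz = zeta[1] - zeta[0]
prior = np.sin(zeta)                           # stand-in for p(zeta) of detected BNSs
prior /= prior.sum() * dz

# Synthetic per-event posteriors p(zeta|D_k): broad Gaussians around angles drawn
# below an assumed zeta_max of 60 deg (illustrative values only).
zeta_max_true = np.deg2rad(60.0)
true_angles = rng.uniform(0.0, zeta_max_true, size=50)
posteriors = []
for z0 in true_angles:
    post = np.exp(-0.5 * ((zeta - z0) / np.deg2rad(20.0)) ** 2)   # ~20 deg GW width
    posteriors.append(post / (post.sum() * dz))

# Eq. (eq:selectionmeasure) with a flat prior on zeta_max, on the same grid.
cdf_prior = np.cumsum(prior) * dz
log_p = np.zeros_like(zeta)
for post in posteriors:
    cdf_post = np.cumsum(post) * dz
    log_p += np.log(cdf_post) - np.log(cdf_prior)
p_zmax = np.exp(log_p - log_p.max())
p_zmax /= p_zmax.sum() * dz
print("posterior mode of zeta_max: %.1f deg" % np.degrees(zeta[np.argmax(p_zmax)]))
```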
[***Systematics from biased EM constraint on viewing angle***–]{} The angular dependency of EM emissions can be used to estimate the [viewing angle]{}of BNSs from their EM observations. However, lack of robust understanding of the EM emission model can lead to biased interpretation of the [viewing angle]{}.
Suppose the EM observations suggest a [viewing angle]{}of $\zeta_{\rm EM}$ with 1-$\sigma$ uncertainty of $\sigma_{\zeta}$, we can multiply following prior with the GW distance-inclination joint posterior $p(D_L,{\ensuremath{\theta_\mathrm{JN}}\xspace}|\mathcal{D}_{\rm GW})$ of the BNS: $$\Gamma ({\ensuremath{\theta_\mathrm{JN}}\xspace})= \begin{cases}
\cal{N}({\ensuremath{\theta_\mathrm{JN}}\xspace};\zeta_{\rm EM},\sigma_{\zeta}) &\text{if}\, 0\leq {\ensuremath{\theta_\mathrm{JN}}\xspace}\leq \pi/2 \\
\cal{N}({\ensuremath{\theta_\mathrm{JN}}\xspace};\pi-\zeta_{\rm EM},\sigma_{\zeta}) &\text{if}\, \pi/2< {\ensuremath{\theta_\mathrm{JN}}\xspace}\leq \pi,
\end{cases}$$ where $\cal{N}({\ensuremath{\theta_\mathrm{JN}}\xspace};\zeta_{\rm EM},\sigma_{\zeta})$ denotes a normal distribution with mean $\zeta_{\rm EM}$ and standard deviation $\sigma_{\zeta}$ evaluated at ${\ensuremath{\theta_\mathrm{JN}}\xspace}$. Such prior reduces the uncertainty in [inclination angle]{}, and the distance is better measured after the joint posterior, $p(D_L,{\ensuremath{\theta_\mathrm{JN}}\xspace}|\mathcal{D}_{\rm GW})$, is integrated over [$\theta_\mathrm{JN}$]{}. Improved distance estimate leads to more precise Hubble constant measurement [@2019PhRvX...9c1028C]. However, if the EM constraint on the [viewing angle]{}is off by $$\Delta \zeta_{\rm sys}\equiv \zeta_{\rm EM}-\zeta_{\rm real},$$ where $\zeta_{\rm real}$ denotes the real [viewing angle]{}of the event, the distance and the $H_0$ measurements will be biased. For single event the bias in $H_0$ may not be obvious, because the statistical uncertainty in $H_0$ dominates the overall uncertainty. The bias will become clear after the $H_0$ posteriors are combined over multiple events. In Figure \[fig:extbias\] we show the extent of overall bias in $H_0$ if the EM constraint on [viewing angle]{}is always off by $\Delta \zeta_{\rm sys}$ for 20 events.
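In practice, folding the EM-derived prior $\Gamma(\theta_{\rm JN})$ into the GW posterior amounts to reweighting $(D_L,\theta_{\rm JN})$ posterior samples. The sketch below illustrates the reweighting step with a loosely correlated toy posterior; the toy distribution and all numbers are assumptions and do not represent the parameter estimation used in this work.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
n_samp = 100_000

# Toy GW posterior samples (D_L [Mpc], theta_JN [rad]): a crude mimic of the
# distance-inclination degeneracy (more face-on <-> larger distance).
theta_jn = np.arccos(rng.uniform(-1, 1, n_samp))
d_l = 120.0 * (0.7 + 0.3 * np.abs(np.cos(theta_jn))) + rng.normal(0, 10, n_samp)

# EM constraint zeta_EM +/- sigma_zeta, folded about pi/2 as in Gamma(theta_JN).
zeta_em, sigma_zeta = np.deg2rad(25.0), np.deg2rad(10.0)
w = np.where(theta_jn <= np.pi / 2,
             norm.pdf(theta_jn, zeta_em, sigma_zeta),
             norm.pdf(theta_jn, np.pi - zeta_em, sigma_zeta))
w /= w.sum()

def mean_std(values, weights=None):
    m = np.average(values, weights=weights)
    return m, np.sqrt(np.average((values - m) ** 2, weights=weights))

print("D_L without EM prior: %.1f +/- %.1f Mpc" % mean_std(d_l))
print("D_L with    EM prior: %.1f +/- %.1f Mpc" % mean_std(d_l, w))
```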
When the viewing angles are overestimated (underestimated), the distances are underestimated (overestimated) and the overall $H_0$ is overestimated (underestimated). Smaller $\sigma_{\zeta}$ affects the $H_0$ measurement more significantly for the same $\Delta \zeta_{\rm sys}$. Although $\Delta \zeta_{\rm sys}$ is unlikely to be a constant across different events, our simulations provide the allowed range of systematic uncertainty for EM constrained [viewing angle]{}. In general, $\Delta \zeta_{\rm sys}$ has to be $\lesssim 10^{\circ}$ to be accurate enough to address the tension between *Planck* and the local distance ladders.
![\[fig:extbias\] Hubble constant measurement uncertainty (1-$\sigma$) from 20 standard sirens as a function of the systematic bias in the binary viewing angle constrained by EM observations. Three different statistical uncertainties in the EM-constrained viewing angle ($\sigma_{\zeta}=5^{\circ},10^{\circ},20^{\circ}$) are shown. The Hubble constant used for the simulations is $67.4$ km/s/Mpc. ](inc_h0_bias.pdf){width="1.0\columnwidth"}
Next, we wonder if a comparison between the GW and EM measurement of the [viewing angle]{}will help disclose the bias in EM interpretations. Suppose the [viewing angle]{}posteriors from GW and EM for a BNS are $\Upsilon(\zeta)$ and $\varepsilon(\zeta)$ respectively; we define their difference as $$\label{eq:diff}
\Delta \zeta_{\rm EM-GW}\equiv \displaystyle \int_0^{\pi/2} \int_0^{\pi/2} (\zeta_2-\zeta_1)\times \Upsilon(\zeta_1)\times \varepsilon(\zeta_2)\, d\zeta_1 d\zeta_2 .$$ The uncertainty in GW and EM posteriors both contribute to the overall uncertainty of $\Delta \zeta_{\rm EM-GW}$. We find that the average of $\Delta \zeta_{\rm EM-GW}$ over 20 BNSs traces $\Delta \zeta_{\rm sys}$ with $1-\sigma$ uncertainty $>18^{\circ}$ (Figure \[fig:biasmeasure\]). This statistical uncertainty of $\Delta \zeta_{\rm EM-GW}$ is not small enough to confine $\Delta \zeta_{\rm sys}$ to the accuracy for $H_0$ measurement described above, leaving the $H_0$ systematics from biased EM constraint in [viewing angle]{}unresolved.
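Because the two posteriors in Eq. (\[eq:diff\]) enter independently, the double integral reduces to the difference of the posterior means, $\Delta \zeta_{\rm EM-GW}=\mathrm{E}_{\varepsilon}[\zeta]-\mathrm{E}_{\Upsilon}[\zeta]$, which is trivial to estimate from samples; the synthetic samples below are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic posterior samples for one event (clipped to [0, pi/2]).
zeta_gw = np.clip(rng.normal(np.deg2rad(30), np.deg2rad(20), 50_000), 0, np.pi / 2)  # Upsilon
zeta_em = np.clip(rng.normal(np.deg2rad(38), np.deg2rad(10), 50_000), 0, np.pi / 2)  # epsilon

delta = zeta_em.mean() - zeta_gw.mean()   # Eq. (eq:diff) for independent posteriors
print("Delta zeta_EM-GW = %.1f deg" % np.degrees(delta))
```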
![\[fig:biasmeasure\] The average difference between EM and GW [viewing angle]{}posteriors $\Delta \zeta_{\rm EM-GW}$ for 20 BNSs with EM posteriors systematically off by $\Delta \zeta_{\rm sys}$. The $1-\sigma$ uncertainty of the difference for three EM posterior statistical uncertainties, $\sigma_{\zeta}=5^{\circ},10^{\circ},20^{\circ}$, are $18.5^{\circ}$, $20^{\circ}$, and $24^{\circ}$, respectively. The grey dashed line is the equal-axis line to guide the eye. ](biasmeasure.pdf){width="1.0\columnwidth"}
[***Discussion***–]{} In this paper we evaluate the extent of bias in $H_0$ as a result of the geometry of EM emissions from BNSs. Among the two examples of viewing angle selection we present, the maximum viewing angle selection may happen due to the choice of kilonova observing strategies or the sharp decline beyond a viewing angle for short GRB emission. In particular, in the third generation GW detector era [@Reitze:2019iox; @Maggiore:2019uih], short GRBs will likely become the major EM counterparts for BNSs at high redshifts. Study of the maximum viewing angle of GRBs will be crucial to correct the selection effect for standard sirens [^4] On the other hand, the example of continuous [viewing angle]{}selection applies to current kilonova observations. Simulations show that edge-on BNSs are more difficult to localize [@2019PhRvX...9c1028C], and their kilonova emissions can be redder and dimmer [@2020arXiv200200299D]. In both examples, we find $\gtrsim 2\%$ bias in $H_0$ if the selection is not well-understood.
We note that in reality other binary parameters will also affect the EM observing probability. Therefore, more complete considerations of EM models and projections of EM observing probability for future telescopes involved in the search for EM counterparts will result in more accurate estimation of the bias in $H_0$. Unlike the [viewing angle]{}measurement, some parameters, such as the mass, are estimated precise enough from GW signals for the selection effect to be taken care of. Overall, we find the selection over [viewing angle]{}discussed in this paper the most subtle and difficult to resolve.
If the [viewing angle]{}selection effect is significant, it is possible to reconstruct the selection by comparing the number of BNSs with and without EM counterparts. The distribution of [viewing angle]{}for BNSs detected in GWs is well-understood [@2011CQGra..28l5023S]. For example, it is known that about 15% of BNSs have [viewing angle]{}larger than $60^{\circ}$. If 15% of BNSs miss counterparts, one explanation is that the maximum EM [viewing angle]{}is around $60^{\circ}$. A reconstruction for short GRB observations has been shown in [@2019arXiv191204906F]. However, the reconstruction for kilonova population will be more difficult since their EM observing probability will have more complicated dependency on the [viewing angle]{}. Such reconstruction can also be easily contaminated by other factors that affect the EM observing probability and will have to be evaluated carefully.
Although our discussion focuses on BNSs, there are simulations suggesting stronger [viewing angle]{}dependency for EM counterparts of neutron star-black hole mergers [@2020arXiv200200299D]. Therefore neutron star-black hole mergers can possibly introduce larger bias when they are used as standard sirens [@2018PhRvL.121b1303V].
On the other hand, if the geometry of EM emissions is used to confine the BNSs’ [viewing angle]{}, the systematic uncertainty in [viewing angle]{}introduced by the EM interpretations has to be less than $10^{\circ}$. Since the binary rotational axis doesn’t have to be perfectly aligned with the major axis of EM emissions, and the geometry of EM emissions is unknown, controlling the systematics of the EM-constrained [viewing angle]{}can be challenging. We also show that the comparison between EM and GW [viewing angle]{}posteriors can help estimate the systematics, but the precision of the estimation may not be good enough to completely remove the bias.
We note that the standard siren method we discuss in this paper relies on the observations of EM counterparts and the measurements of the BNSs’ redshift. A complementary standard siren approach that doesn’t require the EM counterparts but makes use of galaxy catalogs (also known as the “dark siren” method) may help in deducing the systematics discussed in this paper. However, the dark siren approach will suffer from lower $H_0$ precision and other sources of systematics [@2018Natur.562..545C; @2019arXiv190806050G], making it difficult for this approach to contribute to resolving these issues.
Finally, the calibration uncertainty in GWs currently dominates the known systematic uncertainty for standard sirens. The bias in $H_0$ from calibration can be as large as $\sim 2\%$ [@2016RScI...87k4503K; @2020arXiv200502531S]. Both of the systematics we find in this work can introduce $H_0$ bias larger than 2% (Figure \[fig:selection\] and \[fig:extbias\]). In summary, the systematic uncertainty from [viewing angle]{}for standard sirens can be a major challenge to resolve the tension in Hubble constant, and we look forward to future development to address this topic.
[***acknowledgments***–]{} We acknowledge valuable discussions with Sylvia Biscoveanu, Michael Coughlin, Carl-Johan Haster, Daniel Holz, Kwan-Yeung Ken Ng, and Salvatore Vitale. HYC was supported by the Black Hole Initiative at Harvard University, which is funded by grants from the John Templeton Foundation and the Gordon and Betty Moore Foundation to Harvard University.
[^1]: Since the EM counterpart emissions barely depend on the direction of the binary rotation (clockwise or counterclockwise), in this paper we define the [viewing angle]{}as $\zeta\equiv \mathrm{min}({\ensuremath{\theta_\mathrm{JN}}\xspace}, 180^\circ - {\ensuremath{\theta_\mathrm{JN}}\xspace})$, where ${\ensuremath{\theta_\mathrm{JN}}\xspace}$ denotes the [inclination angle]{}of the binary.
[^2]: Specifically, the `aligo_O4high.txt` file for LIGO-Livingston/LIGO-Hanford, and `avirgo_O4high_NEW.txt` for Virgo in this document: <https://dcc.ligo.org/LIGO-T2000012/public>.
[^3]: Since a telescope, an EM model, and an EM search pipeline have to be specified before the noise properties of EM data can be quantified, in this paper we assume no EM observing noise for simplicity.
[^4]: Note that for high redshift sources the framework for the selection effect will be the same as we demonstrate in Equation \[eq:likelihood\], while the inference will be on $H(z)$.
|
---
abstract: 'We propose a method for automatic segmentation of individual muscles from a clinical CT. The method uses Bayesian convolutional neural networks with the U-Net architecture, using Monte Carlo dropout that infers an uncertainty metric in addition to the segmentation label. We evaluated the performance of the proposed method using two data sets: 20 fully annotated CTs of the hip and thigh regions and 18 partially annotated CTs that are publicly available from The Cancer Imaging Archive (TCIA) database. The experiments showed a Dice coefficient (DC) of 0.891$\pm$0.016 (mean$\pm$std) and an average symmetric surface distance (ASD) of 0.994$\pm$0.230 mm over 19 muscles in the set of 20 CTs. These results were statistically significant improvements compared to the state-of-the-art hierarchical multi-atlas method which resulted in 0.845$\pm$0.031 DC and 1.556$\pm$0.444 mm ASD. We evaluated validity of the uncertainty metric in the multi-class organ segmentation problem and demonstrated a correlation between the pixels with high uncertainty and the segmentation failure. One application of the uncertainty metric in active-learning is demonstrated, and the proposed query pixel selection method considerably reduced the manual annotation cost for expanding the training data set. The proposed method allows an accurate patient-specific analysis of individual muscle shapes in a clinical routine. This would open up various applications including personalization of biomechanical simulation and quantitative evaluation of muscle atrophy.'
author:
- 'Yuta Hiasa, Yoshito Otake, Masaki Takao, Takeshi Ogawa, Nobuhiko Sugano, and Yoshinobu Sato [^1] [^2] [^3] [^4]'
bibliography:
- 'refs.bib'
title: 'Automated Muscle Segmentation from Clinical CT using Bayesian U-Net for Personalized Musculoskeletal Modeling'
---
Bayesian Deep Learning, Convolutional Neural Networks, Active Learning, Image Segmentation, Musculoskeletal Model
![image](overview-eps-converted-to.pdf){width="70.00000%"}
Introduction {#intro}
============
The patient-specific geometry of skeletal muscles plays an important role in biomechanical modeling. The computational simulation of human motion using musculoskeletal modeling has been performed in a number of studies to investigate musculo-tendon forces and joint contact forces, which cannot be easily achieved by physical measurements [@rajagopal2016full; @seth2018opensim; @shu2018subject]. Recent studies have demonstrated that personalization of model parameters, such as the size of the bones, geometry of the muscles and tendons, and physical properties of the muscle-tendon complex, improves accuracy of the simulation [@moissenet2017biomech; @cheze2015state; @taddei2012femoral]. While the majority of previous studies modeled the musculo-tendon unit as one or multiple lines joining their origin and insertion, including so-called [*[via points]{}*]{} in some cases, several recent studies have shown that volumetric models representing subject-specific muscle geometry provide higher accuracy in the simulation [@webb20143d]. However, the segmentation of the volumetric geometry of individual muscles from subject-specific medical images remains a time consuming task that requires expert-knowledge, thus precludes application in clinical practice. Therefore, our focus in this study is to develop an automated method of segmentation of individual muscles for personalization of the musculoskeletal model.
Related work {#subsec:related_work}
------------
Segmentation of muscle tissue and fat tissue has been studied extensively for the analysis of muscle/fat composition. (Note that we refer to muscle tissue here as an object including all muscles, not an individual muscle.) Ulbrich et al. [@ulbrich2018whole] and Karlsson et al. [@karlsson2015automatic] implemented an algorithm for automated segmentation of the muscle and fat tissues from MRI using a multi-atlas method [@iglesias2015multiatlas]. Lee et al. [@lee2017pixel] used deep learning for segmentation of the muscle and fat tissues in a 2D abdominal CT slice.
Segmentation of individual muscles is a much more difficult problem due to the low tissue contrast at the border between neighboring muscles, especially in the area where many muscles are contiguously packed such as in the hip and thigh regions. Handsfield et al. [@handsfield2014relationships] manually performed segmentation of 35 individual muscles from MRIs of the lower leg in order to investigate the relationship between muscle volume and height or weight. To facilitate automation of the individual muscular segmentation, prior knowledge about the shape of each muscle has been introduced [@baudin2012prior]. Andrews et al. [@andrews2015generalized] proposed an automated segmentation method for 11 thigh muscles from MRI using a probabilistic shape representation and adjacency information. They evaluated the method using images of the middle part of the left femur (20 cm in length until just above the knee) and reported an average Dice coefficient of 0.808. Since the muscles of interest run along a long bone, i.e., the femur, the muscles have similar appearances in axial slices, resulting in less complexity in segmentation than in the hip region.
In CT images, due to the lower soft tissue contrast compared to MRI, segmentation of individual muscles is even more difficult. Yokota et al. [@yokota2018automated] addressed the automated segmentation of individual muscles from CTs of the hip and thigh regions. The target region was broader than [@andrews2015generalized], covering the origin to insertion of 19 muscles. They introduced a hierarchization of the multi-atlas segmentation method in which the target region becomes gradually more complex at each level of the hierarchy, namely starting with the skin surface, then all muscle tissue as one object, and finally the individual muscles. They reported an average Dice coefficient of 0.838. Although their algorithm produced a reasonable accuracy for this highly challenging problem, due to the large number of non-rigid registrations required in the multi-atlas method, the computational load was prohibitive when considering routine clinical applications (41 minutes for segmentation of one CT volume using a high performance server with 60 cores).
In order to enhance the accuracy and speed of the muscle segmentation in CT, we propose an application of convolutional neural networks (CNNs). We investigate the segmentation accuracy as well as a metric indicating uncertainty of the segmentation using the framework of Bayesian deep learning. Gal et al. [@gal2016dropout] found that dropout [@srivastava2014dropout] is equivalent to approximating the Bayesian inference, which allows estimation of the model uncertainty. It measures the degree of difference of each test sample from the training data set, originating from the deficiency of training data, namely [*[epistemic]{}*]{} uncertainty [@kendall2017uncertainties]. This method has been applied to brain lesion segmentation [@nair2018exploring; @eaton2018towards] and surgical tool segmentation [@hiasa2018laparo]. Two example applications of the uncertainty metric explored in this study are: 1) prediction of segmentation accuracy without using the ground truth, similar to the goal of Valindria et al. [@valindria2017reverse], and 2) the active-learning framework [@maier2016crowd; @yang2017suggestive] for the reduction of manual annotation costs.
Contributions
-------------
In this study, we demonstrate a significantly improved accuracy in the segmentation of 19 individual muscles from CTs of the hip and thigh regions through application of CNNs. The contribution of this paper is two-fold: 1) investigation of the performance of Bayesian U-Net using 20 fully annotated clinical CTs and 18 partially annotated CTs that are publicly available from The Cancer Imaging Archive (TCIA) database, and 2) analysis of the uncertainty metric in a multi-class organ segmentation problem and its potential applications in predicting segmentation accuracy, without using the ground truth, and efficient selection of manual annotation samples in an active-learning framework.
Paper organization
------------------
The paper is organized as follows. In Section \[sec:method\], the proposed method is described, including data sets, uncertainty estimates, and active learning. In Section \[sec:result\], we quantitatively evaluate the proposed methods through experiments using two data sets. Then, we discuss the methods and results, and conclude the paper in Section \[sec:discuss\_and\_conclusion\].
Methodology {#sec:method}
===========
Overview {#subsec:overview}
--------
Figure \[fig:overview\] shows the workflow of the proposed methods. We first segment the skin surface using a 2D U-Net to isolate the body from surrounding objects such as the scan table and the calibration phantom. Next, the individual muscles are segmented and the model uncertainty is predicted using Bayesian U-Net, which is described in Section \[subsec:uncertain\]. The Dice coefficient of each muscle segmentation is predicted from the model uncertainty without using the ground truth. This is done using a linear regression between the model uncertainty and the Dice coefficient, computed in a cross-validation within the training data set (Fig. \[fig:vis\_nih\]a). We evaluated the proposed active-learning framework, shown in Fig. \[fig:overview\](b), in a simulated environment using a fully annotated data set by assuming a situation where partial manual annotation is provided initially. The manual annotation of a small number of slices selected by the proposed procedure is given in steps as described in Section \[subsec:activelearn\].
Data sets
---------
Two data sets were used to evaluate the proposed method: 1) a fully annotated non-public clinical CT data set and 2) a partially annotated publicly available CT data set.
### Osaka University Hospital THA data set (THA data set)
This data set consists of 20 CT volumes scanned at Osaka University Hospital, Suita, Japan, for CT-based planning and navigation of total hip arthroplasty (THA) [@yokota2018automated; @ogawa2019valid]. The field of view was 360$\times$360 mm$^2$ and the matrix size was 512$\times$512. The original slice intervals were 2.0 mm for the region including the pelvis and proximal femur, 6.0 mm for the femoral shaft region, and 1.0 mm for the distal femur region. Each CT volume had about 500 slices (see supplementary materials for details of the number of axial slices of each muscle). In this study, the CT volumes were resampled so that the slice interval becomes 1.0 mm throughout the entire volume. Nineteen muscles around the hip and thigh regions and 3 bones (pelvis, femur, sacrum) were manually annotated by an expert surgeon (Figure \[fig:dataset\]). The manual annotation took about 40 hours per volume. This data set was used for training and cross-validation for the accuracy evaluation and prediction of the Dice coefficient. Note that 132 CT volumes acquired at Osaka University independently from the above mentioned data set were used for training of the skin segmentation network. The region inside the skin was semi-automatically annotated.
![Training data set used in this study, consisting of 20 labeled CT volumes. The muscles of interest are separately visualized according to the functional group and their region. Upper and lower rows show the anterior and posterior views, respectively. []{data-label="fig:dataset"}](dataset-eps-converted-to.pdf){width="50.00000%"}
### TCIA soft tissue sarcoma data set (TCIA data set)
The data set obtained from TCIA collections [^5] contains CT and MR volumes from 51 patients with soft tissue sarcomas (STSs) [@vallieres2015radiomics]. In this study, we selected 18 CT volumes that include the hip region. The CT volumes were resampled so that the in-plane field of view becomes 360$\times$360 mm$^2$ without changing the slice center and the slice interval becomes 1.0 mm throughout the volume similar to the THA data set. The gluteus medius muscle was manually traced by a computer scientist and verified by an expert surgeon. This data set was not used in the training nor in the parameter tuning and only used for evaluation of generalizability of the model trained with the THA data set (see Section \[subsubsec:app\] for details).
Estimation of uncertainty metric {#subsec:uncertain}
--------------------------------
The underlying algorithm of the proposed uncertainty estimates follows that of Gal et al. [[@gal2016dropout]]{} which used the dropout at the inference phase. This allowed approximation of the posterior distribution based on the probabilistic softmax output obtained from the stochastic dropout sampling. We use the mean and variance of the output from multiple samplings as the segmentation result and uncertainty estimate, respectively. Below, we briefly summarize the theoretical background described in [[@gal2016dropout]]{}, formulate the specific metric that we employed in this paper, and propose a new structure-wise uncertainty metric for a multi-class segmentation problem.
Suppose we have a training data set of images $\mathbf{X}=\{ \mathbf{x_1}, \cdots, \mathbf{x_n} \}$ and its labels $\mathbf{Y}=\{ \mathbf{y_1}, \cdots, \mathbf{y_n} \}$. We consider the predictive label $\mathbf{y}^*$ of an unseen image $\mathbf{x}^*$. Let a “[*[deterministic]{}*]{}” neural network represent $p(y^*|\mathbf{x}^*) =
\mathrm{Softmax}(\mathbf{f}(\mathbf{x}^*; \mathbf{W})) $. A “[*[probabilistic]{}*]{}” Bayesian neural network is given by marginalization over the weight $\mathbf{W}$ as $$\begin{aligned}
\label{eq:bayes}
p(y^*=c|\mathbf{x}^*, \mathbf{X}, \mathbf{Y}) = \int p(y^*=c|\mathbf{x}^*, \mathbf{W}) p(\mathbf{W} | \mathbf{X}, \mathbf{Y}) \mathrm{d}\mathbf{W}
\end{aligned}$$ where $y^* \in \mathbf{y}^*$ is the output label of a pixel, $c$ is the label class, and $p(\mathbf{W} | \mathbf{X}, \mathbf{Y})$ is the posterior distribution. Gal et al. [@gal2016dropout] proved that approximation of the posterior distribution is equivalent to the dropout masked distribution $q(\hat{\mathbf{W}})$, where $\hat{\mathbf{W}} = \mathbf{W} \cdot \mathrm{diag}(\mathbf{z})$ and $\mathbf{z}
\sim \mathrm{Bernoulli}(\theta)$, and $\theta$ is the dropout ratio. Then, Eq. (\[eq:bayes\]) can be approximated by minimizing the Kullback-Leibler (KL) divergence $\mathrm{KL}(q(\hat{\mathbf{W}}) || p(\hat{\mathbf{W}}|\mathbf{X}, \mathbf{Y}))$ as follows. $$\begin{aligned}
p(y^*=c|\mathbf{x}^*, \mathbf{X}, \mathbf{Y}) &\approx& \int p(y^*=c|\mathbf{x}^*, \hat{\mathbf{W}}) q(\hat{\mathbf{W}}) \mathrm{d}\hat{\mathbf{W}} \\
&\approx& \frac{1}{T} \sum_{t=1}^{T} \mathrm{Softmax}(\mathbf{f}(\mathbf{x}^*, \hat{\mathbf{W}}_t)).
\end{aligned}$$ where $T$ is the number of dropout samplings. This Monte Carlo estimation is called “[*[MC dropout]{}*]{}” [@gal2016dropout]. We employed the predictive variance as the metric indicating uncertainty which is defined as $$\begin{aligned}
\lefteqn{
Var(y^*=c|\mathbf{x}^*, \mathbf{X}, \mathbf{Y})
} \nonumber \\
&\approx& \frac{1}{T} \sum_{t=1}^{T} \mathrm{Softmax}(\mathbf{f}(\mathbf{x}^*, \hat{\mathbf{W}}_t))^{\top} \mathrm{Softmax}(\mathbf{f}(\mathbf{x}^*, \hat{\mathbf{W}}_t)) \nonumber \\
& & - p(y^*|\mathbf{x}^*, \mathbf{X}, \mathbf{Y})^{\top} p(y^*|\mathbf{x}^*, \mathbf{X}, \mathbf{Y}).
\end{aligned}$$
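As a concrete illustration, the following minimal sketch (not the authors' code; the array name `softmax_samples` and its layout are assumptions) computes the mean prediction and the per-pixel, per-class predictive variance from $T$ stochastic dropout samplings:

```python
# Minimal sketch: MC-dropout statistics from T stochastic forward passes.
import numpy as np

def mc_dropout_statistics(softmax_samples):
    """softmax_samples: (T, C, H, W) softmax outputs of T dropout samplings
    for one image with C classes (assumed layout)."""
    p_mean = softmax_samples.mean(axis=0)              # approximate posterior mean, (C, H, W)
    second_moment = (softmax_samples ** 2).mean(axis=0)
    variance = second_moment - p_mean ** 2             # per-class predictive variance, (C, H, W)
    labels = p_mean.argmax(axis=0)                     # segmentation result, (H, W)
    return labels, p_mean, variance
```

Summing the per-class variance over classes reproduces the scalar per-pixel variance used above.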
In this paper, we propose two new structure-wise uncertainty metrics: 1) predictive structure-wise variance (PSV) and 2) predictive Dice coefficient (PDC). PSV represents the predictive variance per unit area of the pixels that are classified as the target structure. Let $\mathbf{s}^*$ be all pixels that are classified as class $c$; $\mathbf{s}^*= \{y^*_i| \operatorname*{\arg\!\max}_k p(y^*_i=k)=c, \forall y^*_i \in \mathbf{y}^* \}$ ($argmax$ represents the selection of the class with the highest probability for the pixel $i$). The metric is defined as $$\begin{aligned}
PSV(\mathbf{s}^*|\mathbf{x}^*) = \frac{1}{|\mathbf{s}^*|} \sum_{y^* \in \mathbf{s}^*} \sum_k Var(y^*=k|\mathbf{x}^*).
\end{aligned}$$ PDC is computed by a linear regression of PSV and the actual Dice coefficient of the target structure. $$\begin{aligned}
\label{eq:pdc}
PDC(\mathbf{s}^*|\mathbf{x}^*) \approx \alpha \cdot PSV(\mathbf{s}^*|\mathbf{x}^*) + \beta
\end{aligned}$$ where $\alpha$ is the linear coefficient and $\beta$ is the bias. To find these parameters, we conduct $K$-fold cross-validation. $K$-1 groups are used to train a model, while the remaining one group is used for the evaluation (i.e., observe the Dice and PSV). Then, $\alpha$ and $\beta$ are determined by all sets of observed Dice and PSV.
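The corresponding structure-wise bookkeeping is small; the sketch below (illustrative only, with assumed array conventions) computes PSV for one structure and fits the PDC regression from the cross-validation observations:

```python
# Sketch: structure-wise uncertainty (PSV) and the PSV-to-Dice regression (PDC).
import numpy as np

def predictive_structure_wise_variance(labels, variance, target_class):
    """labels: (H, W) predicted labels; variance: (C, H, W) per-class variance."""
    mask = labels == target_class                      # pixels s* assigned to the structure
    if not mask.any():
        return np.nan
    return variance[:, mask].sum(axis=0).mean()        # sum over classes, mean over s*

def fit_pdc_regression(psv_values, dice_values):
    """Fit PDC ~ alpha * PSV + beta from (PSV, Dice) pairs observed in cross-validation."""
    alpha, beta = np.polyfit(psv_values, dice_values, deg=1)
    return alpha, beta
```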
As for the network architecture, we extend the U-Net model by inserting the dropout layer before each max pooling layer and after each up-convolution layer as shown in the dotted squares in Fig. \[fig:overview\](a), which is the same approach as Bayesian SegNet, proposed by Kendall et al. [@kendall2015bayesian]. We call the U-Net extended by [*[MC dropout]{}*]{} “[*[Bayesian U-Net]{}*]{}[^6].”
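A schematic PyTorch fragment of this construction is given below; it is a sketch of the dropout placement and the MC-dropout inference mode rather than the exact network used here (layer sizes, dropout rate, and class names are assumptions):

```python
# Sketch: dropout before each max pooling and after each up-convolution,
# kept stochastic at inference for MC dropout.
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
                         nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

class EncoderStage(nn.Module):
    def __init__(self, c_in, c_out, p=0.5):
        super().__init__()
        self.conv = conv_block(c_in, c_out)
        self.drop = nn.Dropout2d(p)
        self.pool = nn.MaxPool2d(2)
    def forward(self, x):
        skip = self.conv(x)
        return self.pool(self.drop(skip)), skip        # dropout before max pooling

class DecoderStage(nn.Module):
    def __init__(self, c_in, c_out, p=0.5):
        super().__init__()
        self.up = nn.ConvTranspose2d(c_in, c_out, 2, stride=2)
        self.drop = nn.Dropout2d(p)
        self.conv = conv_block(2 * c_out, c_out)
    def forward(self, x, skip):
        x = self.drop(self.up(x))                      # dropout after up-convolution
        return self.conv(torch.cat([x, skip], dim=1))

def enable_mc_dropout(model):
    """Evaluation mode, but with dropout layers left stochastic for MC sampling."""
    model.eval()
    for m in model.modules():
        if isinstance(m, nn.Dropout2d):
            m.train()
```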
Bayesian active learning {#subsec:activelearn}
------------------------
A common practical situation in segmentation problems is that only a small-scale labeled data set is available while a large-scale unlabeled data set exists. Active learning is known to be effective in this scenario, interactively expanding the training data set using the experts’ input.
In order to determine the pixels to query to the experts, the proposed method first selects slices with high uncertainty in segmentation from the unlabeled data set, which we call the [*[slice selection step]{}*]{}, and then selects pixels with high uncertainty from the selected slices, which we call the [*[pixel selection step]{}*]{}. The slice selection step follows Yang et al. [@yang2017suggestive], which utilized uncertainty and similarity metrics to determine the query slices. This is summarized as follows: Let $\mathcal{D}_u$ be an unlabeled data set; a subset of uncertain slices $\mathcal{D}_c \subseteq \mathcal{D}_u$ is selected first, followed by the selection of representative slices $\mathcal{D}_r \subseteq \mathcal{D}_c$ using a similarity-based clustering approach. Details of the algorithm are shown in the Appendix.
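A simplified sketch of the slice selection step is shown below; note that the representativeness criterion of Yang et al. is replaced here by a plain k-means clustering of slice feature vectors, so this is a stand-in rather than the algorithm given in the Appendix:

```python
# Simplified stand-in for the slice selection step (uncertainty + clustering).
import numpy as np
from sklearn.cluster import KMeans

def select_query_slices(slice_uncertainty, slice_features, n_uncertain=50, n_query=10):
    """slice_uncertainty: (N,) mean predictive variance per unlabeled slice;
    slice_features: (N, D) feature vectors, e.g., pooled network activations."""
    candidates = np.argsort(slice_uncertainty)[-n_uncertain:]      # D_c: most uncertain slices
    km = KMeans(n_clusters=n_query, n_init=10).fit(slice_features[candidates])
    representatives = []                                           # D_r: one slice per cluster
    for k in range(n_query):
        members = candidates[km.labels_ == k]
        dist = np.linalg.norm(slice_features[members] - km.cluster_centers_[k], axis=1)
        representatives.append(members[dist.argmin()])
    return np.array(representatives)
```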
In this paper, we propose a new method for the pixel selection step to reduce the number of pixels to query to the expert using the proposed uncertainty metric. We used manual labels for the pixels with uncertainty larger than the threshold $T$ (i.e., “uncertain” pixels) and predicted labels for other pixels (i.e., “certain” pixels), that is $$\begin{aligned}
\label{eq:query}
\hat{Y}_{ij} = \begin{cases}
\displaystyle \operatorname*{\arg\!\max}_k \ p(y=k|\mathbf{x}) & ( \displaystyle \sum_k Var_{ij}(y=k) < T) \\
Y^{manual}_{ij} & (otherwise)
\end{cases}
\end{aligned}$$ where $\hat{Y}_{ij}$ denotes the label for the $j$-th pixel in $i$-th slice and $Y^{manual}_{ij}$ denotes the label manually provided by the expert. Note that the threshold $T$ determines the trade-off between manual annotation cost and the achieved accuracy. We experimentally investigate the choice of the threshold $T$ in the following sections.
Implementation details {#subsec:implementation}
----------------------
During the pre-processing, intensity of the CT volumes is normalized so that \[$-150,
350$\] HU is mapped to \[$0, 255$\] (intensities smaller than -150 HU and larger than 350 HU are clamped to 0 and 255, respectively). At the training phase, data augmentation is performed by translation of \[$-25, +25$\] % of the matrix size, rotation of \[$-10, +10$\] deg, scale of \[$-35, +35$\] %, shear transform with the shear angle of \[$-\pi/8$, $+\pi/8$\] rad, and flipping in the right-left direction. The data augmentation allows the model to be invariant to the FOV of the scan, patient’s size, rotation, and translation. At post-processing, the largest connected component is extracted to obtain the final output for each muscle.
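These pre- and post-processing steps reduce to simple array operations; a minimal sketch under the stated intensity window (the helper names are ours) is:

```python
# Sketch: HU windowing to [0, 255] and largest-connected-component post-processing.
import numpy as np
from scipy import ndimage

def normalize_hu(volume, lo=-150.0, hi=350.0):
    clipped = np.clip(volume.astype(np.float32), lo, hi)
    return (clipped - lo) / (hi - lo) * 255.0

def largest_connected_component(binary_mask):
    labeled, n = ndimage.label(binary_mask)
    if n == 0:
        return binary_mask
    sizes = np.bincount(labeled.ravel())
    sizes[0] = 0                                       # ignore the background component
    return labeled == sizes.argmax()
```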
Comparison with conventional methods
------------------------------------
The current state-of-the-art method for automated segmentation of individual muscles from CT based on the hierarchical multi-atlas method [@yokota2018automated] was implemented and the results were compared with the proposed method. In addition to U-Net, we evaluated another network architecture, FCN-8s [@long2015fully], which is also a common fully convolutional neural network based on VGG16.
We used the Dice coefficient (DC) [@dice1945measures] and the average symmetric surface distance (ASD) [@styner20083d] as the error metrics. Note that each metric was calculated per volume, not slice-by-slice. The statistical significance was tested by the paired $t$-test with Bonferroni correction.
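For reference, a compact sketch of the two per-volume metrics is given below (binary masks and voxel spacing in mm are assumed; the ASD follows the usual surface-distance construction and may differ in detail from [@styner20083d]):

```python
# Sketch: per-volume Dice coefficient and average symmetric surface distance.
import numpy as np
from scipy import ndimage

def dice_coefficient(pred, gt):
    """pred, gt: boolean 3D masks of one structure."""
    intersection = np.logical_and(pred, gt).sum()
    return 2.0 * intersection / (pred.sum() + gt.sum())

def average_symmetric_surface_distance(pred, gt, spacing):
    """spacing: voxel size in mm along each axis."""
    pred_surface = pred & ~ndimage.binary_erosion(pred)
    gt_surface = gt & ~ndimage.binary_erosion(gt)
    dist_to_gt = ndimage.distance_transform_edt(~gt_surface, sampling=spacing)
    dist_to_pred = ndimage.distance_transform_edt(~pred_surface, sampling=spacing)
    distances = np.concatenate([dist_to_gt[pred_surface], dist_to_pred[gt_surface]])
    return distances.mean()
```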
Results {#sec:result}
=======
Network architecture selection and comparison with conventional methods {#subsec:comarison}
-----------------------------------------------------------------------
First, the segmentation accuracy is quantitatively evaluated using the 20 labeled clinical CTs, known as the THA data set. Leave-one-out cross-validation (LOOCV) was performed where a model was trained with 19 CTs and tested with the remaining one CT. Twenty-three-class classification (3 bones, 19 muscles, and background) was performed. We initialized the weights in the same way as in [@he2015delving], and then trained the networks using adaptive moment estimation (Adam) [@kingma2014adam] for $1\times10^5$ iterations at the learning rate of 0.0001. The batch-size was 3.
Figure \[fig:result\_box\] summarizes the segmentation accuracy of the muscles. The DC and ASD over 19 muscles for one patient were averaged and plotted as box plots (i.e., 20 data points in each plot) for the multi-atlas method, FCN-8s, and U-Net. The average and standard deviation of DC for the three methods were 0.845$\pm$0.031 (mean$\pm$std), 0.822$\pm$0.021, and 0.891$\pm$0.016, respectively, while for ASD the values were 1.556$\pm$0.444 mm, 1.752$\pm$0.279 mm, and 0.994$\pm$0.230 mm, respectively. Compared with the conventional multi-atlas method [@yokota2018automated] and FCN-8s, U-Net resulted in statistically significant improvements ($p<0.01$) in both DC and ASD.
![Accuracy of muscle segmentation for 20 patients with the hierarchical multi-atlas method [@yokota2018automated], FCN-8s, and U-Net. Box and whisker plots for two error metrics: (left) Dice coefficient (DC) and (right) average symmetric surface distance (ASD). Boxes denote the 1st/3rd quartiles, the median is marked with the horizontal line in each box, and outliers are marked with diamonds. The accuracy of 19 muscles over one patient was averaged in advance (i.e., 20 data points for each box plot).[]{data-label="fig:result_box"}](dice-eps-converted-to.pdf){width="45.00000%"}
Figure \[fig:result\_each\] shows the heatmap visualization of ASD for the individual muscles of each patient using the multi-atlas method and U-Net. The blue color indicates a lower ASD. The accuracy improvement is clearly observed for almost all of the muscles except for 5 cells (the psoas major in Patients \#09 and \#17, gracilis in Patient \#14, semimembranosus in Patient \#04, and semimembranosus in Patient \#06).
![Heatmap visualization of ASD with hierarchical multi-atlas method [@yokota2018automated] and U-Net for each individual muscle in each patient. The blue color shows higher segmentation accuracy. The numbers in parentheses indicate the mean of each row/column.[]{data-label="fig:result_each"}](heatmap-eps-converted-to.pdf){width="50.00000%"}
Figure \[fig:result\_vis3d\] shows example visualizations of the predicted label for a representative patient (Patient \#01). The result with U-Net demonstrates more accurate segmentation near the boundary of the muscles compared to the other two methods. In FCN-8s, where the output layer is obtained by upsampling and fusing the latent vectors that have lower resolution (one eighth in our case) of the input size, the accuracy seemed to be consistently lower than U-Net due to the lack of details. On the other hand, in U-Net, where the output layer is directly fused with the latent vectors that have the same resolution as the input size, delineation of details was improved due to the pixel-wise correspondence between the input image and the output label.
As for the segmentation accuracy of the bones with U-Net, DC of the pelvis, femur, sacrum were 0.981 $\pm$ 0.0043, 0.985 $\pm$ 0.0065, 0.962 $\pm$ 0.0166, respectively, and ASD were 0.145 $\pm$ 0.040 mm, 0.175 $\pm$ 0.084 mm, 0.402 $\pm$ 0.243 mm, respectively.
The skin surface segmentation step did not yield a statistically significant accuracy difference in the THA data set ($p>0.05$), since this data set did not contain objects that add undesirable variation; however, it was effective in reducing such variation in the muscle segmentation step at a low manual annotation cost, especially for CT volumes scanned with a solid intensity calibration phantom placed near the skin surface. The calibration phantom is essential in quantitative CT (QCT) [@adams2009quantitative], which is one of our main application targets for the analysis of the relationship between muscle quality and bone mineral density (see supplementary materials for evaluation of the skin surface segmentation step in QCT volumes). Note that, in our experience, simple image processing methods, such as thresholding or extraction of the largest connected component, often failed to isolate the calibration phantom from the skin surface.
The average training time was approximately 11 hours with FCN-8s and U-Net, on an Intel Xeon processor (2.8 GHz, 4 cores) with NVIDIA GeForce GTX 1080Ti. The average computation time for the inference on one CT volume with about 500 2D slices was approximately 2 minutes excluding file loading, and the post-processing took about 3 minutes.
We conducted the following experiments about the predictive accuracy and active learning only with the U-Net architecture, since its accuracy is significantly higher than the other two methods as shown above.
![Visualization of the predicted label for a representative patient (Patient \#01). The result with U-Net shows distinctly more accurate segmentation near the boundary of the muscles. The region of interest in the slice visualization at the bottom corresponds to the black dotted line in the left-most column.[]{data-label="fig:result_vis3d"}](vis3d-eps-converted-to.pdf){width="50.00000%"}
Estimation of uncertainty metric {#estimation-of-uncertainty-metric}
--------------------------------
### Relationship between uncertainty and segmentation accuracy
To demonstrate validity of the uncertainty metric, we investigated the relationship between the estimated uncertainty and the error metric using the 20 labeled CTs. We performed a 4-fold cross-validation where Bayesian U-Net was trained with 15 randomly selected CTs, and tested with the remaining 5 CTs using the same conditions as the experiment above.
Figure \[fig:result\_uncertain\](a) shows the box and whisker plots of DC as a function of PSV. PSV was divided into 10 bins of equal width. The statistical significance was tested between adjacent bins with the Mann-Whitney $U$ test. The overall correlation ratio was $-0.784$. Figure \[fig:result\_uncertain\] (b-h) shows scatter plots of DC for the individual muscle structures as a function of their PSV. The 95% confidence ellipses clearly illustrate the trend of the increased error (i.e., decreased DC) in accordance with increased uncertainty (i.e., increased PSV). The only muscle which had relatively low correlation was the obturator internus, which we discuss in the discussion section. Figure \[fig:vis\_uncertain\] shows an example uncertainty visualization. These high correlations between the accuracy and uncertainty suggested the validity of using the uncertainty metric estimated by Bayesian U-Net as an indicator of the unobservable error metric without using the ground truth in a real clinical situation.
![image](uncertainty-eps-converted-to.pdf){width="95.00000%"}
![Visualization of the predictive variance computed by Bayesian U-Net. The average Dice coefficient and predictive structure-wise variance of muscles are denoted as DC and PSV. A good agreement between the regions with high uncertainty (denser regions in the middle sub-figure of each patient) and the regions with error (blue regions in the right sub-figure) suggests validity of the uncertainty metric to predict unobservable error in a real clinical situation.[]{data-label="fig:vis_uncertain"}](vis_uncertainty-eps-converted-to.pdf){width="50.00000%"}
### Generalization capability to an unseen data set {#subsubsec:app}
The generalization capability of Bayesian U-Net to an unseen data set was tested with the TCIA data set. Note that Bayesian U-Net was retrained using all 20 annotated CTs in the THA data set. Figure \[fig:vis\_nih\](a) shows a scatter plot of DC as a function of PDC. $\alpha$ and $\beta$ in Eq. (\[eq:pdc\]) were determined by a linear regression of 20 data points obtained from 4-fold cross-validation within the THA data set. The mean absolute error between DC and PDC was $0.011\pm0.0084$. Figures \[fig:vis\_nih\](b) and (c) show 2 representative patients with higher and lower accuracy, respectively. The higher uncertainty regions were observed in the regions with partial segmentation failure. The quantitative evaluation in the gluteus medius muscle showed that the average DC and ASD from 18 patients were 0.914$\pm$0.026, 2.927$\pm$4.997 mm, respectively. When excluding four outlier patients with extremely large sarcoma, the average values of DC and ASD were 0.925$\pm$0.014 and 1.135$\pm$0.777 mm, respectively, which was comparable to the results on the THA data set. The uncertainty was included in the plot in Fig. \[fig:result\_uncertain\](b) (see red crosses), showing a similar distribution as the THA data set. These results suggest generalization capability of the proposed uncertainty metric between different data sets.
![Evaluation of generalization capability of Bayesian U-Net on the TCIA soft tissue sarcoma data set. (a) Scatter plot of DC as a function of predictive Dice coefficient (PDC). (b) Representative results for one patient (\#05). (c) One patient with partial segmentation failures (\#07), from left to right: the input CT volume, the predicted label and uncertainty, and the surface distance error of the gluteus medius muscle. The predictive structure-wise variance (PSV) of the gluteus medius muscle and PDC are reported, respectively. Higher uncertainty in tumor regions was observed in Patient \#07 where the segmentation failed (shown in dark red in the surface distance error). []{data-label="fig:vis_nih"}](vis_nih-eps-converted-to.pdf){width="50.00000%"}
Bayesian active learning {#subsec:al}
------------------------
To investigate one of the application scenarios of the uncertainty estimates, we tested an active-learning method in a simulated environment using the 20 fully labeled clinical CTs. The experiment assumed that 15 CTs were available, with 5% of their slices labeled and the remaining 95% unlabeled. Then, from each CT, 5% of the total number of slices was selected from the unlabeled slices, manually or automatically labeled, and added to the labeled data in one step, which we call one “[*[acquisition step]{}*]{}.” We iterate the acquisition step 20 times. The remaining 5 CTs were used as the test data set. In each acquisition step, Bayesian U-Net was initialized and trained using Adam [@kingma2014adam] for at most $300$ epochs at a learning rate of 0.0001 with an early stopping scheme. Note that each axial CT slice was downsampled to $256\times256$ in this experiment due to the limitation of training time. Data augmentation was purposely not performed in order to investigate the behavior of the model purely dependent on the amount of training data.
For a quantitative evaluation of the manual labor, we defined a metric that we call [*[manual annotation cost]{}*]{} (MAC) as $$\begin{aligned}
MAC (Y) = \frac{|Y^{manual}|}{|Y|}
\end{aligned}$$ where $Y$ is the added label image. $|Y^{manual}|$ denotes the number of pixels to be queried in $Y$.
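In code this is a one-line ratio; a tiny sketch, assuming the query mask produced by the pixel selection step, is:

```python
# Sketch: manual annotation cost of one added label image.
def manual_annotation_cost(query_mask):
    """query_mask: boolean array marking the pixels queried to the expert."""
    return float(query_mask.sum()) / query_mask.size
```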
We compared the segmentation accuracy at each acquisition step with the following three pixel selection methods. (1) Fully-manual selection [@yang2017suggestive]: The user annotates all pixels in the uncertain slices. (2) Random selection: The user annotates random pixels. (3) Semi-automatic selection (proposed method): The user annotates only uncertain pixels. In order to perform a fair comparison, we set the experimental condition so that the number of pixels annotated in (2) and (3) were equal. Note that the fully-manual selection results in $MAC=1.0$.
Figure \[fig:result\_al\_dice\](a) shows mean DC over all muscles and patients as a function of the acquisition step (note that each acquisition step adds 5% of the total training data set, resulting in 100% after 20 steps). The proposed semi-automatic selection was tested with three different uncertainty thresholds, $T$ in Eq. (\[eq:query\]). For a larger $T$, we trust a larger number of pixels in the automatically estimated labels and only highly uncertain pixels will be queried to the experts. For a smaller $T$, we trust fewer pixels in the automatically estimated labels and more pixels will be queried to the experts, resulting in a higher MAC. Figure \[fig:result\_al\_dice\](b) shows the MAC metric at each acquisition step. First, we observed a trend that the accuracy increases as the training data set increases with any selection method. The random selection method stopped the increase at around a DC of 0.843, while the other two methods kept increasing. The proposed method with $T\leq2.5\times10^{-3}$ reached a DC higher than the random selection by about 0.03, which was close to the fully-manual selection method. When comparing the three thresholds in the proposed method, a larger number of pixels was queried (i.e., larger MAC) when the threshold was low; however, the method did not reach the DC value achieved via fully-manual selection when the threshold was too high, i.e., $T=5.0\times10^{-3}$. MAC gradually decreases over the acquisition steps, because the overall certainty increases as the training data set grows. In this experiment, we concluded that the threshold with a good trade-off between achievable accuracy and annotation cost was $T=2.5\times10^{-3}$, which resulted in an approximately 90-fold cost reduction compared to fully-manual selection (i.e., the median MAC was 0.0108 over all 19 acquisition steps). Note that the median MAC in the cases of $T=1.0\times10^{-3}$ and $5.0\times10^{-3}$ was 0.0484 and 0.0013, respectively.
![Results of the active-learning experiment using the proposed pixel selection method. (a) The plot of mean DC over individual structures and patients as a function of the acquisition step for different pixel selection methods. (b) The box and whisker plots of manual annotation cost at each acquisition step.[]{data-label="fig:result_al_dice"}](active-eps-converted-to.pdf){width="40.00000%"}
![Examples of query pixels to be manually annotated (colored by yellow) and their manual annotation cost (MAC).[]{data-label="fig:result_al_vis"}](vis_query-eps-converted-to.pdf){width="50.00000%"}
Discussion and Conclusion {#sec:discuss_and_conclusion}
=========================
We presented the performance of CNNs for use in segmentation of 19 muscles in the lower extremity in clinical CT. The findings in this paper are three-fold. The proposed Bayesian U-Net 1) significantly improved segmentation accuracy over the state-of-the-art hierarchical multi-atlas method and demonstrated high generalization capability to unseen test data sets, 2) provided prediction of the quantitative accuracy measure, namely the Dice coefficient, without using the ground truth, and 3) can be used in the active-learning framework to achieve considerable reduction in manual annotation cost.
The LOOCV using 20 fully annotated CTs showed an average DC of 0.891$\pm$0.016 and ASD of 0.994$\pm$0.230 mm, which were significant improvements ($p<0.01$) when compared with the state-of-the-art methods. The muscles that exhibited ASD larger than 3 mm with Bayesian U-Net (see Fig. \[fig:result\_each\]) were the piriformis (hip \#08) of Patient \#19, the psoas major (hip \#09) of Patients \#09, \#17, and \#20, the semitendinosus (thigh \#07) of Patient \#04, and the tensor fasciae latae (thigh \#08) of Patient \#06. After careful verification of those 6 muscles, we found one error in the ground truth expert’s trace (thigh \#07 muscle of Patient \#04). The accuracy and the inter-/intra-operator variability of the manual trace are frequently raised questions. In our case, several rounds of inspections and reviews among the expert group were performed on the manual traces, especially on some muscles whose boundaries are difficult to define even by experts, and finally consensus among the expert group was established. We consider that the proposed Bayesian U-Net learned the trace generated by the experts specialized in musculoskeletal anatomy and correctly reproduced, with high fidelity, the trace that would have been created by an expert in the same group. The muscles with higher average ASD (hips \#03, \#08, \#09, thighs \#07, \#08) had particularly obscure boundaries in the axial plane, and an especially large error among them was observed in muscles elongated in the z (superior-inferior) direction (hip \#09 and thigh \#07). On the other hand, the thigh muscles showed notably higher error in the multi-atlas method than in Bayesian U-Net, because the thigh muscles, especially the gracilis (thigh \#03), which is a thin muscle located near the skin surface in the lower thigh region, exhibited a larger shape variation than the hip muscles due to the variation in the hip joint position. These muscles are susceptible to errors in the registration, which relies on spatial smoothness in 3D, while our 2D slice-by-slice segmentation approach was not affected.
As for the uncertainty metric for prediction of accuracy, the high correlation between uncertainty and Dice coefficient in both THA and TCIA data sets suggested the potential for its use as a performance indicator. The only muscle with low correlation ($r=0.08$) was the obturator internus (hip \#06). A possible reason for the low correlation is that non-[*[epistemic]{}*]{} variability became dominant. The obturator internus is a small muscle connecting the internal surface of the obturator membrane of the pelvis and the medial surface of the greater trochanter of the femur, traveling almost in parallel to the axial plane (see Fig. \[fig:dataset\]). We believe these properties entailed a challenge in manual tracing and the variability in the ground truth (so-called [*[aleatory]{}*]{} variability) became dominant. The psoas major (hip \#09) and the tensor fasciae latae (thigh \#08) had major failures in a few cases, but their uncertainty metrics correctly indicated these failures. Valindria et al. [@valindria2017reverse] also attempted to predict the segmentation performance without using the ground truth, by using the predicted segmentation of a new image as a surrogate ground truth for training a classifier which they call a reverse classification accuracy (RCA) classifier. They tried three different classifiers for use as the segmentation and RCA classifiers, and investigated the best combination exhaustively. Extensive comparative studies with our approach would be intriguing, but are beyond the scope of this paper. However, our approach using MC dropout sampling, representing the [*[epistemic]{}*]{} uncertainty in the model, would be a more straightforward strategy for performance prediction without requiring an exhaustive search. The uncertainty metrics were recently investigated by Eaton-Rosen et al. [@eaton2018towards] in a binary segmentation problem of the brain tumor, specifically for quantifying the uncertainty in volume measurement. Nair et al. [@nair2018exploring] also explored uncertainty in binary segmentation for lesion detection in multiple sclerosis. Our present work is distinct from these previous works in that we demonstrated a correlation between the structure-wise uncertainty metric, namely the MC sample variance, and the Dice coefficient of each structure.
Active learning, in which the algorithm interactively queries the user to obtain the desired ground truth for new data points, is an extensively studied topic, including discussion regarding the efficient use of non-expert knowledge from the [*[crowd]{}*]{} [@maier2016crowd] and the efficient saving of manual annotation cost by the expert [@yang2017suggestive]. We enhanced the approach developed by Yang et al. [@yang2017suggestive], which selects the new images for which the expert’s ground truth is most effective in improving accuracy. Our proposal is to further reduce the annotation cost by focusing on the pixels to annotate, resulting in an approximately 90-fold cost reduction. The idea of pixel selection is similar to that proposed in [@maier2016crowd], in which only super-pixels with high uncertainty are manually annotated. In summary, the proposed method combines slice- [@yang2017suggestive] and pixel- [@maier2016crowd] selection methods based on Bayesian neural networks [@gal2016dropout]. Our algorithm introduces one additional hyperparameter, the threshold of the uncertainty that determines whether a pixel is queried or not. We experimentally demonstrated that the threshold determines the trade-off between the manual annotation cost, learning speed, and final achievable accuracy. The optimum choice of the threshold value for a new data set requires further theoretical and experimental considerations, although the rate of initial improvement in accuracy during the first few acquisition steps would provide indications about the behavior in further steps, as shown in Fig. \[fig:result\_al\_dice\](a). Stopping criteria in active learning have been discussed in [@settles2009active]. The ideal criterion is when the “cost” caused by the error (e.g., incorrect diagnosis) becomes less than the annotation cost. However, in practice, the “cost” caused by the error is difficult to estimate, so the active learning is usually stopped when the learning curve stalls.
Our target application mainly focuses on personalization of the biomechanical simulation. The volumetric muscle modeling, using a finite element model [@webb20143d; @yamamura2014effect] or a simpler approximation in shape deformation for real-time applications such as [@murai2016anatomographic], has shown advantages in accurate prediction of muscle behavior. In addition, Otake et al. [@otake2018registration] demonstrated the potential for estimating the patient-specific muscle fiber structure from a clinical CT assuming the segmentation of each muscle was provided. The proposed accurate automated segmentation method enhances this volumetric modeling in clinical routine as well as in studies using a large-scale CT database for applications such as statistical analysis of human biomechanics for ergonomic design. The patient-specific geometry of skeletal muscles has also been studied in clinical diagnosis and monitoring of muscle atrophy or muscle fatty degeneration caused by or associated with conditions such as trauma, aging, disuse, malnutrition, and diabetes [@rasch2009persisting; @uemura2016volume], where muscles were delineated manually by a single operator from the images. The automated segmentation is also advantageous in the reduction of the manual labor and inter-operator variability in these analyses.
In general, CT is superior in terms of speed compared to MRI. The CT scanning protocol that we used for the lower extremity took less than 30 seconds, while a typical MRI scan of the same range with the same spatial resolution would require more than 10 minutes. The fast scan is especially advantageous in orthopedic surgery, where biomechanical simulation is most helpful, to obtain the entire muscle shapes from their origin to insertion in the thigh region. Nonetheless, application of the proposed method to MR images would also be achievable, for example, by using an algorithm such as CycleGAN [@hiasa2018cross; @zhang2017deep] for synthesizing a CT-like image from the MR image.
One limitation in this study is the limited variation in the training and test data set. The THA data set only contains females who were subject to THA surgery, which limits variation in size and fat content in muscles. Although the TCIA data set contains male patients and a larger variation in terms of pathology, the ground truth label is available only for the gluteus medius muscle. Another limitation in the active-learning method is that the experiment was only a simulation. Although it illustrated potential usefulness of the proposed uncertainty metric with dependency on the uncertainty threshold in one type of active-learning framework, further investigation with a larger labeled- and unlabeled- CT database would be preferable to evaluate effectiveness of the proposed method in a more realistic clinical scenario. An investigation of an effective learning algorithm that exploits information from a large-scale unlabeled data set without requiring the iterative/time-consuming manual annotation is also in our future work.
Supplementary materials {#sec:suppl .unnumbered}
=======================
![Bayesian U-Net on the TCIA soft tissue sarcoma data set. The tumor caused mis-segmentation in Patients \#01, \#07, \#09, \#11-\#15 and \#17; we observed that such failed regions indicated high uncertainty (black solid arrow). In Patient \#03, CT artifacts led to failure (black dashed arrow). In Patient \#10, some thigh muscle structures were out of the FOV, which led to mis-segmentation. The uncertainty was also high in these regions. ](suppl_fig1-eps-converted-to.pdf){width="93.00000%"}
![Qualitative evaluation of the effectiveness of the skin surface segmentation step in quantitative CT (QCT) volumes, which were scanned with an intensity calibration phantom placed near the skin surface. Bayesian U-Net was trained using the 20 CT volumes in the THA data set, which do not contain the calibration phantom. Three cases from our QCT data set (independent from the THA data set) are shown. Note that, when the skin segmentation step was not applied (3rd and 4th columns), the calibration phantom was wrongly segmented, indicated high uncertainty, and mis-segmentation in the muscles near the phantom boundary was observed (yellow arrows), while the skin segmentation step corrected these errors (5th, 6th, and 7th columns) and improved accuracy of the muscle segmentation. ](suppl_fig2-eps-converted-to.pdf){width="90.00000%"}
![Accuracy of individual muscular structures for 20 patients in THA data set with the hierarchical multi-atlas method [@yokota2018automated], FCN-8s and U-Net. Blue color shows high averaged accuracy or low variance. ](suppl_fig3-eps-converted-to.pdf){width="65.00000%"}
![List of the volume in mL and the number of CT slices (in brackets) for individual muscular structures of each patient in THA data set. Color indicates the relative volume per structure (blue indicates small volume). ](suppl_fig4-eps-converted-to.pdf){width="78.00000%"}
[^1]: This research was supported by MEXT/JSPS KAKENHI (19H01176, 26108004), JST PRESTO (20407), and the AMED/ETH Strategic Japanese-Swiss Cooperative Program.
[^2]: \(I) Y. Hiasa, Y. Otake, and Y. Sato are with Division of Information Science, Nara Institute of Science and Technology, Ikoma, Nara, Japan.
[^3]: \(II) M. Takao is with Department of Orthopaedic Surgery, Osaka University Graduate School of Medicine, Suita, Osaka, Japan.
[^4]: \(III) T. Ogawa, and N. Sugano are with Department of Orthopaedic Medical Engineering, Osaka University Graduate School of Medicine, Suita, Osaka, Japan.
[^5]: <http://www.cancerimagingarchive.net>
[^6]: The source code is available at [<https://github.com/yuta-hi/bayesian_unet>]{}
|
---
abstract: 'The consistency between the exchange-correlation functional used in pseudopotential construction and in the actual density functional theory calculation is essential for the accurate prediction of fundamental properties of materials. However, routine hybrid density functional calculations at present still rely on GGA pseudopotentials due to the lack of hybrid functional pseudopotentials. Here, we present a scheme for generating hybrid functional pseudopotentials, and we analyze the importance of pseudopotential density functional consistency for hybrid functionals. We benchmark our PBE0 pseudopotentials for structural parameters and fundamental electronic gaps of the G2 molecular dataset and some simple solids. Our results show that using our new PBE0 pseudopotentials in PBE0 calculations improves agreement with respect to all-electron calculations.'
author:
- Jing Yang
- 'Liang Z. Tan'
- 'Andrew M. Rappe'
bibliography:
- 'rappecites.bib'
title: Hybrid functional pseudopotentials
---
Introduction
============
Density functional theory (DFT) methods have proven to be successful for understanding and predicting the physical and chemical properties of materials. With approximations such as the local density approximation (LDA) [@Kohn65pA1133] and generalized-gradient approximation (GGA) [@Perdew96p3865], DFT can reproduce many fundamental properties of solids, such as lattice constants and atomization energies [@Buhl08p1449]. However, LDA and GGA usually underestimate the fundamental band gaps of semiconductors and insulators [@Perdew85p497]. The use of hybrid functionals in DFT, which combine part of the exact Hartree-Fock (HF) exchange with local or semilocal approximations (PBE0, HSE, B3LYP) [@Adamo99p6158; @Muscat01p397; @Heyd05p174101], has become a popular option for addressing this problem.
The pseudopotential approximation is often used to reduce the complexity of DFT calculations. By replacing the nucleus and core electrons with a finite shallow potential, the solution of the Kohn-Sham equation is simplified because of the reduced number of electrons in the system. Accuracy is preserved because the core electrons are not involved in chemical bonding [@Rappe90p1227; @Ramer99p12471].
Even though hybrid density functional calculations using pseudopotentials are currently very popular, these calculations solve the Kohn-Sham equation using pseudopotentials constructed at a lower rung of Jacob’s ladder [@Perdew01p1], such as GGA. This is due to a lack of hybrid functional pseudopotentials available to the community. The mismatch of the level of density functional approximation between pseudopotential construction and target calculation is theoretically unjustified, and could lead to reduced accuracy [@Fuchs98p2134]. In this work, we have developed hybrid density functional pseudopotentials to restore pseudopotential consistency in hybrid functional DFT calculations.
Prior to this work, Hartree-Fock pseudopotentials developed over the last decade [@Trail05p014112; @Al-Saidi08p075112] have proven to be useful in calculations with correlated electrons. The inclusion of HF exchange leads to stronger electron binding and mitigates the underbinding errors of GGA. It has been suggested that HF pseudopotentials may be useful in a variety of contexts, such as modeling systems with negatively-charged reference states [@Al-Saidi08p075112] and in diffusion Monte Carlo simulations [@Greeff98p1607; @Ovcharenko01p7790]. The successful development of HF pseudopotentials [@Al-Saidi08p075112] has opened the possibility of constructing hybrid pseudopotentials by including an exact exchange component into GGA potentials. Previous work demonstrated PBE0 pseudopotentials for gallium, indium and nitrogen atoms [@Wu09p115201]. However, such potentials are simple linear combinations of the exact exchange potential and the GGA derived potential without self-consistently solving hybrid PBE0 all-electron calculations.
In this paper, we construct self-consistent pseudopotentials (Sec. \[sec\_thy\]) with the PBE0 hybrid density functionals, following the Rappe-Rabe-Kaxiras-Joannopoulos (RRKJ) method [@Rappe90p1227]. We benchmark the hybrid functional pseudopotential accuracy for diatomic molecules in the G2 dataset and for simple solids, focusing on geometric parameters and fundamental gaps (Sec. \[sec\_test\]). Consistent use of the density functional between pseudopotential and molecular/solid calculations generally reduces the error by 0.1$\%$ on bond lengths and 3$\%$ on HOMO-LUMO gaps. The PBE0 pseudopotential generator is implemented in the OPIUM software package [@Opium].
Theoretical Methods {#sec_thy}
===================
In this section, we provide an overview of the standard theory behind pseudopotential construction, before discussing the special considerations that must be taken into account for hybrid functional pseudopotentials.
Pseudopotential construction {#sec_const}
----------------------------
The all-electron (AE) wavefunctions and eigenvalues of an atom are the foundation for the construction of all pseudopotentials. The AE Kohn-Sham (KS) equation is $$\left[-\frac{1}{2}\bigtriangledown^2+V_{\text{ion}}(\mathbf{r})+V_{\text{H}}[\rho(\mathbf{r})]+V_{\text{xc}}[\rho(\mathbf{r})]\right]\psi_i^{\text{AE}}(\mathbf{r})=\epsilon_i^{\text{AE}}\psi_i^{\text{AE}}(\mathbf{r}),
\label{eq:ae}$$ where $-\frac{1}{2}\bigtriangledown^2$ is the single-particle kinetic-energy operator, $V_{\text{ion}}(\mathbf{r})$ is the ionic potential that electrons feel from the nucleus, $V_{\text{H}}[\rho(\mathbf{r})]$ is the Hartree potential, and $V_{\text{xc}}[\rho(\mathbf{r})]$ is the exchange-correlation potential, which are functionals of the charge density $\rho(\mathbf{r})$. The all-electron wavefunction is denoted by $\psi_i^{\text{AE}}(\mathbf{r})$, and the all-electron energy eigenvalues by $\epsilon_i^{\text{AE}}$. For an atom, $V_{\text{ion}}(\mathbf{r})=-\frac{Z}{r}$, where $Z$ is the nuclear charge. Representing the wavefunction in spherical coordinates, $r=|\mathbf{r}|$ and each $\psi^{\text{AE}}_i(\mathbf{r})$ can be written as $$\psi^{\text{AE}}_{nlm}(\mathbf{r})=\frac{\phi^{\text{AE}}_{nl}(r)}{r}Y_{lm}(\theta,\phi),
\label{eq:spherical\_coord}$$ where $n,l,m$ are the principal, angular, and magnetic quantum numbers, and $\theta$ and $\phi$ are the polar and azimuthal angles of spherical coordinates. $\phi^{\text{AE}}_{nl}$ is the radial wavefunction and $Y_{lm}(\theta,\phi)$ are the spherical harmonics. Now, Eq. \[eq:ae\] can be simplified in terms of $\phi_{nl}$: $$\left(-\frac{1}{2}\frac{d^2}{dr^2}+\frac{l(l+1)}{2r^2}+V_{\text{KS}}(r)\right)\phi^{\text{AE}}_{nl}(r)=\epsilon^{\text{AE}}_{nl}\phi^{\text{AE}}_{nl}(r),
\label{eq:radial}$$ where $V_{\text{KS}}(r)=V_{\text{ion}}(r)+V_{\text{H}}(r)+V_{\text{xc}}(r)$. Instead of solving the full all-electron KS equation as in (Eq. \[eq:ae\]), it is computationally more efficient to solve the radial equation (Eq. \[eq:radial\]) self-consistently to obtain the radial wavefunction, $\phi^{\text{AE}}_{nl}(r)$ and corresponding eigenvalue, $\epsilon^{\text{AE}}_{nl}$. In most molecular or solid systems, the valence electrons of atoms within the system are more crucial than core electrons, because they are more involved in chemical bonding. The core electrons mostly contribute to the electrostatic shielding of the nucleus. The AE wavefunctions of core electrons can contain rapid oscillations, which will cause difficulty in solving Eq. \[eq:radial\] numerically. Therefore, it is advantageous to construct pseudopotentials, which capture the valence electron behavior and also eliminate the need to recalculate the core electron wavefunctions.
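To make the numerical task concrete, a minimal finite-difference sketch of Eq. \[eq:radial\] for a fixed screened potential is given below; production atomic solvers (including OPIUM) use logarithmic radial grids and more careful integration, so this is illustrative only and all names are assumptions:

```python
# Minimal sketch: radial KS equation on a uniform grid for a given screened
# potential V_KS(r), solved as a finite-difference eigenvalue problem.
import numpy as np
from scipy.linalg import eigh_tridiagonal

def radial_eigenstates(v_ks, l, r_max=30.0, n=4000, n_states=3):
    h = r_max / (n + 1)
    r = h * np.arange(1, n + 1)                        # exclude r = 0, where phi(0) = 0
    diag = 1.0 / h**2 + l * (l + 1) / (2.0 * r**2) + v_ks(r)
    off = -0.5 / h**2 * np.ones(n - 1)
    eps, phi = eigh_tridiagonal(diag, off, select='i', select_range=(0, n_states - 1))
    phi /= np.sqrt((phi**2).sum(axis=0) * h)           # normalize each radial orbital
    return r, eps, phi

# e.g. for a hydrogen-like potential V(r) = -1/r and l = 0, the lowest
# eigenvalues approach -0.5, -0.125, ... hartree on a fine enough grid.
```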
Replacing the potential by a pseudopotential operator, the KS equation can be written as, $$\left[-\frac{1}{2}\frac{d^2}{dr^2}+\frac{l(l+1)}{2r^2}+\hat{V}_{\text{PS}}\right]\phi^{\text{PS}}_{nl}(r)=\epsilon^{\text{PS}}_{nl}\phi^{\text{PS}}_{nl}(r),
\label{eq:radialps}$$ where $\hat{V}_{\text{PS}}$ is the screened pseudopotential operator. Note that such an operator is usually non-local (is an integral operator on $\phi^{\text{PS}}_{nl}(r)$). Similar to $V_{\text{KS}}$, $\hat{V}_{\text{PS}}=\hat{V}^{\text{PS}}_{\text{ion}}+{V}_{\text{H}}(r)+{V}_{\text{xc}}(r)$. $\epsilon^{\text{PS}}_{nl}$ is the pseudo-eigenvalue, and $\phi^{\text{PS}}_{nl}(r)$ is the pseudo-wavefunction. Norm-conserving pseudo-wavefunctions [@Hamann79p1494] should obey the following criteria: $$\begin{aligned}
(1)\quad& \phi^{\text{PS}}_{nl}(r) = \phi^{\text{AE}}_{nl}(r), \quad
\frac{d\phi^{\text{PS}}_{nl}(r)}{dr}=\frac{d\phi^{\text{AE}}_{nl}(r)}{dr}, \quad
\frac{d^2\phi^{\text{PS}}_{nl}(r)}{dr^2}=\frac{d^2\phi^{\text{AE}}_{nl}(r)}{dr^2}\text{ for } r\geqslant r_c. \\
(2)\quad& \epsilon^{\text{PS}}_{nl} =\epsilon^{\text{AE}}_{nl} \\
(3)\quad& \langle\phi^{\text{PS}}_{nl}|\phi^{\text{PS}}_{nl}\rangle =\langle\phi^{\text{AE}}_{nl}|\phi^{\text{AE}}_{nl}\rangle =1 \\
(4)\quad& \frac{d}{d\epsilon}\left(\frac{d\ln\phi^{\text{PS}}_{nl}(r)}{dr}\right)\bigg |_{R,\epsilon_{nl}} = \frac{d}{d\epsilon}\left(\frac{d\ln\phi^{\text{AE}}_{nl}(r)}{dr}\right)\bigg |_{R,\epsilon_{nl}},\ R\geqslant r_c
\end{aligned}$$
Together, they guarantee wavefunction smoothness and continuity, that the solutions of the pseudo-system are accurate representations of the corresponding all-electron system, and that the error in eigenenergy shifts caused by chemical bonding is small for gentle changes to the wavefunctions and density [@Hamann79p1494], hence improving the transferability, or applicability, of the pseudopotential in different chemical environments.
In the RRKJ method [@Rappe90p1227], the pseudo-wavefunction is constructed as a sum of $N_b$ spherical Bessel functions $j_l(q_k r)$: $$\phi^{\text{PS}}_{nl}(r)=
\begin{cases}
\sum^{N_b}_{k=1}c_{nlk}r j_{l}(q_k r), & \quad r < r_c \\
\phi^{\text{AE}}_{nl}(r), & \quad r\geqslant r_c\\
\end{cases}
\label{eq:RRJK}$$ where the coefficients, $c_{nlk}$, are chosen to normalize the wavefunction and satisfy continuity constraints at $r_c$. Additional $c_{nlk}$ coefficients improve plane-wave convergence. Once the pseudo-wavefunction is constructed, the pseudopotential is obtained by inverting the pseudo-KS equation above (see Eq. (\[eq:radialps\])). In applications of the pseudopotential in solid-state or molecular calculations, the screening effect of the valence electrons will generally be different from that in the atomic calculation. Therefore, the valence electron screening is removed to obtain a descreened pseudopotential, $V^{\text{PS}}_{\text{ion},l}(r)$ for each angular momentum $l$, by subtracting the Hartree and exchange-correlation potentials from the screened pseudopotential
$$\label{eq:descreenps}
V^{\text{PS}}_{\text{ion},l}(r)= V^{\text{PS}}_l(r)-V_{\text{H}}[\rho_{\text{val}}](r)-V_{\text{xc}}[\rho_{\text{val}}](r),$$
where $V_{\text{H}}[\rho_{\text{val}}](r)$ and $V_{\text{xc}}[\rho_{\text{val}}](r)$ are calculated only from the valence charge density. The full pseudopotential, written in semilocal form, is then
$$\begin{aligned}
\hat{V}^{\text{PS}}_{\text{ion}} =& \sum_{lm} V^{\text{PS}}_{\text{ion},l}(r)\, \lvert Y_{lm} \rangle \langle Y_{lm} \rvert \\
=& V_{\textrm{loc}}(r) + \sum_l\Delta \hat{V}_l^{\textrm{SL}} \\
\end{aligned}$$
In the second line, the potential is expressed as the sum of a local potential $V_{\textrm{loc}}(r)$ and semilocal corrections $\Delta\hat{V}_l^{\textrm{SL}}$, which are projections in the angular coordinates yet local in the radial coordinate. In order to reduce the memory cost of computation, we write the semilocal pseudopotential in a fully-separable nonlocal Kleinman-Bylander [@Kleinman82p1425] form
$$\label{eq:KB}
\begin{aligned}
\hat{V}^{\textrm{PS}} =& \hat{V}^{\textrm{loc}} + \sum_l \Delta \hat{V}_l^{\textrm{NL}} \\
\Delta \hat{V}_l^{\textrm{NL}} =& \frac{\Delta \hat{V}_l^{\textrm{SL}} \lvert \phi_{nl}^{\textrm{PS}} \rangle \langle \phi_{nl}^{\textrm{PS}} \rvert \Delta \hat{V}_l^{\textrm{SL}}}{\langle \phi_{nl}^{\textrm{PS}} \rvert \Delta \hat{V}_l^{\textrm{SL}} \lvert \phi_{nl}^{\textrm{PS}} \rangle}
\end{aligned}$$
Writing the pseudopotential in this form ensures that the semilocal and nonlocal pseudoatoms have the same eigenvalues and wavefunctions for the reference configuration. The transferability of such a nonlocal pseudopotential to configurations other than the reference can be improved by applying the designed nonlocal strategy, which involves modifying the projectors of Eq. \[eq:KB\] [@Ramer99p12471].
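To make the separable form concrete, the following minimal sketch (not taken from any production code; the grid, quadrature, and test functions are illustrative assumptions) applies the Kleinman-Bylander term of a single $l$ channel to a radial function on a grid and checks that, acting on the reference pseudo-wavefunction itself, it reproduces the semilocal action, as guaranteed by the construction above.

```python
import numpy as np

def kb_apply(psi, dV_l, phi_ps, r):
    """Apply the Kleinman-Bylander nonlocal term of one l channel,
        dV_NL psi = |dV_l phi> <phi dV_l|psi> / <phi|dV_l|phi>,
    assuming the semilocal correction dV_l(r) is multiplicative in r."""
    w = np.gradient(r)                    # simple quadrature weights for dr integrals
    beta = dV_l * phi_ps                  # projector beta(r) = dV_l(r) * phi(r)
    e_kb = np.sum(w * phi_ps * beta)      # denominator <phi|dV_l|phi>
    return beta * np.sum(w * beta * psi) / e_kb

# Toy check on dummy data: acting on the reference state recovers dV_l * phi
r = np.linspace(1e-4, 10.0, 2000)
phi = 2.0 * r * np.exp(-r)                # hydrogenic-1s-shaped dummy pseudo-wavefunction
dV = -0.5 * np.exp(-(r - 1.0) ** 2)       # dummy short-ranged semilocal correction
np.testing.assert_allclose(kb_apply(phi, dV, phi, r), dV * phi)
```

Because the numerator and denominator contain the same overlap when the operator acts on the reference state, the check passes identically, which is precisely the property the KB transformation is designed to preserve.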
Hartree-Fock pseudopotentials {#sec:HF_ps}
-----------------------------
Pseudopotentials can be constructed by solving the all-electron (AE) and pseudopotential (PSP) equations, Eq. \[eq:ae\] and Eq. \[eq:radialps\] above, using different exchange-correlation functionals, such as LDA or GGA. It is crucial that the exchange-correlation functional used for pseudopotential construction is the same as the functional used in the target calculation [@Fuchs98p2134]. When the exchange-correlation functional contains the Fock operator, as is the case for the hybrid functionals presently in widespread use, there are special considerations that must be taken into account in constructing the pseudopotential. Here, we consider the case of Hartree-Fock (HF) pseudopotentials, where the exchange-correlation functional is just the Fock operator, and will examine the PBE0 hybrid functional in the next subsection, where the Fock operator and PBE exchange-correlation are combined. For the HF pseudopotential, instead of solving the KS equation as in Eq.(\[eq:radial\]), we solve the Hartree-Fock equation,
$$\left(\hat{T}+V_{\text{ion}}(\mathbf{r})+\hat{V}_{\text{HF}}[\{ \psi_{n'l'}\}]\right)\psi_{nl}(\mathbf{r})=\epsilon_{nl}\psi_{nl}(\mathbf{r}),
\label{eq:hf}$$
where $\psi_{nl}(\mathbf{r})$ still takes the form in Eq.(\[eq:spherical\_coord\]) (dropping the AE superscript for simplicity), $V_{\text{ion}}(\mathbf{r})$ is the ionic potential, and $\hat{V}_{\text{HF}}[\{ \psi_{nl}\}]$ is the HF potential, which depends on the set of wavefunctions $\{ \psi_{nl}\}$. It is separated into two terms, $$\hat{V}_{\text{HF}}[\{ \psi_{n'l'}\}]=\hat{V}_{\text{H}}[\{ \psi_{n'l'}\}]+\hat{V}_{\text{x}}[\{ \psi_{n'l'}\}].$$ The Hartree potential takes the form $$\langle \psi_{nl}|\hat{V}_{\text{H}}[\{ \psi_{n'l'}\}]|\psi_{nl}\rangle=\sum_{n'l'}\int d^3\mathbf{r}'d^3\mathbf{r}\frac{|\psi_{n'l'}(\mathbf{r}')|^2|\psi_{nl}(\mathbf{r})|^2}{|\mathbf{r}-\mathbf{r}'|},$$ and the exact exchange operator acts as
$$\langle \psi_{nl}|\hat{V}_{\text{x}}[\{ \psi_{n'l'}\}]|\psi_{nl}\rangle= \sum_{n'l'}\int d^3\mathbf{r}'d^3\mathbf{r}\frac{\psi_{nl}(\mathbf{r})\psi^{*}_{n'l'}(\mathbf{r})\psi_{n'l'}(\mathbf{r}')\psi^{*}_{nl}(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}.
\label{eq:vx}$$
Direct evaluation of the Fock integral above (Eq. \[eq:vx\]) requires introduction of angular variables for orbitals with non-zero angular momentum. This would result in non-spherical pseudopotentials, as well as introduce complexity into the pseudopotential generation process, which would then depend on the exact atomic configuration, including magnetic quantum numbers. To circumvent these issues, we make use of a spherical approximation, to construct spherical Hartree-Fock pseudopotentials. Spherical approximations are routinely used to construct spherical LDA and GGA pseudopotentials, which are widely used successfully in electronic and structural calculations.
We use the Hartree-Fock spherical approximation of Froese Fischer [@Fischer87p355], based on the concept of the “average energy of configuration” introduced by Slater [@Slater]. Consider all atomic configurations in which the $i$-th shell, with principal and angular momentum quantum numbers $n_i$ and $l_i$, is occupied with weight $w_i$; that is, all ways of distributing $w_i$ electrons over the $(2l_i+1)$-fold degenerate shell $(n_il_i)$.
The average energy of all such atomic configurations, expressed as a sum over pairs of atomic orbitals $(n_il_i)$ and $(n_jl_j)$, is $$\begin{aligned}
E_{\text{av}}^{\text{HF}}&=\sum_{i=1}^{m} w_i[I(n_il_i,n_il_i)+\left(\frac{w_i-1}{2}\right)\sum_{k=0}^{2l_i}f_k(l_i,l_i)F^{k}(n_il_i,n_il_i)]\\
&+\sum_{i=2}^{m}\left\{\sum_{j=1}^{i-1}w_iw_j\left[F^0(n_il_i,n_jl_j)+\sum_{k=|l_i-l_j|}^{(l_i+l_j)}g_k(l_i,l_j)G^k(n_il_i,n_jl_j)\right]\right\},
\label{eq:slater_average}
\end{aligned}$$ Here, the first summation represents the one-electron contribution,
$$I(nl,nl)=-\frac{1}{2}\int_0^{\infty}\phi_{nl}^{*}(r)\left(\frac{d^2}{dr^2}+\frac{2Z}{r}-\frac{l(l+1)}{r^2}\right)\phi_{nl}(r)dr.
\label{eq:slater_oneel}$$
The remaining terms describe the interactions between pairs of electrons. $F^k$ and $G^k$ are the Hartree and exchange energy Slater integrals,
$$\label{eq:slater_Fk}
F^{k}(nl;n'l')=\int_{0}^{\infty}\int_{0}^{\infty}\phi_{nl}(r)\phi_{nl}(r)\frac{r_<^k}{r_>^{k+1}}\phi_{n'l'}(r')\phi_{n'l'}(r')drdr',$$
and $$G^{k}(nl;n'l')=\int_{0}^{\infty}\int_{0}^{\infty}\phi_{nl}(r)\phi_{n'l'}(r')\frac{r_<^k}{r_>^{k+1}}\phi_{n'l'}(r)\phi_{nl}(r')drdr',
\label{eq:slater_FandG}$$
where $r_<$ ($r_>$) is the lesser (greater) of $r$ and $r'$. Details of the derivation are provided in Appendix B, and the numerical coefficients $f_k$ and $g_k$ are tabulated in Ref. [@Slater]. We note that the integrals in Eqs. \[eq:slater\_oneel\]–\[eq:slater\_FandG\] for the average energy depend only on the radial coordinate, and hence are a simplification of Eq. \[eq:vx\].
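As a quick numerical illustration of these definitions (this is not part of the generation code; the linear grid and the hydrogenic test orbitals are our own choices), the sketch below evaluates $F^0$ and $G^0$ directly on a radial grid and recovers the familiar hydrogenic values $F^0(1s,1s)=5/8$ and $G^0(1s,2s)=16/729$ Hartree.

```python
import numpy as np

# Radial grid and hydrogenic test orbitals, phi_nl(r) = r * R_nl(r), Z = 1
r = np.linspace(1e-6, 50.0, 1500)
u_1s = 2.0 * r * np.exp(-r)
u_2s = r * (2.0 - r) * np.exp(-r / 2.0) / (2.0 * np.sqrt(2.0))

r_less = np.minimum.outer(r, r)     # r_<
r_greater = np.maximum.outer(r, r)  # r_>

def slater_F(ua, ub, k):
    """F^k(a;b): double radial integral of ua(r)^2 (r_<^k / r_>^(k+1)) ub(r')^2."""
    w = np.outer(ua**2, ub**2) * r_less**k / r_greater**(k + 1)
    return np.trapz(np.trapz(w, r, axis=1), r)

def slater_G(ua, ub, k):
    """G^k(a;b): double radial integral of ua(r) ub(r) (r_<^k / r_>^(k+1)) ub(r') ua(r')."""
    w = np.outer(ua * ub, ua * ub) * r_less**k / r_greater**(k + 1)
    return np.trapz(np.trapz(w, r, axis=1), r)

print(slater_F(u_1s, u_1s, 0))   # ~0.625  (= 5/8)
print(slater_G(u_1s, u_2s, 0))   # ~0.0219 (= 16/729)
```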
Taking functional derivatives of Eq. \[eq:slater\_average\] with respect to the radial wavefunctions $\phi_i(r)$, we arrive at Hartree-Fock equations for the wavefunctions of a Hartree-Fock atom. The set of $m$ radial wavefunctions $\phi_{i}, \, i=1,\dots,m$ obeys the coupled set of equations
$$\hat{L}\,\phi_{i}(r)=\frac{2}{r} \, \big [ Y_i[\{\phi\}](r) \, \phi_{i}(r)+X_i[\{\phi\}](r) \big ]+\sum_{j=1}^{m} \varepsilon_{ij}\phi_{j}(r),
\label{eq:coupled_hf}$$
where $\hat{L}=\frac{d^2}{dr^2}-2V_{\text{ion}}(r)-\frac{l_i(l_i+1)}{r^2}$ is the single-particle part of the Hartree-Fock Hamiltonian, $(2/r)Y_i[\{\phi\}](r)$ and $(2/r)X_i[\{\phi\}](r)$ are the Hartree and exchange terms [@Fischer], and $\varepsilon_{ij}$ are Lagrange multipliers enforcing orthogonality and normalization of the radial wavefunctions. The detailed derivation of all these terms is presented in Appendix C.
Once the HF equations are constructed, we solve them self-consistently, in much the same way as for DFT pseudopotentials. The HF pseudowavefunctions $\phi_{nl}^{\text{PS}}(r)$ are constructed using the same RRKJ procedure (Eq.(\[eq:RRJK\])) as for the DFT pseudowavefunctions. The screened pseudopotential is obtained by inverting Eq.(\[eq:hf\]). Similar to DFT pseudopotentials, we descreen by subtracting the Hartree and exchange contributions of the valence electrons (cf. Eq. \[eq:descreenps\]) $$V_{\text{ion},l}^{\text{PS}}(r)
=V_{l}^{\text{PS}}(r)-
\frac{2}{r} Y_i[\{\phi_{\text{val}}\}](r)-
\frac{2X_i[\{\phi_{\text{val}}\}](r)}{r\phi_{i}(r)} ,$$ with $Y_i$ and $X_i$ obtained from Eq. \[eq:coupled\_hf\]. The HF pseudopotential constructed this way has a long-range non-Coulombic tail, which does not decay as $1/r$. This is a consequence of the non-local nature of the Fock operator [@Al-Saidi08p075112]. To resolve this issue, we make use of the localization procedure of Trail and Needs [@Trail05p014112]: the tail is forced to approach the Coulombic $1/r$ form asymptotically, and the potential is modified within the localization radius to ensure consistency with the all-electron eigenvalues [@Al-Saidi08p075112].
PBE0 pseudopotentials
---------------------
As hybrid functionals are a mix of HF and DFT ingredients, we generate hybrid pseudopotentials using the HF pseudopotential approach as a foundation. The PBE0 density functional [@Perdew96p9982] was developed based on the PBE exchange-correlation functional [@Perdew96p3865]; the PBE0 form is
$$\label{eq:pbe0}
E_{\text{xc}}^{\text{PBE0}}=aE_{\text{x}}^{\text{HF}}+(1-a)E_{\text{x}}^{\text{PBE}}+E_{\text{c}}^{\text{PBE}},$$
where $a=0.25$ for the PBE0 functional. As we use the spherical approximation for $E_{\text{x}}^{\text{HF}}$ (Eq. \[eq:slater\_average\]), we likewise evaluate the PBE exchange-correlation functional using a spherical approximation. Since $E_\text{x}^{\text{PBE}}$ is a functional of density only, this method consists of evaluating $E_\text{x}^{\text{PBE}}$ in Eq. \[eq:pbe0\] at the charge density, again taken to be the average over all possible magnetic quantum number configurations.
$$\rho(r)= \sum_{nlm} f_{nlm} |\psi_{nlm}(\mathbf{r})|^2 = \frac{1}{4\pi}\sum_{n_il_i} f_{n_il_i}|\phi_{n_il_i}(r)|^2,$$
where $\rho(r)$ is the spherically symmetric charge density, $f_{n_il_i}=w_i$ (as in Appendix B) is the occupation number of each orbital $(n_il_i)$, and $f_{nlm}=f_{nlm'}$ is the ($m$-independent) occupation number of each magnetic quantum number state $(nlm)$. Upon including $E_{\text{x}}^{\text{PBE}}$ and $E_{\text{c}}^{\text{PBE}}$ in the total energy expression Eq. \[eq:slater\_average\], and taking functional derivatives, the coupled set of HF equations (Eq. \[eq:coupled\_hf\]) becomes
$$\hat{L}\phi_{i}(r)=\frac{2}{r}[Y_i(r)\phi_{i}(r)+\frac{1}{4}X_i(r)]+\frac{3}{4}V_{\text{x}}^{\text{PBE}}(r)+V_{\text{c}}^{\text{PBE}}(r)+\sum_{j=1}^{m}\delta_{l_il_j}\epsilon_{ij}\phi_{j}(r),$$
where the additional terms are the PBE exchange potential $V_{\text{x}}^{\text{PBE}}(r)$ and the PBE correlation potential $V_{\text{c}}^{\text{PBE}}(r)$. The self-consistent solution of these coupled equations is found iteratively, in a similar fashion to the HF equations (Eq. \[eq:coupled\_hf\]). At each iteration, we calculate the Fock exchange term ($X_i(r)$) from the wavefunctions of the previous iteration, and the PBE terms ($V_\text{x}^{\text{PBE}}$, $V_\text{c}^{\text{PBE}}$) from the density of the previous iteration. The pseudopotential construction is performed in the same way as for HF pseudopotentials, including RRKJ pseudization, descreening, and localization of the non-Coulombic tail.
Testing of PBE0 pseudopotentials on molecular and solid state systems {#sec_test}
=====================================================================
We test the accuracy of our PBE0 pseudopotentials and the importance of pseudopotential density functional consistency for PBE0. We compare PBE calculations using PBE pseudopotentials (PBE), PBE0 calculations using PBE0 pseudopotentials (PBE0), and PBE0 calculations using PBE pseudopotentials (PBE-PBE0). The last case is currently the most widely used way of performing PBE0 calculations. The DFT code we use is Quantum ESPRESSO [@Giannozzi09p395502]. Each molecule is placed in a 20.0 Å cubic box, and its energy and geometry are computed with a kinetic energy cutoff of $E_{\text{cut}}$=25.0 Hartree. All these calculations are spin-polarized. The total energy and force convergence thresholds are set to 0.005 mHartree/cell and 0.05 mHartree/Å, respectively. The reference all-electron calculations are performed using FHI-aims [@Blum09p2175] with tight basis settings. The molecular and crystal structural optimizations are converged to within 3 $\times10^{-3}$ mHartree/cell for the total energy and within 0.003 mHartree/Å for the forces.
In Table \[tab:2\], we show the bond lengths of diatomic molecules belonging to the G2 data set [@Adamo99p6158] and compare each of our pseudopotential calculations with all-electron PBE0 values [@Becke98p2092]. The PBE functional gives the worst mean absolute relative error (MARE) of 1.08$\%$ when compared to the FHI-aims PBE0 results. The use of the PBE pseudopotential in the PBE0 calculation gives a MARE of 0.71$\%$. Using the PBE0 functional with the PBE0 pseudopotential, the MARE reduces to $0.53\%$. This indicates that pseudopotential density functional consistency improves bond lengths for PBE0.
Molecule PBE PBE-PBE0 PBE0 AE-PBE AE-PBE0
---------- ------- ---------- ------- -------- ---------
H$_2$ 0.753 0.747 0.747 0.750 0.746
LiH 1.600 1.595 1.596 1.603 1.595
BeH 1.348 1.343 1.351 1.355 1.348
CH 1.137 1.122 1.122 1.136 1.124
NH 1.070 1.056 1.041 1.050 1.041
OH 0.983 0.975 0.966 0.983 0.983
FH 0.928 0.914 0.912 0.93 0.918
Li$_2$ 2.719 2.725 2.718 2.728 2.723
LiF 1.578 1.567 1.566 1.574 1.562
CN 1.174 1.159 1.159 1.175 1.159
CO 1.135 1.123 1.122 1.136 1.122
N$_2$ 1.081 1.069 1.069 1.103 1.089
NO 1.132 1.113 1.138 1.157 1.139
O$_2$ 1.212 1.218 1.217 1.218 1.192
F$_2$ 1.420 1.382 1.382 1.413 1.376
MARE 1.08 0.71 0.53 1.08
: The bond lengths of the diatomic molecules from G2 data set calculated from PBE, PBE-PBE0 and PBE0. The all-electron data are calculated using FHI-aims [@Blum09p2175]. Units in Å. The MARE is calculated as MARE$=\frac{1}{N}\sum_i^N\frac{|b_i-b_{\text{AE}}|}{b_{\text{AE}}}\times100$, where $N$ is the number of species, $b_i$ is the bond length of each species, and $b_{\text{AE}}$ is the PBE0 all-electron value. []{data-label="tab:2"}
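For concreteness, the MARE values quoted above can be reproduced directly from the columns of Table \[tab:2\] with a few lines of Python (the bond lengths are simply transcribed from the table):

```python
# Bond lengths (in Angstrom) transcribed from Table 2; AE-PBE0 is the reference.
ae_pbe0 = [0.746, 1.595, 1.348, 1.124, 1.041, 0.983, 0.918, 2.723,
           1.562, 1.159, 1.122, 1.089, 1.139, 1.192, 1.376]
pbe     = [0.753, 1.600, 1.348, 1.137, 1.070, 0.983, 0.928, 2.719,
           1.578, 1.174, 1.135, 1.081, 1.132, 1.212, 1.420]
pbe0    = [0.747, 1.596, 1.351, 1.122, 1.041, 0.966, 0.912, 2.718,
           1.566, 1.159, 1.122, 1.069, 1.138, 1.217, 1.382]

def mare(calc, ref):
    """Mean absolute relative error in percent, as defined in the caption of Table 2."""
    return 100.0 * sum(abs(c - a) / a for c, a in zip(calc, ref)) / len(ref)

print(f"PBE:  {mare(pbe, ae_pbe0):.2f} %")    # ~1.08
print(f"PBE0: {mare(pbe0, ae_pbe0):.2f} %")   # ~0.53
```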
One of the reasons for using hybrid density functionals is that they predict fundamental gaps and ionization potentials (IP) more accurately than the PBE functional [@Ernzerhof99p5029; @Matsushita11p075205; @Wu09p115201]. Table \[tab:comp\_IP\] shows the HOMO eigenvalues of the diatomic molecules in the G2 dataset, calculated with the different density functionals and compared with the HOMO levels from all-electron calculations. The MARE between the PBE HOMO eigenvalues and the all-electron PBE0 values is the largest among the three computed cases. Both PBE0 cases have smaller errors than the PBE case, and the MARE of PBE0 is reduced by 0.13$\%$ compared to PBE-PBE0.
Molecule PBE PBE-PBE0 PBE0 AE-PBE AE-PBE0
---------- -------- ---------- -------- -------- ---------
H$_2$ -10.31 -11.96 -11.96 -10.34 -11.99
LiH -3.89 -5.45 -5.44 -4.35 -5.44
BeH -4.76 -5.77 -5.20 -4.68 -5.69
CH -5.91 -7.43 -7.43 -5.84 -7.45
NH -7.98 -9.78 -9.76 -6.69 -9.76
OH -7.06 -8.81 -8.72 -7.14 -7.00
FH -9.33 -11.43 -11.43 -9.61 -11.86
Li$_2$ -3.20 -3.99 -3.75 -3.16 -3.72
LiF -6.08 -7.77 -7.85 -6.09 -7.96
CN -9.30 -10.74 -10.94 -9.38 -9.32
CO -9.01 -10.41 -10.42 -9.03 -10.72
N$_2$ -10.07 -11.93 -12.20 -10.22 -12.20
NO -4.74 -6.25 -6.29 -4.50 -4.60
O$_2$ -6.71 -8.68 -8.70 -6.91 -8.91
F$_2$ -9.41 -11.50 -11.58 -9.46 -11.68
MARE 15.87 6.79 6.66 16.06
: HOMO eigenvalues with PBE, PBE-PBE0 and PBE0 methods. Energies are in eV. The all-electron PBE0 values are used as the reference.[]{data-label="tab:comp_IP"}
In Table \[tab:comp\_HL\], we present the HOMO-LUMO gaps for the same set of molecules as in Table \[tab:comp\_IP\].
Molecule PBE PBE-PBE0 PBE0 AE-PBE AE-PBE0
---------- ------- ---------- ------- -------- ---------
H$_2$ 10.26 11.94 11.94 10.84 13.10
LiH 2.57 4.04 4.48 2.81 4.45
BeH 2.64 4.44 4.42 2.31 4.15
CH 2.06 3.95 3.51 1.77 3.60
NH 3.95 7.27 7.34 6.45 7.16
OH 1.12 4.77 4.92 6.54 4.25
FH 8.19 10.92 10.93 8.76 11.80
Li$_2$ 1.41 2.75 2.47 1.43 2.50
LiF 4.29 6.41 6.50 4.62 7.02
CN 1.99 4.67 4.74 1.72 4.48
CO 6.98 9.61 9.62 6.98 10.04
N$_2$ 7.66 10.94 10.94 8.24 11.71
NO 1.30 3.50 2.88 1.22 2.86
O$_2$ 2.40 5.74 6.09 2.31 6.10
F$_2$ 3.32 7.77 7.79 3.63 8.34
MARE 44.70 7.96 4.55 40.88
: HOMO-LUMO gap (in eV) of diatomic molecules in G2 dataset with different functionals. The PBE0 all-electron results are used as the reference. []{data-label="tab:comp_HL"}
Both PBE0 cases give values much closer to the AE PBE0 reference, and our PBE0 pseudopotential shows a small error reduction compared to the hybrid DFT calculation with PBE pseudopotentials. As for the bond lengths, consistency of the exchange-correlation density functional between the pseudopotential and the DFT calculation reduces the error. This indicates that using PBE pseudopotentials for PBE0 DFT calculations already gives good accuracy, which can be further improved by using pseudopotentials constructed with a consistent density functional.
We have also tested our pseudopotentials in solid-state calculations. The lattice constants and band gaps of $\alpha$-Si and $\beta$-GaN are shown in Table \[tab:4\]. As for the molecular bond lengths, density functional consistency also influences the lattice constants of solids. The lattice constant of $\alpha$-Si is slightly improved by using PBE0 pseudopotentials instead of PBE-PBE0. The PBE calculation significantly underestimates the band gaps, while the two PBE0 cases increase the band gaps by a large amount compared to the PBE calculation. The band gaps from PBE-PBE0 and PBE0 are within 1$\%$ of each other, for both Si and GaN; the PBE0 pseudopotential band gap tends to be lower, and closer to the experimental value. Together with the molecular results, we may conclude that the systematic error from pseudopotential density functional inconsistency is of the order of 1$\%$ for PBE0, for the systems tested.
Crystal PBE PBE-PBE0 PBE0 AE-PBE AE-PBE0
------------------- --------------- -------------- --------------- --------------- ---------
Lattice constants
Si 5.484(0.219) 5.452(0.073) 5.446(-0.037) 5.472(0.441) 5.448
GaN 4.541(-0.176) 4.539(0.066) 4.537(0.022) 4.549 (0.287) 4.536
Band Gap
Si 0.58(-77.17) 1.79(9.82) 1.78(9.20) 2.54 (55.83) 1.63
GaN 1.81(16.77) 3.58(1.13) 3.56(0.56) 1.55 (-56.21) 3.54
: Solid state calculation with PBE, PBE-PBE0 and PBE0. The lattice constant and band gap of Si and GaN are listed. The lattice constant is in units of Å, and the band gap is in eV. The experimental band gaps are at 0K. Relative errors ($\%$) are listed in parentheses. All-electron PBE0 results are used as the reference.[]{data-label="tab:4"}
Conclusion {#sec_conc}
==========
We have developed the first self-consistent PBE0 pseudopotentials and have successfully implemented them in the OPIUM pseudopotential generation code. We have also shown that our PBE0 pseudopotentials behave well when used in DFT calculations. Our benchmark tests on the G2 dataset indicate that the systematic error associated with pseudopotential density functional inconsistency is within 1$\%$. Using the PBE0 pseudopotential in PBE0 DFT calculations improves the bond length accuracy relative to all-electron PBE0 references (MARE of 0.53$\%$ versus 0.71$\%$) compared to using PBE pseudopotentials. The HOMO eigenvalues for the G2 dataset predicted using PBE0 pseudopotentials are closer to the all-electron values than those from PBE-PBE0. On average, for our test set, the error in the HOMO-LUMO gaps of the molecules is reduced by about 3$\%$. A similar trend is obtained for the solids tested. From these results, we conclude that using PBE pseudopotentials in PBE0 calculations leads to acceptable results for small molecules and simple solids, while using PBE0 pseudopotentials instead will likely result in a small, consistent increase in accuracy. Future directions include further testing of PBE0 pseudopotentials on more complex systems, the inclusion of relativistic effects for heavy atoms, and the development of other hybrid functional pseudopotentials, including range-separated hybrids [@Toulouse10p032502].
Acknowledgements
================
J.Y. was supported by the U.S. National Science Foundation, under grant CMMI-1334241. L.Z.T. was supported by the U.S. ONR under Grant N00014-17-1-2574. A.M.R. was supported by the U.S. Department of Energy, under grant DE-FG02-07ER46431. Computational support was provided by the HPCMO of the U.S. DOD and the NERSC of the U.S. DOE.
Appendix A: Construction of PBE0 pseudopotentials on a real space grid {#sec_bench}
======================================================================
The accuracy of the real space pseudopotential generator depends on the radial grid size. The use of a logarithmic grid ensures enough grid points near the core to describe the oscillations of the all-electron wavefunctions in that region, while capturing the tail of the wavefunctions at large distances from the core to sufficient accuracy. The logarithmic grid is defined as $$r_i=aZ^{-1/3}e^{(i-1)b}, \quad i=1,...,N$$ where $N$ is the number of grid points, spanning a sufficiently large real space range ($r_{\rm max}$), $Z$ is the core charge, $a$ controls the position of the first grid point, and $b$ determines the grid spacing. We use values of $a=0.0001$ and $b=0.013$. The number of grid points $N$ is obtained by setting $r_{\rm{max}}$=80 Bohr.
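A minimal sketch of this grid construction (the function name and the way $N$ is rounded are our own choices; the parameter values are those quoted above):

```python
import numpy as np

def log_grid(Z, a=1.0e-4, b=0.013, r_max=80.0):
    """Logarithmic radial grid r_i = a * Z**(-1/3) * exp((i-1)*b), i = 1..N,
    with N chosen so that the last point just reaches r_max (in Bohr)."""
    r1 = a * Z ** (-1.0 / 3.0)
    N = int(np.ceil(np.log(r_max / r1) / b)) + 1
    return r1 * np.exp(b * np.arange(N))

r = log_grid(Z=6.0)            # e.g., carbon
print(len(r), r[0], r[-1])     # ~1100 points, from ~5e-5 Bohr out to ~80 Bohr
```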
Appendix B: Derivation of Hartree-Fock average energy {#sec:hfaverage}
=====================================================
As a preliminary to deriving the average energy formula Eq. \[eq:slater\_average\], we collect several useful quantities. The Hartree potential due to an electron in the state $(nlm)$ is
$$\label{eq:vhnlm}
\begin{aligned}
V_H^{(nlm)}(\vec{r}) =& \int d^3r' \frac{|\psi_{nlm}(\vec{r'})|^2}{\lvert \vec{r} - \vec{r'}\rvert} \\
=& \int_0^\infty r'^2dr' d\Omega' \frac{\phi_{nl}(r')^2 \lvert Y_{lm}(\Omega')\rvert ^2 }{\lvert \vec{r} - \vec{r'}\rvert}
\end{aligned}$$
Using the multipole expansion (written here with the sum over $m$ made explicit, in preparation for Eq. \[eq:vhnlm-simple\])
$$\label{eq:expand}
\frac{1}{\lvert \vec{r} - \vec{r'}\rvert} = \sum_{k=0}^\infty \sum_{m=-k}^{k} \frac{4\pi}{2k+1} (-1)^m
\frac{r_<^k}{r_>^{k+1}} Y_k^{-m}(\Omega) Y_k^{m}(\Omega')$$
where $r_<$ ($r_>$) is the lesser (greater) of $r$ and $r'$, we write Eq. \[eq:vhnlm\] as
$$\label{eq:vhnlm-simple}
\begin{aligned}
V_H^{(nlm)}(\vec{r}) =& \sum_{km'} \int_0^\infty r'^2dr'
\frac{r_<^k}{r_>^{k+1}}
\sqrt{\frac{4\pi}{2k+1}}\, Y_k^{0*}(\Omega) \,
c^k(l,m',l,m') \, \phi_{nl}(r')^2 \\
=& \int_0^\infty r'^2dr' \frac{1}{r_>} \phi_{nl}(r')^2
+ \sum_{k=1}^{2l} \sum_{m'} \int_0^\infty r'^2dr'
\frac{r_<^k}{r_>^{k+1}}
\sqrt{\frac{4\pi}{2k+1}}\, Y_k^{0*}(\Omega) \,
c^k(l,m',l,m') \, \phi_{nl}(r')^2
\end{aligned}$$
Here, we make use of the symbols
$$\label{eq:gaunt}
\begin{aligned}
c^k(l,m,l',m') =& \sqrt{\frac{4\pi}{2k+1}} \int Y_{lm}^*(\Omega) Y_{k, m-m'}(\Omega) Y_{l'm'}(\Omega) d\Omega \\
=& (-1)^{-m} \sqrt{2l+1}\sqrt{2l'+1} \begin{pmatrix}l & k & l' \\ 0 & 0 & 0\end{pmatrix} \begin{pmatrix}l & k & l' \\ -m & m-m' & m'\end{pmatrix}
\end{aligned}$$
for Gaunt’s formula, in terms of Wigner $3j$-symbols. In the second line of Eq. \[eq:vhnlm-simple\], we have separated the $k=0$ and $k>0$ components, because the latter vanishes when averaged over $m$. Therefore, the Hartree energy of a pair of electrons $(i j\vert i j)$, in orbitals $(n_i,l_i)$ and $(n_j,l_j)$, averaged over the magnetic quantum number $m_j$ of the second electron, is simply
$$\label{eq:avh}
\begin{aligned}
\langle (i j\vert i j) \rangle_{m_j} =& \int_0^\infty dr \phi_{n_i l_i}(r)^2 \int_0^\infty dr' \frac{1}{r_>} \phi_{n_j l_j}(r')^2 \\
=& F^0(n_i l_i,n_j l_j)
\end{aligned}$$
The exchange integral for a pair of electrons in orbitals $(n_i,l_i)$ and $(n_j,l_j)$ can be calculated in a similar fashion. Using Eqs. \[eq:expand\] and \[eq:gaunt\], we get
$$\label{eq:exchint}
\begin{aligned}
(i j\vert j i) =& \int d^3r d^3r' \frac{\psi_{n_i l_i m_i}^*(\vec{r})\psi_{n_j l_j m_j}(\vec{r}) \psi_{n_j l_j m_j}^*(\vec{r'}) \psi_{n_i l_i m_i}(\vec{r'}) }{\lvert \vec{r} - \vec{r'}\rvert} \\
=& \sum_{kq} \int Y_{l_i m_i}^*(\Omega) Y_{l_jm_j}(\Omega) Y_{kq}(\Omega) d\Omega
\int Y_{l_jm_j}^*(\Omega') Y_{l_im_i}(\Omega') Y_{kq}^*(\Omega') d\Omega' \\
&\int \frac{r_<^k}{r_>^{k+1}} \frac{4\pi}{2k+1}
\phi_{n_il_i}(r) \phi_{n_jl_j}(r) \phi_{n_jl_j}(r') \phi_{n_il_i}(r') dr dr' \\
=& \sum_{k} c^k(l_i,m_i,l_j,m_j)^2
\int \frac{r_<^k}{r_>^{k+1}}
\phi_{n_il_i}(r) \phi_{n_jl_j}(r) \phi_{n_jl_j}(r') \phi_{n_il_i}(r') dr dr'
\end{aligned}$$
For the average of the exchange integral over $m_j$, we get
$$\label{eq:avx}
\langle (i j\vert j i) \rangle_{m_j} = \frac{1}{\sqrt{(2l_i+1)(2l_j+1)}} \sum_{k} c^k(l_i,0,l_j,0) G^k(n_il_i,n_jl_j)$$
To calculate the average total energy of an atomic configuration, we must consider the Hartree and exchange energies of all pairs of electrons. First consider the case where the electrons are in the same orbital ($n_i=n_j$, $l_i=l_j$). In this case, since $G^k(n_il_i,n_il_i)=F^k(n_il_i,n_il_i)$, we can combine Eqs. \[eq:avh\], \[eq:slater\_Fk\] and \[eq:avx\] to obtain
$$\label{eq:same}
\langle (i j\vert i j) - (i j\vert j i) \rangle = \frac{w_i(w_i-1)}{2} \sum_k f_k(l_i,l_i) F^k(n_il_i,n_il_i)$$
where the numerical coefficients $f_k(l_i,l_i)$ are obtained from those in Eqs. \[eq:avh\], \[eq:avx\], and the prefactor $\frac{w_i(w_i-1)}{2}$ is the number of different electron pairs in orbital $i$.
For the case where the electrons in the pair are in different orbitals, the sum of Eqs. \[eq:avh\], \[eq:avx\] gives
$$\label{eq:diff}
\langle (i j\vert i j) - (i j\vert j i) \rangle = w_i w_j \left ( F^0(n_il_i,n_jl_j) + \sum_k g_k(l_i,l_j) G^k(n_il_i,n_jl_j) \right)$$
where the coefficients $g_k(l_i,l_j)$ are given by Eq. \[eq:avx\]. Collecting the terms in Eqs. \[eq:same\] and \[eq:diff\] together with the single-particle energies results in the expression for the average total energy, Eq. \[eq:slater\_average\].
Appendix C: Derivation of self-consistent Hartree-Fock equations {#sec:hfeqns}
================================================================
If the orbitals are not necessarily normalized, the average energy (as defined in Sec. \[sec:HF\_ps\]) derived in Sec. \[sec:hfaverage\] may be written in the form
$$\label{eq:hfgen}
E_{\textrm{av}}^{\textrm{HF}} = \sum_i \frac{w_i I(n_il_i,n_il_i)}{\langle n_il_i \vert n_il_i \rangle }
+ \sum_{i ; k} \frac{a_{iik} F^k(n_il_i,n_il_i)}{\langle n_il_i \vert n_il_i \rangle \langle n_il_i \vert n_il_i \rangle}
+ \sum_{i> j ; k} \frac{a_{ijk} F^k(n_il_i,n_jl_j)}{\langle n_il_i \vert n_il_i \rangle \langle n_jl_j \vert n_jl_j \rangle}
+ \sum_{i> j ; k} \frac{b_{ijk} G^k(n_il_i,n_jl_j)}{\langle n_il_i \vert n_il_i \rangle \langle n_jl_j \vert n_jl_j \rangle}$$
We wish to find wavefunctions that minimize $E_{\textrm{av}}^{\textrm{HF}}$, under the constraint of wavefunction orthogonality. In other words, a pair of radial functions from orbitals with the same angular momentum, $(n_i,l_i)$ and $(n_j,l_j)$ with $l_i=l_j$, must be orthogonal. Using the Lagrange multipliers $\lambda_{ij}$, we therefore search for the stationary solutions of the functional
$$\label{eq:kfunc}
K = E_{\textrm{av}}^{\textrm{HF}} + \sum_{i>j} \delta_{l_il_j} \lambda_{ij} \frac{\langle n_il_i \vert n_jl_j \rangle}{\langle n_il_i \vert n_il_i \rangle^{1/2} \langle n_jl_j \vert n_jl_j \rangle^{1/2}}$$
We now proceed to take functional derivatives of Eqs. \[eq:hfgen\], \[eq:kfunc\] with respect to variations in a radial function $\phi_{nl}(r)$. We note that only a subset of terms in Eq. \[eq:hfgen\] involve $nl$, and those that do all contain a factor of $\langle n_il_i \vert n_il_i \rangle^{-1}$. We can therefore write those terms in the form $\tilde{E}(nl) = \langle n_il_i \vert n_il_i \rangle^{-1} \tilde{F}(nl)$ with the variation
$$\label{eq:delebar}
\delta\tilde{E}(nl) = \langle n_il_i \vert n_il_i \rangle^{-1} \delta \tilde{F}(nl)+ \delta [\langle n_il_i \vert n_il_i \rangle^{-1}] \tilde{F}(nl)$$
and
$$\begin{aligned}
\delta \tilde{F}(nl) =& w_{nl} \delta I(nl)
+ \sum_{k} a_{nl,nl,k} F^k(nl,nl) \delta [\langle nl \vert nl \rangle^{-1} ]
+ \sum_k \frac{a_{nl,nl,k} \delta F^k(nl,nl)}{\langle nl \vert nl \rangle}\\
&+ \sum_{n'l'\neq nl ; k} \frac{a_{nl,n'l',k} \delta F^k(nl,n'l')}{\langle n'l' \vert n'l' \rangle}
+ \sum_{n'l'\neq nl ; k} \frac{b_{nl,n'l',k} \delta G^k(nl,n'l')}{\langle n'l' \vert n'l' \rangle}
\end{aligned}$$
Furthermore, we have
$$\delta [\langle n_il_i \vert n_il_i \rangle^{-1}] = -2\int dr\, \frac{\phi_{nl}(r) \delta \phi_{nl}(r)}{\langle nl \vert nl \rangle^2}$$
and
$$\delta F^k(nl,n'l') = 2(1+\delta_{nl,n'l'}) \int dr\, \phi_{nl}(r)\, \delta \phi_{nl}(r)\, \frac{1}{r}\, Y^k(n'l',n'l',r)$$
$$\delta G^k(nl,n'l') = 2 \int dr\, \phi_{n'l'}(r)\, \delta \phi_{nl}(r)\, \frac{1}{r}\, Y^k(nl,n'l',r)$$
where
$$Y^k(nl,n'l',r) = \int_0^r ds\, \frac{s^k}{r^k} \, \phi_{nl}(s)\, \phi_{n'l'}(s) +\int_r^\infty ds\, \frac{r^{k+1}}{s^{k+1}}\, \phi_{nl}(s)\, \phi_{n'l'}(s)$$
Finally, the variation of the terms involving the Lagrange multipliers in Eq. \[eq:kfunc\] is
$$\label{eq:dellam}
\delta \left [ \sum_{n'} \lambda_{nl,n'l'} \frac{\langle nl \vert n'l \rangle}{\langle nl \vert nl \rangle^{1/2} \langle n'l \vert n'l \rangle^{1/2}} \right ]
= \sum_{n'} \lambda_{nl,n'l'} \frac{\int dr\, \phi_{n'l}(r)\, \delta \phi_{nl}(r) }{\langle nl \vert nl \rangle^{1/2} \langle n'l \vert n'l \rangle^{1/2}}$$
The variational principle requires that the variation $\delta K$ vanish for arbitrary $\delta \phi_{nl}(r)$. Collecting Eqs. \[eq:delebar\]–\[eq:dellam\], we obtain the Hartree-Fock equations (Eq. \[eq:coupled\_hf\]) where
$$Y_i(r) = \sum_{j,k} \frac{(1+\delta_{n_il_i,n_jl_j}) a_{n_il_i,n_jl_j,k} Y^k(n_jl_j,n_jl_j,r)}{w_i \langle n_jl_j \vert n_jl_j \rangle}$$
$$X_i(r) = \sum_{j\neq i,k} \frac{ b_{n_il_i,n_jl_j,k} Y^k(n_il_i,n_jl_j,r) \phi_{n_jl_j}(r)}{w_i \langle n_jl_j \vert n_jl_j \rangle}$$
and
$$\varepsilon_{ii} = \frac{2}{w_i} \left [ \tilde{E}(n_il_i)-\sum_k \frac{a_{n_il_i, n_il_i,k} F^k(n_il_i,n_il_i)}{\langle n_il_i \vert n_il_i \rangle^2} \right ]$$
$$\varepsilon_{ij} = \frac{\lambda_{n_il_i,n_jl_j} \langle n_il_i \vert n_il_i \rangle^{1/2}}{w_i \langle n_jl_j \vert n_jl_j \rangle^{1/2} }$$
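As a practical aside, the screening functions $Y^k$ that enter $Y_i$ and $X_i$ can be accumulated in a single pass over the radial grid using cumulative integrals. The minimal sketch below (the linear grid and hydrogenic test function are our own, illustrative choices) also cross-checks the result against the relation $F^k(a;b)=\int_0^\infty \phi_a(r)^2\, Y^k(b,b,r)\, r^{-1}\, dr$, which follows directly from Eq. \[eq:slater\_Fk\].

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

def Yk(ua, ub, r, k):
    """Screening function Y^k(a,b,r) = int_0^r (s/r)^k ua*ub ds
                                     + int_r^inf (r/s)^(k+1) ua*ub ds,
    evaluated in O(N) with cumulative trapezoidal integrals."""
    f = ua * ub
    inner = cumulative_trapezoid(f * r**k, r, initial=0.0) / r**k
    tail = cumulative_trapezoid(f / r**(k + 1), r, initial=0.0)
    outer = (tail[-1] - tail) * r**(k + 1)
    return inner + outer

# Cross-check: F^0(1s,1s) = int u^2(r) Y^0(u,u,r)/r dr ~ 5/8 for a hydrogenic 1s orbital
r = np.linspace(1e-6, 40.0, 8000)
u = 2.0 * r * np.exp(-r)            # phi_1s(r) = r R_1s(r), Z = 1
print(np.trapz(u**2 * Yk(u, u, r, 0) / r, r))   # ~0.625
```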
---
abstract: 'We use a weak mean curvature flow together with a surgery procedure to show that all closed hypersurfaces in ${\mathbb R}^4$ with entropy less than or equal to that of $\mathbb{S}^2\times {\mathbb R}$, the round cylinder in ${\mathbb R}^4$, are diffeomorphic to $\mathbb{S}^3$.'
address:
- 'Department of Mathematics, Johns Hopkins University, 3400 N. Charles Street, Baltimore, MD 21218'
- 'Department of Mathematics, University of Wisconsin, 480 Lincoln Drive, Madison, WI 53706'
author:
- Jacob Bernstein
- Lu Wang
title: Topology of Closed Hypersurfaces of Small Entropy
---
Introduction {#Intro}
============
If $\Sigma$ is a [hypersurface]{}, that is, a smooth properly embedded codimension-one submanifold of ${\mathbb R}^{n+1}$, then the *Gaussian surface area* of $\Sigma$ is $$F[\Sigma]=\int_{\Sigma}\Phi\, d\mathcal{H}^{n}=(4\pi)^{-\frac{n}{2}}\int_{\Sigma}e^{-\frac{|{\mathbf{x}}|^2}{4}} d\mathcal{H}^n,$$ where $\mathcal{H}^n$ is $n$-dimensional Hausdorff measure. Following Colding-Minicozzi [@CMGenMCF], define the *entropy* of $\Sigma$ to be $$\lambda[\Sigma]=\sup_{({\mathbf{y}},\rho)\in{\mathbb R}^{n+1}\times{\mathbb R}^+}F[\rho\Sigma+{\mathbf{y}}].$$ That is, the entropy of $\Sigma$ is the supremum of the Gaussian surface area over all translations and dilations of $\Sigma$. Observe that the entropy of a hyperplane is one. In [@BernsteinWang], we show that, for $2\leq n\leq 6$, the entropy of a closed (i.e. compact and without boundary) hypersurface in ${\mathbb R}^{n+1}$ is uniquely (modulo translations and dilations) minimized by $\mathbb{S}^n$, the unit sphere centered at the origin. This verifies a conjecture of Colding-Ilmanen-Minicozzi-White [@CIMW Conjecture 0.9] (cf. [@KetoverZhou]). We further show, in [@BernsteinWang2 Corollary 1.3], that surfaces in ${\mathbb R}^3$ of small entropy are topologically rigid. That is, if $\Sigma$ is a closed surface in ${\mathbb R}^3$ and $\lambda[\Sigma]\leq\lambda[\mathbb{S}^1\times{\mathbb R}]$, then $\Sigma$ is diffeomorphic to $\mathbb{S}^2$.
In this article, we use a weak mean curvature flow (see [@ES1; @ES2; @ES3; @ES4] and [@CGG]) to obtain new topological rigidity for closed hypersurfaces in ${\mathbb R}^4$ of small entropy. This generalizes a result of Colding-Ilmanen-Minicozzi-White [@CIMW] for closed self-shrinkers to arbitrary closed hypersurfaces and contrasts with the methods of [@CIMW] and [@BernsteinWang2 Corollary 1.3], both of which use only the classical mean curvature flow.
\[MainTopThm\] If $\Sigma\subset{\mathbb R}^{4}$ is a closed hypersurface with $\lambda[\Sigma]\leq\lambda[\mathbb{S}^2\times{\mathbb R}]$, then $\Sigma$ is diffeomorphic to $\mathbb{S}^3$.
One of the key ingredients in the proof of Theorem \[MainTopThm\] is a refinement of [@BernsteinWang2 Theorem 0.1] about the topology of asymptotically conical self-shrinkers of small entropy. Recall, a hypersurface $\Sigma$ is said to be *asymptotically conical* if it is smoothly asymptotic to a regular cone; i.e., $\lim_{\rho\to 0} \rho\Sigma= \mathcal{C} (\Sigma)$ in $C^{\infty}_{loc}({\mathbb R}^{n+1}\setminus{\left\{{\mathbf{0}}\right\}})$ for $\mathcal{C} (\Sigma)$ a regular cone. A *self-shrinker*, $\Sigma$, is a hypersurface that satisfies $$\label{SSEqn}
\mathbf{H}_\Sigma+\frac{{\mathbf{x}}^\perp}{2}=\mathbf{0},$$ where $\mathbf{H}_\Sigma=-H_{\Sigma} {\mathbf{n}}_\Sigma=\Delta_\Sigma {\mathbf{x}}$ is the mean curvature vector of $\Sigma$ and ${\mathbf{x}}^\perp$ is the normal component of the position vector. Let us denote the set of self-shrinkers in ${\mathbb R}^{n+1}$ by $\mathcal{S}_n$ and the set of asymptotically conical self-shrinkers by $\mathcal{ACS}_n$. Self-shrinkers generate solutions to the mean curvature flow that move self-similarly by scaling. That is, if $\Sigma\in\mathcal{S}_n$, then $${\left\{\Sigma_t\right\}}_{t\in(-\infty,0)}={\left\{\sqrt{-t}\, \Sigma\right\}}_{t\in(-\infty,0)}$$ moves by mean curvature. Important examples are the maximally symmetric self-shrinking cylinders with $k$-dimensional spine, $$\mathbb{S}^{n-k}_{*}\times{\mathbb R}^k={\left\{({\mathbf{x}},{\mathbf{y}})\in{\mathbb R}^{n-k+1}\times{\mathbb R}^k={\mathbb R}^{n+1}: |{\mathbf{x}}|^2=2(n-k)\right\}},$$ where $0\leq k\leq n$. As $\mathbb{S}^{n-k}_{*}\times{\mathbb R}^k$ are self-shrinkers, their Gaussian surface area and entropy agree (cf. [@CMGenMCF Lemma 7.20]). That is, $$\lambda_n=\lambda [\mathbb{S}^n]=F[\mathbb{S}_*^n]=F[\mathbb{S}_*^n\times{\mathbb R}^l]=\lambda[\mathbb{S}^n\times{\mathbb R}^l].$$ Hence, a computation of Stone [@Stone], gives that $$2>\lambda_1>\frac{3}{2}>\lambda_2>\ldots>\lambda_n>\ldots\to\sqrt{2}.$$
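For concreteness, these entropy values admit a simple closed form: as $\mathbb{S}^n_*$ is the sphere of radius $\sqrt{2n}$, $$\lambda_n = F[\mathbb{S}^n_*]=(4\pi)^{-\frac{n}{2}}\,\mathcal{H}^n(\mathbb{S}^n)\,(2n)^{\frac{n}{2}}e^{-\frac{n}{2}}=\mathcal{H}^n(\mathbb{S}^n)\left(\frac{n}{2\pi e}\right)^{\frac{n}{2}};$$ in particular, $\lambda_1=\sqrt{2\pi/e}\approx 1.52$ and $\lambda_2=4/e\approx 1.47$.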
\[MainACSThm\] Let $\Sigma\in \mathcal{ACS}_n$ for $n\geq 2$. If $\lambda[\Sigma]\leq\lambda_{n-1}$, then $\Sigma$ is contractible and $\mathcal{L} (\Sigma)$, the link of the asymptotic cone $\mathcal{C} (\Sigma)$, is a homology $(n-1)$-sphere.
We always consider homology with integer coefficients.
For $n=3$, the classification of surfaces and Alexander’s theorem [@Alexander] gives
\[ACSDim3Cor\] Let $\Sigma\in \mathcal{ACS}_3$. If $\lambda[\Sigma]\leq\lambda_{2}$, then $\Sigma$ is diffeomorphic to ${\mathbb R}^3$.
To prove Theorem \[MainTopThm\], we first establish a topological decomposition, i.e., Theorem \[CondSurgThm\], constructed from the weak mean curvature flow associated to $\Sigma$. Together with Corollary \[ACSDim3Cor\], this allows one to perform a surgery procedure which immediately gives the result. Both of these steps require $n=3$. For $n\geq 4$, one can use Theorem \[MainACSThm\] and this surgery procedure to show a (strictly weaker) extension of Theorem \[MainTopThm\], valid in any dimension where the two hypotheses below are satisfied. These hypotheses ensure the existence of a topological decomposition. Specifically, they ensure that if the entropy of the initial hypersurface is small enough, then tangent flows at all singularities are modeled by self-shrinkers that are either closed or asymptotically conical.
In order to state these hypotheses, first let $\mathcal{S}_n^*$ denote the set of non-flat elements of $\mathcal{S}_n$ and, for any $\Lambda>0$, let $$\mathcal{S}_n(\Lambda)={\left\{\Sigma\in \mathcal{S}_n: \lambda[\Sigma]<\Lambda\right\}} \mbox{ and } \mathcal{S}_n^*(\Lambda)=\mathcal{S}^*_n \cap \mathcal{S}_n(\Lambda).$$ Next, let $\mathcal{RMC}_n$ denote the space of *regular minimal cones* in ${\mathbb R}^{n+1}$, that is $\mathcal{C}\in \mathcal{RMC}_n$ if and only if it is a proper subset of ${\mathbb R}^{n+1}$ and $\mathcal{C}\backslash{\left\{{\mathbf{0}}\right\}}$ is a hypersurface in ${\mathbb R}^{n+1}\backslash{\left\{{\mathbf{0}}\right\}}$ that is invariant under dilation about ${\mathbf{0}}$ and with vanishing mean curvature. Let $\mathcal{RMC}_n^*$ denote the set of non-flat elements of $\mathcal{RMC}_n$ – i.e., cones whose second fundamental forms do not identically vanish. For any $\Lambda>0$, let $$\mathcal{RMC}_n(\Lambda)={\left\{\mathcal{C}\in \mathcal{RMC}_n: \lambda[\mathcal{C}]< \Lambda\right\}} \mbox{ and } \mathcal{RMC}_n^*(\Lambda)=\mathcal{RMC}^*_n \cap \mathcal{RMC}_n(\Lambda).$$
Let us now fix a dimension $n\geq 3$ and a value $\Lambda>1$. The first hypothesis is $$\label{Assump1}
\mbox{For all $3\leq k\leq n$, }\mathcal{RMC}_{k}^*(\Lambda)=\emptyset \tag{$\star_{n,\Lambda}$}.$$ Observe that all regular minimal cones in $\mathbb{R}^2$ consist of unions of rays and so $\mathcal{RMC}^*_1=\emptyset$. Likewise, as great circles are the only geodesics in $\mathbb{S}^2$, $\mathcal{RMC}_2^*=\emptyset$. The second hypothesis is $$\label{Assump2}
\mathcal{S}_{n-1}^*(\Lambda) =\emptyset. \tag{${\star\star}_{n,\Lambda}$}$$ Obviously this holds only if $\Lambda\leq\lambda_{n-1}$. We then show the following conditional result:
\[MainCondThm\] Fix $n\geq 4$ and $\Lambda\in (\lambda_{n}, \lambda_{n-1}]$. If ${(\star_{n,\Lambda})}$ and ${({\star\star}_{n,\Lambda})}$ both hold and $\Sigma$ is a closed hypersurface in ${\mathbb R}^{n+1}$ with $\lambda[\Sigma]\leq\Lambda$, then $\Sigma$ is a homology $n$-sphere.
\[LowBndLam\] If ${(\star_{n,\Lambda})}$ and ${({\star\star}_{n,\Lambda})}$ hold for $\Lambda\leq\lambda_{n}$, then it follows from Huisken’s monotonicity formula and the results of [@BernsteinWang] and [@CIMW] that there does *not* exist a closed hypersurface $\Sigma$ so that $\lambda[\Sigma]\leq\Lambda$ unless $\Lambda=\lambda_n$ and $\Sigma$ is a round sphere. Thus, we require $\Lambda>\lambda_n$ in order to make Theorem \[MainCondThm\] non-trivial.
For general $n$ and $\Lambda\in (\lambda_n, \lambda_{n-1}]$, neither the validity of ${(\star_{n,\Lambda})}$ nor that of ${({\star\star}_{n,\Lambda})}$ is known. However, both can be established for $n=3$ and $\Lambda=\lambda_2$. First, as part of their proof of the Willmore conjecture, Marques-Neves gave a lower bound on the density of non-trivial regular minimal cones in ${\mathbb R}^4$. In particular, it follows from [@MarquesNeves Theorem B] that if $\mathcal{C}\in\mathcal{RMC}_3^*$, then $\lambda[\mathcal{C}] >\lambda_2$ and so ${(\star_{3,\lambda_2})}$ holds. Furthermore, it follows from [@BernsteinWang2 Corollary 1.2] that $\mathcal{S}_2^*(\lambda_2)=\emptyset$ and so ${({\star\star}_{3,\lambda_2})}$ holds.
For $n\geq 4$, some partial results suggest that ${(\star_{n,\Lambda})}$ and ${({\star\star}_{n,\Lambda})}$ hold for $\Lambda=\lambda_{n-1}$. For instance, Ilmanen-White [@Ilmanen-White Theorem 1\*] have shown that if $\mathcal{C}\in \mathcal{RMC}_n^*$ is area-minimizing and topologically non-trivial, then $\lambda[\mathcal{C}]\geq \lambda_{n-1}$. Additionally, [@CIMW Theorem 0.1] says that the self-shrinking sphere has the lowest entropy among all compact self-shrinkers and [@CIMW Conjecture 0.10] posits that ${({\star\star}_{n,\lambda_{n-1}})}$ holds for $n\leq 7$. It is important to note that there exist many topologically trivial elements of $\mathcal{RMC}_n^*$. Indeed, the work of Hsiang [@Hsiang1; @Hsiang2] and Hsiang-Sterling [@HsiangSterling] shows that there exist topologically trivial elements of $\mathcal{RMC}_n^*$ for $n=5,7$ and for all even $n\geq 4$.
The paper is organized as follows. In Section \[NotationSec\], we introduce notation and recall some basic facts about the mean curvature flow. In Section \[RegularitySec\], we show regularity of self-shrinking measures of low entropy. In Section \[SingularitySec\], we study the structure of the singular set for weak mean curvature flows of small entropy. Importantly, we give a topological decomposition, Theorem \[CondSurgThm\], of the regular part of the flow which is the basis of the surgery procedure. In Section \[SharpeningSec\], we prove Theorem \[MainACSThm\] and Corollary \[ACSDim3Cor\]. Finally, in Section \[SurgerySec\], we carry out the surgery procedure and prove Theorems \[MainTopThm\] and \[MainCondThm\].
Notation and Background {#NotationSec}
=======================
In this section, we fix notation for the rest of the paper and recall some background on mean curvature flow. Experts should feel free to consult this section only as needed.
Singular hypersurfaces
----------------------
We will use results from [@Ilmanen1] on weak mean curvature flows. For this reason, we follow the notation of [@Ilmanen1] as closely as possible.
Denote by
- ${\mathcal{M}}({\mathbb R}^{n+1})={\left\{\mu: \mu\mbox{ is a Radon measure on ${\mathbb R}^{n+1}$}\right\}}$ (see [@Simon Section 4]);
- ${\mathcal{IM}}_k({\mathbb R}^{n+1})={\left\{\mu: \mu\mbox{ is an integer $k$-rectifiable Radon measure on ${\mathbb R}^{n+1}$}\right\}}$ (see [@Ilmanen1 Section 1]);
- ${\mathbf{IV}}_k({\mathbb R}^{n+1})={\left\{V: V\mbox{ is an integer rectifiable $k$-varifold on ${\mathbb R}^{n+1}$}\right\}}$ (see [@Ilmanen1 Section 1] or [@Simon Chapter 8]).
The space ${\mathcal{M}}({\mathbb R}^{n+1})$ is given the weak\* topology. That is, $$\mu_i\to\mu\iff\int f\, d\mu_i\to\int f \, d\mu\mbox{ for all $f\in C^0_c({\mathbb R}^{n+1})$}.$$ And the topology on ${\mathcal{IM}}_k({\mathbb R}^{n+1})$ is the subspace topology induced by the natural inclusion into ${\mathcal{M}}({\mathbb R}^{n+1})$. For the details of the topologies considered on $ {\mathbf{IV}}_k({\mathbb R}^{n+1})$, we refer to [@Ilmanen1 Section 1] or [@Simon Chapter 8]. There are natural bijective maps $$V: {\mathcal{IM}}_k({\mathbb R}^{n+1})\to {\mathbf{IV}}_k({\mathbb R}^{n+1}) \mbox{ and } \mu:{\mathbf{IV}}_k({\mathbb R}^{n+1})\to{\mathcal{IM}}_k({\mathbb R}^{n+1}).$$ The second map is continuous, but the first is not. Henceforth, write $V(\mu)=V_\mu$ and $\mu(V)=\mu_V$.
If $\Sigma\subset{\mathbb R}^{n+1}$ is a $k$-dimensional smooth properly embedded submanifold, we denote by $\mu_\Sigma=\mathcal{H}^k\lfloor\Sigma\in{\mathcal{IM}}_k({\mathbb R}^{n+1})$. Given $({\mathbf{y}},\rho)\in{\mathbb R}^{n+1}\times{\mathbb R}^+$ and $\mu\in{\mathcal{IM}}_k({\mathbb R}^{n+1})$, we define the rescaled measure $\mu^{{\mathbf{y}},\rho}\in{\mathcal{IM}}_k({\mathbb R}^{n+1})$ by $$\mu^{{\mathbf{y}},\rho}(\Omega)=\rho^{k}\mu\left(\rho^{-1}\Omega+{\mathbf{y}}\right).$$ This is defined so that if $\Sigma$ is a $k$-dimensional smooth properly embedded submanifold, then $$\mu^{{\mathbf{y}},\rho}_\Sigma=\mu_{\rho (\Sigma-{\mathbf{y}})}.$$ One of the defining properties of $\mu\in {\mathcal{IM}}_k({\mathbb R}^{n+1})$ is that for $\mu$-a.e. ${\mathbf{x}}\in{\mathbb R}^{n+1}$, there is an integer $\theta_\mu({\mathbf{x}})$ so that $$\lim_{\rho\to \infty}\mu^{{\mathbf{x}},\rho}=\theta_\mu({\mathbf{x}})\mu_{P},$$ where $P$ is a $k$-dimensional plane through the origin. When such $P$ exists, we denote it by $T_{{\mathbf{x}}} \mu$ the *approximate tangent plane at ${\mathbf{x}}$*. The value $\theta_\mu({\mathbf{x}})$ is the *multiplicity of $\mu$ at ${\mathbf{x}}$* and by definition, $\theta_\mu({\mathbf{x}})\in\mathbb{N}$ for $\mu$-a.e. ${\mathbf{x}}$. Notice that if $\mu=\mu_{\Sigma}$, then $T_{{\mathbf{x}}}\mu=T_{{\mathbf{x}}}\Sigma$ and $\theta_\mu({\mathbf{x}})=1$. Given a $\mu\in {\mathcal{IM}}_n({\mathbb R}^{n+1})$, set $${{\ensuremath{\mathop{\mathrm{reg}}} }}({{\ensuremath{\mathop{\mathrm{spt}}} }}(\mu))={\left\{{\mathbf{x}}\in{{\ensuremath{\mathop{\mathrm{spt}}} }}(\mu): \exists\rho>0 \mbox{ s.t. $B_\rho({\mathbf{x}})\cap{{\ensuremath{\mathop{\mathrm{spt}}} }}(\mu)$ is a hypersurface}\right\}},$$ and ${{\ensuremath{\mathop{\mathrm{sing}}} }}({{\ensuremath{\mathop{\mathrm{spt}}} }}(\mu))={{\ensuremath{\mathop{\mathrm{spt}}} }}(\mu)\setminus{{\ensuremath{\mathop{\mathrm{reg}}} }}({{\ensuremath{\mathop{\mathrm{spt}}} }}(\mu))$. Here $B_\rho({\mathbf{x}})$ is the open ball in ${\mathbb R}^{n+1}$ centered at ${\mathbf{x}}$ with radius $\rho$. Likewise, $${{\ensuremath{\mathop{\mathrm{reg}}} }}(\mu)={\left\{{\mathbf{x}}\in {{\ensuremath{\mathop{\mathrm{reg}}} }}({{\ensuremath{\mathop{\mathrm{spt}}} }}(\mu)): \theta_\mu({\mathbf{x}})=1\right\}} \mbox{ and } {{\ensuremath{\mathop{\mathrm{sing}}} }}(\mu)={{\ensuremath{\mathop{\mathrm{spt}}} }}(\mu)\setminus{{\ensuremath{\mathop{\mathrm{reg}}} }}(\mu).$$
For $\mu\in{\mathcal{IM}}_n({\mathbb R}^{n+1})$, we extend the definitions of $F$ and $\lambda$ in the obvious manner, namely, $$F[\mu]=F[V_\mu]=\int \Phi\, d\mu \mbox{ and } \lambda[\mu]=\lambda[V_\mu]=\sup_{({\mathbf{y}},\rho)\in{\mathbb R}^{n+1}\times{\mathbb R}^+} F[\mu^{{\mathbf{y}},\rho}].$$
Gaussian densities and tangent flows {#Brakkef}
------------------------------------
Historically, the first weak mean curvature flow was the measure-theoretic flow introduced by Brakke [@B]. This flow is called a *Brakke flow*. Brakke’s original definition considered the flow of varifolds. We use the (slightly stronger) notion introduced by Ilmanen [@Ilmanen1 Definition 6.3]. For our purposes, the Brakke flow has two important roles. The first is the fact that Huisken’s monotonicity formula [@Huisken] holds also for Brakke flows; see [@Ilmanen2 Lemma 7]. The second is the powerful regularity theory of Brakke [@B] for such flows. In particular, we will often refer to White’s version of Brakke’s local regularity theorem [@WhiteReg]. We emphasize that White’s argument is valid only for a special class of Brakke flows, but that all Brakke flows considered in this paper are within this class.
A consequence of Huisken’s monotonicity formula is that if a Brakke flow $\mathcal{K}={\left\{\mu_t\right\}}_{t\geq t_0}$ has bounded area ratios, then $\mathcal{K}$ has a well-defined *Gaussian density* at every point $({\mathbf{y}},s)\in{\mathbb R}^{n+1}\times (t_0,\infty)$ given by $$\Theta_{({\mathbf{y}},s)}(\mathcal{K})=\lim_{t\to s^-}\int\Phi_{({\mathbf{y}},s)}({\mathbf{x}},t)\, d\mu_t({\mathbf{x}}),$$ where $$\Phi_{({\mathbf{y}},s)} ({\mathbf{x}},t)=(4\pi(s-t))^{-\frac{n}{2}} e^{\frac{|{\mathbf{x}}-{\mathbf{y}}|^2}{4(t-s)}}.$$ Furthermore, the Gaussian density is upper semi-continuous.
Combining the compactness of Brakke flows (cf. [@Ilmanen1 7.1]) with the monotonicity formula, one establishes the existence of tangent flows. For a Brakke flow $\mathcal{K}={\left\{\mu_t\right\}}_{t\geq t_0}$ and a point $({\mathbf{y}},s)\in{\mathbb R}^{n+1}\times(t_0,\infty)$, define a new Brakke flow $$\mathcal{K}^{({\mathbf{y}},s),\rho}={\left\{\mu_t^{({\mathbf{y}},s),\rho}\right\}}_{t\geq\rho^2(t_0-s)},$$ where $$\mu_t^{({\mathbf{y}},s),\rho}=\mu_{s+\rho^{-2}t}^{{\mathbf{y}},\rho}.$$
Let $\mathcal{K}={\left\{\mu_t\right\}}_{t\geq t_0}$ be an integral Brakke flow with bounded area ratios. A non-trivial Brakke flow $\mathcal{T}={\left\{\nu_t\right\}}_{t\in{\mathbb R}}$ is a *tangent flow* to $\mathcal{K}$ at $({\mathbf{y}},s)\in{\mathbb R}^{n+1}\times(t_0,\infty)$, if there is a sequence $\rho_i\to\infty$ so that $\mathcal{K}^{({\mathbf{y}},s),\rho_i}\to\mathcal{T}$. Denote by $\mathrm{Tan}_{({\mathbf{y}},s)}\mathcal{K}$ the set of tangent flows to $\mathcal{K}$ at $({\mathbf{y}},s)$.
The monotonicity formula implies that any tangent flow is backwardly self-similar.
\[BlowupsThm\] Given an integral Brakke flow $\mathcal{K}={\left\{\mu_t\right\}}_{t\geq t_0}$ with bounded area ratios, a point $({\mathbf{y}},s)\in{\mathbb R}^{n+1}\times({t_0},\infty)$ with $\Theta_{({\mathbf{y}},s)}(\mathcal{K})\geq 1$, and a sequence $\rho_i\to\infty$, there exists a subsequence $\rho_{i_j}$ and a $\mathcal{T}\in\mathrm{Tan}_{({\mathbf{y}},s)}\mathcal{K}$ so that $\mathcal{K}^{({\mathbf{y}},s),\rho_{i_j}}\to\mathcal{T}$.
Furthermore, $\mathcal{T}={\left\{\nu_t\right\}}_{t\in{\mathbb R}}$ is backwardly self-similar with respect to parabolic rescaling about $({\mathbf{0}},0)$. That is, for all $t<0$ and $\rho>0$, $$\nu_t=\nu_t^{({\mathbf{0}},0),\rho}.$$ Moreover, $V_{\nu_{-1}}$ is a stationary point of the $F$ functional and $$\Theta_{({\mathbf{y}},s)}(\mathcal{K})=F[\nu_{-1}].$$
Level set flows and boundary motions
------------------------------------
We will also need a set-theoretic weak mean curvature flow called the level-set flow. This flow was first studied in the context of numerical analysis by Osher-Sethian [@OS]. The mathematical theory was developed by Evans-Spruck [@ES1; @ES2; @ES3; @ES4] and Chen-Giga-Goto [@CGG]. For our purposes, it has the important advantages of being uniquely defined and satisfying a maximum principle.
A technical feature of the level-set flow is that the level sets ${L}(\Gamma_0)={\left\{\Gamma_t\right\}}_{t\geq 0}$ may develop non-empty interiors for positive times. This phenomenon is called fattening; it is unavoidable for certain initial sets $\Gamma_0$ and is closely related to non-uniqueness phenomena for weak solutions of the flow. We say $L(\Gamma_0)$ is *non-fattening* if each $\Gamma_t$ has no interior. It is relatively straightforward to see that the non-fattening condition is generic; see for instance [@Ilmanen1 Theorem 11.3].
In [@Ilmanen1], Ilmanen synthesized both notions of weak flow. In particular, he showed that for a large class of initial sets, there is a canonical way to associate a Brakke flow to the level-set flow, and observed that this allows, among other things, for the application of Brakke’s partial regularity theorem. For our purposes, it is important that the Brakke flow constructed does not vanish gratuitously. A similar synthesis may be found in [@ES4]. The result we need is the following:
\[UnitdensityThm\] If $\Sigma_0$ is a closed hypersurface in ${\mathbb R}^{n+1}$ and the level-set flow ${L}(\Sigma_0)$ is non-fattening, then there is a set $E\subset {\mathbb R}^{n+1}\times {\mathbb R}$ and a Brakke flow $\mathcal{K}={\left\{\mu_t\right\}}_{t\geq 0}$ so that:
1. $E={\left\{({\mathbf{x}},t): u({\mathbf{x}},t)>0\right\}}$, where $u$ solves the level set flow equation with initial data $u_0$ that satisfies $E_0={\left\{{\mathbf{x}}: u_0({\mathbf{x}})>0\right\}}$ and $\partial E_0={\left\{{\mathbf{x}}: u_0({\mathbf{x}})=0\right\}}=\Sigma_0$;
2. each $E_t={\left\{{\mathbf{x}}: ({\mathbf{x}},t)\in E\right\}}$ is of finite perimeter and $\mu_t=\mathcal{H}^n\lfloor\partial^\ast E_t$, where $\partial^* E_t$ is the reduced boundary of $E_t$.
Regularity of Self-Shrinking Measures of Small Entropy {#RegularitySec}
======================================================
We establish some regularity properties of self-shrinking measures of small entropy when $n\geq 3$. We restrict to $n\geq 3$ in order to avoid certain technical complications coming from the fact that $\lambda_1>\frac{3}{2}$.
Self-shrinking measures
-----------------------
We will need a singular analog of $\mathcal{S}_n$. To that end, we define the set of self-shrinking measures on ${\mathbb R}^{n+1}$ by $$\mathcal{SM}_n={\left\{\mu\in{\mathcal{IM}}_n({\mathbb R}^{n+1}): V_\mu\mbox{ is stationary for the ${F}$ functional}, {{\ensuremath{\mathop{\mathrm{spt}}} }}(\mu)\neq\emptyset\right\}}.$$ Clearly, if $\Sigma\in \mathcal{S}_n$, then $\mu_\Sigma\in\mathcal{SM}_n$. There are many examples of singular self-shrinkers. For instance, any element of $\mathcal{C}\in\mathcal{RMC}_n$ satisfies $\mu_{\mathcal{C}}=\mathcal{H}^n\lfloor\mathcal{C}\in \mathcal{SM}_n$. For $\mu\in\mathcal{SM}_n$, we define the *associated Brakke flow* $\mathcal{K}={\left\{\mu_t\right\}}_{t\in{\mathbb R}}$ by $$\mu_t= \left \{ \begin{array}{cc}
0 & t\geq 0 \\
\mu^{{\mathbf{0}}, \sqrt{-t}} & t<0.
\end{array} \right.$$ One can verify that this is a Brakke flow. Given $\Lambda>0$, set $$\mathcal{SM}_n(\Lambda)={\left\{\mu\in\mathcal{SM}_n: \lambda[\mu]<\Lambda\right\}} \mbox{ and } \mathcal{SM}_n[\Lambda]={\left\{\mu\in\mathcal{SM}_n: \lambda[\mu]\leq \Lambda\right\}}.$$
Regularity and asymptotic properties of self-shrinking measures of small entropy
--------------------------------------------------------------------------------
A $\mu\in{\mathcal{IM}}_n({\mathbb R}^{n+1})$ is a *cone* if $\mu^{{\mathbf{0}},\rho}=\mu$ for all $\rho>0$. Likewise, $\mu\in{\mathcal{IM}}_n({\mathbb R}^{n+1})$ *splits off a line* if, up to an ambient rotation of ${\mathbb R}^{n+1}$, $\mu=\hat{\mu}\times\mu_{\mathbb R}$ for some $\hat{\mu}\in{\mathcal{IM}}_{n-1}({\mathbb R}^{n})$. Observe that if $\mu\in\mathcal{SM}_n$ is a cone, then $V_\mu$ is stationary (for area). Similarly, if $\mu\in\mathcal{SM}_n$ splits off a line, then $\hat{\mu}\in\mathcal{SM}_{n-1}$ and $\lambda[\mu]=\lambda[\hat{\mu}]$.
Standard dimension reduction arguments give the following:
\[DimRedConeLem\] Fix $n\geq 3$ and $\Lambda\leq 3/2$ and suppose that ${(\star_{n,\Lambda})}$ holds. If $\mu\in\mathcal{SM}_n(\Lambda)$ is a cone, then $\mu=\mu_P$ for some hyperplane $P$.
We will prove this by showing that if ${(\star_{n,\Lambda})}$ holds, then for all $3\leq m\leq n$, if $\mu\in\mathcal{SM}_m(\Lambda)$ is a cone, then $\mu=\mu_{P}$ for a hyperplane $P$ in $\mathbb{R}^{m+1}$.
We proceed by induction on $m$. When $m=3$, note that $\Lambda\leq\frac{3}{2}$ and so we have that $\mu=\mu_{\mathcal{C}}$ for some $\mathcal{C}\in \mathcal{RMC}_3$ by [@BernsteinWang Proposition 4.2]. Hence, by the assumption that $\mathcal{RMC}_3^*(\Lambda)=\emptyset$, we must have that $\mathcal{C}$ is a hyperplane through the origin. To complete the induction argument, we observe that it suffices to show that if $\mu\in \mathcal{SM}_m(\Lambda)$ is a cone, then $\mu=\mu_{\mathcal{C}}$ for some $\mathcal{C}\in \mathcal{RMC}_m(\Lambda)$. Indeed, such a $\mathcal{C}$ must be a hyperplane because ${(\star_{n,\Lambda})}$ holds and so, by definition, $\mathcal{RMC}^*_m(\Lambda)=\emptyset$ for $3\leq m \leq n$.
To complete the proof, we argue by contradiction. Suppose that ${{\ensuremath{\mathop{\mathrm{spt}}} }}(\mu)$ is not a regular cone. Then there is a point ${\mathbf{y}}\in{{\ensuremath{\mathop{\mathrm{sing}}} }}(\mu)\setminus{\left\{{\mathbf{0}}\right\}}.$ As $V_\mu$ is stationary, and $\mu\in{\mathcal{IM}}_m$ with $\lambda[\mu]<\Lambda$, we may apply Allard’s integral compactness theorem (see [@Simon Theorem 42.7 and Remark 42.8]) to conclude that there exists a sequence $\rho_i\to\infty$ so that $\mu^{{\mathbf{y}},\rho_i}\to\nu$ and $V_\nu$ is a stationary integral varifold. Moreover, it follows from the monotonicity formula [@Simon Theorem 17.6] that $\nu$ is a cone; see also [@Simon Theorem 19.3].
As $\mu$ is a cone, $\nu$ splits off a line. That is, $\nu=\hat{\nu}\times\mu_{\mathbb R}$, where $\hat{\nu}\in\mathcal{IM}_{m-1}$ and $V_{\hat{\nu}}$ is a stationary cone and so $\hat{\nu}\in \mathcal{SM}_{m-1}$. Moreover, by the lower semi-continuity of entropy, $$\lambda[\hat{\nu}]=\lambda[\hat{\nu}\times\mu_{\mathbb R}]\leq\lambda[\mu]<\Lambda.$$ Thus, it follows from the induction hypotheses that $\hat{\nu}=\mu_{\hat{P}}$, where $\hat{P}$ is a hyperplane in ${\mathbb R}^m$ and so $V_\nu$ is a multiplicity-one hyperplane. Hence, by Allard’s regularity theorem (see [@Simon Theorem 24.2]), ${\mathbf{y}}\in{{\ensuremath{\mathop{\mathrm{reg}}} }}(\mu)$, giving a contradiction. Therefore, $\mu=\mu_{\mathcal{C}}$ for a $\mathcal{C}\in\mathcal{RMC}_m(\Lambda)$.
As a consequence, we obtain regularity for elements of $\mathcal{SM}_n(\Lambda)$ under the hypothesis that ${(\star_{n,\Lambda})}$ holds.
\[CondRegProp\] Fix $n\geq 3$ and $\Lambda\leq 3/2$ and suppose that ${(\star_{n,\Lambda})}$ holds. If $\mu\in \mathcal{SM}_n(\Lambda)$, then $\mu=\mu_\Sigma$ for some $\Sigma\in \mathcal{S}_n(\Lambda)$.
Observe that for $\mu\in\mathcal{SM}_n(\Lambda)$, the mean curvature of $V_\mu$ is locally bounded (by the self-shrinker equation). Following the same reasoning as in the proof of Lemma \[DimRedConeLem\], given ${\mathbf{y}}\in{{\ensuremath{\mathop{\mathrm{sing}}} }}(\mu)$, there exists a sequence $\rho_i\to\infty$ so that $\mu^{{\mathbf{y}},\rho_i}\to\nu$ and $V_\nu$ is a stationary cone and so $\nu\in \mathcal{SM}_{n}$. By the lower semi-continuity of entropy, $\lambda[\nu]\leq\lambda[\mu]<\Lambda$. Hence, together with Lemma \[DimRedConeLem\], it follows that ${{\ensuremath{\mathop{\mathrm{sing}}} }}(\mu)=\emptyset$. That is, ${{\ensuremath{\mathop{\mathrm{spt}}} }}(\mu)$ is a smooth submanifold of ${\mathbb R}^{n+1}$ that, moreover, satisfies the self-shrinker equation. Finally, the entropy bound on $\mu$ implies that $\mu(B_R)\leq C R^n$ for some $C>0$ and so, by [@CZProper Theorem 1.3], ${{\ensuremath{\mathop{\mathrm{spt}}} }}(\mu)$ is proper. That is, $\mu=\mu_\Sigma$ for some $\Sigma\in \mathcal{S}_n$.
If, in addition, ${({\star\star}_{n,\Lambda})}$ holds:
\[CondTotRegProp\] Fix $n\geq 3$ and $\Lambda\leq\lambda_{n-1}$ and suppose that both ${(\star_{n,\Lambda})}$ and ${({\star\star}_{n,\Lambda})}$ hold. If $\mu\in\mathcal{SM}_n(\Lambda)$, then $\mu=\mu_\Sigma$ for some $\Sigma\in\mathcal{S}_n(\Lambda)$, and either $\Sigma$ is diffeomorphic to $\mathbb{S}^n$ or $\Sigma\in\mathcal{ACS}_n$.
First observe that, by Proposition \[CondRegProp\], $\mu=\mu_\Sigma$ for some $\Sigma\in \mathcal{S}_n(\Lambda)$. If $\Sigma$ is closed, then it follows from [@CIMW Theorem 0.7] that $\Sigma$ is diffeomorphic to $\mathbb{S}^n$. On the other hand, if $\Sigma$ is not closed, then it is non-compact.
Let $\mathcal{K}={\left\{\mu_t\right\}}_{t\in{\mathbb R}}$ be the Brakke flow associated to $\mu$. Note that $\mu_t=\mu_{\sqrt{-t}\, \Sigma}$ for $t<0$. Let $\mathcal{X}={\left\{{\mathbf{y}}: {\mathbf{y}}\neq{\mathbf{0}}, \Theta_{({\mathbf{y}},0)}(\mathcal{K})\geq 1\right\}}\subset {\mathbb R}^{n+1}\setminus{\left\{{\mathbf{0}}\right\}}$. As $\Sigma$ is non-compact, $\mathcal{X}$ is non-empty. Indeed, pick any sequence of points ${\mathbf{y}}_i\in\Sigma$ with $|{\mathbf{y}}_i|\to\infty$. The points $\hat{{\mathbf{y}}}_i=|{\mathbf{y}}_i|^{-1}{\mathbf{y}}_i\in |{\mathbf{y}}_i|^{-1}\Sigma$. Hence, $\Theta_{(\hat{{\mathbf{y}}}_i, -|{\mathbf{y}}_i|^{-2})}(\mathcal{K})\geq 1$. As the $\hat{{\mathbf{y}}}_i$ are in a compact subset, up to passing to a subsequence and relabeling, $\hat{{\mathbf{y}}}_i\to\hat{{\mathbf{y}}}$, and so the upper semi-continuity of Gaussian density implies that $\Theta_{(\hat{{\mathbf{y}}},0)}(\mathcal{K})\geq 1$.
We next show that $\mathcal{X}$ is a regular cone. The fact that $\mathcal{X}$ is a cone readily follows from the fact that $\mathcal{K}$ is invariant under parabolic scalings. To see that ${{\ensuremath{\mathop{\mathrm{sing}}} }}(\mathcal{X})\subset {\left\{{\mathbf{0}}\right\}}$, we note that, by [@BernsteinWang Lemma 4.4], for any ${\mathbf{y}}\in\mathcal{X}$ and $\mathcal{T}\in\mathrm{Tan}_{({\mathbf{y}},0)}\mathcal{K}$, $\mathcal{T}={\left\{\nu_t\right\}}_{t\in{\mathbb R}}$ splits off a line. That is, up to an ambient rotation, $\nu_{t}=\hat{\nu}_t\times\mu_{\mathbb R}$ with ${\left\{\hat{\nu}_t\right\}}_{t\in{\mathbb R}}$ the Brakke flow associated to $\hat{\nu}_{-1}\in\mathcal{SM}_{n-1}(\Lambda)$. Here we use the lower semi-continuity of entropy. Note that $\Lambda\leq\lambda_{n-1}<3/2$. Thus, by Proposition \[CondRegProp\] and the hypothesis that ${(\star_{n,\Lambda})}$ holds, $\hat{\nu}_{-1}=\mu_\Gamma$ for $\Gamma\in\mathcal{S}_{n-1}(\Lambda)$. Hence, as we assume that holds, $\Gamma$ is a hyperplane through the origin. Therefore, it follows from Brakke’s regularity theorem that, for $t<0$ close to $0$, ${{\ensuremath{\mathop{\mathrm{spt}}} }}(\mu_t)$ has uniformly bounded curvature near ${\mathbf{y}}$ and so $\sqrt{-t}\, \Sigma\to\mathcal{X}$ in $C^\infty_{loc}\left({\mathbb R}^{n+1}\backslash {\left\{{\mathbf{0}}\right\}}\right)$, concluding the proof.
As a consequence, we establish the following compactness theorem for asymptotically conical self-shrinkers of small entropy.
\[CpctnessACSCor\] Fix $n\geq 3$, $\Lambda\leq\lambda_{n-1}$, and $\epsilon_0>0$. If both and hold, then the set $$\mathcal{ACS}_n[\Lambda-\epsilon_0]={\left\{\Sigma: \Sigma \in \mathcal{ACS}_n \mbox{ and } \lambda[\Sigma]\leq \Lambda-\epsilon_0\right\}}$$ is compact in the $C^\infty_{loc}({\mathbb R}^{n+1})$ topology.
Consider a sequence $\Sigma_i\in \mathcal{ACS}_n[\Lambda-\epsilon_0]$ and let $\mu_i=\mu_{\Sigma_i}\in\mathcal{SM}_n[\Lambda-\epsilon_0]$. By the integral compactness theorem for $F$-stationary varifolds, up to passing to a subsequence, $\mu_i\to \mu$ in the sense of Radon measures. Moreover, by the lower semi-continuity of the entropy, $\mu\in \mathcal{SM}_n[\Lambda-\epsilon_0]$. Hence, by Proposition \[CondRegProp\], $\mu=\mu_\Sigma$ for $\Sigma\in \mathcal{S}_n[\Lambda-\epsilon_0]$ and so, by Allard’s regularity theorem, $\Sigma_i\to \Sigma$ in $C^\infty_{loc}({\mathbb R}^{n+1})$. Finally, as each $\Sigma_i$ is non-compact and connected, so is $\Sigma$ and so, by Proposition \[CondTotRegProp\], $\Sigma\in \mathcal{ACS}_n[\Lambda-\epsilon_0]$, proving the claim.
Recall that $\mathcal{C}(\Sigma)$ denotes the asymptotic cone of any $\Sigma\in \mathcal{ACS}_n$. Denote the link of the asymptotic cone by $\mathcal{L}(\Sigma)=\mathcal{C}(\Sigma)\cap \mathbb{S}^n$.
\[CpctnessLinksProp\] Fix $n\geq 3$, $\Lambda \leq \lambda_{n-1}$, and $\epsilon_0>0$. If both and hold, then the set $$\mathcal{L}_n[\Lambda-\epsilon_0]={\left\{\mathcal{L}(\Sigma): \Sigma\in \mathcal{ACS}_n[\Lambda-\epsilon_0]\right\}}$$ is compact in the $C^\infty(\mathbb{S}^n)$ topology.
Consider a sequence $L_i\in \mathcal{L}_n[\Lambda-\epsilon_0]$ and let $\Sigma_i\in \mathcal{ACS}_n[\Lambda-\epsilon_0]$ be chosen so that $\mathcal{L}(\Sigma_i)=L_i$ (observe that the $\Sigma_i$ are uniquely determined by [@WaRigid Theorem 1.3]). By Corollary \[CpctnessACSCor\], up to passing to a subsequence, $\Sigma_i\to \Sigma\in \mathcal{ACS}_n[\Lambda-\epsilon_0]$. We claim that $L_i\to L=\mathcal{L}(\Sigma)$ in $C^\infty(\mathbb{S}^n)$.
To see this, let $\mu_i=\mu_{\Sigma_i}$ and $\mu=\mu_\Sigma$ be the corresponding elements of $\mathcal{SM}_n[\Lambda-\epsilon_0]$ and let $\mathcal{K}_i$ and $\mathcal{K}$ be the associated Brakke flows. Clearly, $\mu_i\to \mu$ in the sense of measures. Hence, by construction, the $\mathcal{K}_i$ converge in the sense of Brakke flows to ${\mathcal{K}}$. Since $$\mathcal{C}(\Sigma)={\left\{{\mathbf{x}}\in {\mathbb R}^{n+1}: \Theta_{({\mathbf{x}},0)}(\mathcal{K}) \geq 1\right\}}$$ and likewise for $\mathcal{C}(\Sigma_i)$, we have by Brakke’s regularity theorem that $\mathcal{C}(\Sigma_i)\to \mathcal{C}(\Sigma)$ in $C^\infty_{loc}({\mathbb R}^{n+1}\backslash {\left\{{\mathbf{0}}\right\}})$, that is $\mathcal{L}(\Sigma_i)\to \mathcal{L}(\Sigma)$ in $C^\infty(\mathbb{S}^n)$ as claimed.
Let $B_R$ denote the open ball in $\mathbb{R}^{n+1}$ centered at the origin with radius $R$. Combining Corollary \[CpctnessACSCor\] and Proposition \[CpctnessLinksProp\] gives that
\[GraphCor\] Fix $n\geq 3$, $\Lambda \leq \lambda_{n-1}$, and $\epsilon_0>0$. Suppose that and hold. There is an $R_0=R_0(n, \Lambda, \epsilon_0)$ and $C_0=C_0(n, \Lambda, \epsilon_0)$ so that if $\Sigma \in \mathcal{ACS}_n[\Lambda-\epsilon_0]$, then
1. $\Sigma\setminus\bar{B}_{R_0}$ is given by the normal graph of a smooth function $u$ over $\mathcal{C}(\Sigma)\setminus\Omega$, where $\Omega$ is a compact set, satisfying, for $p\in\mathcal{C}(\Sigma)\setminus\Omega$, $${\left\vert{\mathbf{x}}(p)\right\vert} {\left\vert u(p)\right\vert}+ {\left\vert{\mathbf{x}}(p)\right\vert}^2 {\left\vert\nabla_{\mathcal{C}(\Sigma)} u(p)\right\vert}+{\left\vert{\mathbf{x}}(p)\right\vert}^3 {\left\vert\nabla_{\mathcal{C}(\Sigma)}^2 u(p)\right\vert}\leq C_0;$$
2. given $\delta>0$, there is a $\kappa\in (0,1)$ and $\mathcal{R}>1$ depending only on $n,\Lambda, \epsilon_0$ and $\delta$ so that if $p\in\Sigma\setminus B_{\mathcal{R}}$ and $r=\kappa |{\mathbf{x}}(p)|$, then $\Sigma\cap B_r(p)$ can be written as a connected graph of a function $v$ over a subset of $T_p\Sigma$ with $|Dv|\leq\delta$.
As such, for any $R\geq R_0$, $\Sigma\backslash B_R$ is diffeomorphic to $\mathcal{L}(\Sigma)\times [0, \infty)$.
For any sequence $\Sigma_i\in\mathcal{ACS}_n[\Lambda-\epsilon_0]$, by Corollary \[CpctnessACSCor\] and Proposition \[CpctnessLinksProp\], up to passing to a subsequence, $\Sigma_i\to\Sigma$ in $C^\infty_{loc} ({\mathbb R}^{n+1})$ for some $\Sigma\in\mathcal{ACS}_n[\Lambda-\epsilon_0]$, and $\mathcal{L}(\Sigma_i)\to\mathcal{L}(\Sigma)$ in $C^\infty (\mathbb{S}^n)$. Let $\mathcal{K}_i$ and $\mathcal{K}$ be the Brakke flows associated to $\Sigma_i$ and $\Sigma$, respectively. As $\Sigma\in\mathcal{ACS}_n$, $\mathcal{K}\lfloor (B_2\setminus \bar{B}_1)\times [-1,0]$ is a smooth mean curvature flow. Furthermore, since $\mathcal{K}_i\to\mathcal{K}$, it follows from Brakke’s local regularity theorem that the $\Sigma_i$ have uniform curvature decay; more precisely, there exist $R, C>0$ so that for all $i$ and $p\in\Sigma_i\setminus B_R$, $$\sum_{k=0}^2 {\left\vert{\mathbf{x}}(p)\right\vert}^{k+1}{\left\vert\nabla^k_{\Sigma_i} A_{\Sigma_i}(p)\right\vert}\leq C,$$ where $A_{\Sigma_i}$ is the second fundamental form of $\Sigma_i$. As the $\mathcal{C}(\Sigma_i)\to\mathcal{C}(\Sigma)$, by [@WaRigid Lemma 2.2] and [@BernsteinWang2 Proposition 4.2], there exist $R^\prime, C^\prime>0$ so that Items (1) and (2) in the statement hold for all $\Sigma_i$. This establishes the corollary by the arbitrariness of the $\Sigma_i$.
Finally, we need the fact that closed self-shrinkers of small entropy have an upper bound on their extrinsic diameter.
\[DiamBoundProp\] Fix $n\geq 3$, $\Lambda\leq\lambda_{n-1}$, and $\epsilon_0>0$. Suppose that both and hold. Then there is an $R_D=R_D(n,\Lambda, \epsilon_0)$ so that if $\Sigma\in \mathcal{S}_n[\Lambda-\epsilon_0]$ is closed, then $\Sigma\subset \bar{B}_{R_D}$.
We argue by contradiction. If this were not true, then there would be a sequence of $\Sigma_i\in \mathcal{S}_n[\Lambda-\epsilon_0]$ with the property that there are points $p_i\in \Sigma_i$ with $|p_i|\to \infty$. In particular, for each $R>\sqrt{2n}$, there is an $i_0=i_0(R)$ so that if $i>i_0(R)$, then $\Sigma_i\cap \partial B_R\neq \emptyset$. Indeed, if this were not the case, then the mean curvature flows ${\left\{\sqrt{-t}\, \Sigma_i\right\}}_{t\in [-1,0)}$ and ${\left\{\partial B_{\sqrt{R^2-2n(t+1)}}\right\}}_{t\in [-1,0)}$ would violate the avoidance principle.
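For the reader's convenience, we recall the standard computation behind the comparison family above: a round sphere of radius $r(t)$ in ${\mathbb R}^{n+1}$ has mean curvature $n/r(t)$, so under mean curvature flow $$r'(t)=-\frac{n}{r(t)}, \qquad \mbox{that is,} \qquad r(t)=\sqrt{r(-1)^2-2n(t+1)}.$$ Starting from $\partial B_R$ at $t=-1$ with $R>\sqrt{2n}$, the radius therefore remains at least $\sqrt{R^2-2n}>0$ for $t\in[-1,0)$, while, as $\Sigma_i$ is closed, $\sqrt{-t}\,\Sigma_i$ is contained in arbitrarily small balls about the origin as $t\to 0^-$. Thus, if $\Sigma_i$ were disjoint from $\partial B_R$ but contained the point $p_i$ with $|p_i|>R$, the two flows would be disjoint at $t=-1$ and yet would have to intersect at some later time.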
Now, let $\mu_i=\mu_{\Sigma_i}\in \mathcal{SM}_n[\Lambda-\epsilon_0]$. By the integral compactness theorem for $F$-stationary varifolds, up to passing to a subsequence the $\mu_i$ converge to a $\mu\in \mathcal{SM}_n[\Lambda-\epsilon_0]$. By Proposition \[CondRegProp\], $\mu=\mu_\Sigma$ for some $\Sigma\in \mathcal{S}_n [\Lambda-\epsilon_0]$. Furthermore, up to passing to a further subsequence, $\Sigma_i\to \Sigma$ in $C^\infty_{loc}({\mathbb R}^{n+1})$. It follows that $\Sigma \cap \partial B_R\neq \emptyset$ for all $R>\sqrt{2n}$. In other words, $\Sigma$ is non-compact and so, by Proposition \[CondTotRegProp\], $\Sigma\in \mathcal{ACS}_n$. However, this implies that $\Sigma$ is non-collapsed (cf. [@BernsteinWang Definition 4.6]), while the $\Sigma_i$ are collapsed by [@BernsteinWang Lemma 4.8]. This contradicts [@BernsteinWang Proposition 4.10] and completes the proof.
Singularities of Flows with Small Entropy {#SingularitySec}
=========================================
Given a Brakke flow $\mathcal{K}={\left\{\mu_t\right\}}_{t\in I}$ and a point $({\mathbf{x}}_0,t_0)\in {{\ensuremath{\mathop{\mathrm{sing}}} }}(\mathcal{K})$ with $t_0\in \mathring{I}$, a tangent flow $\mathcal{T}\in\mathrm{Tan}_{({\mathbf{x}}_0,t_0)}\mathcal{K}$ is of *compact type* if $\mathcal{T}={\left\{\nu_t\right\}}_{t\in (-\infty,\infty)}$ and ${{\ensuremath{\mathop{\mathrm{spt}}} }}(\nu_{-1})$ is compact. Otherwise, the tangent flow is of *non-compact type*. If every element of $\mathrm{Tan}_{({\mathbf{x}}_0, t_0)}\mathcal{K}$ is of compact type, then $({\mathbf{x}}_0,t_0)$ is a *compact singularity*. Likewise, if every element of $\mathrm{Tan}_{({\mathbf{x}}_0, t_0)}\mathcal{K}$ is of non-compact type, then $({\mathbf{x}}_0,t_0)$ is a *non-compact singularity*.
For the remainder of this section, we fix a dimension $n\geq 3$ and constants $\Lambda\in (\lambda_n, \lambda_{n-1}]$[^1] and $\epsilon_0>0$, and suppose that both and hold. We further assume that $\Sigma_0\subset {\mathbb R}^{n+1}$ is a closed connected hypersurface with $\lambda[\Sigma_0]\leq\Lambda-\epsilon_0$ and with the property that the level set flow $L(\Sigma_0)$ is non-fattening and that $(E,\mathcal{K})$ is the pair given by Theorem \[UnitdensityThm\].
\[RegTanProp\] Let $({\mathbf{x}}_0,t_0)\in {{\ensuremath{\mathop{\mathrm{sing}}} }}(\mathcal{K})$ and $\mathcal{T}\in \mathrm{Tan}_{({\mathbf{x}}_0,t_0)}\mathcal{K}$. If $\mathcal{T}={\left\{\nu_t\right\}}_{t\in (-\infty,\infty)}$ is of non-compact type, then $\nu_{-1}=\mu_\Sigma$ for some $\Sigma\in\mathcal{ACS}_n$. Moreover, there is a constant $R_1=R_1(n, \Lambda, \epsilon_0)$ so that for all $R\geq R_1$, $$\mathcal{T} \lfloor \left(B_{16R}\setminus\bar{B}_{R}\right)\times (-1,1)$$ is a smooth mean curvature flow. Moreover, for all $\rho\in (R,16R)$ and $t\in (-1,1)$, $\partial B_\rho$ meets ${{\ensuremath{\mathop{\mathrm{spt}}} }}(\nu_t)$ transversally and $\partial B_\rho\cap {{\ensuremath{\mathop{\mathrm{spt}}} }}(\nu_t)$ is connected.
First, invoking Theorem \[BlowupsThm\] and the monotonicity formula, $\mathcal{T}$ is backwardly self-similar with respect to parabolic scalings about $(\mathbf{0},0)$ and $\nu_{-1}\in\mathcal{SM}_n [\Lambda-\epsilon_0]$. Furthermore, by Proposition \[CondTotRegProp\], we have $\nu_{-1}=\mu_\Sigma$ for some $\Sigma\in\mathcal{ACS}_n[\Lambda-\epsilon_0]$. Finally, by Corollary \[GraphCor\], the pseudo-locality property of mean curvature flow [@IlmanenNevesSchulze Theorem 1.5][^2] and Brakke’s local regularity theorem, there is an $R_1>0$ depending only on $n,\Lambda,\epsilon_0$ so that for $R>R_1$, $$\mathcal{T} \lfloor \left(B_{16R}\setminus \bar{B}_R\right)\times (-1,1)$$ is a smooth mean curvature flow. Indeed, for all $t\in (-1,1)$, ${{\ensuremath{\mathop{\mathrm{spt}}} }}(\nu_t)\cap \left(B_{16R}\setminus \bar{B}_R\right)$ is the graph of a function over a subset of $\mathcal{C}(\Sigma)$, the asymptotic cone of $\Sigma$, with small $C^2$ norm. As such, for all $\rho\in (R,16R)$ and $t\in (-1,1)$, $\partial B_\rho$ meets ${{\ensuremath{\mathop{\mathrm{spt}}} }}(\nu_t)$ transversally. As $\lambda[\Sigma]\leq \lambda[\Sigma_0]<\lambda_{n-1}$, it follows from [@BernsteinWang2 Theorem 1.1] that $\mathcal{L}(\Sigma)$, the link of $\mathcal{C}(\Sigma)$, is connected and, hence, so is $\partial B_\rho \cap {{\ensuremath{\mathop{\mathrm{spt}}} }}(\nu_t)$.
Next we observe that singularities are either compact or non-compact.
\[NonCpctOrCpctLem\] Each $({\mathbf{x}}_0,t_0)\in{{\ensuremath{\mathop{\mathrm{sing}}} }}(\mathcal{K})$ is either a compact or a non-compact singularity.
Suppose that $({\mathbf{x}}_0,t_0)$ is not a non-compact singularity. Then there is a $\mathcal{T}=\{\nu_t\}_{t\in{\mathbb R}}\in\mathrm{Tan}_{({\mathbf{x}}_0,t_0)}\mathcal{K}$ of compact type. By the monotonicity formula and Theorem \[BlowupsThm\], $\nu_{-1}\in\mathcal{SM}_n[\Lambda-\epsilon_0]$. It follows from Proposition \[CondTotRegProp\] that $\nu_{-1}=\mu_\Sigma$ for some $\Sigma\in\mathcal{S}_n[\Lambda-\epsilon_0]$ and $\Sigma$ is closed. Hence, by [@Schulze Corollary 1.2], $\mathcal{T}$ is the only element of $\mathrm{Tan}_{({\mathbf{x}}_0,t_0)}\mathcal{K}$ and so $({\mathbf{x}}_0,t_0)$ is a compact singularity, proving the claim.
We further prove that
\[IsolatedSingThm\] Given $({\mathbf{x}}_0, t_0)\in {{\ensuremath{\mathop{\mathrm{sing}}} }}(\mathcal{K})$, there exist $\rho_0=\rho_0({\mathbf{x}}_0,t_0, \mathcal{K})>0$ and $\alpha=\alpha(n,\Lambda,\epsilon_0)>1$ so that:
1. If $({\mathbf{x}}_0,t_0)$ is a compact singularity and $\rho<\rho_0$, then $$\mathcal{K}\lfloor \left(B_{2\alpha \rho}({\mathbf{x}}_0)\times (t_0-4\alpha^2\rho^2, t_0+4\alpha^2\rho^2) \backslash {\left\{({\mathbf{x}}_0,t_0)\right\}} \right)$$ is a smooth mean curvature flow. Furthermore, for all $R\in (\frac{1}{2}\alpha \rho, 2\alpha \rho)$ and $t\in (t_0 -\rho^2, t_0+\rho^2)$, ${{\ensuremath{\mathop{\mathrm{spt}}} }}(\mu_t)\cap \partial B_{R}({\mathbf{x}}_0)=\emptyset$.
2. If $({\mathbf{x}}_0,t_0)$ is a non-compact singularity and $\rho<\rho_0$, then $$\mathcal{K}\lfloor \left(B_{2\alpha \rho}({\mathbf{x}}_0)\times (t_0-4\alpha^2\rho^2, t_0] \backslash {\left\{({\mathbf{x}}_0,t_0)\right\}} \right)$$ and $$\mathcal{K}\lfloor \left(B_{2\alpha\rho}({\mathbf{x}}_0)\backslash \bar{B}_{\frac{1}{2}\alpha\rho}({\mathbf{x}}_0)\right)\times (t_0-\rho^2, t_0+\rho^2)$$ are both smooth mean curvature flows. Furthermore, for all $R\in (\frac{1}{2}\alpha \rho, 2\alpha \rho)$ and $t\in (t_0 -\rho^2, t_0+\rho^2)$, $\partial B_{R}({\mathbf{x}}_0)$ meets ${{\ensuremath{\mathop{\mathrm{spt}}} }}(\mu_t)$ transversally and the intersection is connected.
Finally, for all $t\in (t_0-\rho^2, t_0)$, ${{\ensuremath{\mathop{\mathrm{spt}}} }}(\mu_t)\cap \bar{B}_{\alpha \rho}({\mathbf{x}}_0)$ is diffeomorphic (possibly as a manifold with boundary) to $\Gamma\cap \bar{B}_{\alpha}$, where $\Gamma\in \mathcal{S}_n^*[\Lambda-\epsilon_0]$ and, if $\Gamma\in \mathcal{ACS}_n$, then $\Gamma\backslash B_\alpha$ is diffeomorphic to $\mathcal{L}(\Gamma)\times[0,\infty)$.
Set $\alpha=4\max{\left\{R_1, R_D,1\right\}}$ where $R_1$ is given by Proposition \[RegTanProp\] and $R_D$ is given by Proposition \[DiamBoundProp\]. Without loss of generality, we may assume that $({\mathbf{x}}_0,t_0)=(\mathbf{0},0)$.
We establish the regularity near (but not at) $({\mathbf{0}},0)$ by contradiction. To that end, suppose that there was a sequence of points $({\mathbf{x}}_i,t_i)\in {{\ensuremath{\mathop{\mathrm{sing}}} }}(\mathcal{K})\backslash {\left\{({\mathbf{0}}, 0)\right\}}$ such that $({\mathbf{x}}_i,t_i)\to (\mathbf{0},0)$. If $(\mathbf{0},0)$ is a non-compact singularity, we further assume $t_i\leq 0$. Let $r_i^2=|{\mathbf{x}}_i|^2+|t_i|$. Then, up to passing to a subsequence, it follows from Theorem \[BlowupsThm\] that $\mathcal{K}^{(\mathbf{0},0),r_i}\to\mathcal{T}$ in the sense of Brakke flows and $\mathcal{T}=\{\nu_t\}_{t\in{\mathbb R}}\in\mathrm{Tan}_{(\mathbf{0},0)}\mathcal{K}$. Let $\tilde{{\mathbf{x}}}_i=r^{-1}_i {\mathbf{x}}_i$ and $\tilde{t}_i=r^{-2}_i t_i$. Then $|\tilde{{\mathbf{x}}}_i|^2+|\tilde{t}_i|=1$, that is, $(\tilde{{\mathbf{x}}}_i,\tilde{t}_i)$ lies on the unit parabolic sphere in space-time. Thus, up to passing to a subsequence, $(\tilde{{\mathbf{x}}}_i,\tilde{t}_i)\to (\tilde{{\mathbf{x}}}_0,\tilde{t}_0)$, where $|\tilde{{\mathbf{x}}}_0|^2+|\tilde{t}_0|=1$. Moreover, the upper semi-continuity of Gaussian density implies that $\Theta_{(\tilde{{\mathbf{x}}}_0,\tilde{t}_0)}(\mathcal{T})\geq 1$.
As $\nu_{-1}\in \mathcal{SM}_n [\Lambda-\epsilon_0]$, Proposition \[CondTotRegProp\] implies that ${{\ensuremath{\mathop{\mathrm{sing}}} }}(\nu_t)=\emptyset$ for $t<0$. That is, $(\tilde{{\mathbf{x}}}_0,\tilde{t}_0)$ is a regular point of $\mathcal{T}$ if $\tilde{t}_0<0$. If $(\mathbf{0},0)$ is a non-compact singularity, then $\mathcal{T}$ is of non-compact type and $\tilde{t}_0\leq 0$. Hence, either $(\tilde{{\mathbf{x}}}_0,\tilde{t}_0)$ is a regular point or $\tilde{t}_0=0$ and $|\tilde{{\mathbf{x}}}_0|=1$. However, in the latter case, Proposition \[RegTanProp\] applied to $ \mathcal{T}^{(\mathbf{0}, 0), \alpha}\in\mathrm{Tan}_{(\mathbf{0},0)}\mathcal{K}$ implies that $(\tilde{{\mathbf{x}}}_0,\tilde{t}_0)$ is also a regular point of $\mathcal{T}$. If $(\mathbf{0},0)$ is a compact singularity, then $\mathcal{T}$ is of compact type and $\nu_{-1}=\mu_\Gamma$ for some $\Gamma\in\mathcal{S}_n(\Lambda)$ by Proposition \[CondTotRegProp\]. This implies that $\mathcal{T}$ is extinct at time $0$ and ${{\ensuremath{\mathop{\mathrm{sing}}} }}(\mathcal{T})=\{(\mathbf{0},0)\}$, again implying that $\tilde{t}_0\leq 0$ and $(\tilde{{\mathbf{x}}}_0,\tilde{t}_0)$ is a regular point of $\mathcal{T}$. Hence, it follows from Brakke’s local regularity theorem that for all $i$ sufficiently large, $(\tilde{{\mathbf{x}}}_i,\tilde{t}_i)\notin{{\ensuremath{\mathop{\mathrm{sing}}} }}(\mathcal{K}^{(\mathbf{0},0),r_i})$, or equivalently, $({\mathbf{x}}_i,t_i)\notin{{\ensuremath{\mathop{\mathrm{sing}}} }}(\mathcal{K})$. This is the desired contradiction. Therefore, for $\rho_0'>0$ sufficiently small, if $\rho<\rho_0'$ and $(\mathbf{0},0)$ is a non-compact singularity, then $$\mathcal{K} \lfloor \left(B_{2\alpha\rho}\times (-4\alpha^2\rho^2,0]\setminus\{(\mathbf{0},0)\}\right)$$ is a smooth mean curvature flow, while, if $\rho<\rho_0'$ and $(\mathbf{0},0)$ is a compact singularity, then $$\mathcal{K} \lfloor \left(B_{2\alpha\rho}\times (-4\alpha^2 \rho^2,4\alpha^2 \rho^2)\setminus\{(\mathbf{0},0)\}\right)$$ is a smooth mean curvature flow.
We continue arguing by contradiction and again consider a sequence, $\rho_i$, of positive numbers with $\rho_i\to 0$ and $\rho_i<\rho_0'$. Up to passing to a subsequence, $\mathcal{K}^{(\mathbf{0},0),\rho_i}$ converges, in the sense of Brakke flows, to some $\mathcal{T}=\{\nu_t\}_{t\in{\mathbb R}}\in\mathrm{Tan}_{(\mathbf{0},0)}\mathcal{K}$. If $({\mathbf{0}},0)$ is a compact singularity, then, as $\alpha\geq 4 R_D$, $\partial B_R \cap {{\ensuremath{\mathop{\mathrm{spt}}} }}(\nu_{t})=\emptyset$ for $R\geq\frac{1}{2}\alpha$ and $t\in (-1,1)$ by Proposition \[DiamBoundProp\] and the avoidance principle. Hence, the nature of the convergence implies that, for $i$ sufficiently large, $\partial B_{R}\cap {{\ensuremath{\mathop{\mathrm{spt}}} }}(\mu_t)=\emptyset$ for $t\in (-\rho^2_i,\rho^2_i)$ and $R\in (\frac{1}{2} \alpha \rho_i, 2\alpha \rho_i)$. If $({\mathbf{0}}, 0)$ is a non-compact singularity, then Proposition \[RegTanProp\] implies that $$\mathcal{T} \lfloor \left(B_{4\alpha}\setminus \bar{B}_{\frac{1}{4}\alpha}\right)\times (-1,1)$$ is a smooth mean curvature flow and for all $R\in (\frac{1}{4}\alpha,4\alpha)$ and $t\in (-1,1)$, $\partial B_R$ meets ${{\ensuremath{\mathop{\mathrm{spt}}} }}(\nu_t)$ transversally and as a connected set. Thus, by Brakke’s local regularity theorem, for all $i$ sufficiently large, $$\mathcal{K}^{(\mathbf{0},0),\rho_i} \lfloor \left(B_{2\alpha}\setminus\bar{B}_{\frac{1}{2} \alpha}\right)\times (-1,1)$$ is a smooth mean curvature flow, and hence so is $$\mathcal{K} \lfloor \left(B_{2\alpha \rho_i}\setminus\bar{B}_{\frac{1}{2}\alpha \rho_i}\right) \times (-\rho_i^2,\rho_i^2).$$ Moreover, for all $R\in (\frac{1}{2}\alpha \rho_i,2\alpha\rho_i )$ and $t\in (-\rho_i^2,\rho_i^2)$, $\partial B_R$ meets ${{\ensuremath{\mathop{\mathrm{spt}}} }}(\mu_t)$ transversally and as a connected set. Hence, as the sequence $\rho_i$ was arbitrary, there is a $\rho_0''<\rho_0'$ so that Items (1) and (2) hold for $\rho<\rho_0''$.
To complete the proof, we observe that, again arguing by contradiction, there is a $\rho_0<\rho_0''$ so that if $\rho<\rho_0$, $B_{2\alpha }\cap \rho^{-1} {{\ensuremath{\mathop{\mathrm{spt}}} }}(\mu_{-\rho^2})$ is a normal graph over a domain $\Omega$ in $\Gamma$ with small $C^2$ norm for some $\Gamma\in \mathcal{S}_n [\Lambda-\epsilon_0]$. In particular, by Corollary \[GraphCor\], $\partial\Omega$ is a small normal graph over $\partial B_{\alpha} \cap \Gamma$, so $\bar{B}_{\alpha\rho }\cap {{\ensuremath{\mathop{\mathrm{spt}}} }}(\mu_{-\rho^2})$ is diffeomorphic to $\bar{B}_\alpha \cap \Gamma$. Furthermore, the choice of $\alpha$ ensures that if $\Gamma\in \mathcal{ACS}_n$, then $\Gamma\backslash B_{\alpha}$ is diffeomorphic to $\mathcal{L}(\Gamma)\times [0, \infty)$. It remains only to show that $\bar{B}_{\alpha\rho }\cap {{\ensuremath{\mathop{\mathrm{spt}}} }}(\mu_{t})$ is diffeomorphic to $\bar{B}_\alpha \cap \Gamma$ for $t\in (-\rho^2, 0)$. This follows from the fact that, as already established, the flow is smooth in $\bar{B}_{2\alpha \rho} \times [-2\rho^2, 0)$ and, for all $t\in [-\rho^2,0)$, either $\partial B_{\alpha\rho} \cap {{\ensuremath{\mathop{\mathrm{spt}}} }}(\mu_t)=\emptyset$ (if the singularity is compact) or the intersection is transverse (if the singularity is non-compact). As such, the flow provides a diffeomorphism between $\bar{B}_{\alpha\rho }\cap {{\ensuremath{\mathop{\mathrm{spt}}} }}(\mu_{t})$ and $\bar{B}_{\alpha\rho }\cap {{\ensuremath{\mathop{\mathrm{spt}}} }}(\mu_{-\rho^2})$ – see Appendix A.
We obtain a direct consequence of Theorem \[IsolatedSingThm\].
\[TimeSingCor\] For each $t_0>0$, ${{\ensuremath{\mathop{\mathrm{sing}}} }}_{t_0}(\mathcal{K})={\left\{{\mathbf{x}}: ({\mathbf{x}},t_0)\in {{\ensuremath{\mathop{\mathrm{sing}}} }}(\mathcal{K})\right\}}$ is finite.
Given a manifold $M$ we say a subset $U\subset M$ is a *smooth domain* if $U$ is open and $\partial U$ is a smooth submanifold.
\[CondSurgThm\] There is an $N=N(\Sigma_0)\in\mathbb{N}$ and a sequence of closed connected hypersurfaces $\Sigma^1, \ldots, \Sigma^N$ so that:
1. $\Sigma^1=\Sigma_0$;
2. $\Sigma^N$ is diffeomorphic to $\mathbb{S}^n$;
3. For each $i$ with $1\leq i \leq N-1$, there is an $m=m(i)\in \mathbb{N}$ and open connected pairwise disjoint smooth domains $U_1^i, \ldots, U_{m(i)}^i \subset \Sigma^i$ and $V_1^i, \ldots, V_{m(i)}^i \subset \Sigma^{i+1}$ so that:
- There are orientation preserving diffeomorphisms $$\hat{\Phi}^{i}:\Sigma^{i+1}\backslash \cup_{j=1}^{m(i)} V_j^{i}\to \Sigma^{i}\backslash \cup_{j=1}^{m(i)} U_j^i;$$
- Each $\bar{U}_j^i$ is diffeomorphic to $\bar{B}_{R_j^i}\cap \Gamma_j^i$ where $\Gamma_j^i\in \mathcal{ACS}_n^*(\Lambda)$ and $\Gamma_j^i\backslash B_{R_j^i}$ is diffeomorphic to $\mathcal{L}(\Gamma_j^i)\times [0,\infty)$.
Let us denote the set of compact singularities of $\mathcal{K}$ by ${{\ensuremath{\mathop{\mathrm{sing}}} }}^C(\mathcal{K})$ and the set of non-compact singularities by ${{\ensuremath{\mathop{\mathrm{sing}}} }}^{NC}(\mathcal{K})$. By Lemma \[NonCpctOrCpctLem\], ${{\ensuremath{\mathop{\mathrm{sing}}} }}(\mathcal{K})={{\ensuremath{\mathop{\mathrm{sing}}} }}^{NC}(\mathcal{K})\cup {{\ensuremath{\mathop{\mathrm{sing}}} }}^C(\mathcal{K})$. We note that if $X\in {{\ensuremath{\mathop{\mathrm{sing}}} }}^{NC}(\mathcal{K})$, then, by Proposition \[CondTotRegProp\], every element of $\mathrm{Tan}_X \mathcal{K}$ is the flow of an element of $\mathcal{ACS}_n$ and so the tangent flows are non-collapsed at time $0$ in the sense of [@BernsteinWang Definition 4.9]. Hence, by [@BernsteinWang Lemma 5.1], ${{\ensuremath{\mathop{\mathrm{sing}}} }}^C(\mathcal{K})\neq \emptyset$. In fact, if we define the extinction time of $\mathcal{K}$ to be $$T(\mathcal{K})=\sup{\left\{t: {{\ensuremath{\mathop{\mathrm{spt}}} }}(\mu_t)\neq \emptyset\right\}},$$ then $$\emptyset \neq {\left\{{\mathbf{x}}\in {\mathbb R}^{n+1}: \Theta_{({\mathbf{x}},T(\mathcal{K}))} (\mathcal{K})\geq 1\right\}}={\left\{{\mathbf{x}}\in {\mathbb R}^{n+1}: ({\mathbf{x}}, T(\mathcal{K}))\in {{\ensuremath{\mathop{\mathrm{sing}}} }}{}^C (\mathcal{K})\right\}}.$$ It follows from Theorem \[IsolatedSingThm\] that ${{\ensuremath{\mathop{\mathrm{sing}}} }}^C(\mathcal{K})$ consists of at most a finite number of points.
Observe that if ${{\ensuremath{\mathop{\mathrm{sing}}} }}(\mathcal{K})$ consists of exactly one point $X_0$, then we can take $N=1$. Indeed, by the above discussion, this singularity must be compact and hence, by Proposition \[CondTotRegProp\], there is a $\Gamma\in \mathcal{S}_n(\Lambda)$ diffeomorphic to $\mathbb{S}^n$ so that one of the tangent flows at $X_0$ is the flow associated to $\mu_{\Gamma}$. In this case we may write $\mathcal{K}={\left\{\mu_{\Sigma_t}\right\}}_{t\in [0, T(\mathcal{K}))}$ where ${\left\{\Sigma_t\right\}}_{t\in [0, T(\mathcal{K}))}$ is a smooth mean curvature flow. By Brakke’s regularity theorem, there is a $t$ near $T(\mathcal{K})$ so that $\Sigma_t$ is a small normal graph over $\Gamma$ and hence $\Sigma^1=\Sigma_0$ is diffeomorphic to $\Gamma$, verifying the claim.
Now let $\mathrm{ST}(\mathcal{K})={\left\{t\in {\mathbb R}: ({\mathbf{x}},t)\in {{\ensuremath{\mathop{\mathrm{sing}}} }}(\mathcal{K}) \mbox{ for some } {\mathbf{x}}\in{\mathbb R}^{n+1}\right\}}$ be the set of singular times. Notice that by Corollary \[TimeSingCor\] there are at most a finite number of singular points associated to each singular time. We observe that as $\Sigma^1=\Sigma_0$ is smooth, there is a $\delta>0$ so that $\mathrm{ST}(\mathcal{K})\subset [\delta, T(\mathcal{K})]$. Furthermore, as ${{\ensuremath{\mathop{\mathrm{sing}}} }}(\mathcal{K})$ is a closed set, so is $\mathrm{ST}(\mathcal{K})$.
For each $t\in \mathrm{ST}(\mathcal{K})$, let $$\rho(t)=\min{\left\{\rho_0({\mathbf{x}}, t, \mathcal{K}): {\mathbf{x}}\in {{\ensuremath{\mathop{\mathrm{sing}}} }}{}_t(\mathcal{K})\right\}}>0,$$ where $\rho_0({\mathbf{x}}, t, \mathcal{K})$ is the constant given by Theorem \[IsolatedSingThm\]. This minimum is positive as ${{\ensuremath{\mathop{\mathrm{sing}}} }}_{t}(\mathcal{K})$ is a finite set. Observe that by Theorem \[IsolatedSingThm\], $$\label{DisjointEqn}
B_{\alpha\rho(t)}({\mathbf{x}})\cap B_{\alpha\rho(t)}({\mathbf{x}}^\prime)=\emptyset$$ when ${\mathbf{x}},{\mathbf{x}}^\prime$ are distinct elements of ${{\ensuremath{\mathop{\mathrm{sing}}} }}_t(\mathcal{K})$ and $\alpha=\alpha(n, \Lambda, \epsilon_0)$ is given by Theorem \[IsolatedSingThm\]. Next, choose $\tau(t)\in (0,\rho^2(t))$ so that $$\mathcal{K} \lfloor \left({\mathbb R}^{n+1}\setminus\bigcup_{{\mathbf{x}}\in{{\ensuremath{\mathop{\mathrm{sing}}} }}_t(\mathcal{K})} \bar{B}_{\alpha\rho(t)}({\mathbf{x}})\right)\times \left(t-\tau(t),t+\tau(t)\right)$$ is a smooth mean curvature flow. Such a $\tau$ exists as ${{\ensuremath{\mathop{\mathrm{sing}}} }}(\mathcal{K})$ is a closed set.
As $\mathrm{ST}(\mathcal{K})$ is a closed subset of $[0,T(\mathcal{K})]$, it is a compact set and so the open cover $${\left\{(t-\tau(t), t+\tau(t)): t\in \mathrm{ST}(\mathcal{K})\right\}}$$ of $\mathrm{ST}(\mathcal{K})$ has a finite subcover. That is, there are a finite number of times $t_1, \ldots, t_{N^\prime}\in \mathrm{ST}(\mathcal{K})$, labeled so that $t_i<t_{i+1}$ and chosen so that $$\mathrm{ST}(\mathcal{K})\subset \bigcup_{i=1}^{N^\prime} (t_i-\tau(t_i),t_i+\tau(t_i)).$$ Furthermore, we can assume that for each $i$:
1. For all $j>i$, $t_i-\tau(t_i)< t_j- \tau(t_j)$,
2. For all $j<i$, $t_i+\tau(t_i)>t_j+\tau(t_j)$, and
3. For all $j<i<j'$, $t_j+\tau(t_j)<t_{j'}-\tau(t_{j'})$.
Indeed, otherwise we could delete $(t_i-\tau(t_i),t_i+\tau(t_i))$ and still have an open cover. Note that, by the definition of $\tau(t)$, one must have $t_{N^\prime}=T(\mathcal{K})$.
By Theorem \[IsolatedSingThm\] we may choose a sequence of times $s^\pm_1, \ldots, s_{N^\prime}^\pm$ with $t_i\in (s_i^-, s_i^+)$, $|s_i^\pm-t_i|<\tau(t_i)$, $s_i^+\leq s_{i+1}^-$ and so that $$\left([0, s_1^-]\cup \bigcup_{i=1}^{N^\prime-1}[s_i^+, s_{i+1}^-]\right)\cap \mathrm{ST}(\mathcal{K})=\emptyset.$$ More concretely, first take $s_1^-\in (t_1-\tau(t_1), t_1)$ with $s_1^->0$ and $s_{N^\prime}^+=t_{N^\prime}+\frac{1}{2}\tau(t_{N^\prime})$. For $1\leq i\leq N^\prime-1$, let $$\tilde{s}_i^+=\sup \left( \mathrm{ST}(\mathcal{K})\cap (t_i-\tau(t_i), t_i+\tau(t_i))\right)$$ and for $2\leq i \leq N^\prime$, let $$\tilde{s}_i^-=\inf \left( \mathrm{ST}(\mathcal{K})\cap (t_i-\tau(t_i), t_i+\tau(t_i))\right).$$ The definition of $\tau(t_i)$ and Theorem \[IsolatedSingThm\] imply that $\tilde{s}_i^-=t_i$. As the set of singular times is closed and $t_i\in \mathrm{ST}(\mathcal{K})$, $\tilde{s}_i^+\in \mathrm{ST}(\mathcal{K})$ and $t_i \leq \tilde{s}^+_i$. We treat two cases. In the first case we suppose that $t_{i+1}-\tau(t_{i+1})<t_i+\tau(t_i) $. As $\tilde{s}_{i+1}^-=t_{i+1}$, there are then no singular times in the interval $(t_{i+1}-\tau(t_{i+1}),t_i+\tau(t_i))$ and so we may take $s_i^+=s_{i+1}^-$ to be the same point in this interval. In the second case, we suppose that $t_i+\tau(t_i)\leq t_{i+1}-\tau(t_{i+1})$ and observe that $\tilde{s}_i^+\leq t_i+\tau(t_i)\leq t_{i+1}-\tau(t_{i+1})$. In fact, $\tilde{s}_i^+<t_i+\tau(t_i)$ as otherwise, in order to cover $\mathrm{ST}(\mathcal{K})$, assumption (3) above would not hold. Pick $s_i^+$ as some point in $(\tilde{s}_i^+, t_i+\tau(t_i))$ and $s_{i+1}^-$ as some point in $(t_{i+1}-\tau(t_{i+1}), t_{i+1})$. The lack of singular times in $[0,s_1^-]$ and in each $[s_i^+, s_{i+1}^-]$ follows by our choices and assumptions (1) and (3) above.
For $1\leq i \leq N^\prime$ set $\Sigma^i_\pm = {{\ensuremath{\mathop{\mathrm{spt}}} }}(\mu_{s_i^\pm})$. By the choice of $s_i^\pm$, each $\Sigma^i_\pm$ is a closed hypersurface and, as there are no singular times between $s_i^+$ and $s_{i+1}^-$, we have for $1\leq i \leq N^\prime-1$ diffeomorphisms $\Phi^i: \Sigma_+^i \to \Sigma_-^{i+1}$ coming from the flow and, for the same reason, a diffeomorphism $\Phi^0: \Sigma^1\to \Sigma^1_-$. Observe that, *a priori*, the $\Sigma^i_\pm$ need not consist of one component (indeed, $\Sigma^{N^\prime}_+$ is empty). By Corollary \[TimeSingCor\], ${{\ensuremath{\mathop{\mathrm{sing}}} }}_{t_i}(\mathcal{K})$ is finite for each $1\leq i \leq N^\prime$ and we write $${\left\{{\mathbf{x}}_i^1, \ldots, {\mathbf{x}}_i^{M(i)}\right\}}={{\ensuremath{\mathop{\mathrm{sing}}} }}{}_{t_i}(\mathcal{K})$$ i.e., the $({\mathbf{x}}_i^j,t_i)$ are the singular points of the flow at time $t_i$. Up to relabeling, there is an integer $m(i)$ with $0\leq m(i)\leq M(i)$ so that for $1\leq j \leq m(i)$, $({\mathbf{x}}_i^j, t_i)\in {{\ensuremath{\mathop{\mathrm{sing}}} }}^{NC}(\mathcal{K})$ while for $m(i)<j\leq M(i)$, $({\mathbf{x}}_i^j,t_i)\in {{\ensuremath{\mathop{\mathrm{sing}}} }}^C(\mathcal{K})$. Set $R^i=\alpha\rho(t_i)$ and, for each ${\mathbf{x}}_i^j$, let $U^{i}_{j,\pm}\subset \Sigma^i_\pm$ be the sets $ B_{R^i}({\mathbf{x}}_i^j)\cap \Sigma^i_\pm$. By \[DisjointEqn\], these are pairwise disjoint sets and, by Theorem \[IsolatedSingThm\], these intersections are transverse and so the $\sigma^{i}_{j,\pm}=\partial U^i_{j,\pm}$ are submanifolds of $\Sigma^i_\pm$. Hence, the $U^i_{j,\pm}$ are smooth pairwise disjoint domains.
Furthermore, by Theorem \[IsolatedSingThm\] and the fact that $\tau(t)<\rho^2(t)$, each $\bar{U}^i_{j,-}$ is diffeomorphic to $\bar{B}_{\alpha}\cap \Gamma_{j}^i$ for some $\Gamma_{j}^i \in \mathcal{S}_n$. In particular, for $j>m(i)$ we have that $\bar{U}^i_{j,-}$ is a closed connected hypersurface, while for $1\leq j\leq m(i)$, $\partial \bar{U}^i_{j,-}$ is non-empty and connected. Hence, for $j>m(i)$, $\bar{U}^i_{j,+}=\emptyset$, while for $1\leq j\leq m(i)$, $\partial \bar{U}^i_{j,+}$ is non-empty and connected. Furthermore, Theorem \[IsolatedSingThm\] implies that there are diffeomorphisms (see Appendix A) $$\Psi^i: \Sigma_-^i\backslash \bigcup_{j=1}^{M(i)} U_{j,-}^i \to \Sigma_+^i \backslash \bigcup_{j=1}^{M(i)} U_{j,+}^i.$$
As $\Sigma^1$ is connected and $\Phi^0(\Sigma^1)=\Sigma^1_-$, $\Sigma^1_-$ is also connected. As each $\sigma^1_{j,-}$ is connected, we obtain that $\hat{\Sigma}^1_-=\Sigma^1_-\backslash \bigcup_{j=1}^{M(1)} U_{j,-}^1$ is connected. Let $\tilde{\Sigma}^1_+$ be the connected component of $\Sigma^1_+$ that contains $\Psi^1(\hat{\Sigma}^1_{-})$. Inductively, let $\tilde{\Sigma}^{i+1}_-=\Phi^i(\tilde{\Sigma}^i_+)$ and $\hat{\Sigma}^{i+1}_-=\tilde{\Sigma}^{i+1}_-\backslash \bigcup_{j=1}^{M(i+1)} U_{j,-}^{i+1}$ and define $\tilde{\Sigma}^{i+1}_+$ to be the connected component of $\Sigma^{i+1}_+$ that contains $\Psi^{i+1}(\hat{\Sigma}^{i+1}_-)$. Here we adopt the convention that if $\hat{\Sigma}^{i+1}_-=\emptyset$, then $\tilde{\Sigma}^{i+1}_+=\emptyset$. It follows inductively that each $\tilde{\Sigma}^i_{\pm}$ is connected. Let $\tilde{\Phi}^i: \tilde{\Sigma}^i_+\to \tilde{\Sigma}^{i+1}_-$ be the diffeomorphisms given by restricting the $\Phi^i$. To be consistent we also set $\tilde{\Sigma}^1_-=\Sigma^1_-$ and $\tilde{\Phi}^0=\Phi^0$.
Finally let $$N=\max{\left\{1\leq i\leq N^\prime: \tilde{\Sigma}^k_-\neq\emptyset\mbox{ for all $1\leq k\leq i$}\right\}}.$$ If $N<N^\prime$, then, by construction, $\hat{\Sigma}^N_-=\emptyset$ and $\tilde{\Sigma}^N_- = U^N_{j,-}$ for some $j>m(N)$. If $N=N^\prime$, then $t_N=T(\mathcal{K})$, at which time all singularities are compact. Thus it follows from [@CIMW Theorem 0.7] that $\tilde{\Sigma}^N_-$ is diffeomorphic to $\mathbb{S}^n$. The theorem now follows by taking $\Sigma^i=\tilde{\Sigma}^i_-$ for $2\leq i \leq N$ and letting $\hat{\Phi}^i$ be the diffeomorphisms given by $(\tilde{\Phi}^i\circ\Psi^i)^{-1}$.
A Sharpening of [@BernsteinWang2] {#SharpeningSec}
=================================
In order to prove Theorem \[MainACSThm\], we begin with an elementary lemma.
\[StupidLem\] If ${\mathbf{x}}_1, \ldots, {\mathbf{x}}_{m+1}\in {\mathbb R}^{n+1}$ is a sequence of points so that $$\label{StupidHyp}
|{\mathbf{x}}_i-{\mathbf{x}}_{i+1}|\leq \hat{K} (1+|{\mathbf{x}}_i|)^{-1}$$ for $1\leq i \leq m$ and some $\hat{K}\geq 0$, then $$\label{StupidClaim}
|{\mathbf{x}}_1-{\mathbf{x}}_{m+1}|\leq K(m) (1+|{\mathbf{x}}_1|)^{-1}$$ where $K(m)=(\hat{K}+1)^m-1$.
We proceed by induction on $m$. The lemma is obviously true when $m=1$. Suppose \[StupidClaim\] holds for $m=m'$. Using this induction hypothesis with \[StupidHyp\] implies that $$|{\mathbf{x}}_1-{\mathbf{x}}_{m'+2}|\leq |{\mathbf{x}}_1-{\mathbf{x}}_{m'+1}|+|{\mathbf{x}}_{m'+1}-{\mathbf{x}}_{m'+2}|\leq K(m') (1+|{\mathbf{x}}_1|)^{-1}+\hat{K} (1+|{\mathbf{x}}_{m'+1}|)^{-1}.$$ Furthermore, by the induction hypothesis and triangle inequality $$|{\mathbf{x}}_{1}|\leq K(m') (1+|{\mathbf{x}}_{1}|)^{-1}+ |{\mathbf{x}}_{m'+1}|.$$ As $K(m')\geq 0$ and $(1+|{\mathbf{x}}_{1}|)^{-1}\leq 1$, this implies that $$1+ |{\mathbf{x}}_{1}| \leq 1+K(m')+|{\mathbf{x}}_{m'+1}|\leq (1+K(m')) (1+|{\mathbf{x}}_{m'+1}|).$$ That is, $$(1+|{\mathbf{x}}_{m'+1}|)^{-1}\leq (1+K(m')) (1+|{\mathbf{x}}_{1}|)^{-1}.$$ Hence, $$|{\mathbf{x}}_1-{\mathbf{x}}_{m'+2}|\leq (K(m')+\hat{K}(1+K(m'))) (1+|{\mathbf{x}}_1|)^{-1}$$ and, by the induction hypothesis, $K(m')=(\hat{K}+1)^{m'}-1$ and so setting $$K(m'+1)=K(m')+\hat{K}(1+K(m'))=(\hat{K}+1)^{m'+1}-1$$ verifies that \[StupidClaim\] holds for $m=m'+1$ and finishes the proof.
We next observe that the proof of the main result of [@BernsteinWang2 Theorem 0.1] actually allows us to make the following more refined conclusion.
\[MainACSProp\] Fix $n\geq 2$. If $\Sigma\in \mathcal{ACS}_n[\lambda_{n-1}]$, then there is a homeomorphic involution $\phi:\mathbb{S}^{n}\to \mathbb{S}^{n}$ which fixes $\mathcal{L} (\Sigma)$, the link of the asymptotic cone, $\mathcal{C} (\Sigma)$, of $\Sigma$, and swaps the two components of $\mathbb{S}^n\backslash \mathcal{L}(\Sigma)$.
By [@BernsteinWang2 Theorem 0.1], the link $\mathcal{L}(\Sigma)$ is connected and separates $\mathbb{S}^n$ into two components $\Omega_+$ and $\Omega_-$. In particular, $\mathcal{L}(\Sigma)=\partial \bar{\Omega}_+=\partial \bar{\Omega}_-$. In order to construct $\phi$, it is enough to show the existence of a homeomorphism ${\psi}: \bar{\Omega}_+\to \bar{\Omega}_-$ so that ${\psi}|_{\mathcal{L}(\Sigma)} : \mathcal{L}(\Sigma)\to \mathcal{L}(\Sigma)$ is the identity map. Indeed, if such a ${\psi}$ exists, one defines $\phi$ by $$\phi(p)=\left\{ \begin{array}{ll} {\psi}(p) & p\in \bar{\Omega}_+ \\ {\psi}^{-1}(p) & p \in \Omega_- \end{array}\right.$$
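For completeness, we record why such a ${\psi}$ suffices. Since a homeomorphism between compact manifolds with boundary carries interior to interior, ${\psi}(\Omega_+)=\Omega_-$, and so $$\phi(\phi(p))={\psi}^{-1}({\psi}(p))=p \ \mbox{ for } p\in\Omega_+, \qquad \phi(\phi(p))={\psi}({\psi}^{-1}(p))=p \ \mbox{ for } p\in\Omega_-, \qquad \phi|_{\mathcal{L}(\Sigma)}=\mathrm{id}.$$ That is, $\phi$ is an involution which fixes $\mathcal{L}(\Sigma)$ and swaps the two components of $\mathbb{S}^n\backslash\mathcal{L}(\Sigma)$. Moreover, $\phi$ is continuous, as it restricts to the continuous maps ${\psi}$ and ${\psi}^{-1}$ on the closed sets $\bar{\Omega}_+$ and $\bar{\Omega}_-$, which agree on $\mathcal{L}(\Sigma)$; being its own inverse, $\phi$ is then a homeomorphism.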
To explain the construction of ${\psi}$, let us first summarize the main objects used in the proof of [@BernsteinWang2 Theorem 0.1]. First, recall that it is shown there that associated to $\Sigma$ are two smooth mean curvature flows ${\left\{\Gamma_t^{\pm }\right\}}_{t\in[-1,0]}$ with $\Gamma_{-1}^{+}$ the normal exponential graph over $\Sigma$ of a small positive multiple of the lowest eigenfunction of the self-shrinker stability operator of $\Sigma$ (normalized to be positive) and $\Gamma^-_{-1}$ the normal exponential graph over $\Sigma$ of a small negative multiple of this function. In particular, by choosing the multiple small enough, one can ensure both that $\Gamma^+_{-1}$ is the exponential normal graph of some function on $\Gamma^-_{-1}$ and that $\Gamma^-_{-1}$ is the exponential normal graph of some function on $\Gamma^+_{-1}$. Furthermore, up to relabeling, each $\Gamma^{\pm}=\Gamma^{\pm}_0$ is diffeomorphic to $\Omega_\pm$, the components of $\mathbb{S}^n\backslash \mathcal{L}(\Sigma)$. Moreover, these diffeomorphisms, which we denote by $\Pi^\pm$, are given by restricting the map $$\Pi(p)=\frac{{\mathbf{x}}(p)}{|{\mathbf{x}}(p)|}$$ to $\Gamma^\pm$.
We next use the flow ${\left\{\Gamma^\pm_{t}\right\}}_{t\in[-1,0]}$ to construct a natural diffeomorphism ${\Psi}: \Gamma^+\to \Gamma^-$ which has the property that there is a constant $K>0$ so that $$\label{DistortionEst}\left|{\mathbf{x}}(p) -{\mathbf{x}}({\Psi}(p))\right|\leq \frac{K}{1+|{\mathbf{x}}(p)|}.$$ We do so iteratively. Specifically, by [@BernsteinWang2 Items (1) and (2) of Proposition 4.4 and Proposition 4.5] there is a constant $\tilde{C}_0>0$ so that $$\label{GammaCurvEst}
\sup_{t\in [-1,0]} \sup_{\Gamma_t^\pm} \left( |A_{\Gamma^\pm_t}|+|\nabla_{\Gamma^\pm_t} A_{\Gamma^\pm_t}|\right)<\tilde{C}_0.$$ This, together with [@BernsteinWang2 Item (3) of Proposition 4.4], implies that there is a $\rho>0$ so that for each $t\in [-1,0]$, $\mathcal{T}_{\rho}(\Gamma_t^\pm)$ is a regular tubular neighborhood of $\Gamma_t^\pm$. It follows from this and \[GammaCurvEst\] that there is a $\delta>0$ so that if $t_1, t_2\in [-1,0]$ and $|t_1-t_2|<\delta$, then $\Gamma^\pm_{t_1}$ is a normal exponential graph over $\Gamma^\pm_{t_2}$ and vice versa. As such, for all $t_1, t_2\in [-1,0]$ with $|t_1-t_2|<\delta$, there is a diffeomorphism $${\Psi}^\pm_{t_2,t_1}: \Gamma_{t_1}^\pm \to \Gamma_{t_2}^\pm$$ defined by nearest point projection from $\Gamma_{t_1}^\pm$ to $\Gamma^{\pm}_{t_2}$. Pick $M\in \mathbb{N}$ so that $M\delta>1$ and choose $0=s_0>s_1>\ldots >s_M=-1$ so that $|s_i-s_{i+1}|<\delta$ and define a diffeomorphism $\Psi^-: \Gamma^-_{-1}\to \Gamma^-$ by $$\Psi^-= \Psi^-_{s_0, s_1}\circ \Psi^-_{s_1,s_2} \circ \cdots \circ \Psi^-_{s_{M-1}, s_{M}}.$$ Likewise, define a diffeomorphism $\Psi^+:\Gamma^+ \to \Gamma^+_{-1}$ by $$\Psi^+=\Psi^+_{s_{M}, s_{M-1}}\circ \Psi^+_{s_{M-1}, s_{M-2}}\circ \cdots \circ \Psi^+_{s_1,s_0}$$ and let $\Psi^{+,-}:\Gamma^+_{-1}\to \Gamma^-_{-1}$ be given by nearest point projection. By construction, this is also a diffeomorphism and so the map $$\Psi=\Psi^-\circ \Psi^{+,-}\circ \Psi^+$$ is a diffeomorphism $\Psi:\Gamma^+\to \Gamma^-$.
By construction, if $t_1,t_2\in [-1,0]$ and $|t_1-t_2|<\delta$, then for all $p\in \Gamma_{t_1}^\pm$, $$\label{SillyEst}
|{\mathbf{x}}(p)-{\mathbf{x}}(\Psi_{t_2, t_1}^\pm(p) )|<\rho.$$ Furthermore, [@BernsteinWang2 Item (1) of Proposition 4.4] implies that for $t\in [-1,0]$ each $\Gamma_t^\pm $ is smoothly asymptotic to $\mathcal{C}(\Sigma)$. In particular, there is an $R>0$ and functions $u_t^\pm$ on $\mathcal{C}(\Sigma)\backslash B_R$ whose normal exponential graphs over $\mathcal{C}(\Sigma)$ sit inside $\Gamma^\pm_t$ and contain $\Gamma^\pm_t\backslash B_{2R}$. Moreover, by [@BernsteinWang2 Item (2) of Proposition 4.2] and [@BernsteinWang2 Lemma 4.3] there is a constant $K^\prime>0$ so that for $p\in \mathcal{C}(\Sigma)\backslash B_R$, $$|u_t^\pm(p)|\leq K^\prime (1+|{\mathbf{x}}(p)|)^{-1}.$$
Hence, for any $t_1,t_2\in [-1,0]$, if $p \in \Gamma^\pm_{t_1}\backslash B_{2R}$, then there is a point $p'\in \mathcal{C}(\Sigma)\backslash B_R$ so that $$\label{DecayEst}
|{\mathbf{x}}(p)-{\mathbf{x}}(p')|\leq K^\prime (1+ |{\mathbf{x}}(p')|)^{-1}$$ and also a point $p''\in \Gamma^\pm_{t_2}$ so that $$|{\mathbf{x}}(p')-{\mathbf{x}}(p'')|\leq K^\prime (1+ |{\mathbf{x}}(p')|)^{-1}.$$ Hence, if $|t_1-t_2|<\delta$, then as $\Psi^\pm_{t_2,t_1}$ is given by nearest point projection, $$\begin{aligned}
|{\mathbf{x}}(p)-{\mathbf{x}}(\Psi^\pm_{t_2,t_1}(p))| &\leq |{\mathbf{x}}(p)-{\mathbf{x}}(p'')|\\
&\leq |{\mathbf{x}}(p)-{\mathbf{x}}(p')|+|{\mathbf{x}}(p')-{\mathbf{x}}(p'')|\\
&\leq 2 K^\prime (1+|{\mathbf{x}}(p')|)^{-1}.\end{aligned}$$ As $K^\prime>0$ and $1+|{\mathbf{x}}(p')|\geq 1$, \[DecayEst\] implies that $$(1+|{\mathbf{x}}(p')|)^{-1}\leq (1+K^\prime)(1+|{\mathbf{x}}(p)|)^{-1},$$ and so $$|{\mathbf{x}}(p)-{\mathbf{x}}(\Psi^\pm_{t_2,t_1}(p))|\leq 2 K^\prime (1+K^\prime)(1+|{\mathbf{x}}(p)|)^{-1}.$$ Combining this with \[SillyEst\] one obtains that for all $p \in \Gamma^\pm_{t_1}$, $$|{\mathbf{x}}(p)-{\mathbf{x}}(\Psi^\pm_{t_2,t_1}(p))|\leq \hat{K}(1+|{\mathbf{x}}(p)|)^{-1}$$ where $\hat{K}=2K^\prime (1+K^\prime)+\rho(1+2R)$. By the same arguments, for all $p\in \Gamma^{+}_{-1}$, $$|{\mathbf{x}}(p)-{\mathbf{x}}(\Psi^{+,-}(p))|\leq \hat{K}(1+|{\mathbf{x}}(p)|)^{-1}.$$ Hence, it follows from Lemma \[StupidLem\] that $$|{\mathbf{x}}(p)-{\mathbf{x}}(\Psi(p))|\leq K(1+|{\mathbf{x}}(p)|)^{-1}$$ where $K=(1+\hat{K})^{2M+2}-1$.
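To spell out this final application of Lemma \[StupidLem\]: $\Psi$ is a composition of $2M+1$ maps, namely the $\Psi^+_{s_{i+1},s_i}$, the map $\Psi^{+,-}$ and the $\Psi^-_{s_i,s_{i+1}}$, each of which, by the estimates just established, displaces points by at most $\hat{K}(1+|{\mathbf{x}}(\cdot)|)^{-1}$. Applying the lemma to the resulting chain of $2M+2$ points starting at $p$ and ending at $\Psi(p)$ gives $$|{\mathbf{x}}(p)-{\mathbf{x}}(\Psi(p))|\leq \left((\hat{K}+1)^{2M+1}-1\right)(1+|{\mathbf{x}}(p)|)^{-1}\leq \left((\hat{K}+1)^{2M+2}-1\right)(1+|{\mathbf{x}}(p)|)^{-1}=K(1+|{\mathbf{x}}(p)|)^{-1},$$ which is the claimed estimate \[DistortionEst\].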
To complete the proof set $$\psi(p)=\left\{\begin{array}{cc} \Pi^-(\Psi((\Pi^+)^{-1}(p))) & p \in \Omega_+ \\ p & p\in \partial \Omega_+. \end{array} \right.$$ We claim that $\psi$ is a homeomorphism. First, note that, by [@BernsteinWang2 Item (3) of Proposition 4.4], there is an $R>1$ and $\tilde{C}_1>1$ so that if $p\in \Gamma^\pm \backslash B_{R}$, then $$\tilde{C}_1^{-1} |{\mathbf{x}}(p)|^{2\mu} < {\mathrm{dist}}_{{\mathbb R}^{n+1}}(p, \mathcal{C}(\Sigma)) < \tilde{C}_1 |{\mathbf{x}}(p)|^{-1}$$ where $\mu<-1$. Hence, $$\label{TwoSidedEst}
C^{-1} |{\mathbf{x}}(p)|^{2\mu-1}<{\mathrm{dist}}_{\mathbb{S}^n}(\Pi^\pm (p), \mathcal{L}(\Sigma)) < C |{\mathbf{x}}(p)|^{-2}$$ where $C\geq\tilde{C}_1$. Hence, for $q\in \Omega_+$, with ${\mathrm{dist}}_{\mathbb{S}^n}(q, \mathcal{L}(\Sigma))$ sufficiently small, if we set $q'=(\Pi^+)^{-1}(q)\in \Gamma^+$, then $$|{\mathbf{x}}(q')|\geq C^{\frac{1}{2\mu-1}} {\mathrm{dist}}_{\mathbb{S}^n}(q, \mathcal{L}(\Sigma))^{\frac{1}{2\mu-1}}.$$ By \[DistortionEst\], $$\begin{aligned}
| |{\mathbf{x}}(\Psi(q'))|-|{\mathbf{x}}(q')|| &\leq
|{\mathbf{x}}(\Psi(q'))-{\mathbf{x}}(q')|\\
&\leq K C^{-\frac{1}{2\mu-1}} {\mathrm{dist}}_{\mathbb{S}^n}(q, \mathcal{L}(\Sigma))^{-\frac{1}{2\mu-1}}.\end{aligned}$$ Hence, for ${\mathrm{dist}}_{\mathbb{S}^n}(q, \mathcal{L}(\Sigma))$ sufficiently small, $${\mathrm{dist}}_{\mathbb{S}^n}(q, \psi(q))\leq 4K C^{-\frac{1}{2\mu-1}} {\mathrm{dist}}_{\mathbb{S}^n}(q, \mathcal{L}(\Sigma))^{-\frac{1}{2\mu-1}} |{\mathbf{x}}(q')|^{-1}.$$ Using \[TwoSidedEst\] again gives $${\mathrm{dist}}_{\mathbb{S}^n}(q, \psi(q))\leq 4K C^{-\frac{2}{2\mu-1}} {\mathrm{dist}}_{\mathbb{S}^n}(q, \mathcal{L}(\Sigma))^{-\frac{2}{2\mu-1}}.$$ As $\mu<-1$, for any $q_0\in \mathcal{L}(\Sigma)$, the right hand side goes to $0$ as $q\to q_0$. By the triangle inequality $${\mathrm{dist}}_{\mathbb{S}^n}(q_0, \psi(q))\leq {\mathrm{dist}}_{\mathbb{S}^n}(q, \psi(q))+{\mathrm{dist}}_{\mathbb{S}^n}(q, q_0)$$ and so the right hand side goes to $0$ as $q\to q_0$. Hence, $\psi$ is continuous. Finally, as $\bar{\Omega}_+$ is compact and $\bar{\Omega}_-$ is Hausdorff, $\psi$ is a closed map and hence, as $\psi$ is a bijection, it is a homeomorphism.
Theorem \[MainACSThm\] is a standard topological consequence of Proposition \[MainACSProp\].
(of Theorem \[MainACSThm\])
Observe that as $\mathcal{L}(\Sigma)$ is connected, by [@BernsteinWang2 Theorem 0.1], there are exactly two components of $\mathbb{S}^n\backslash \mathcal{L}(\Sigma)$, which we denote by $U^\pm$. Let $\phi:\mathbb{S}^n\to \mathbb{S}^n$ be the homeomorphism given by Proposition \[MainACSProp\] so that $\phi(U^-)=U^+$. Pick a regular tubular neighborhood $T\subset \mathbb{S}^n$ of $\mathcal{L}(\Sigma)$. We let $V^\pm =U^\pm \cup T$ and observe that $\bar{U}^\pm$, the closure of $U^\pm$, is a deformation retract of $V^\pm$ and that $\mathcal{L}(\Sigma)$ is a deformation retract of $T=V^-\cap V^+$.
As $\bar{U}^\pm$ is a deformation retract of $V^\pm$ and $\mathcal{L}(\Sigma)$ is a deformation retract of $T$, the natural inclusion maps induce isomorphisms between the reduced homology groups $\tilde{H}_k(\bar{U}^\pm)$ and $\tilde{H}_k(V^\pm)$ and between $\tilde{H}_k(\mathcal{L}(\Sigma))$ and $\tilde{H}_k(T)$. As such, there is a natural map $\Phi: \tilde{H}_k(V^-)\to \tilde{H}_k(V^+)$ defined by the following diagram, $$\begin{tikzcd}
{} & \tilde{H}_k(T) \arrow{r}{j^-_*} \arrow[pos=0.3]{dr}{j^+_*} & \tilde{H}_k(V^-) \arrow{d}{\Phi}\\
\tilde{H}_k(\mathcal{L}(\Sigma)) \arrow[leftrightarrow]{ur}{\simeq}\arrow{r}{i^-_*} \arrow{dr}{i^+_*} & \tilde{H}_k(\bar{U}^-) \arrow[leftrightarrow, crossing over, pos=0.3]{ur}{\simeq} \arrow{d}{\phi_*} & \tilde{H}_k(V^+) \\
{} & \tilde{H}_k(\bar{U}^+) \arrow[leftrightarrow]{ur}{\simeq}&
\end{tikzcd}$$ where $i^\pm: \mathcal{L}(\Sigma)\to \bar{U}^\pm$ and $j^\pm:T\to V^\pm$ denote the natural inclusion maps and we used that $\phi\circ i^-=i^+$. As $\phi$ is a homeomorphism, both $\phi_*$ and $\Phi$ are isomorphisms. This implies that the map $$J=(j^-_*, -j^+_*):\tilde{H}_k(T) \to \tilde{H}_k(V^-)\oplus \tilde{H}_k(V^+)$$ is surjective if and only if $\tilde{H}_k(V^-)=\tilde{H}_k(V^+)={\left\{0\right\}}$. Indeed, if the map is surjective, then for any element $\alpha\in \tilde{H}_k(V^-)$ there is an element $\beta \in \tilde{H}_k(T)$ so that $J(\beta)=(\alpha,0)$. That is, $j_*^-(\beta)=\alpha$ and $j_*^+(\beta)=0$. Hence, $0=j_*^+(\beta)=\Phi(j_*^-(\beta))=\Phi(\alpha)$. In other words, as $\Phi$ is an isomorphism, $\alpha\in \ker(\Phi)={\left\{0\right\}}$ and so $\tilde{H}_k(V^-)={\left\{0\right\}}$. The proof that $\tilde{H}_k(V^+)={\left\{0\right\}}$ is the same. The converse is immediate.
We next recall several standard facts about the reduced homology of manifolds and of manifolds with boundary. First of all, as $\mathcal{L}(\Sigma)$ is a connected, oriented $(n-1)$-dimensional manifold, $\tilde{H}_k(\mathcal{L}(\Sigma))=\tilde{H}_k(T)={\left\{0\right\}}$ for $k=0$ and $k\geq n$ and $\tilde{H}_{n-1}(\mathcal{L}(\Sigma))=\tilde{H}_{n-1}(T)=\mathbb{Z}$. Likewise, as the $\bar{U}^\pm$ are connected, oriented $n$-manifolds with boundary, $\tilde{H}_k(\bar{U}^\pm)=\tilde{H}_k(V^\pm)=0$ for $k=0$ and $k\geq n$.
In order to compute the remaining reduced homology groups, we use the Mayer-Vietoris long exact sequence for the reduced homology of $(V^-,V^+,\mathbb{S}^n)$. This gives the following exact sequences for $k\geq 0$ $$\label{ExactSeq}
\begin{tikzcd}
\tilde{H}_{k+1}(\mathbb{S}^n)\arrow{r} & \tilde{H}_k(T) \arrow{r}{J} & \tilde{H}_k(V^-)\oplus \tilde{H}_k(V^+)
\arrow{r} & \tilde{H}_k(\mathbb{S}^n).
\end{tikzcd}$$ As $\tilde{H}_k(\mathbb{S}^n)=\mathbb{Z}$ for $k=n$ and is otherwise ${\left\{0\right\}}$, \[ExactSeq\] implies that $J$ is surjective for $0\leq k\leq n-1$. Hence, for these $k$, $\tilde{H}_k(\bar{U}^\pm)=\tilde{H}_k(V^\pm)={\left\{0\right\}}$ and so the $U^\pm$ are homology $n$-balls as claimed. As such, \[ExactSeq\] further implies that $\tilde{H}_k(\mathcal{L}(\Sigma))=\tilde{H}_k(T)={\left\{0\right\}}$ for $0\leq k \leq n-2$, completing the verification that $\mathcal{L}(\Sigma)$ is a homology $(n-1)$-sphere.
To conclude the proof, it is enough, by the Hurewicz theorem, to show that $\pi_1(U^\pm)=\pi_1(\bar{U}^\pm)={\left\{1\right\}}$. To that end first observe that the maps $F^\pm: \mathbb{S}^n\to \bar{U}^\pm$ defined by $$F^\pm(p)=\left\{ \begin{array}{ll} p & p\in \bar{U}^\pm \\ {\phi}(p) & p \in U^\mp \end{array} \right.$$ are continuous. Now suppose $\gamma$ is a closed loop in $\bar{U}^\pm$. As $\pi_1(\mathbb{S}^n)={\left\{1\right\}}$, there is a homotopy $H: \mathbb{S}^1\times[0,1]\to \mathbb{S}^n$ taking $\gamma$ to a point. Clearly, $F^\pm\circ H: \mathbb{S}^1\times[0,1]\to \bar{U}^\pm$ is also a homotopy taking $\gamma$ to a point. That is, $\pi_1(\bar{U}^\pm)={\left\{1\right\}}$.
(of Corollary \[ACSDim3Cor\])
By Theorem \[MainACSThm\], $\mathcal{L}(\Sigma)$ is a homology $2$-sphere. By the classification of surfaces this means that $\mathcal{L}(\Sigma)$ is diffeomorphic to $\mathbb{S}^2$ and so Alexander’s Theorem [@Alexander] implies that both components of $\mathbb{S}^3\backslash \mathcal{L}(\Sigma)$ are diffeomorphic to ${\mathbb R}^3$, proving the claim.
Surgery Procedure {#SurgerySec}
=================
We prove Theorem \[MainTopThm\] using Corollary \[ACSDim3Cor\] and Theorem \[CondSurgThm\].
(of Theorem \[MainTopThm\])
We first observe that ${(\star_{3,\lambda_2})}$ holds by [@MarquesNeves Theorem B] and that ${({\star\star}_{3,\lambda_2})}$ holds by [@BernsteinWang2 Corollary 1.2]. If $\Sigma$ is (after a translation and dilation) a self-shrinker, then, by [@CIMW Theorem 0.7], $\Sigma$ is diffeomorphic to $\mathbb{S}^3$, proving the theorem. Otherwise, flow $\Sigma$ for a small amount of time by the mean curvature flow (using short time existence of the flow for smooth closed initial hypersurfaces) to obtain a hypersurface, $\Sigma^\prime$, diffeomorphic to $\Sigma$ and, by Huisken’s monotonicity formula, with $\lambda[\Sigma']<\lambda[\Sigma]$. On the one hand, if the level set flow of $\Sigma'$ is non-fattening, then we set $\Sigma_0=\Sigma'$. On the other hand, if the level set flow of $\Sigma'$ is fattening, then we can take $\Sigma_0$ to be a small normal graph over $\Sigma'$ so that $\lambda[\Sigma_0]<\lambda[\Sigma]$ and, because the non-fattening condition is generic, the level set flow of $\Sigma_0$ is non-fattening.
Hence, the hypotheses of Section \[SingularitySec\] hold and we may apply Theorem \[CondSurgThm\] unconditionally to obtain a family of hypersurfaces $\Sigma^1, \ldots, \Sigma^N$ in ${\mathbb R}^4$. As $\Sigma^N$ is diffeomorphic to $\mathbb{S}^3$, if $N=1$, then there is nothing more to show and so we may assume that $N>1$. We will now show that $\Sigma^{N-1}$ is diffeomorphic to $\Sigma^N$ and hence to $\mathbb{S}^3$.
Let us denote by $V= \cup_{j=1}^{m(N-1)} V_j^{N-1}$ and by $\hat{\Sigma}^N=\Sigma^N\backslash V$ and let $U= \cup_{j=1}^{m(N-1)} U_j^{N-1}$ and $\hat{\Sigma}^{N-1}=\Sigma^{N-1}\backslash U$ so that $\hat{\Phi}^{N-1}: \hat{\Sigma}^N \to \hat{\Sigma}^{N-1}$ is the orientation preserving diffeomorphism given by Theorem \[CondSurgThm\]. By Corollary \[ACSDim3Cor\], each component of $\bar{U}$ is diffeomorphic to a closed three-ball $\bar{B}^3$. Hence, each component of $\partial \hat{\Sigma}^{N-1}$ and $\partial \hat{\Sigma}^{N}$ is diffeomorphic to $\mathbb{S}^2$. That is, for $1\leq j \leq m(N-1)$, $\partial V_j^{N-1}$ is diffeomorphic to $\mathbb{S}^2$ and so, as $\Sigma^N$ is diffeomorphic to the three-sphere, Alexander’s theorem [@Alexander] implies that each $\bar{V}_j^{N-1}$ is diffeomorphic to the closed three-ball. Hence, there are orientation preserving diffeomorphisms $\Psi_j^{N-1}: \bar{V}_j^{N-1}\to \bar{U}_j^{N-1}$.
Denote by $\hat{\phi}^{N-1}_j: \partial V_j^{N-1} \to \partial U_j^{N-1}$ the diffeomorphisms given by restricting $\hat{\Phi}^{N-1}$ and, likewise, let $\psi^{N-1}_j: \partial V_j^{N-1}\to \partial U_j^{N-1}$ denote the diffeomorphisms given by restricting $\Psi^{N-1}_j$. Observe that the orientation of $\hat{\Sigma}^{N}$ and the orientation on $\bar{V}$ induce opposite orientations on $\partial \bar{V}$. Likewise, the orientation of $\hat{\Sigma}^{N-1}$ and that of $\bar{U}$ induce opposite orientations on $\partial \bar{U}$. By construction, the $\hat{\phi}^{N-1}_j$ preserve the orientations induced from $\hat{\Sigma}^N$ and $\hat{\Sigma}^{N-1}$. Hence, as the orientations induced by $\bar{V}_j^{N-1}$ and $\bar{U}_j^{N-1}$ are opposite to those induced by $\hat{\Sigma}^N$ and $\hat{\Sigma}^{N-1}$, the $\hat{\phi}^{N-1}_j$ also preserve these orientations. The same is true of the $\psi^{N-1}_j$. As such, $\xi_j^{N-1}=(\psi_{j}^{N-1})^{-1}\circ \hat{\phi}_j^{N-1}\in \mathrm{Diff}_+(\partial V_j^{N-1})$, where $\mathrm{Diff}_+(M)$ is the space of orientation preserving self-diffeomorphisms of an oriented manifold $M$ (here we may use the orientation on $\partial V_j^{N-1}$ induced by either $\bar{V}$ or $\hat{\Sigma}^N$). By [@Munkres] – see also [@Smale] and [@Cerf] – the space $\mathrm{Diff}_+(\mathbb{S}^2)$ is path-connected and so any element of $\mathrm{Diff}_+(\mathbb{S}^2)$ extends to an element of $\mathrm{Diff}_+(\bar{B}^3)$. That is, there are diffeomorphisms $\Xi_j^{N-1}\in \mathrm{Diff}_+( \bar{V}_j^{N-1})$ that restrict to $\xi_{j}^{N-1}$ on $\partial V_j^{N-1}$. Thus, the maps $\hat{\Psi}_j^{N-1}=\Psi_j^{N-1}\circ \Xi_{j}^{N-1}: \bar{V}_j^{N-1} \to \bar{U}_j^{N-1}$ are diffeomorphisms that agree with $\hat{\Phi}^{N-1}$ on the common boundary.
Define $\Phi^{N-1}:\Sigma^N \to \Sigma^{N-1}$ by $$\Phi^{N-1}(p)=\left\{ \begin{array}{cc} \hat{\Phi}^{N-1}(p) & p \in \hat{\Sigma}^N \\ \hat{\Psi}_j^{N-1}(p) & p\in V_j^{N-1}. \end{array} \right.$$ By construction, this map is a homeomorphism. However, it is a standard procedure to construct a diffeomorphism between $\Sigma^N$ and $\Sigma^{N-1}$ by smoothing this map out (see for instance [@Hirsch Theorem 8.1.9]). Hence, $\Sigma^{N-1}$ is diffeomorphic to $\mathbb{S}^3$ and iterating this argument shows that $\Sigma^1=\Sigma_0$, and hence $\Sigma$, is diffeomorphic to $\mathbb{S}^3$ as claimed.
Theorem \[MainCondThm\] follows from Theorem \[MainACSThm\], Theorem \[CondSurgThm\] and the Mayer-Vietoris long exact sequence for reduced homology. For completeness, we include a proof of the following standard topological fact which we will need to use.
\[HomBallLem\] Let $M$ be a closed $n$-dimensional manifold and $\Sigma \subset M$ a closed hypersurface. If $M$ is a homology $n$-sphere and $\Sigma$ is a homology $(n-1)$-sphere, then each component of $M\backslash \Sigma$ is a homology $n$-ball.
Our hypotheses ensure that both $M$ and $\Sigma$ are connected and oriented. Hence, $\Sigma$ is two-sided and there is an open $U^+\subset M$ so that $\Sigma =\partial U^+$. Let $U^-=M\backslash \bar{U}^+$. To prove the lemma we will need to compute the Mayer-Vietoris long exact sequence for $(\bar{U}^-, \bar{U}^+, M)$. Strictly speaking, we should “thicken" $\bar{U}^+$ and $\bar{U}^-$ up with a regular tubular neighborhood of $\Sigma=\partial \bar{U}^\pm$ as in the proof of Theorem \[MainACSThm\], but we leave the details of this to the reader.
The Mayer-Vietoris long exact sequence and the fact that $M$ is a homology $n$-sphere and $\Sigma$ is a homology $(n-1)$-sphere gives the sequences $$\begin{tikzcd}
\tilde{H}_{k+1}(M)\arrow{r}{\partial} \arrow[leftrightarrow]{d}{=} & \tilde{H}_k(\Sigma) \arrow{r} \arrow[leftrightarrow]{d}{=}& \tilde{H}_k(\bar{U}^-)\oplus \tilde{H}_k(\bar{U}^+)
\arrow{r} \arrow[leftrightarrow]{d}{=} & \tilde{H}_k(M) \arrow[leftrightarrow]{d}{=}\\
\tilde{H}_{k+1}(\mathbb{S}^n) \arrow{r}{\partial} & \tilde{H}_k(\mathbb{S}^{n-1}) \arrow{r} & \tilde{H}_k(\bar{U}^-)\oplus \tilde{H}_k(\bar{U}^+)
\arrow{r} & \tilde{H}_k(\mathbb{S}^{n}).
\end{tikzcd}$$ For $0\leq k\leq n-2$ and $k\geq n+1$ this immediately gives that $\tilde{H}_k(\bar{U}^\pm)={\left\{0\right\}}$. When $k={n-1}$, the map $\partial$ is necessarily given by $[M]\mapsto [\Sigma]$ where $[M] $ is the fundamental class of $M$ and $[\Sigma]$ is the fundamental class of $\Sigma$. In particular, this map is an isomorphism and so we conclude that $\tilde{H}_{n-1}(\bar{U}^\pm)={\left\{0\right\}}$. For the same reason, $\tilde{H}_n(\bar{U}^\pm)={\left\{0\right\}}$, which verifies the claim.
(of Theorem \[MainCondThm\])
Arguing as in the first paragraph of the proof of Theorem \[MainTopThm\], we obtain the hypersurfaces $\Sigma^1,\ldots, \Sigma^N$ given by Theorem \[CondSurgThm\]. As $\Sigma^N$ is diffeomorphic to $\mathbb{S}^n$, it is a homology $n$-sphere. In particular, if $N=1$, then there is nothing further to show. As such, we may assume that $N>1$.
Let us show that $\Sigma^{N-1}$ is a homology $n$-sphere. First, set $V= \cup_{j=1}^{m(N-1)} V_j^{N-1}$ and $\hat{\Sigma}^N=\Sigma^N\backslash V$ and let $U= \cup_{j=1}^{m(N-1)} U_j^{N-1}$ and $\hat{\Sigma}^{N-1}=\Sigma^{N-1}\backslash U$. Next observe that, as $\partial U_j^{N-1}$ is diffeomorphic to $\mathcal{L}(\Gamma_j^{N-1})$ for some $\Gamma_j^{N-1}\in \mathcal{ACS}_n^*(\Lambda)$, Theorem \[MainACSThm\] implies that each component of $\partial \hat{\Sigma}^{N-1}$ is a homology $(n-1)$-sphere. Hence, as $\partial U=\partial \hat{\Sigma}^{N-1}$ is diffeomorphic to $\partial \hat{\Sigma}^N=\partial V$, we see that each component of $\partial V=\partial \hat{\Sigma}^N$ is a homology $(n-1)$-sphere and so Lemma \[HomBallLem\] implies that each component of $\bar{V}$ is a homology $n$-ball.
We may now use the Mayer-Vietoris long exact sequence to compute that $\tilde{H}_k(\hat{\Sigma}^N)={\left\{0\right\}}$ for $k\neq n-1$ and $\tilde{H}_{n-1}(\hat{\Sigma}^N)=\mathbb{Z}^{m(N-1)-1}$. To see this, consider the Mayer-Vietoris long exact sequence of $(\bar{V}, \hat{\Sigma}^N, \Sigma^N)$. This long exact sequence and the fact that $\bar{V}$ is the union of homology $n$-balls gives, for $k>0$, the exact sequences $$\begin{tikzcd}
\tilde{H}_{k+1}(\Sigma^N)\arrow{r}{\partial} \arrow[leftrightarrow]{d}{=} & \tilde{H}_k(\partial V) \arrow{r} \arrow[leftrightarrow]{d}{=}& \tilde{H}_k(\bar{V})\oplus \tilde{H}_k(\hat{\Sigma}^N)
\arrow{r}\arrow[leftrightarrow]{d}{=} & \tilde{H}_k(\Sigma^N)\arrow[leftrightarrow]{d}{=}\\
\tilde{H}_{k+1}(\mathbb{S}^n)\arrow{r}{\partial} & \bigoplus\limits_{j=1}^{m(N-1)}\tilde{H}_k(\mathbb{S}^{n-1}) \arrow{r} & \tilde{H}_k(\hat{\Sigma}^N)
\arrow{r} & \tilde{H}_k(\mathbb{S}^n).
\end{tikzcd}$$ Hence, for $1\leq k\leq n-2$ and $k\geq n+1$, $\tilde{H}_k(\hat{\Sigma}^N)={\left\{0\right\}}$. When $k=n-1$, the map $\partial$ is generated by $ [\Sigma^N]\mapsto([\partial V_{1}^{N-1}], \ldots, [\partial V_{m(N-1)}^{N-1}])$ where $[\Sigma^N]$ is the fundamental class of $\Sigma^N$ and $[\partial V_{j}^{N-1}]$ is the fundamental class of $\partial V_{j}^{N-1}$. It follows that $ \tilde{H}_{n-1}(\hat{\Sigma}^N)=\mathbb{Z}^{m(N-1)-1}$ and, as this map is injective, that $\tilde{H}_{n}(\hat{\Sigma}^N)={\left\{0\right\}}$. Finally, as $\hat{\Sigma}^N$ is connected, $\tilde{H}_0(\hat{\Sigma}^N)={\left\{0\right\}}$, which completes the computation.
By Theorem \[CondSurgThm\], $\hat{\Sigma}^N$ is diffeomorphic to $\hat{\Sigma}^{N-1}$ and so $\tilde{H}_k(\hat{\Sigma}^{N-1})=0$ for $k\neq n-1$ and $\tilde{H}_{n-1}(\hat{\Sigma}^{N-1})=\mathbb{Z}^{m(N-1)-1}$. Furthermore, Theorem \[MainACSThm\] implies that each component of $\bar{U}$ is contractible. Hence, applying the Mayer-Vietoris long exact sequence to $(\hat{\Sigma}^{N-1}, \bar{U}, \Sigma^{N-1})$ gives, for $k>0$, $$\begin{tikzcd}[column sep=12pt]
\tilde{H}_k(\partial \bar{U}) \arrow{r} \arrow[leftrightarrow]{d}{=} & \tilde{H}_k(\bar{U})\oplus \tilde{H}_k(\hat{\Sigma}^{N-1})
\arrow{r} \arrow[leftrightarrow]{d}{=}& \tilde{H}_k(\Sigma^{N-1}) \arrow{r}\arrow[leftrightarrow]{d}{=} &\tilde{H}_{k-1}(\partial \bar{U})\arrow[leftrightarrow]{d}{=}\\
\bigoplus\limits_{j=1}^{m(N-1)}\tilde{H}_k(\mathbb{S}^{n-1}) \arrow{r} & \tilde{H}_k(\hat{\Sigma}^{N-1}) \arrow{r} & \tilde{H}_k(\Sigma^{N-1})\arrow{r} & \bigoplus\limits_{j=1}^{m(N-1)} \tilde{H}_{k-1} (\mathbb{S}^{n-1}).
\end{tikzcd}$$ In particular, for $1\leq k\leq n-2$ and $k\geq n+1$, we obtain that $\tilde{H}_k(\Sigma^{N-1})={\left\{0\right\}}$. The Mayer-Vietoris long exact sequence further gives the exact sequences $$\begin{tikzcd}
\tilde{H}_{n-1}(\partial \bar{U}) \arrow{r}{\delta} \arrow[leftrightarrow]{d}{=}& \tilde{H}_{n-1}(\bar{U} )\oplus \tilde{H}_{n-1}(\hat{\Sigma}^{N-1}) \arrow{r} \arrow[leftrightarrow]{d}{=} & \tilde{H}_{n-1}(\Sigma^{N-1}) \arrow{r} \arrow[leftrightarrow]{d}{=} &\tilde{H}_{n-2}(\partial \bar{U}) \arrow[leftrightarrow]{d}{=}\\
\mathbb{Z}^{m(N-1)} \arrow{r}{\delta} & \mathbb{Z}^{m(N-1)-1}
\arrow{r} & \tilde{H}_{n-1}(\Sigma^{N-1}) \arrow{r} &{\left\{0\right\}}.
\end{tikzcd}$$ Here $\delta$ is given by $(l_1, \ldots, l_{m(N-1)})\mapsto (l_1-l_{m(N-1)}, \ldots, l_{m(N-1)-1}-l_{m(N-1)})$. As $\delta$ is surjective, it follows that $\tilde{H}_{n-1}(\Sigma^{N-1})={\left\{0\right\}}$. Finally, as $\Sigma^{N-1}$ is an oriented, connected $n$-dimensional manifold $\tilde{H}_n(\Sigma^{N-1})=\mathbb{Z}$ and $\tilde{H}_0(\Sigma^{N-1})={\left\{0\right\}}$. Hence, $\Sigma^{N-1}$ is a homology $n$-sphere.
As our argument only used that ${\Sigma}^N$ was a homology $n$-sphere, we may repeat it to see that each of the ${\Sigma}^i$ is a homology $n$-sphere and so conclude that $\Sigma$ is one as well.
Fix an open subset $U\subset {\mathbb R}^{n+1}$. A *hypersurface in $U$*, $\Sigma$, is a proper, codimension-one submanifold of $U$. A *smooth mean curvature flow in $U$*, $S$, is a collection of hypersurfaces in $U$, ${\left\{\Sigma_t\right\}}_{t\in I}$, $I$ an interval, so that:
1. For all $t_0\in I$ and $p_0\in \Sigma_{t_0}$ there is a $r_0=r_0(p_0,t_0)$ and an interval $I_0=I_0(p_0,t_0)$ with $(p_0,t_0)\in B_{r_0}^{n+1}(p_0)\times I_0\subset U\times I$;
2. There is a smooth map $F: B_{1}^n\times I_0\to {\mathbb R}^{n+1}$ so that $F_t(p)=F(p,t): B_1^n\to {\mathbb R}^{n+1}$ is a parameterization of $B_{r_0}^{n+1}(p_0)\cap \Sigma_t$; and
3. $\left(\frac{\partial}{\partial t} F (p,t)\right)^\perp= \mathbf{H}_{\Sigma_t}(F(p,t)).$
It is convenient to consider the *space-time track* of $S$ (also denoted by $S$): $${S}={\left\{({\mathbf{x}}(p),t)\in\mathbb{R}^{n+1}\times \mathbb{R}: p\in\Sigma_t\right\}}\subset U\times I.$$ This is a smooth submanifold of space-time and is transverse to each constant time hyperplane ${\mathbb R}^{n+1}\times {\left\{t_0\right\}}$. Along the space-time track ${S}$, let $\frac{d}{dt}$ be the smooth vector field $$\left.\frac{d}{dt}\right|_{(p,t)}=\frac{\partial}{\partial t}+\mathbf{H}_{\Sigma_t}(p).$$ It is not hard to see that this vector field is tangent to ${S}$ and the position vector satisfies $$\label{MCFEqn}
\frac{d}{dt} {\mathbf{x}}(p,t)=\mathbf{H}_{\Sigma_t}(p).$$
It is a standard fact that if each $\Sigma_t$ in $S$ is closed, i.e. compact and without boundary, then there is a closed $n$-dimensional manifold $M$ and a smooth map $$F:M\times I \to {\mathbb R}^{n+1}$$ so that each $F_t=F(\cdot,t): M\to {\mathbb R}^{n+1}$ is a parameterization of $\Sigma_t$. As a consequence, each $\Sigma_t$ is diffeomorphic to $M$. We will need the following generalization of this last fact to manifolds with boundary.
\[BoundaryDiffProp\] Fix $R\in (0,\infty]$ and let ${\left\{\bar{B}_{2r_1}({\mathbf{x}}_1), \ldots, \bar{B}_{2r_m}({\mathbf{x}}_m)\right\}}$ be a collection of pairwise disjoint balls in $B_{R}\subset {\mathbb R}^{n+1}$ and let $U=B_{2R}\backslash \bigcup_{i=1}^m \bar{B}_{r_i}({\mathbf{x}}_i)$. If ${\left\{\Sigma_{t}\right\}}_{t\in (-\tau,\tau)}$ is a smooth mean curvature flow in $U$ with the property that
1. Each $\hat{\Sigma}_t=\Sigma_t\cap \left(\bar{B}_R\backslash
\bigcup_{i=1}^m B_{2 r_i}({\mathbf{x}}_i)\right)$ is compact,
2. For each $1\leq i \leq m$, $\partial B_{2 r_i} ({\mathbf{x}}_i)$ intersects $\Sigma_t$ transversally and non-trivially for all $t\in (-\tau,\tau)$,
3. If $R<\infty$, then $\partial B_R$ intersects $\Sigma_t$ transversally and non-trivially for all $t\in (-\tau,\tau)$,
then, for any $t_1,t_2\in (-\tau,\tau)$, $\hat{\Sigma}_{t_1}$ and $\hat{\Sigma}_{t_2}$ are diffeomorphic as compact manifolds with boundary.
For simplicity, we consider only $R=\infty$, $m=1$, ${\mathbf{x}}_1={\mathbf{0}}$ and $r_1=\frac{1}{2}$. It is straightforward to extend the argument to the general case. Let $S$ be the space-time track of the flow, so $S$ is a smooth hypersurface in $({\mathbb R}^{n+1}\backslash \bar{B}_{1/2})\times(-\tau,\tau)$. As each $\Sigma_t$ intersects $\partial B_1$ transversally, it is clear that $S$ meets $\partial B_1 \times (-\tau,\tau)$ transversally. In particular, $\tilde{S}=S\backslash \left({B}_1 \times (-\tau,\tau)\right)$ is a smooth hypersurface with boundary. Let $\tilde{B}=\partial \tilde{S}={\left\{(p,t):p\in \partial B_1\cap \Sigma_t, t\in (-\tau,\tau)\right\}}$.
Without loss of generality we may assume that the given $t_1,t_2$ satisfy $t_1<t_2$. Let $\hat{S}= \tilde{S}\cap \left({\mathbb R}^{n+1}\times [t_1,t_2]\right)$ and $\hat{B}=\tilde{B}\cap \left({\mathbb R}^{n+1}\times [t_1,t_2]\right)$. Observe that $\hat{S}$ is a compact manifold with corners and $\hat{B}$ is one of its boundary strata. The other two boundary strata are $\hat{\Sigma}_{t_1}\times {\left\{t_1\right\}}$ and $\hat{\Sigma}_{t_2}\times {\left\{t_2\right\}}$.
As $\partial B_1$ meets each $\Sigma_t$ transversally and $\hat{B}$ is compact, there is an $\epsilon>0$ so that, for $(p,t)\in \hat{B}$, $|{\mathbf{x}}^\top(p,t)|\geq 2\epsilon$, where ${\mathbf{x}}^\top$ is the tangential component of the position vector. By continuity there is a $\frac{1}{2}>\delta>0$ so that, for any $t\in [t_1,t_2]$ and $p\in\left(\bar{B}_{1+\delta} \backslash B_{1-\delta}\right)\cap \Sigma_t$, $|{\mathbf{x}}^\top(p,t)|\geq \epsilon$. Now let $\eta \in C^\infty_0({\mathbb R}^{n+1})$ be a smooth function with $0\leq \eta\leq 1$, $\eta=1$ on $\partial B_1$ and ${{\ensuremath{\mathop{\mathrm{spt}}} }}(\eta)\subset\bar{B}_{1+\delta} \backslash B_{1-\delta}$. For $(p,t)\in \hat{S}$ consider the vector $$\mathbf{V}(p,t)= - \eta({\mathbf{x}}(p,t)) \frac{ ({\mathbf{x}}(p,t)\cdot \mathbf{H}_{\Sigma_t}(p))}{|{\mathbf{x}}^\top(p,t)|^2} {\mathbf{x}}^\top(p,t)$$ and observe this gives a smooth vector field on $S$ that restricts to a smooth compactly supported vector field on each $\Sigma_t$. Let $\mathbf{W}=\frac{d}{dt}+\mathbf{V}$ which is a smooth vector field on ${S}$.
We claim that $\mathbf{W}$ is tangent to $\hat{B}$ and transverse to $\hat{\Sigma}_{t_1}\times{\left\{t_1\right\}}\cup \hat{\Sigma}_{t_2}\times {\left\{t_2\right\}}$. As $\mathbf{V}$ is tangent to $\Sigma_t\times{\left\{t\right\}}$, the transversality of $\mathbf{W}$ follows from the transversality of $\frac{d}{dt}$. This transversality follows immediately from the definition of $\frac{d}{dt}$. To see the tangency note that, by construction, $\hat{B}={\left\{(p,t)\in \hat{S}: |{\mathbf{x}}(p,t)|^2=1\right\}}$. For $(p,t)\in \hat{B}$, one computes $$\begin{aligned}
\mathbf{W}\cdot |{\mathbf{x}}(p,t)|^2& = 2 {\mathbf{x}}(p,t) \cdot \nabla_\mathbf{W} {\mathbf{x}}(p,t)\\
&=2{\mathbf{x}}(p,t) \cdot \mathbf{H}_{\Sigma_t}(p)-2\eta({\mathbf{x}}(p,t)) \frac{ ({\mathbf{x}}(p,t)\cdot \mathbf{H}_{\Sigma_t}(p))}{|{\mathbf{x}}^\top(p,t)|^2} {\mathbf{x}}(p,t)\cdot {\mathbf{x}}^\top(p,t) \\
&=0\end{aligned}$$ where the last equality used that $(p,t)\in \hat{B}$ so $\eta({\mathbf{x}}(p,t))=1$. This verifies the claim.
To conclude the proof observe that, as $\hat{S}$ is compact and $\mathbf{W}$ is tangent to $\hat{B}$ and transverse to $\hat{\Sigma}_{t_1}\times{\left\{t_1\right\}}\cup \hat{\Sigma}_{t_2}\times{\left\{t_2\right\}}$, standard ODE theory gives that for any $P_0=(p_0,t_0)\in \hat{S}$ the initial value problem $$\left\{ \begin{array}{c} \dot{\gamma}(s)=\mathbf{W}(\gamma(s)) \\ \gamma_{P_0}(0)=P_0 \end{array}
\right.$$ has a unique smooth solution $\gamma_{P_0}:[t_1-t_0,t_2-t_0]\to \hat{S}$ which depends smoothly on $P_0$. These solutions satisfy $t(\gamma_{P_0}(s))=s+t_0$ and so there is a diffeomorphism $\phi:\hat{\Sigma}_{t_1}\to \hat{\Sigma}_{t_2}$ given by $(\phi(p),t_2)=\gamma_{(p,t_1)}(t_2-t_1)$.
[99]{} J.W. Alexander, *On the subdivision of $3$-space by a polyhedron*, Proc. Nat. Acad. Sci. 10 (1924), no. 1, 6–8.
F. Almgren, *Some interior regularity theorems for minimal surfaces and an extension of Bernstein’s theorem*, Ann. of Math. (2) 84 (1966), 277–292.
J. Bernstein and L. Wang, *A sharp lower bound for the entropy of closed hypersurfaces up to dimension six*, to appear in Invent. Math..
J. Bernstein and L. Wang, *A topological property of asymptotically conical self-shrinkers of small entropy*, to appear in Duke Math. J..
K. Brakke, The motion of a surface by its mean curvature, Mathematical Notes 20, Princeton University Press, Princeton, N.J., 1978.
S. Brendle, *Embedded self-similar shrinkers of genus $0$*, Ann. of Math. (2) 183 (2016), no. 2, 715–728.
J. Cerf, Sur les difféomorphismes de la sphère de dimension trois ($\Gamma_4=0$), Lecture Notes in Mathematics, No. 53. Springer-Verlag, Berlin-New York, 1968.
Y.G. Chen, Y. Giga, and S. Goto, *Uniqueness and existence of viscosity solutions of generalized mean curvature flow equations*, J. Differential Geom. 33 (1991), no. 3, 749–786.
X. Cheng and D. Zhou, *Volume estimate about shrinker*, Proc. Amer. Math. Soc. 141 (2013), no. 2, 687–696.
T.H. Colding, T. Ilmanen, W.P. Minicozzi II, and B. White, *The round sphere minimizes entropy among closed self-shrinkers*, J. Differential Geom. 95 (2013), no. 1, 53–69.
T. H. Colding and W.P. Minicozzi II, *Generic mean curvature flow I; generic singularities*, Ann. of Math. (2) 175 (2012), no. 2, 755–833.
L. C. Evans and J. Spruck, *Motion of level sets by mean curvature. I*, J. Differential Geom. 33 (1991), no. 3, 635–681.
L. C. Evans and J. Spruck, *Motion of level sets by mean curvature. II*, Trans. Amer. Math. Soc. 330 (1992), no. 1, 321–332.
L. C. Evans and J. Spruck, *Motion of level sets by mean curvature. III*, J. Geom. Anal. 2 (1992), no. 2, 121–150.
L. C. Evans and J. Spruck, *Motion of level sets by mean curvature. IV*, J. Geom. Anal. 5 (1995), no. 1, 77–114.
M. W. Hirsch, Differential topology, Springer-Verlag, New York, 1976.
W.-Y. Hsiang, *Minimal cones and the spherical Bernstein problem. I*, Ann. of Math. (2) 118 (1983), no. 1, 61–73.
W.-Y. Hsiang, *Minimal cones and the spherical Bernstein problem. II*, Invent. Math. 74 (1983), no. 3, 351–369.
W.-Y. Hsiang and I. Sterling, *Minimal cones and the spherical Bernstein problem. III*, Invent. Math. 85 (1986), no. 2, 223–247.
G. Huisken, *Asymptotic behaviour for singularities of the mean curvature flow*, J. Differential Geom. 31 (1990), no. 1, 285–299.
G. Huisken, Local and global behaviour of hypersurfaces moving by mean curvature, Differential geometry: partial differential equations on manifolds (Los Angeles, CA, 1990), 175–191, Proc. Sympos. Pure Math., 54, Part 1, Amer. Math. Soc., Providence, RI, 1993.
T. Ilmanen, Elliptic regularization and partial regularity for motion by mean curvature, Mem. Amer. Math. Soc. 108 (1994), no. 520.
T. Ilmanen, *Singularities of mean curvature flow of surfaces*. Preprint. Available at <https://people.math.ethz.ch/~ilmanen/papers/sing.ps>.
T. Ilmanen, A. Neves, and F. Schulze, *On short time existence for the planar network flow*. Preprint. Available at: [arxiv.org/abs/1407.4756](arxiv.org/abs/1407.4756).
T. Ilmanen and B. White, *Sharp lower bounds on density of area-minimizing cones*, Cambridge Journal of Mathematics 3 (2015), no. 1-2, 1–18.
D. Ketover and X. Zhou, *Entropy of closed surfaces and min-max theory*. Preprint. Available at <http://arxiv.org/abs/1509.06238>.
F.C. Marques and A. Neves, *Min-max theory and the Willmore conjecture*, Ann. of Math. (2) 179 (2014), no. 2, 683–782.
J. R. Munkres, *Differentiable isotopies on the 2-sphere*, Mich. Math. Jour. 7 (1960), 193–197.
S. Osher and J. Sethian, *Fronts propagating with curvature-dependent speed: algorithms based on Hamilton-Jacobi formulations*, J. Comput. Phys. 79 (1988), no. 1, 12–49.
F. Schulze, *Uniqueness of compact tangent flows in mean curvature flow*, J. Reine Angew. Math. 690 (2014), 163–172.
L. Simon, Lectures on geometric measure theory, Proceedings of the Centre for Mathematical Analysis, Australian National University 3, Australian National University, Centre for Mathematical Analysis, Canberra, 1983.
S. Smale, *Diffeomorphisms of the 2-sphere*, Proc. Amer. Math. Soc. 10 (1959), 621–626.
A. Stone, *A density function and the structure of singularities of the mean curvature flow*, Calc. Var. Partial Differential Equations 2 (1994), no. 4, 443–480.
L. Wang, *Uniqueness of self-similar shrinkers with asymptotically conical ends*, J. Amer. Math. Soc. 27 (2014), no. 3, 613–638.
B. White, *A local regularity theorem for mean curvature flow*, Ann. of Math. (2) 161 (2005), no. 3, 1487–1519.
[^1]: The reader may refer to Remark \[LowBndLam\] for the reason that we restrict to $\Lambda>\lambda_n$.
[^2]: The proof of [@IlmanenNevesSchulze Theorem 1.5] uses the local regularity theorem of White, which is also applicable to the Brakke flows in Theorem \[UnitdensityThm\] and their tangent flows – see [@WhiteReg pp. 1487–1488].
|
---
abstract: 'About 500d after explosion the light curve of the Type Ia [SN 1998bu]{} suddenly flattened and at the same time the spectrum changed from the typical nebular emission to a blue continuum with broad absorption and emission features reminiscent of the SN spectrum at early phases. We show that in analogy to SN 1991T [@schmidt], this can be explained by the emergence of a light echo from a foreground dust cloud. Based on a simple model we argue that the amount of dust required can consistently explain the extinction which has been estimated by completely independent methods. Because of the similar echo luminosity but much higher optical depth of the dust in [SN 1998bu]{} compared with SN 1991T, we expect that the echo ring size of [SN 1998bu]{} grows faster than in SN 1991T. HST observations have indeed confirmed this prediction.'
author:
- 'E. Cappellaro, F. Patat, P. A. Mazzali'
- 'S. Benetti, J.I. Danziger, A. Pastorello, L. Rizzi, M. Salvo, M. Turatto'
title: 'Detection of a light echo from SN 1998bu'
---
Introduction
============
The Type Ia SN 1998bu was discovered on May 9.9 UT by @villi in a spiral arm of NGC 3368 (M96), a nearby Sab galaxy. It had very good photometric and spectroscopic coverage by several groups [@jha; @hernandez; @suntzeff] showing that the luminosity decline was slower than the average ($\Delta m_{15}(B)$, the magnitude decline in the first 15 days after maximum, was $1.01\pm0.05$ mag). This is almost as slow as the slowest SN Ia on record, SN 1991T, although, unlike SN 1991T, SN 1998bu was spectroscopically normal before and around maximum. The distance to the host galaxy was measured using Cepheid variables [@tanvir] as $\mu=30.25\pm0.19$. However, a calibration of the absolute luminosity of the SN depends also on the estimated reddening, which in the case of SN 1998bu is significant. @hernandez estimated $E(B-V) = 0.86\pm0.15$ based on the comparison of the spectral energy distribution of [SN 1998bu]{} with that of the template SN Ia 1981B.
[SN 1998bu]{} was included in our program of monitoring of the nebular phase of SN Ia. During this monitoring we realized that [SN 1998bu]{} was not declining with the expected rate; this, combined with the change in the spectral appearance gave the first evidence of the emergence of a light echo in [SN 1998bu]{} [@capiau]. These observations are presented here for the first time along with a simple modeling.
Evidence for a light echo in SN 1998bu
====================================
When the SN Ia ejecta become optically thin, about 100 days after the explosion, the SN luminosity is determined essentially by the deposition of the kinetic energy of the positrons released by the decay of $^{56}$Co. There is evidence that even positron deposition may not be complete [@cap97; @milne], and so at advanced phases the light curve declines either at the $^{56}$Co rate ($0.98$ mag/100d) or faster. The only previously known case of a SN Ia with a decline slower than the $^{56}$Co rate was SN 1991T, whose slow late decline and peculiar spectra were interpreted as due to a light echo [@schmidt].
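The quoted $^{56}$Co rate simply reflects the exponential decay of the isotope: for a half-life of $t_{1/2}\simeq 77.2$ d, and assuming full and instantaneous deposition of the positron kinetic energy, $$\left|\frac{dm}{dt}\right| = \frac{2.5\log_{10}e}{t_{1/2}/\ln 2} \simeq \frac{1.086\,\ln 2}{77.2~\mathrm{d}} \simeq 0.0098~\mathrm{mag\,d^{-1}} \simeq 0.98~\mathrm{mag}/100\mathrm{d}.$$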
A first indication that [SN 1998bu]{} was beginning to deviate from the normal exponential decline came on December 4 1999, roughly 500 days after maximum. At this epoch the observed magnitude ($V=20.75\pm0.08$) was already 2 mag brighter than the extrapolation of the radioactive decay tail. Additional photometric observations over the following few months showed that the SN luminosity remained almost constant (the decline was only $\sim 0.5$ mag in 100 days). Also, the color at these epochs was unusually blue ($B-V \simeq -0.1$). This behavior was reminiscent of that exhibited by SN 1991T [@schmidt; @sparks].
[llllllllll]{} 1991T & NGC 4527 & $11.51\pm0.02$ & 0.07 & $0.53\pm0.17$ & $0.94\pm0.05$ & @phil99 & $30.74\pm0.12$ & @saha\
1998bu & NGC 3368 & $11.88\pm0.02$ & 0.08 & $0.94\pm0.15$ & $1.01\pm0.05$ & @jha & $30.25\pm0.19$ & Tanvir et al. (2000)\
Fig. 1 shows the absolute V light curves of [SN 1998bu]{} and SN 1991T [@cap97]. Both light curves have been calibrated using the Cepheid-based distances [@saha; @tanvir] and corrected for total extinction (cf. Tab. 1). The striking similarity of the two light curves suggests that the same mechanism causing the flattening of the light curve of SN 1991T was also acting in the case of [SN 1998bu]{}.
Although this is not clearly visible in Fig. 1, the absolute magnitude of [SN 1998bu]{} at maximum was about 0.5 mag fainter than that of SN 1991T. Based on the $\Delta m_{15}(B)$ vs $M_V$ relation [@phil99] the difference should be only 0.05 mag. The gap increases slightly with time, and it reaches about 0.8-0.9 mag one year after maximum. That SNe Ia with a very similar $\Delta
m_{15}(B)$ actually show significant dispersion in some of their other properties is a well-known fact [@paolo98], and is one of the main caveats for the use of SN Ia as distance indicators. However, this is not essential for the discussion presented here.
The slow photometric evolution of [SN 1998bu]{} makes it a viable target for spectroscopy even 2 yrs after explosion. Spectra of [SN 1998bu]{} were obtained using the ESO 3.6m telescope (and EFOSC2) at La Silla at two epochs, 2000 March 3 and March 30. We used grism \#11 which, combined with the 1.2 arcsec slit, allowed us to cover the range 3300-7500 Å with a resolution of about 15Å. Since no spectral evolution was apparent between the two epochs, the two spectra were merged to improve the S/N. The total integration time was 9000 sec. The resulting spectrum, labeled with the average phase of 670d, is shown in Fig. 2. For comparison we also show the spectrum of SN 1991T obtained with the same telescope at a similar phase. We also show spectra of the two SNe at a much earlier epoch, when the luminosity decline was still tracking the radioactive exponential tail.
The similarity of the spectral evolution of the two SNe is remarkable. About 300 days after maximum the spectra are dominated by the strong emission lines of FeII\] and FeIII\] which are typical of SNe Ia in the nebular phase. These emission lines originate in the iron nebula and are powered by the radioactive decay of $^{56}$Co to $^{56}$Fe [@kuchner].
Coincident with the flattening of the light curve, the spectra change completely, showing a blue continuum with superposed broad absorption and emission features. These spectra resemble the photospheric epoch spectra, but they do not match any specific early-time epoch. For SN 1991T it was convincingly suggested [@schmidt] that the peculiar late-time light curve and spectrum can be explained if the emission at these epochs is dominated by light emitted by the SN near maximum and scattered towards the observer by circumstellar dust. Our observations suggest that the same mechanism is at work also in [SN 1998bu]{}.
The key element for such a mechanism to work is the presence of a dust layer in the vicinity of the SN, scattering towards the observer a fraction of the light emitted by the SN. Because of the different travel times the observed spectrum is a combination of the early time scattered spectra and of the direct, late-time spectrum. In principle, the details of the echo (intensity, duration, spectrum) depend on the spatial distribution of the dust and on the physics of the scattering process. A careful analysis and modeling of time distributed observations may allow one to derive useful constraints. Such a detailed study is the subject of future work (Patat et al. in preparation), while here we illustrate the basic principles of the phenomenon.
In general, the light curve of the echo is related to the light emitted by the SN by the following relation [@chevalier]:
$$\label{flux}
F_{echo}(t) = \int^t_0 \; F_{SN}(t-t^\prime) \;
f(t^\prime) \;dt^\prime$$
where $F_{SN}(t-t^\prime)$ is the flux of the SN at time $t-t^\prime$ and $f(t)$, the fraction of this light which is scattered towards the observer, depends on the geometry of the system and on the properties of the dust (units of $f(t)$ are s$^{-1}$; cf. Equation \[fdt\]). As a first approximation we can assume that the SN light curve is a short pulse with a duration $\Delta t_{SN}$, during which the emitted flux $F_{SN}$ is constant. Under this assumption Equation \[flux\] becomes $F_{echo}(t)= F_{SN} \; f(t) \; \Delta t_{SN}$, where $\Delta t_{SN}$ can be obtained by a numerical integration of the observed light curve as $F_{SN} \Delta t_{SN} = \int_0^{+\infty}
F_{SN}(t) dt$. In the case of the $V$ light curve of SN 1991T [@schmidt; @sparks] this gives $\Delta t_{SN}$=0.11 yr.
As one of the possible geometries which are consistent with the observations, we consider a thin dust sheet lying in front of the SN and extended perpendicularly to the line of sight. For the sake of simplicity we make the hypothesis that the sheet thickness $\Delta D$ is much smaller than the distance between the SN and the dust elements and we consider only single scattering. Given this idealized configuration and assuming the scattering phase function by Henyey & Greenstein (1941), an analytic expression for the function $f(t)$ can be derived relatively easily [@chevalier; @xu] as:
$$\label{fdt}
f(t) = \frac{c}{8\pi}\,\frac{\omega_d \tau_d}{D+ct}\,
\frac{1-g^2}{(1+g^2-2g\frac{D}{D+ct})^{3/2}}$$
where $D$ is the distance of the dust layer from the SN, $\tau_d$ is the optical depth of the dust layer, $\omega_d$ is the dust albedo, and $g$ is a parameter which represents the degree of forward scattering. The value of $g$ ranges from 0 for isotropic scattering to 1 for purely forward scattering. Empirical estimates and numerical calculations [@white] give $g\approx$0.6. For the dust albedo we have adopted $\omega_d\approx$0.6 [@mathis].
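To illustrate how Equations \[flux\] and \[fdt\] combine in practice, the short script below evaluates the echo light curve in the thin-pulse approximation, $F_{echo}(t)\simeq F_{SN}\,f(t)\,\Delta t_{SN}$. The parameter values are simply those quoted in the text for SN 1991T ($g=0.6$, $\omega_d=0.6$, $\tau_d\simeq 0.46/1.086$, $D\sim120$ lyr, $\Delta t_{SN}=0.11$ yr) and are used purely for illustration, not as a fit to the data.

```python
import numpy as np

# Parameters quoted in the text for SN 1991T (illustrative values, not a fit)
g       = 0.6              # forward-scattering parameter (Henyey & Greenstein)
omega_d = 0.6              # dust albedo
tau_d   = 0.46 / 1.086     # optical depth implied by A_V^host = 1.086 tau_V
D       = 120.0            # distance of the dust sheet from the SN [lyr]
c       = 1.0              # speed of light in lyr/yr
dt_SN   = 0.11             # effective duration of the SN light pulse [yr]
F_SN    = 1.0              # SN flux near maximum (arbitrary units)

def f(t):
    """Scattered fraction per unit time, Eq. (2); t in yr, result in 1/yr."""
    r = D + c * t
    return (c / (8.0 * np.pi)) * (omega_d * tau_d / r) \
        * (1.0 - g**2) / (1.0 + g**2 - 2.0 * g * D / r) ** 1.5

t = np.linspace(0.0, 8.0, 200)        # years after the SN light pulse
F_echo = F_SN * f(t) * dt_SN          # thin-pulse approximation of Eq. (1)

# Echo brightness relative to the SN at maximum, in magnitudes
print(-2.5 * np.log10(F_echo[0] / F_SN))   # ~10 mag below maximum for these numbers
```

In practice one may also evaluate Equation \[flux\] as a full convolution over the observed light curve rather than using the pulse approximation.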
Equation \[fdt\] shows that if the dust is too close to the object, even if it survives the exposure to the enormous radiation flux of the SN, it would not produce a constant, long duration echo. This is because of both the angular dependence of the scattering phase function and the SN radiation dilution. In the case of SN 1991T the echo luminosity declined by a factor of 4 over $\sim 7$ yrs [@sparks] which implies $D > 45$ lyr (equation \[fdt\]). On the other hand, if a dust layer is to produce a significant light echo, it must be located close enough to the SN that the emission from the SN is not too diluted. In fact, other conditions being similar, for a distant cloud ($D\gg ct$), the echo flux is $F_{echo} \propto \tau_{d}/D$. In particular, since we found that the cloud must be located farther than 45 lyr from the SN, we derive a lower limit for the dust optical depth of $\tau_d>0.28$. Since in our model the dust layer is located in front of the SN we expect that it causes some extinction on the line of sight. Assuming that the column density is constant along different lines of sight and based on the relation $A_V = 1.086\tau_V$ [@cox] we can estimate for SN 1991T a lower limit for $A_V > 0.3$ mag.
Actually, both SN 1991T and [SN 1998bu]{} show evidence of significant extinction from interstellar dust associated with the host galaxy, $A_V^{host} = 0.46$ for SN 1991T and $A_V^{host} = 0.86$ for [SN 1998bu]{} (Tab. 1). These values were derived mainly by photometric methods based on the analysis of the light curve and the SN color, and therefore they are affected by large uncertainties. In a simplified scenario, one can assume that there is only one cloud which is causing both the observed extinction and the light echo. In this case, since we fix the column density of the cloud, we can derive the distance $D$ of the cloud itself from the SN through a fit to the echo luminosity. The light curve fit for SN 1991T, shown in Fig. 1, gives $D\sim 120$ lyr. In the case of SN 1998bu the light curve is similar to that of SN 1991T, hence $\Delta t_{SN} \simeq 0.1$ yr, but the optical depth of the dust is about a factor 2 larger: therefore we obtain $D\sim
230$ lyr. Note that these estimates are independent of the adopted distance for the parent galaxies, because they depend only on the ratio between the luminosity of the echo and that of the SN at maximum.
The distance $D$ is related to the linear diameter of the ring through the relation $2R = 2\sqrt{ct\;(2D+ ct)}$. In the case of SN 1991T the echo has been resolved with a diameter of 0.$^{\prime\prime}$4. Based on the Cepheid distance to NGC 4527, this results in a linear diameter $2R = 120$ lyr and hence $D \simeq 150$ lyr. Given the observational uncertainties, this is in very good agreement with the estimate based on the light echo modeling. In the case of SN 1998bu, because of the much higher optical depth of the dust, we had to place the dust layer at a larger distance than for SN 1991T. In turn, this causes the echo ring diameter at any given time to be larger. Based on the geometry described earlier we obtain that at the time of writing, 2.5 yr after the explosion, the diameter of the echo should be $\sim 0.3$ arcsec. Indeed, the echo of SN 1998bu has already been resolved using HST (Leibundgut p.c.).
A consistency check of the light echo interpretation for the peculiar light curve of [SN 1998bu]{} can be obtained by comparing the observed and computed spectrum of the light echo. Clearly the most important contribution to the light echo comes from the light emitted near maximum. Spectra of [SN 1998bu]{} at eight different epochs from -6 to 50 days after maximum were published by @hernandez. Weighting the spectra according to the integrated luminosity around each observed epoch and using our assumed geometry (cf. Equation 1), we computed the contribution of each early-time spectrum to the emerging echo spectrum at phase 670d. The input spectra were corrected for reddening using a standard extinction law and assuming that the scattering function has a similar $\lambda^{-1}$ dependence. These corrected spectra were coadded to compute the expected spectrum of the echo (at 670d the SN nebular spectrum is several magnitudes fainter than the echo and does not give a measurable contribution). This is compared with the observed spectrum in Fig. 3. Given the crudeness of some of the assumptions and the incomplete spectral coverage of the near-maximum phase, the agreement between the observations and the model is excellent, which we consider a strong argument in favor of the light echo scenario.
Moreover, although we stress that similar results can be obtained for different geometries, it is suggestive that the single configuration we have chosen can simultaneously account for the observed light curve, spectrum and reddening.
Are bright SN Ia linked to dusty star-forming regions?
=======================================================
The observation and interpretation of light echoes can tell us about dust properties and distribution in galaxies, but may have far-reaching applications and consequences. One such application is to measure distances of galaxies [@sparks].
SN 1998bu is only the second Type Ia SN for which a light echo was observed, the other being SN 1991T in the Sbc galaxy NGC 4527. It is remarkable that both SNe are very slow decliners. SN 1991T was also spectroscopically peculiar at and before maximum, indicating a higher photospheric temperature than in normal SNe Ia, as shown by the presence of strong FeIII lines and by the absence of FeII lines before maximum [@filip]. This was confirmed by spectroscopic modeling, which also revealed that SN 1991T had an abnormally high abundance of Fe-group species in the outer, fast-moving part of the ejecta [@paolo95]. Such a peculiar element distribution suggested that SN 1991T may have been the result of an unusual explosion mechanism. Further studies revealed that SN 1991T had very broad Fe nebular lines [@paolo98]. This and the brightness of the SN at maximum led various authors to conclude that SN 1991T produced an unusually large amount of $^{56}$Ni, about 1 [M$_{\odot}$]{} [@spyro; @paolo95; @cap97; @paolo98], though this may be challenged by recent calibrations of the SN absolute luminosity [@saha; @richtler; @gibson]. Finally, @fisher suggested that SN 1991T may have been the result of the explosion of a super-Chandrasekhar mass progenitor.
SN 1998bu shares many of the properties of SN 1991T: it was bright at maximum, it had a slow decline and broad nebular lines, although not quite as extreme in these respects as SN 1991T. It is therefore very interesting that both SNe are significantly reddened and show a dust echo. Two other nearby SNe Ia were also heavily reddened: SN 1989B was a normal SN Ia for which indeed there may be some evidence of a light echo [@milne], while SN 1986G was a fast-declining and spectroscopically peculiar object for which, despite an early claim [@schaefer], there was no evidence of a light echo [@turatto]. The latter requires that the dust was more distant from SN 1986G than in [SN 1998bu]{} or SN 1991T (placing the dust cloud 10 times further away makes the echo 2.5 mag fainter).
Historically, the apparent homogeneity of SNe Ia was attributed to their progenitors arising from a single stellar population. The fact that SNe Ia are found in all types of galaxies, even in ellipticals, further contributed to the standard paradigm that the progenitors of all SNe Ia belong to the old stellar population. Now there is evidence that slow decliners (or high luminosity SNe Ia) may be preferentially associated with a younger stellar population (and therefore conceivably more massive progenitors), a suggestion first made by @vdb and confirmed by @hamuy [@hamuynew]. Our result that both SN 1991T and SN 1998bu are located close to dusty regions is consistent with this scenario.
Obviously one needs to improve the statistics by adding more cases. In this respect one interesting object is SN 1998es. This is a SN Ia whose spectrum near maximum was very similar to that of SN 1991T [@jha98] and which shows the signature of quite similar interstellar reddening (Patat et al. in preparation). Therefore SN 1998es may be an interesting candidate for a light echo search although, because of the relatively large distance ($\mu = 33.2$), the apparent magnitude may be quite faint (V$\sim 24$ if the echo has the same intensity as in SN 1991T and [SN 1998bu]{}).
Conclusions
===========
The peculiar late-time light curve and spectrum of SN 1998bu are attributed to the echo from circumstellar dust of the light emitted near maximum. Based on simple modelling and on the comparison with SN 1991T (which experienced a similar phenomenon) we estimate that the dust cloud is located in front of the SN and relatively nearby ($\sim 100$ pc). Because of the similar echo luminosity but much higher optical depth of the dust in [SN 1998bu]{} compared with SN 1991T, we expect that the dust layer is more distant and hence the echo ring of [SN 1998bu]{} grows faster than in SN 1991T.
The association of dust with SNe Ia, possibly of a specific subtype, may have interesting implications for the progenitor scenario and prompts a renewed effort to monitor SNe Ia at late phases.
Cappellaro, E., Benetti, S., Pastorello, A., Turatto, M., Altavilla, G. & Rizzi, L. 2000, , 7391, 2
Cappellaro, E., Mazzali, P. A., Benetti, S., Danziger, I. J., Turatto, M., della Valle, M. & Patat, F. 1997, , 328, 203
Chevalier, R. A. 1986, , 308, 225
Cox, A. N. (ed.) 2000, Allen’s Astrophysical Quantities, 4th ed. (New York: AIP Press; Springer)
Filippenko, A. V. et al. 1992, , 104, 1543
Fisher, A., Branch, D., Hatano, K. & Baron, E. 1999, , 304, 67
Gibson, B. K., Stetson, P. B. 2000, , in press (astro-ph/0011478)
Jha, S. et al. 1999, , 125, 73
Jha, S., Garnavich, P., Challis, P., Kirshner, R. 1998 I.A.U. Circular No. 7054
Hamuy, M., Phillips, M. M., Suntzeff, N. B., Schommer, R. A., Maza, J. & Aviles, R. 1996, , 112, 2391
Hamuy, M., Trager, S. C., Pinto, P. A., Phillips, M. M., Schommer, R. A., Ivanov, V. & Suntzeff, N. B. 2000, , 120, 1479
Hernandez, M. et al. 2000, , in press (astro-ph/0007022)
Henyey, L. C. & Greenstein, J. L. 1941, , 93, 70
Kuchner, M. J., Kirshner, R. P., Pinto, P. A. & Leibundgut, B.1994, , 426, L89
Mathis, J. S., Rumpl, W. & Nordsieck, K. H. 1977, , 217, 425
Mazzali, P. A., Cappellaro, E., Danziger, I. J., Turatto, M. & Benetti, S. 1998, , 499, L49
Mazzali, P. A., Danziger, I. J. & Turatto, M. 1995, , 297, 509
Milne, P. A., The, L.-S. & Leising, M. D. 1999, , 124, 503
Phillips, M. M., Lira, P., Suntzeff, N. B., Schommer, R. A., Hamuy, M. & Maza, J.; 1999, , 118, 1766
Richtler, T., Jensen, J.B., Tonry, J., Barris, B., Drenkhahn, G., 2000, , in press (astro-ph/0011440)
Saha, A., Sandage, A., Thim, F., Tammann, G. A., Labhardt, L., Cristensen, J., Macchetto, F. D., Panagia, N. 2000, , in press (astro-ph/0012015)
Schaefer, B. E. 1987, , 323, L47
Schmidt, B. P., Kirshner, R. P., Leibundgut, B., Wells, L. A., Porter, A. C., Ruiz-Lapuente, P., Challis, P. & Filippenko, A. V. 1994, , 434, L19
Sparks, W. B., Macchetto, F., Panagia, N., Boffi, F. R., Branch, D., Hazen, M. L.& della Valle, M. 1999, , 523, 585
Spyromilio, J., Meikle, W. P. S., Allen, D. A. & Graham, J. R. 1992, , 258, 53P
Suntzeff, N. B. et al. 1999, , 117, 1175
Tanvir, N. R., Ferguson, H. C. & Shanks, T. 1999, , 310, 175
Turatto, M., Cappellaro, E., Barbon, R., della Valle, M., Ortolani, S. & Rosino, L. 1990, , 100, 771
van den Bergh, S. & Pazder, J. 1992, , 390, 34
Villi, M. 1998, , 6899, 1
White, R. L. 1979, , 229, 954
Xu, J., Crotts, A. P. S. & Kunkel, W. E. 1994, , 435, 274
|
---
abstract: 'In the selective withdrawal experiment fluid is withdrawn through a tube with its tip suspended a distance $S$ above an unperturbed two-fluid interface. At low withdrawal rates, $Q$, the interface forms a steady state hump and only the upper fluid is withdrawn. When $Q$ is increased (or $S$ decreased), the interface undergoes a transition so that the lower fluid is entrained with the upper one, forming a thin steady-state spout. Near this discontinuous transition the hump curvature becomes very large and displays power-law scaling behavior. This scaling is used to show that steady-state profiles for humps at different flow rates and tube heights can all be scaled onto a single similarity profile.'
author:
- 'Itai Cohen, Sidney R. Nagel'
title: Scaling at the selective withdrawal transition
---
Is it possible to classify topological transitions in nonlinear fluid systems [@Ca93; @Bertozzi94; @Goldstein93; @Pugh98] in the same manner as one classifies thermodynamic transitions? When the topological transition involves formation of a singularity in the fluid flows or interface shapes, a similarity solution can provide a simplified description of the flows and help make such a classification [@Barenblatt96]. A crucial component to this approach involves determining how characteristic physical quantities and lengths describing the fluid system scale near the singularity. In many cases these singularities manifest themselves in the transition dynamics and do not appear in the steady state flows. Here, we report on steady-state interface profiles near the topological transition associated with the selective withdrawal experiment. Despite the transition being discontinuous, scaling of the interface is observed as the transition is approached.
In the selective withdrawal experiment a tube is immersed in a filled container so that its tip is suspended a height $S$ above an unperturbed interface separating two immiscible fluids. When fluid is pumped out through the tube at low flow rates, $Q$, only the upper fluid is withdrawn and the interface is deformed into an axi-symmetric steady-state hump (Fig. \[fig:SW\_parameters\]) due to the flows in the upper fluid. The hump grows in height and curvature as $Q$ increases or $S$ decreases until the flows undergo a transition where the lower fluid becomes entrained in a thin axi-symmetric spout along with the upper fluid. The two-fluid interface becomes unbounded in the vertical direction thus changing the topology of the steady state. Once the spout has formed, an increase in $Q$ or decrease in $S$ thickens the spout.
The interfacial profiles at different flow rates and tube heights are recorded. Near the transition, the steady-state radius of curvature of the hump tip is orders of magnitude smaller than the length scales characterizing the boundary conditions (e.g. the tube diameter, $D$). This separation of length scales suggests that a similarity analysis of the steady-state hump profiles might be possible. However, for the range of parameters explored thus far, even when the system is arbitrarily close to the transition from hump to spout, the mean curvature of the hump tip, $\kappa$, while large, remains finite. Nevertheless, by fixing $S$ and looking at the steady-state profiles as $Q$ is increased, we observe that both the hump curvature and height display scaling behavior characteristic of systems approaching a singularity. Since the divergence is cut off before a singularity is reached this transition appears to be “weakly-first-order.”
As shown in Fig. \[fig:SW\_parameters\], the parameters important for this experiment are the upper and lower fluid viscosities and densities ($\eta_{u}, \eta_{l}, \rho_{u}, \rho_{l}$), interfacial tension ($\gamma$), orifice diameter ($D$), tube height ($S$), and flow rate ($Q$). In looking for scaling of the steady-state profiles, care must be taken to design an experimental apparatus capable of isolating the profiles near the transition. Experiments were performed in large tanks (30 cm $\times$ 30 cm $\times$ 30 cm) capable of holding fluid layers that were each about 12 cm in height. To ensure that the upper fluid level remained constant, the withdrawn fluid was recycled back into the container (the bottom fluid layer thickness remains constant when the system is in the hump state). Steady withdrawal rates were achieved by using a gear pump. We verified that for the tube diameter ($D = 0.16$ cm), tube heights ($0.1$ cm $\leq S \leq 1.1$ cm), and flow rates ($Q \leq 10$ ml/sec) used in the experiments, the container walls were sufficiently distant and the fluid layers sufficiently thick so as not to affect the flows [@tobe]. We measured the upper (heavy mineral oil) and lower (glycerin-water mixture) fluid viscosities to be $\eta_{u} = 2.29$ St and $\eta_{l} = 1.90$ St, the upper and lower fluid densities to be $\rho_{u} = 0.88$ g/ml and $\rho_{l} = 1.24$ g/ml, and the surface tension[@Neumann; @Hansen] to be $\gamma = 31$ dynes/cm. Attempts to increase $\kappa$ near the transition by decreasing $\gamma$ cause fluid mixing and result in a diffuse interface at high shear rates so that some fraction of the lower fluid is always being withdrawn [@JRL89].
While many of the parameters mentioned influence the flows, our understanding of the scaling behavior can be conveyed by focusing on $S$ and $Q$. We can fix $Q$ and track the development of the hump profiles as a function of $S$. Below the tube height, $S_{u}$, the hump is unstable and undergoes a transition to a spout. Figure \[fig:Su,Ku,hu,DeltaSvsQ\] shows that $S_{u} \propto Q^{0.30 \pm 0.05}$ [@Svs.Q]. At low $Q$ the transition is hysteretic: the value of $S$ where the spout becomes unstable and decays back into the hump is larger than $S_{u}$. We define the difference of the two heights, or hysteresis, as $\Delta S$. Figure \[fig:Su,Ku,hu,DeltaSvsQ\] indicates that the data are consistent with an exponential decrease: $\Delta S = 0.04 \exp(-Q/0.032)$. For $Q > 0.1$ ml/sec, $\Delta S$ was too small to measure.
In order to measure the mean curvature of the hump tip, $\kappa$, we first fit the tip of the recorded profile with a Gaussian and then calculate the curvature of the fitting function at the hump tip. Figure \[fig:Su,Ku,hu,DeltaSvsQ\] also shows the hump height, $h_{u}$, and mean radius of curvature, $1/\kappa_{u}$, at the transition as a function of $Q$. The dramatic decrease in $\Delta S$ coincides with the onset of a flat asymptotic dependence for $1/\kappa_{u}$ at $Q > 0.1$ ml/sec. We quantify this correlation by fitting the curvature data with the form $1/\kappa_{u} = 0.02 + 0.32 \exp(-Q/0.032)$, which has the same exponential decay with $Q$ as does $\Delta S$. For $Q > 0.1$ ml/sec we find both $\kappa_{u}$ and $h_{u}$ to be independent of the orifice diameter $D$ [@tobe]. We restrict our scaling analysis to this regime.[@Surfac_disc]
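The curvature measurement described above amounts to a least-squares fit of a Gaussian to the digitized tip of the profile, followed by evaluation of the curvature of the fit at the apex. A minimal sketch is given below; the synthetic arrays stand in for the digitized profiles, which in practice are extracted from the recorded images.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(r, A, sigma, h0):
    """Model for the hump tip: height above the unperturbed interface."""
    return h0 + A * np.exp(-r**2 / (2.0 * sigma**2))

# Synthetic stand-in for a digitized hump profile h(r), in cm.
r = np.linspace(-0.05, 0.05, 201)
h = gaussian(r, 0.10, 0.02, 0.0) + np.random.normal(0.0, 1e-4, r.size)

(A, sigma, h0), _ = curve_fit(gaussian, r, h, p0=(0.05, 0.01, 0.0))

# Curvature of the fitting function at the hump tip: |h''(0)| = A / sigma^2
kappa = A / sigma**2          # 1/cm
print(kappa)
```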
Figure 3a plots $\kappa$ (where $0 \leq \kappa \leq \kappa_{u}$), versus $Q$ for six data sets corresponding to different values of $S$. As shown in the inset of Fig. 3a, all fifteen data sets display a power-law divergence for $\kappa$, as $Q$ approaches $Q_{c}$ (a fitting parameter). While the power-law exponents remain constant as $S$ is varied, the prefactors to the power laws, $c_{\kappa}(S)$, vary slightly with $S$ and are scaled out in the inset. Figure 3b plots the hump height, $h_{max}$, versus $Q$. The inset to Fig. 3b shows that as $Q$ approaches $Q_{c}$ (obtained from Fig. 3a inset), the hump height approaches $h_{c}$ (a fitting parameter) as a power law. Once again, the power-law exponents for these data sets are the same over this range of $S$. The prefactors, $c_{h}(S)$, are scaled out in the inset. Note that $Q_{c}$ changes with $S$ indicating that the system can approach a continuous line of divergences. Combining the two scaling dependencies in Fig. 3c, we plot $(h_{c}-h_{max})/h_{max}$ versus the normalized curvature, $\kappa/n$. We find that $(h_{c}-h_{max})/h_{max}$ scales as $(\kappa/n)^{0.85 \pm 0.09}$ indicating that even though both $h_{c}$ and the power-law prefactor, $n$, change with $S$, the power-law exponents are independent of $S$ for this range of tube heights. Note that $n(S) = c_{h}(S) [c_{\kappa}(S)^{0.85}]$. The transition cuts off the evolution of the hump states making it impossible for the system to approach arbitrarily close to the singularity and limiting the precision with which we can determine the exponents.
The scaling observed for $h_{max}$ and $\kappa$ suggests that the hump profiles should display universal behavior as $h_{max}$ nears $h_{c}$. The quantities $1/(\kappa/n)$ and $(h_{c}-h_{max})/h_{max}$ track how quickly the radial and axial length scales decrease as the system approaches the singularity and are therefore used to scale the profiles. We define the scaled variables: $$\label{similarity_var}
H(R) = \frac{h_{c}-h(r)}{h_{c}-h_{max}} \qquad \text{and} \qquad R = \frac{r\kappa}{n},$$ where $h(r)$ is the hump profile and $h_c$ is taken from Fig. 3. The transformation shifts the profiles so that under scaling the singularity occurs at the origin and the maximum hump height occurs at $H = 1$ and $R = 0$. Figure 4 shows eight scaled profiles for the $S = 0.830$ cm data set. In the bottom inset we overlay the hump profiles scaled in the main figure. We find excellent collapse of the profiles. The solid line in Fig. 4 is a power law that fits the data in the region beyond the parabolic tip. The picture that emerges is of a parabolic tip region which decreases in its radial scale and is simultaneously pulled towards the singularity in the axial direction, leaving behind it a power-law profile with an exponent of $0.72 \pm 0.08$. This exponent is consistent within error (although slightly smaller) with the exponent observed in the scaling relation of Fig. 3c, which predicts an exponent of $0.85 \pm 0.09$.
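For concreteness, the collapse of Fig. 4 corresponds to applying Eq. (\[similarity\_var\]) to each digitized profile. A short sketch of the rescaling is given below; the profile, $h_c$ and $n$ used here are placeholders for the measured data and the fit constants of Fig. 3.

```python
import numpy as np

def rescale_profile(r, h, h_c, kappa, n):
    """Map a digitized hump profile h(r) onto the similarity variables:
    H = (h_c - h) / (h_c - h_max), R = r * kappa / n."""
    h_max = h.max()
    return r * kappa / n, (h_c - h) / (h_c - h_max)

# Synthetic stand-in for one digitized profile (a Gaussian hump).
r = np.linspace(0.0, 0.2, 400)                    # cm
h = 0.30 * np.exp(-r**2 / (2.0 * 0.03**2))        # cm
kappa = 0.30 / 0.03**2                            # tip curvature of this profile [1/cm]
h_c, n = 0.35, 1.0                                # placeholder fit constants

R, H = rescale_profile(r, h, h_c, kappa, n)       # H = 1 and R = 0 at the hump tip
```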
Typically, the observed scaling dependencies in these types of problems result from the local stress balance. A scaling analysis where the viscous stresses of the upper and lower fluids balance the stress arising from the interfacial curvature predicts linear scaling dependencies and conical profile shapes. The non-linearity of the observed dependencies indicates that either a different stress balance governs the flows (e.g. only viscous stress due to upper fluid balances stress due to the interface curvature) or that non-local effects are coupling into the solution. A more detailed discussion can be found in [@tobe].
Finally, we compare the similarity curves for five different tube heights in the upper right inset of Fig. 4. The profiles all display the same power-law dependence. Within error, the normalized curvature $\kappa/n$ (taken from Fig. 3c) can be used to scale the radial components of these profiles and obtain good collapse. In Fig. 3c we find that the normalization prefactors, $n$, decrease as $\exp(-2.5 S)$. Here, we correlate this decrease with the observation that the profiles become shallower at larger $S$. The points of deviation for the $S = 0.255$ cm and $0.381$ cm profiles mark the transition from the similarity regime to the matching regime beyond which the profiles become asymptotically flat. At large enough radii all of the scaled profiles display these deviations.
We have shown that in the $Q > 0.1$ ml/sec regime, a similarity analysis can be used to describe the flows near the selective-withdrawal transition [@Acrivose78]. We have observed power-law scaling of the hump height and curvature (Fig. 3) and used these scaling relations to collapse the hump profiles at different flow rates and tube heights onto a single universal curve (Fig. 4). However, the origin of the saturation of $\kappa$ at large $Q$ remains an important unexplained problem. Further insight into this cutoff behavior may be gained by comparing with an analogous two-dimensional (2-D) problem which roughly corresponds to replacing the tube with a line sink. Jeong and Moffatt [@Moffatt92] showed that in an idealized case where the bottom fluid is inviscid while the top fluid is very viscous, the 2-D hump interface forms a 2-D cusp singularity above sufficiently high withdrawal rates. Recently, Eggers [@Eggers01] showed that the solution changes when the lower fluid has a finite viscosity; the system no longer manifests a singularity [@EggersRev97]. Instead, the approach to the singularity is cut off and the system undergoes a transition to a different steady state. In this new state, a sheet of the lower fluid is entrained along with the upper fluid into the line sink. However, the finite lower fluid viscosity prevents the hump profiles from scaling onto a similarity solution.
Here, we have shown that for the three-dimensional selective withdrawal system even when both fluids are viscous ($\eta > 1$ St for both fluids) the effects of a singularity manifest themselves in the scaling of the hump profiles. Furthermore, preliminary experiments show that a reduction of the lower fluid viscosity to 0.01 St has little effect in determining the final curvature of the hump tip or equivalently, how close the system is to forming a cusp. This suggests that for our 3-D problem, either the effects of the lower fluid viscosity enter as a higher-order perturbation to the profile shapes, or a different mechanism underlies the avoidance of the cusp formation. If the latter scenario is correct, it may be possible for the system to manifest the singularity at a finite lower fluid viscosity. In either case, determining which variables affect how close the system approaches the singularity would allow for control of the maximum hump curvature and minimum spout diameter. This control could then be used to advance technologies such as coating microparticles [@Itai_coating], creating mono-dispersed micro-spheres [@Ganan-Calvo98], and emulsification through tip streaming [@Eggers01; @Sherwood] which take advantage of the selective withdrawal geometry.
We are grateful to W. W. Zhang, S. Venkataramani, J. Eggers, H. A. Stone, T. J. Singler, J. N. Israelachvili, C. C. Park, S. Chaieb, S. N. Coppersmith, T. A. Witten, L. P. Kadanoff, R. Rosner, P. Constantin, R. Scott, T. Dupont, H. Diamant, and V. C. Prabhakar for sharing their insights. This research is supported by the University of Chicago (MRSEC) NSF DMR-0089081 grant.
[10]{}
in [*Singularities in Fluids, Plasmas, and Optics*]{}, edited by R. E. Caflisch & G. C. Papanicolaou (Kluwer, Norwell, MA, 1993).
A. L. Bertozzi, M. P. Brenner, T. F. Dupont, & L. P. Kadanoff, in [*Trends and Perspectives in Applied Mathematics*]{}, edited by L. Sirovich (Springer, New York, 1994).
R. E. Goldstein, A. I. Pesci, & M. J. Shelley, Phys. Rev. Lett. [**70**]{}, 3043 (1993).
M. Pugh & M. J. Shelley, Comm. Pure App. Math. [**51**]{}, 733 (1998).
G. I. Barenblatt, [*Scaling, self-similarity and intermediate asymptotics*]{} (Cambridge University Press, Cambridge, UK, 1996).
J. R. Lister & H. A. Stone, Phys. Fluids [**10**]{}, 2758 (1998).
D. Bensimon [*et al.*]{}, Rev. Mod. Phys. [**58**]{}, 1986 (1986).
S. R. Nagel & L. Oddershede, Phys. Rev. Lett. [**85**]{}, 1234 (2000).
B. W. Zeff, B. Kleber, J. Fineberg, & D. P. Lathrop, Nature [**403**]{}, 401 (2000).
I. Cohen, to be published.
A. W. Neumann & J. K. Spelt, [*Applied Surface Thermodynamics*]{} (Marcel Dekker, New York, 1995).
F. K. Hansen & G. Rodsrud, J. Colloid Interface Sci. [**141**]{}, 1 (1991).
J. R. Lister, J. Fluid Mech. [**198**]{}, 231 (1989).
For systems with a different upper fluid viscosity, we find that the power-law dependence changes. There is a vast literature which focuses on explaining the $S_u$ vs. $Q$ dependence (e.g. [@JRL89]). We therefore defer the detailed discussion of how currently available scaling predictions compare with the data to [@tobe].
Note that the transition from spout to hump is also discontinuous indicating that there is a curvature cut-off for the spout states as well. Since surfactants are continuously being removed from the interface when the system is in the spout state, the presence of surfactants in our system is not enough to account for the discontinuous nature of the transition or equivalently the values of the curvature cut-offs. More generally, we find that surfactant effects do not significantly influence the results presented. A more detailed discussion is presented in [@tobe].
While scaling has been hypothesized for an analogous 3-D system \[A. Acrivos and T. S. Lo, J. Fluid Mech. 86, 641, (1978)\] it was never shown experimentally.
J. T. Jeong & H. K. Moffatt, J. Fluid Mech. [**241**]{}, 1 (1992).
J. Eggers, Phys. Rev. Lett. [**86**]{}, 4290 (2001).
J. Eggers, Rev. Mod. Phys. [**69**]{}, 865 (1997).
I. Cohen [*et al.*]{}, Science [**292**]{}, (2001).
A. M. Ganan-Calvo, Phys. Rev. Lett. [**80**]{}, 285 (1998).
J. D. Sherwood, J. Fluid Mech. [**144**]{}, 281 (1984).
|
---
abstract: |
In this paper we compare the characteristics of pulsars with a high spin-down energy loss rate ($\dot{E}$) against those with a low $\dot{E}$. We show that the differences in the total intensity pulse morphology between the two classes are in general rather subtle. A much more significant difference is the fractional polarization which is very high for high $\dot{E}$ pulsars and low for low $\dot{E}$ pulsars. The $\dot{E}$ at the transition is very similar to the death line predicted for curvature radiation. This suggests a possible link between high energy and radio emission in pulsars and could imply that $\gamma$-ray efficiency is correlated with the degree of linear polarization in the radio band. The degree of circular polarization is in general higher in the second component of doubles, which is possibly caused by the effect of co-rotation on the curvature of the field lines in the inertial observer frame.
The most direct link between the high energy emission and the radio emission could be the sub-group of pulsars which we call the energetic wide beam pulsars. These young pulsars have very wide profiles with steep edges and are likely to be emitted from a single magnetic pole. The similarities with the high energy profiles suggest that both types of emission are produced at the same extended height range in the magnetosphere. Alternatively, the beams of the energetic wide beam pulsars could be magnified by propagation effects in the magnetosphere. This would naturally lead to decoupling of the wave modes, which could explain the high degree of linear polarization. As part of this study, we have discovered three previously unknown interpulse pulsars (and we detected one for the first time at 20 cm). We also obtained rotation measures for 18 pulsars whose values had not previously been measured.
author:
- |
Patrick Weltevrede and Simon Johnston\
Australia Telescope National Facility, CSIRO, P.O. Box 76, Epping, NSW 1710, Australia.
bibliography:
- 'journals\_apj.bib'
- 'modrefs.bib'
- 'psrrefs.bib'
title: Profile and polarization characteristics of energetic pulsars
---
\[firstpage\]
polarization — pulsars:general — pulsars: individual PSRs J0905–5127, J1126–6054, J1611–5209, J1637–4553 — radiation mechanisms: non-thermal
Introduction
============
Pulsars are observed to be spinning down with time. The spin-down energy loss rate $\dot{E}$, which is the loss of kinetic energy, is given by $$\dot{E} = 4\pi^2I\dot{P}P^{-3}$$ where $I$ is the moment of inertia of the star (generally taken to be $10^{45}$ g$\,$cm$^{2}$), $P$ its spin period and $\dot{P}$ its spin down rate. Some of the loss of spin-down energy emerges as radiation across the entire electromagnetic spectrum from radio to $\gamma$-rays. The radio emission accounts for only $\sim10^{-6}$ of the energy budget (e.g. @lk05) whereas up to a few percent is emitted in the $\gamma$-ray band (e.g. @tho04), with the rest converted to magnetic dipole radiation and some form of pulsar wind.
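As an illustration, the snippet below evaluates this expression in cgs units; the particular $P$ and $\dot{P}$ values are arbitrary examples, not catalogued pulsars.

```python
import math

def edot(P, Pdot, I=1e45):
    """Spin-down energy loss rate in erg/s, for P in seconds, the dimensionless
    period derivative Pdot, and the moment of inertia I in g cm^2."""
    return 4.0 * math.pi**2 * I * Pdot / P**3

# A young pulsar with P = 0.1 s and Pdot = 1e-14 lies just above the
# Edot = 1e35 erg/s boundary discussed below.
print(edot(0.1, 1e-14))   # ~3.9e35 erg/s
```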
It has been evident for more than a decade that pulsars with high $\dot{E}$ have different polarization characteristics from those with lower $\dot{E}$. Many high $\dot{E}$ pulsars are highly linearly polarized (e.g. @qmlg95 [@hlk98; @cmk01]). The pulse profiles of high $\dot{E}$ pulsars are believed to be generally simple, consisting of either one or two prominent components (e.g. @hmt71 [@ran83]). [@jw06] found that, in the high $\dot{E}$ pulsars with double profiles, the total power and the circular polarization usually dominate in the trailing component and that the swing of position angle (PA) of the linearly polarized radiation is steeper under the trailing component. They interpreted these results as showing that the beam of high $\dot{E}$ pulsars consists of a single conal ring at a relatively large height. [@kj07] incorporated these results into their pulsar beam model. In their model, there is a sharp distinction between pulsars with $\dot{E}>10^{35}$ erg s$^{-1}$ and those with smaller $\dot{E}$.
High $\dot{E}$ pulsars are not only interesting because of their distinct properties in the radio band, but also because a subset of them emit pulsed high energy emission. There are three different families of high energy emission models in the literature, which place the emitting regions at different locations in the pulsar magnetosphere. In the polar cap models (e.g. @dh96) the emitting region is close to the neutron star surface, while outer gap models (e.g. @chr86a) place the emitting region near the light cylinder. Finally, in slot gap models (e.g. @mh04) the particle acceleration occurs in a region bordering the last open field lines. In the polar cap models the young pulsars are thought to produce pairs through curvature radiation (e.g. @hm01a), while older pulsars produce pairs only through inverse-Compton scattering (e.g. @mh04b). In the outer gap model pairs are formed by the interaction of thermal X-ray photons from the neutron star surface with $\gamma$-ray photons (e.g. @rom96a). All models have in common that high $\dot{E}$ pulsars should be brighter $\gamma$-ray sources than low $\dot{E}$ pulsars, something which is confirmed by EGRET (@tho04).
We have recently embarked on a long-term timing campaign to monitor a large sample of young, high $\dot{E}$ pulsars. The ephemerides obtained from timing will be used to provide accurate phase tagging of $\gamma$-ray photons obtained from the Fermi Gamma-Ray Space Telescope (formerly known as GLAST; @sgc+08) and AGILE (@ppp+08) satellites, with the expectation that the number of $\gamma$-ray pulsars will increase from the current 7 to over 100 (@gsc+07). Of the $\sim80$ non-millisecond pulsars in the pulsar catalogue maintained by the ATNF[^1] [@mhth05] with $\dot{E}>10^{35}$ erg s$^{-1}$, we have obtained polarization profiles at 1.4 GHz for [61]{}, a substantial increase over the number available to previous studies. In this paper, therefore, we examine the differences between high and low $\dot{E}$ pulsars. In total, we use pulse profiles from [[352]{}]{} pulsars, which includes the [[61]{}]{} energetic pulsars and a comparison sample of intermediate and low $\dot{E}$ pulsars in order to draw general conclusions about the pulsar population.
The paper is organized as follows. We start with explaining the details of the observations and the data analysis. In section 3 we then describe the polarization profiles of four pulsars for which we found an interpulse at 20 cm and present new rotation measures. In section 4 the total intensity profiles of the pulsars are discussed, followed by a discussion of the polarization properties. Finally, we discuss the results in section 6, followed by the conclusions. The polarization profiles of all the pulsars can be found in appendix A (those for which we have a 20 cm and a 10 cm profile) and appendix B (those for which we only have a 20 cm profile). The plots of the pulse profiles can also be found on the internet[^2]. A table with derived properties from the pulse profiles can be found in appendix C[^3]. The appendices are only available in the on-line version of this publication.
Observations and data analysis
==============================
The procedure to generate pulse profiles for the pulsars which are timed for the Fermi and AGILE satellites is complicated by the fact that the pulse profiles of individual (short) observations typically have a low signal-to-noise ratio ($S/N$). It is therefore required to sum all the available observations in order to obtain a template profile with a higher $S/N$. This procedure is described in some detail in this section.
Observations
------------
All the observations were made at the Parkes telescope in Australia using the centre beam of the 20 cm multibeam receiver (which has a bandwidth of 256 MHz and a noise equivalent flux density of $\sim$35 Jy on a cold sky) and the 10/50 cm receiver (which at 10 cm has a bandwidth of 1024 MHz and a noise equivalent flux density of $\sim$49 Jy on a cold sky). This paper will focus mainly on the 20 cm data, because that is the wavelength at which the majority of observations were made. However, for some highly scattered pulsars it is also useful to consider the 10 cm data. The 50 cm data are not used because in many cases the profiles are scattered at that frequency. The timing program started in April 2007 and each pulsar is typically observed once per month at 20 cm and twice per year at 10 and 50 cm. The two polarization channels of the linear feeds of the receiver were converted into Stokes parameters, resampled and folded at the pulse period by a digital filterbank. In our case a pulse profile with 1024 bins and 1024 frequency channels was dumped every 30 seconds on hard disk. Before each observation a calibration signal, injected into the feed at a [$45^\circ$]{} angle to the probes, was recorded which was then used to determine the phase delay and relative gain between the two polarization channels.
The data were processed using the [*PSRCHIVE*]{} package (@hvm04). The data of each observing session were first checked for narrow-band radio frequency interference (RFI). An automatic procedure, using the median-smoothed difference of the bandpass, was used to identify the affected frequency channels in the calibration observations. The flagged channels were left out of all the observations of a particular observing day, making the automatic procedure more robust in finding weaker RFI which is not always identified. The remaining frequency channels were added together and the resulting sequence of profiles was then visually inspected for impulsive RFI. The sub-integrations in which RFI was particularly strong were left out of further data processing.
The 20 cm multibeam receiver has a significant cross-coupling between the two dipoles, affecting the polarization of the pulsar signal. For instance, a highly linearly polarized signal induces artificial circular polarization. These effects were measured as a function of parallactic angle for PSR J0437–4715 for the Parkes Pulsar Timing Array project (Manchester et al. 2008, in preparation), which allows the construction of a polarimetric calibration model (@van04). We have applied this model to all the observations using the 20 cm multibeam receiver, which reduces the artifacts in the Stokes parameters considerably.
\[SctSumming\]Summing of the individual observations
----------------------------------------------------
For some pulsars the timing noise is so severe that the pulse period predicted by the timing solution in the pulsar catalogue is not accurate enough to fold the data. In such a case the pulsar appears to drift in longitude in successive sub-integrations. We therefore applied the updated timing solutions to align the sub-integrations within individual observations.
To produce high $S/N$ profiles the individual observations must be added together. Because many pulsars involved in this timing program have severe timing noise and show glitches, it is difficult to use the timing solution to add the observations together. Instead, a scheme was followed in which the observations are correlated with each other in order to find the offsets in pulse longitude between the profiles. These offsets were applied directly to the individual observations using custom software in order to sum the profiles. The sum of the profiles (i.e. the standard or template) has a higher $S/N$ than the individual profiles and can then be correlated with the individual observations to determine the offsets in pulse longitude with higher precision, hence making a more accurate standard. This procedure was repeated one more time to make the final pulse profile.
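A minimal sketch of this iterative template building is shown below. It aligns each observation to the current standard with a circular cross-correlation and rounds the offsets to whole bins, whereas the custom software referred to above determines the offsets with higher precision; the sketch is only meant to illustrate the idea.

```python
import numpy as np

def build_template(profiles, n_iter=3):
    """Iteratively align and sum pulse profiles (array [n_obs, n_bins])
    by circular cross-correlation against the current standard."""
    template = profiles[0].astype(float).copy()
    for _ in range(n_iter):
        summed = np.zeros_like(template)
        for prof in profiles:
            # circular cross-correlation computed with FFTs
            xc = np.fft.irfft(np.fft.rfft(prof).conj() * np.fft.rfft(template),
                              n=template.size)
            summed += np.roll(prof, int(np.argmax(xc)))  # nearest-bin alignment
        template = summed
    return template
```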
Faraday de-rotation
-------------------
The interstellar medium interacts with the radio waves of pulsars, causing a number of frequency and time dependent effects. One of these effects is Faraday rotation, where the interstellar magnetic field component parallel to the line of sight causes a difference in the propagation speeds of the left- and right-hand circular polarization signal components. This effect causes the polarization vector to rotate in the Stokes Q and U plane and the angle is a function of frequency and the rotation measure (RM). It is therefore necessary to de-rotate Stokes Q and U before summing the frequency channels in an observation.
A similar procedure has to be followed when the profiles of different observations are summed together, because different frequency channels were flagged and deleted in different observations. This means that although the centre frequencies are identical for the different observations, their weighted mid-frequencies are slightly different. The [*PSRCHIVE*]{} package de-rotates Stokes Q and U with respect to this weighted mid-frequency of the band and therefore it is necessary to take the RM into account when profiles of different observations are summed together. This is done by rotating Stokes Q and U of each observation with respect to infinite frequency using custom software before adding individual observations together.
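In practice this amounts to rotating the linear polarization vector by twice the Faraday rotation angle for each frequency channel. A minimal sketch is given below, assuming the sign convention of equation (\[EqRM\]) below, $\psi(\lambda)=\psi_\infty+\mathrm{RM}\,\lambda^2$.

```python
import numpy as np

C = 299792458.0  # speed of light in m/s

def derotate_to_infinite_frequency(Q, U, freq_MHz, rm):
    """Refer Stokes Q and U to infinite frequency by removing the
    Faraday rotation RM*lambda^2 (Q and U rotate at twice the PA rate).
    freq_MHz may be an array of channel frequencies; rm is in rad m^-2."""
    lam = C / (np.asarray(freq_MHz) * 1e6)   # wavelength of each channel in m
    phi = 2.0 * rm * lam**2
    return Q * np.cos(phi) + U * np.sin(phi), U * np.cos(phi) - Q * np.sin(phi)
```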
Making frequency standards
--------------------------
In order to measure the RM for pulsars for which no sufficiently accurate values were available, one needs to retain frequency resolution. This is done by summing the observations together using custom software which takes into account the pulse longitude offsets found by correlating the profiles of the individual observations (as described in section \[SctSumming\]). A complication is that [*PSRCHIVE*]{} de-disperses the data with respect to the non-weighted centre frequency of the band, while the pulse longitude offsets are determined using profiles de-dispersed with respect to the weighted mid-frequency. It is therefore necessary to include a dispersion time delay corresponding to the difference between the weighted and non-weighted mid-frequency when the observations are added together.
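The size of this correction is just the usual cold-plasma dispersion delay between the two reference frequencies; a sketch using the standard dispersion constant is shown below (the DM and frequencies in the example are hypothetical).

```python
K_DM = 4.148808e3  # dispersion constant in s MHz^2 cm^3 pc^-1

def dispersion_delay(dm, f1_MHz, f2_MHz):
    """Delay (s) of a signal at frequency f1 relative to f2 for a
    dispersion measure dm in pc cm^-3."""
    return K_DM * dm * (f1_MHz**-2 - f2_MHz**-2)

# e.g. DM = 200 pc cm^-3 and a 1 MHz shift of the mid-frequency near
# 1369 MHz corresponds to a delay of roughly 0.6 ms
print(dispersion_delay(200.0, 1368.0, 1369.0))
```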
Results on individual pulsars
=============================
Newly discovered interpulses
----------------------------
![\[Fig\_newIP\]The pulse profiles of the four pulsars for which we report an interpulse at an observing wavelength of 20 cm. The top panels show the total intensity profile (solid line), linear polarization (dashed line) and circular polarization (dotted line). The peak intensity of each profile is normalized to one. The bottom panels show the PA of the linear polarization (for the pulse longitude bins in which the linear polarization was detected above $2 \sigma$).](J0905-5127.20.paperplot.ps "fig:"){height="0.99\hsize"}\
![\[Fig\_newIP\]The pulse profiles of the four pulsars for which we report an interpulse at an observing wavelength of 20 cm. The top panels show the total intensity profile (solid line), linear polarization (dashed line) and circular polarization (dotted line). The peak intensity of each profile is normalized to one. The bottom panels show the PA of the linear polarization (for the pulse longitude bins in which the linear polarization was detected above $2 \sigma$).](J1126-6054.20.paperplot.ps "fig:"){height="0.99\hsize"}\
![\[Fig\_newIP\]The pulse profiles of the four pulsars for which we report an interpulse at an observing wavelength of 20 cm. The top panels show the total intensity profile (solid line), linear polarization (dashed line) and circular polarization (dotted line). The peak intensity of each profile is normalized to one. The bottom panels show the PA of the linear polarization (for the pulse longitude bins in which the linear polarization was detected above $2 \sigma$).](J1611-5209.20.paperplot.ps "fig:"){height="0.99\hsize"}\
![\[Fig\_newIP\]The pulse profiles of the four pulsars for which we report an interpulse at an observing wavelength of 20 cm. The top panels show the total intensity profile (solid line), linear polarization (dashed line) and circular polarization (dotted line). The peak intensity of each profile is normalized to one. The bottom panels show the PA of the linear polarization (for the pulse longitude bins in which the linear polarization was detected above $2 \sigma$).](J1637-4553.20.paperplot.ps "fig:"){height="0.99\hsize"}
While analysing the data described in this paper we discovered four interpulses which have not previously been reported at 20 cm in the literature. The polarization profiles of these pulsars are shown in Fig. \[Fig\_newIP\] and discussed in some detail below. In all cases there is no evidence that the interpulse appears only sporadically rather than being weakly present in all observations.
### PSR J0905–5127
Profiles of this pulsar were first presented in [@dsb+98]. In their figure, there appears to be little sign of the interpulse at 20 cm, with perhaps a hint at 70 cm. In our observations at 20 cm, the interpulse is very weak in comparison to the main pulse, with an intensity ratio of $\sim$17. The separation between the centroids of the main pulse and the interpulse is [$175^\circ$]{}. The main pulse is a clear double with a total width of $\sim{\ensuremath{20^\circ}}$, but not much structure can be discerned in the interpulse because of the low $S/N$, although its width appears to be narrower than that of the main pulse. The interpulse is separated by [$180^\circ$]{} from the trailing component of the main pulse, suggesting that the interpulse could be the trailing component of a double. The polarization swing across both the main and interpulses is rather flat, and we cannot attempt a rotating vector model fit (RVM; @rc69a). We do not have sufficient $S/N$ to make any claims about the interpulse at either 10 cm or 50 cm from our data.
### PSR J1126–6054
The interpulse of PSR J1126–6054 is too weak to be seen in the profile presented in [@jlm+92]. However, it is just visible in our 20 cm data with a peak amplitude about one tenth of that of the main pulse. The peak-to-peak separation between the main and interpulses is $\sim$[$174^\circ$]{}. This low $\dot{E}$ pulsar has a low degree of linear polarization in its main pulse, which explains the absence of significant linear polarization in the much weaker interpulse. The interpulse appears to be significantly narrower, though the low $S/N$ makes the width difficult to measure. At 50 cm, the interpulse is marginally stronger with respect to the main pulse, whereas at 10 cm it is not detected.
### PSR J1611–5209
There is no obvious interpulse at 20 cm in the profile presented in [@jlm+92]. However, [@kjm05] reported a low-amplitude interpulse in their 10 cm data. In our 20 cm data we clearly see the interpulse, which has a peak amplitude less than 0.1 that of the main pulse. The separation between the main and interpulse is $\sim$[$177^\circ$]{}. The main pulse has a total width of $\sim$[$10^\circ$]{} and consists of at least two components with a low fractional polarization. The low $S/N$ in the interpulse precludes any measurement of the polarization, but the overall width seems similar to that of the main pulse.
### PSR J1637–4553
This pulsar has a very weak interpulse (about one tenth in amplitude compared to the main pulse), which is perhaps just visible in the existing literature (@jlm+92). The separation between the main and interpulses is $\sim{\ensuremath{173^\circ}}$, and although weak, the interpulse seems to be the same width as the $\sim{\ensuremath{20^\circ}}$ of the main pulse. The polarization of the interpulse is hard to determine, although the main pulse is virtually 100% polarized. At 50 cm, the interpulse has the same separation from the main pulse and roughly the same relative amplitude as at 20 cm. Our low $S/N$ at 10 cm makes the interpulse undetectable.
New rotation measures
---------------------
------------ ------------- -------- ------- -------- -------
Name         RM (rad m$^{-2}$)   $l$ (deg)   $b$ (deg)   DM (cm$^{-3}$ pc)   Dist.
J1052–5954 $-280\pm24$ 288.55 -0.40 491 13.55
J1115–6052 $257\pm18$ 291.56 -0.13 228.2 6.76
J1156–5707 $238\pm19$ 295.45 4.95 243.5 20.40
J1524–5625 $180\pm20$ 323.00 0.35 152.7 3.84
J1524–5706 $-470\pm20$ 322.57 -0.19 833 21.59
J1638–4417 $160\pm25$ 339.77 1.73 436.0 8.46
J1702–4128 $-160\pm20$ 344.74 0.12 367.1 5.18
J1705–3950 $-106\pm14$ 346.34 0.72 207.1 3.86
J1737–3137 $448\pm17$ 356.74 0.15 488.2 5.88
J1738–2955 $-200\pm20$ 358.38 0.72 223.4 3.91
J1801–2154 $160\pm40$ 7.83 0.55 387.9 5.15
J1809–1917 $41\pm17$ 11.09 0.08 197.1 3.71
J1815–1738 $175\pm20$ 13.18 -0.27 728 9.06
J1828–1101 $45\pm20$ 20.49 0.04 607.4 7.26
J1837–0604 $450\pm25$ 25.96 0.26 462 6.19
J1841–0345 $447\pm15$ 28.42 0.44 194.32 4.15
J1845–0743 $440\pm12$ 25.43 -2.30 281.0 5.85
J1853–0004 $647\pm16$ 33.09 -0.47 438.2 6.58
------------ ------------- -------- ------- -------- -------
: \[newRMs\]The pulsars for which new values of the RM were measured. From left to right the columns are the pulsar name, the measured rotation measure, the galactic longitude and latitude, the dispersion measure and the best available distance estimate.
As mentioned in section 2, the interstellar magnetic field parallel to the line of sight causes the polarization vector to rotate in the Stokes Q and U plane. In order to derive the degree of linear polarization it is therefore necessary to correct for this rotation before summing the frequency channels across the frequency band. The amount of rotation of the PA depends on the rotation measure (RM) and values for the RM were obtained from the pulsar catalogue. However, not all the pulsars have a published value for their RM (or one with sufficient accuracy). We therefore measured the RM for a number of objects in our sample.
The RM can be measured by fitting the change of the PA ($\psi$) across the frequency band with the Faraday rotation formula $$\label{EqRM}
\psi\left(\lambda\right)=\psi_\infty+RM\,\lambda^2,$$ where $\lambda$ is the observing wavelength of the considered frequency channel and $\psi_\infty$ is the PA at infinite frequency. When different pulse longitude bins of the pulse profile show a similar frequency dependence of $\psi$ one can be confident in the measured RM. The RM is obtained by calculating the weighted average of the fits of equation (\[EqRM\]) for the different bins in which there is enough linear polarization present. The new RM values are listed in Table \[newRMs\]. Only PSR J1809–1917 has a previously published RM; we include it in the table because our value of 41 rad m$^{-2}$ differs significantly from the 130 rad m$^{-2}$ quoted by Han et al. (2006).
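A minimal version of such a per-bin fit is sketched below. It assumes the PAs have already been unwrapped (they are only defined modulo [$180^\circ$]{}); the per-bin results are then combined in a weighted average as described above.

```python
import numpy as np

def fit_rm_single_bin(lam_m, psi_rad, psi_err_rad):
    """Weighted least-squares fit of psi = psi_inf + RM * lambda^2 for a
    single pulse-longitude bin. All arguments are arrays over frequency
    channels. Returns (RM, sigma_RM) in rad m^-2."""
    x, y, w = lam_m**2, psi_rad, 1.0 / psi_err_rad**2
    S, Sx, Sy = w.sum(), (w * x).sum(), (w * y).sum()
    Sxx, Sxy = (w * x * x).sum(), (w * x * y).sum()
    delta = S * Sxx - Sx**2
    rm = (S * Sxy - Sx * Sy) / delta
    return rm, np.sqrt(S / delta)
```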
Total intensity pulse profiles
==============================
In this and the following section we investigate if, and how, the beams of high $\dot{E}$ pulsars differ from those of low $\dot{E}$ pulsars. As we are going to investigate basic pulse profile properties in a statistical way, it is important to consider the effects of a low $S/N$ and interstellar scattering. Because the high $\dot{E}$ pulsars tend to be younger, they have on average a lower galactic latitude than older pulsars and hence tend to be more affected by interstellar scattering. Therefore low $S/N$ observations and profiles which are clearly affected by interstellar scattering were excluded from the statistics.
Pulse profile morphology
------------------------
It has been pointed out by several authors (e.g. @hmt71 [@ran83; @jw06; @kj07]) that the profiles of high $\dot{E}$ pulsars are relatively simple. A problem with measuring “profile complexity” is that it is not a well defined quantity, hence its assessment is highly subjective. In order to make the results objective and reproducible one should quantify the amount of complexity in a mathematical way. Because it is difficult to come up with a single definition which covers all facets of profile complexity, we will explore several ways to quantify its different aspects. Only the total intensity (Stokes parameter I) profiles are considered in this section, while the polarization properties are investigated in the next section.
### Profile classification
$\dot{E}$ (erg s$^{-1}$)   Single   Double   Multiple   Total
----------- -------- ----------- ---------- ---------- ---------- -----
$10^{35}$ – $10^{38}$ 27 (53%) 17 (33%) 7 (14%) 51
$10^{33}$ – $10^{35}$ 53 (47%) 43 (38%) 16 (14%) 112
$10^{28}$ – $10^{33}$ 52 (46%) 46 (40%) 16 (14%) 114
: \[classification\]The classification of the profiles for different $\dot{E}$ bins. Pulsars with a $S/N < 30$ were excluded as well as the profiles marked to show substantial scattering.
Pulse profiles are often described in terms of “components”, which are attributed to structure in the pulsar beam. There are different models in the literature describing the structure of the radio beam of pulsars. The beam could be composed of a core and one or more cones (@ran83), randomly distributed patches (@lm88) or patchy cones (@kj07). In these models each component of the pulse profile originates from a different physical location in the magnetosphere. Because the components overlap in many cases and because their shapes are not uniquely defined, it is difficult to objectively classify profiles. Following [@kj07], we have classified the profiles by eye into three classes depending on the number of distinct (possibly overlapping) peaks in the pulse profile. These classes are named “single”, “double” and “multiple”, depending on whether one, two or more peaks were identified. Although this classification is subjective, it should be considered as a rough measure of the complexity.
Table \[classification\] shows the percentage of pulsars in each class for three different $\dot{E}$ bins. For the pulsars which are significantly scattered at 20 cm we used 10 cm data (when available) and we omitted profiles with $S/N<30$ to improve significance. Compared with [@kj07] we find relatively more singles and fewer multiples, and the difference between high and low $\dot{E}$ pulsars is also less pronounced. This might partially reflect the subjectivity of profile classification, but it may also be related to the fact that the classification of [@kj07] was based on polarization properties. For instance, rapid changes in the PA-swing are often found in between components and can therefore be interpreted as an indication for the presence of multiple components. The polarization properties are discussed in a separate section in this paper.
### \[SctMathDecomp\]Mathematical decomposition of the profiles
Because profile components can overlap and can have various shapes, it is in many cases not clear how many separate emission components there are. Also, because the classification is done by eye, it is highly subjective at which level of detail the profile is separated into components. A more objective way to decompose the profile into components is to describe the profiles as linear combinations of basis functions. The number of functions required to fit the profile is then a measure of the complexity of the pulse profile.
![\[Fig\_vonmises\]The decomposition of the pulse profile of PSR J1803–2137 at 20 cm into five von Mises functions (the dotted curves). The sum of these functions (the thick solid line) is a good representation of the observed pulse profile (thin solid line).](vonMises.ps){height="0.99\hsize"}
Gaussian functions are often used to decompose profiles (e.g. @kwj+94), but we have chosen to use von Mises functions (@von18), which are defined as $$\label{Eqvonmises}
I(x) = I_\mathrm{peak}e^{\kappa\cos\left(x-\mu\right) - \kappa}.$$ Here $\mu$ is the location of the peak (in radians), $I_\mathrm{peak}$ is the peak intensity and $\kappa$ is the concentration (which determines the width of the peak). The shape of these functions is very similar to Gaussians (see Fig. \[Fig\_vonmises\]), but they can often fit the edges of components slightly better. The main difference is that von Mises functions are circular, hence they are also known as circular normal distributions. A fitting routine for von Mises functions is part of the [*PSRCHIVE*]{} software package.
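The sketch below shows the shape of equation (\[Eqvonmises\]) and how a double profile could be decomposed with a generic least-squares routine; this is only an illustration and not the [*PSRCHIVE*]{} implementation, and the synthetic profile parameters are arbitrary.

```python
import numpy as np
from scipy.optimize import curve_fit

def von_mises(phi, peak, mu, kappa):
    """I(phi) = peak * exp(kappa * (cos(phi - mu) - 1)), phi and mu in rad."""
    return peak * np.exp(kappa * (np.cos(phi - mu) - 1.0))

def double(phi, p1, m1, k1, p2, m2, k2):
    return von_mises(phi, p1, m1, k1) + von_mises(phi, p2, m2, k2)

# Decompose a synthetic noisy double profile into two components
phi = np.linspace(0.0, 2.0 * np.pi, 1024, endpoint=False)
rng = np.random.default_rng(1)
data = double(phi, 1.0, 2.0, 80.0, 0.6, 2.4, 120.0) + rng.normal(0, 0.02, phi.size)
popt, pcov = curve_fit(double, phi, data, p0=(1.0, 2.1, 60.0, 0.5, 2.5, 100.0))
```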
There is a subtle difference between the required number of fit functions and the number of components in the pulse profile. The first is just a mathematical measure of complexity, while the latter is the number of distinct physical emission locations in the pulsar magnetosphere which are visible along the line of sight. These numbers can be different, because there is no a priori reason to believe that the shape of a profile component can be described by a single, simple, mathematical basis function which is the same for all pulsars. For instance, a profile which shows a tail because of interstellar scattering can have one component (“single profile”), but it can only be fitted by a number of von Mises functions. Another example can be seen in the decomposition shown in Fig. \[Fig\_vonmises\]. Although the component between pulse longitude [$70^\circ$]{} and [$120^\circ$]{} is fit by two von Mises functions, the smooth shape does suggest that it is a single asymmetric emission component. By using more complex asymmetric mathematical functions it might be possible to decompose some profiles into a smaller number of fit functions. However, in effect this is the same as fitting a larger number of overlapping, simpler symmetric functions which have fewer fit parameters per function.
There is not always one unique solution for the decomposition of a profile and therefore the decomposition does not necessarily give additional insight into how profiles are composed of distinct physical components. Nevertheless, a noise-free mathematical description of a pulse profile can be used as a measure of its complexity. Moreover, it is a very useful technique which makes it easier to measure profile properties such as pulse widths. An additional advantage of a mathematical description of the profile is that one can more accurately determine the component widths for pulsars which have overlapping components.
![\[Fig\_ppdot\_complexity\]The histogram of the average number of von Mises functions required to fit the profiles when the $S/N$ is scaled down to 30 for different $\dot{E}$ bins (solid line). The dashed histogram shows the number of pulsars contributing in each bin. Only profiles with a $S/N\geq100$ are included. ](avnrcomponents.ps){height="0.95\hsize"}
When using the number of mathematical fit functions as a measure of complexity it is important to take into account the $S/N$ ratio of the profiles. A higher $S/N$ profile will require a larger number of mathematical basis functions to fit its shape, even though the profile is not necessarily more complex. In order to avoid this effect, we determined how many of the fit functions would have a significant contribution to the total integrated intensity of the pulse profile if the $S/N$ had been 30. We only considered profiles with a $S/N\ge100$ to ensure that all weak components which would just be significant at a $S/N$ of 30 are spotted by eye.
Fig. \[Fig\_ppdot\_complexity\] shows the average number of von Mises functions required to fit the profiles when the $S/N$ is scaled down to 30 for different $\dot{E}$ bins. There is not much evidence that the profile complexity is very different for high and low $\dot{E}$ pulsars. The absence of a significant correlation between $\dot{E}$ and the number of fit functions is confirmed by calculating the Spearman rank-order correlation coefficient (@ptvf92) of the unbinned data, which is a non-parametric measure of correlation. Among the most complex profiles, according to this classification scheme, are those of PSRs J1034–3224 and J1745–3040, which indeed have complex looking profile shapes. Another pulsar which is ranked at the same level of complexity is PSR J1302–6350, which to the eye has a relatively simple double-peaked profile, but its highly asymmetric components require relatively many mathematical functions to fit. We will therefore try a different method to define profile complexity below.
### The dimensionless double separation
![\[Fig\_double\_sep\]The dimensionless double separation (the ratio of the separation between the components and the average of their full width half maxima) versus the spin-down energy loss rate for all the observed pulsars at 20 cm which are classified to be doubles with a $S/N\geq30$. ](double_separation.ps){height="0.95\hsize"}
As described in section \[SctMathDecomp\], the profiles of the high $\dot{E}$ pulsars J1015–5719 and J1302–6350 were ranked as highly complex, while they appear to be “simple” to the eye. One property of these profiles which makes them look simple is that they are doubles with well separated components. We therefore tested the hypothesis that the doubles of high $\dot{E}$ pulsars have more clearly separated components than those of the low $\dot{E}$ pulsars. How clearly the components of doubles are separated can be quantified by calculating a quality factor, which we define to be $$Q_\mathrm{sep} = \frac{\Delta\phi_\mathrm{sep}}{\frac{1}{2}\left(\mathrm{FWHM}_1+\mathrm{FWHM}_2\right)}.$$ This dimensionless double separation is the ratio of the separation between the components $\Delta\phi_\mathrm{sep}$ and the average of the full width half maxima of the components, $\mathrm{FWHM}_1$ and $\mathrm{FWHM}_2$. Higher values of $Q_\mathrm{sep}$ imply that the components are more widely separated relative to their widths.
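When the components come from the von Mises decomposition of section \[SctMathDecomp\], both the separation and the FWHM follow directly from the fitted parameters, as in the short sketch below (the numbers are arbitrary).

```python
import numpy as np

def vonmises_fwhm(kappa):
    """FWHM (rad) of a von Mises component: exp(kappa*(cos(x)-1)) = 1/2
    gives cos(x) = 1 - ln(2)/kappa (assumes kappa > ln(2)/2)."""
    return 2.0 * np.arccos(1.0 - np.log(2.0) / kappa)

def q_sep(mu1, mu2, fwhm1, fwhm2):
    """Dimensionless double separation (all arguments in the same units)."""
    return abs(mu2 - mu1) / (0.5 * (fwhm1 + fwhm2))

# e.g. two components 0.4 rad apart with concentrations 80 and 120
print(q_sep(2.0, 2.4, vonmises_fwhm(80.0), vonmises_fwhm(120.0)))
```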
Fig. \[Fig\_double\_sep\] shows $Q_\mathrm{sep}$ versus $\dot{E}$ for all profiles at 20 cm which were classified to be doubles and have a $S/N\geq30$. There is no evidence that the components of doubles of low $\dot{E}$ pulsars are more likely to be overlapping, which is confirmed by calculating the Spearman rank-order correlation coefficient. According to this measure the most clearly separated doubles are PSRs J1302–6350, J1733–3716, J1901–0906 and J2346–0609.
### Profile symmetry
A factor which was not taken into account in the previous sections is the amount of symmetry in the profile. For instance, PSR J1302–6350 has highly asymmetric profile components, but the profile as a whole appears symmetric and could therefore be regarded as “simple”. It is therefore interesting to consider the degree of symmetry of the profiles, which can be measured by cross-correlating the profile with its mirror image. We define the degree of profile symmetry to be the ratio of the maximum value of the cross-correlation function between the profile and the time-reversed profile, and the maximum value of the auto-correlation function of the profile. The degree of symmetry is therefore normalized to 1 for completely symmetric profiles and it decreases for more asymmetric profiles.
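One possible implementation of this symmetry measure, using circular correlations computed with FFTs, is sketched below; whether the off-pulse baseline is removed first is our own choice and is not specified above.

```python
import numpy as np

def profile_symmetry(profile):
    """Peak of the circular cross-correlation of the profile with its
    mirror image, normalised by the peak of the auto-correlation.
    Equals 1 for a perfectly symmetric profile and decreases otherwise."""
    p = np.asarray(profile, dtype=float)
    p = p - p.mean()                      # assumed baseline removal
    F = np.fft.rfft(p)
    auto = np.fft.irfft(F * F.conj(), n=p.size).max()
    cross = np.fft.irfft(np.fft.rfft(p[::-1]).conj() * F, n=p.size).max()
    return cross / auto
```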
![\[Fig\_symmetry\]The degree of profile symmetry versus $\dot{E}$. Pulsars with a $S/N < 30$ and profiles with substantial scattering were excluded and only 20 cm data was considered. There is no evidence that high $\dot{E}$ pulsars are more symmetric.](symmetry.ps){height="0.95\hsize"}
The degree of symmetry versus $\dot{E}$ is shown in Fig. \[Fig\_symmetry\]. The pulsar with the lowest measured degree of symmetry is PSR B1747–31, which has a relatively narrow and bright leading component and a much broader and weaker trailing component. Also the complex main pulse of PSR B1055–52 can be found at the lower end of this figure. There is no indication for any correlation, which is confirmed by the Spearman rank-order correlation coefficient. As for the other measures of complexity, it is hard to show quantitatively that the pulse profiles of high $\dot{E}$ pulsars are simpler than those of the low $\dot{E}$ pulsars.
\[Sectbeamwidths\]Pulse widths versus $P$
-----------------------------------------
A basic property of the emission beam of a pulsar is its half opening angle $\rho$. It is found that the opening angle is proportional to $P^{-1/2}$ (e.g. @big90b [@kxl+98; @gks93; @ran93]), which is expected if the edge of the active area of the polar cap is set by the last open field lines. In order to derive the opening angle from the measured profile width one needs to know how the emission beam intersects the line of sight. Because the orientation of the line of sight with respect to the pulsar beam is for most pulsars at best only poorly constrained, it is difficult to obtain accurate opening angles. For a large sample of pulsars the unknown geometrical factors should average out and therefore the profile width and $\rho$ should have the same $P$ dependence. The unknown geometry will cause additional scatter about the correlation between the pulse width and $P$.
![\[Fig\_pulsewidths\_p\]The measured profile 10% widths versus $P$. The solid line is the power law fit through the data, which has a slope of [$-0.30$]{}. The dashed line indicates the fit of a second order polynomial through the data points (which is statistically not better than the power law fit). The filled points are pulsars with an $\dot{E} \ge 10^{34}$ erg s$^{-1}$ and the open points have a lower $\dot{E}$. Pulsars with a $S/N < 30$ and profiles with substantial scattering were excluded. All the shown observations were done at a wavelength of 20 cm.](width_p.ps){height="0.95\hsize"}
The measured pulse widths at 10% of the peak intensity ($W_{10}$) indeed show a slight anti-correlation with $P$ (Fig. \[Fig\_pulsewidths\_p\]), while there is no indication for a dependence on $\dot{P}$ (not shown). The slope is measured by reduced $\chi^2$ fitting (the data points are weighted equally), which results in a slope of [$-0.30\pm0.05$]{}[^4], comparable with the fit obtained from the data of [@gl98] by [@wj08a]. The slope of the correlation is therefore slightly shallower than what is expected from theory. This conclusion, in combination with the period distribution of pulsars with interpulses, provides convincing evidence in favour of the evolution of the pulsar beam towards alignment with the rotation axis (@wj08a).
If there is any deviation from a power law relationship between $W_{10}$ and $P$, it would be that the slope of the correlation is steeper for faster rotating pulsars. Although the fit of a second order polynomial through the data points indeed shows this trend, it is statistically not much better than the first order fit. High $\dot{E}$ pulsars are in general spinning faster than low $\dot{E}$ pulsars, and therefore one could conclude that the pulse widths of high $\dot{E}$ pulsars have a stronger dependence on $P$ than those of the low $\dot{E}$ pulsars. To illustrate this, the pulsars with high and low values of $\dot{E}$ are marked differently in Fig. \[Fig\_pulsewidths\_p\]. One could argue that there is not much evidence for a correlation for the low $\dot{E}$ pulsars, while the correlation is clearer for the high $\dot{E}$ pulsars. But, as the second order polynomial fit was statistically not much better than the power law fit, this conclusion is also not significant. If this correlation exists, it would suggest that the profile widths of the high $\dot{E}$ pulsars follow the theoretical prediction more closely than those of the low $\dot{E}$ pulsars, which could indicate that the emission geometry of high $\dot{E}$ pulsars is simpler.
Pulse widths versus $\dot{E}$
-----------------------------
![\[Fig\_pulsewidths\_edot\]The measured profile 10% widths versus $\dot{E}$. The triangles indicate the pulsars which are classified to have interpulses. Pulsars with a $S/N < 30$ and profiles with substantial scattering were excluded. All the shown observations were done at a wavelength of 20 cm. Three clusterings of pulsars are highlighted by ellipses, which are discussed in the text.](profile_widths_edot.ps){height="0.95\hsize"}
In the previous subsection we found that $W_{10}$ is correlated with $P$. One can expect the correlation with $\dot{E}$ to be weaker, because $W_{10}$ was found to be uncorrelated with $\dot{P}$. Indeed, Fig. \[Fig\_pulsewidths\_edot\] shows that for most pulsars $W_{10}$ is essentially uncorrelated with $\dot{E}$. Remarkably, however, unlike in Fig. \[Fig\_pulsewidths\_p\] there are a number of outliers which are clustered in relatively well defined regions of $\dot{E}$-space. These outliers are indicated by the ellipses and each group will be discussed separately below.
The first group of outliers are the pulsars in the ellipse at the left hand side of Fig. \[Fig\_pulsewidths\_edot\]. Although the profiles are clearly wider than most pulse profiles, they form a continuous distribution with the narrower profiles. These low $\dot{E}$ pulsars are PSRs J1034–3224, J1655–3048 and J2006–0807 (which have complex looking profiles) and PSRs J1133–6250 and J1137–6700 (which are doubles with a clear saddle between the components). The profiles of these pulsars are most likely broad because their beam is close to alignment with the rotation axis, making the beam intersect the line of sight for a relatively large fraction of the rotation period. There is evidence that the beam evolves towards alignment with the rotation axis over time (e.g. @wj08a), so it is not surprising that these aligned pulsars are old pulsars with low $\dot{E}$ values. For two of these pulsars estimates for the angle between the magnetic axis and the rotation axis can be found in the literature. These polarization studies indeed suggest that the beams of PSRs J1034–3224 [@mhq98] and J2006–0807 [@ran93b; @lm88] are close to alignment.
The second group of pulsars with wide profiles are the pulsars with interpulses, which are marked with triangles in Fig. \[Fig\_pulsewidths\_edot\]. These are PSRs J0834–4159, J0905–5127, B0906–49, B1124–60, J1549–4848, B1607–52, B1634–45, B1702–19, B1719–37, B1736–29, J1828–1101 and J1843–0702. The profiles of these pulsars are characterised by having an interpulse which is separated by approximately [$180^\circ$]{} in pulse longitude from the main pulse. This separation is much larger than the widths of the main- and interpulse. The most natural explanation for these interpulses is that the emission of the main- and interpulse originates from opposite magnetic poles. These pulsars are concentrated towards high values of $\dot{E}$. This is partially a selection effect in the sample of pulsars which are included in the Fermi timing program, but it has also been shown by [@wj08a] that interpulses are more likely to be detected in young (high $\dot{E}$) pulsars.
The third group of pulsars with wide profiles can also be found at the high $\dot{E}$ end of Fig. \[Fig\_pulsewidths\_edot\]. These are PSRs J1015–5719, B1259–63, J1803–2137, J1809–1917 and J1826–1334. Like the group of pulsars with wide profiles at the low $\dot{E}$ end of the figure, this group appears to form a continuum with the pulsars with narrow profiles. We will refer to this group as the [*energetic wide beam pulsars*]{}. Their profiles show a double structure and are exceptionally wide, but the two components are not separated by exactly [$180^\circ$]{} in pulse longitude. In contrast to the group of interpulses, this separation is not much larger than the width of the individual components. The two components are often highly asymmetric with steep edges at opposite sides, making the profile as a whole have a high degree of mirror symmetry. For some of these pulsars a weak bump is detected in between the components, which disappears at higher frequencies. The dependence of the PA on pulse longitude is usually simple and straight.
It is not clear if PSR B1055–52 should be classified as an energetic wide beam pulsar or a pulsar with an interpulse. On the one hand the separation between the main- and interpulse is larger than the width of the individual components, but on the other hand the components are very wide and the interpulse is not exactly [$180^\circ$]{} away from the main pulse. The location of PSR B1055–52 in Fig. \[Fig\_pulsewidths\_edot\] (the lowest triangle at $\dot{E}=3.0\times10^{34}$ erg s$^{-1}$) suggests that it is well separated from the other energetic wide beam pulsars, although the group of interpulse pulsars appears to overlap with the group of energetic wide beam pulsars. In particular, the locations of PSRs B0906–49 ($\dot{E}=4.9\times10^{35}$ erg s$^{-1}$) and J1828–1101 ($\dot{E}=1.6\times10^{36}$ erg s$^{-1}$) in the figure are consistent with both groups. Both these pulsars have interpulses at $\sim$[$180^\circ$]{} away from the main pulse and this separation is much larger than the component widths (the broad components of J1828–1101 at 20 cm are due to scatter broadening), which is good evidence that both interpulses are emitted from the opposite pole. For PSR B0906–49 the PA-swing is shown to be inconsistent with a wide cone interpretation (@kj08).
The energetic wide beam pulsars are among the pulsars with the highest $\dot{E}$ values ($\dot{E}>5\times10^{35}$ erg s$^{-1}$), although it is not true that all pulsars with high $\dot{E}$ values are also energetic wide beam pulsars. This is first of all shown by the overlap between the group of pulsars with interpulses and the energetic wide beam pulsar group. Secondly, PSR J1513–5908, which has the highest $\dot{E}$ value in our sample, does not show any evidence of a double structure. Finally, PSR J1028–5819 is an extremely narrow double (@kjk+08; the point in the bottom left corner of Fig. \[Fig\_pulsewidths\_p\]). A high $\dot{E}$ therefore appears to be an important parameter which allows an energetic pulsar to form a wide beam, but there must be more factors involved.
The intensity ratio of the components of high $\dot{E}$ pulsars with double profiles
------------------------------------------------------------------------------------
[@jw06] noted that the trailing component of well separated double profiles of high $\dot{E}$ pulsars tends to dominate in total power (and in circular polarization, as we will discuss below). This curious effect seems to be strongest for pulsars with $\dot{E}>10^{35}$ erg s$^{-1}$ (see for example PSR J1420–6048, Fig. \[J1420-6048\]). The only exceptions are the Vela pulsar (which has no well separated double profile), PSR J1302–6350 (an energetic wide beam pulsar for which it is not clear which component is the trailing component) and PSR J1831–0952. Nevertheless, in the majority of the cases this correlation holds.
Polarization
============
Linear polarization
-------------------
![\[Fig\_linearpol\]The degree of linear polarization versus $\dot{E}$ of all pulsars observed at 20 cm for which a significant degree of linear polarization was measured. Pulsars which show evidence for scatter broadening were excluded. There are two relatively well defined regions which are almost empty in this diagram. The dashed line shows the linear fit and the solid curve the fit of an arctan function illustrating the step in the degree of linear polarization.](linearpol.ps){height="0.95\hsize"}
![\[Fig\_ppdot\][*Top:*]{} The $P-\dot{P}$ diagram of all the observed pulsars at 20 cm for which a significant degree of linear polarization could be measured. The filled and open circles are the pulsars which are respectively more or less than 50% linearly polarized. [*Bottom:*]{} The $P-\dot{P}$ diagram of the pulsars which show sudden jumps in their PA-swing (open circles) and those which have a smooth PA-swing (filled circles). ](ppdot.ps "fig:"){width="0.95\hsize"}\
![\[Fig\_ppdot\][*Top:*]{} The $P-\dot{P}$ diagram of all the observed pulsars at 20 cm for which a significant degree of linear polarization could be measured. The filled and open circles are the pulsars which are respectively more or less than 50% linearly polarized. [*Bottom:*]{} The $P-\dot{P}$ diagram of the pulsars which show sudden jumps in their PA-swing (open circles) and those which have a smooth PA-swing (filled circles). ](opm_ppdot.ps "fig:"){width="0.95\hsize"}
It has been pointed out by several authors that the degree of linear polarization is high for high $\dot{E}$ pulsars (e.g. @qmlg95 [@hlk98; @cmk01; @jw06]). This correlation is clearly confirmed, as can be seen in Fig. \[Fig\_linearpol\]. There is a transition from a low to a high degree of linear polarization which happens around $\dot{E}\sim10^{34}-10^{35}$ erg s$^{-1}$. Virtually all pulsars with $\dot{E} < 5\times10^{33}$ erg s$^{-1}$ have less than 50% linear polarization and for almost all pulsars with $\dot{E} > 2\times10^{35}$ erg s$^{-1}$ this percentage is above 50%. There appears to be a transition region in between where pulsars can have both low and high degrees of polarization, although the transition is remarkably sharp and there are well defined regions in the figure which are almost empty. The non-linearity of the degree of linear polarization versus $\dot{E}$ is confirmed by fitting an arctan function through the data (solid curve in Fig. \[Fig\_linearpol\]). This is done by minimizing the $\chi^2$ using the Levenberg-Marquardt algorithm [@mar63] as implemented in [@ptvf92] (the data points are weighted equally). The total $\chi^2$ is reduced by 20% compared with a linear fit, which shows that the step in the degree of linear polarization is important to consider. Adding higher order polynomial terms does not reduce the $\chi^2$ further, suggesting that the step is the dominant deviation from linearity. The position of the steepest point in the fitted function occurs at $\log_{10}{\dot{E}} = 34.50\pm0.08$.
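The step fit described above can be reproduced schematically with a generic Levenberg-Marquardt routine; the sketch below uses synthetic data purely to illustrate the fitting call, and none of the parameter values are our measurements.

```python
import numpy as np
from scipy.optimize import curve_fit   # defaults to Levenberg-Marquardt

def arctan_step(log_edot, base, amp, log_edot0, width):
    """Smooth step in the fractional linear polarization vs log10(Edot)."""
    return base + amp * np.arctan((log_edot - log_edot0) / width)

# Synthetic illustration only
rng = np.random.default_rng(0)
log_edot = rng.uniform(31.0, 37.0, 200)
frac = arctan_step(log_edot, 45.0, 25.0, 34.5, 0.4) + rng.normal(0, 8.0, 200)
popt, pcov = curve_fit(arctan_step, log_edot, frac, p0=(50.0, 20.0, 34.0, 1.0))
print("steepest point at log10(Edot) = %.2f" % popt[2])
```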
The emission of pulsars is thought to be a combination of two orthogonally polarized modes (OPM, e.g. @mth75). This aspect of the emission can manifest itself in sharp $\sim{\ensuremath{90^\circ}}$ jumps in the PA over a small pulse longitude range. These jumps are thought to be sudden transitions from the domination of one mode to the other. Jumps in the PA-swing therefore indicate that both modes are present in the emission. The mixing of both modes at a certain longitude will lead to depolarization, so the presence of jumps in the PA-swing could be anti-correlated with the degree of polarization. By comparing the top and bottom panels of Fig. \[Fig\_ppdot\] one can see that the $\dot{E}$ value at which the transition from a low to a high degree of polarization takes place coincides with the $\dot{E}$ value above which pulsars do not show jumps in their PA. This is therefore important evidence that the increase in the degree of linear polarization with $\dot{E}$ is caused by one OPM dominating the emission. Most high $\dot{E}$ pulsars do not show OPM jumps, but the reverse is not always true: low $\dot{E}$ pulsars can have a low degree of linear polarization without evidence for OPM jumps.
There are three curious exceptions in Fig. \[Fig\_linearpol\] which do not follow the general trend. First of all, PSRs J1509–5850 and J1833–0827 have a low degree of linear polarization while they have a high $\dot{E}$ ($5.2\times10^{35}$ and $5.8\times10^{35}$ erg s$^{-1}$ respectively). However, it must be noted that the leading and trailing components of PSR J1833–0827 are highly polarized at 10 cm. The degree of linear polarization of this pulsar shows a drop to zero in the middle of the central component, which could indicate that there is a transition in the dominating OPM. The other exception is PSR J0108–1431, which has a low $\dot{E}$ but is nevertheless highly polarized. This could suggest that this pulsar has some similarities with high energy pulsars.
All pulsars for which no significant degree of linear polarization could be measured have $\dot{E} < 5\times10^{33}$ erg s$^{-1}$. The only exception is PSR J1055–6032, which appears to have a very low degree of polarization. The rule that high $\dot{E}$ pulsars are highly polarized is therefore confirmed for the majority of pulsars.
Emission heights
----------------
### The emission height derived from the pulse width
The wider pulse profiles of high $\dot{E}$ pulsars are often attributed to a larger emission height for those pulsars (e.g. @man96 [@kj07]). The divergence of the magnetic (dipole) field lines away from the magnetic axis makes the half opening angle $\rho$ of the beam scale with the square root of the emission height. Under the assumption that the beam of the pulsar is confined by the last open field lines it follows that $$\label{EqH}
\rho = \sqrt{\frac{9\pi \,\, h_{\rm em}}{2\,\,\, P\,\, c}}$$ (e.g. @lk05), where $h_{\rm em}$ is the emission height and $c$ the speed of light.
------------ --------------------- ----------------- ---------- --------- -----------------
             $\dot{E}$             $h_\mathrm{PA}$   $h_{90}$   $\Delta\phi$   $R_\mathrm{LC}$
             \[erg s$^{-1}$\]      \[km\]            \[km\]     \[deg\]        \[km\]
J1015–5719 $8.27\times10^{35}$ – 5160 – 6674
J1302–6350 $8.25\times10^{35}$ – 3476 – 2279
J1803–2137 $2.22\times10^{36}$ 461 2967 8.3 6375
J1809–1917 $1.78\times10^{36}$ 130 1425 3.8 3948
J1826–1334 $2.84\times10^{36}$ 191 2522 4.5 4841
J0304+1932 $1.91\times10^{31}$ 75 562 0.1 66206
J0536–7543 $1.15\times10^{31}$ -21 1472 -0.0 59444
J0614+2229 $6.24\times10^{34}$ 1051 99 7.5 15982
J0630–2834 $1.46\times10^{32}$ 118 2459 0.2 59376
J0631+1036 $1.73\times10^{35}$ 1087 278 9.1 13731
J0729–1448 $2.81\times10^{35}$ 1143 291 10.9 12008
J0742–2822 $1.43\times10^{35}$ 338 68 4.9 7957
J0835–4510 $6.91\times10^{36}$ 32 30 0.9 4263
J0908–4913 $4.92\times10^{35}$ 320 24 7.2 5094
J1048–5832 $2.01\times10^{36}$ -24 141 -0.5 5901
J1105–6107 $2.48\times10^{36}$ 243 52 9.2 3015
J1119–6127 $2.34\times10^{36}$ 2082 1406 12.3 19455
J1123–4844 $1.76\times10^{32}$ 540 152 5.3 11682
J1253–5820 $4.97\times10^{33}$ 690 116 6.5 12191
J1320–5359 $1.67\times10^{34}$ 673 158 5.8 13347
J1359–6038 $1.21\times10^{35}$ 544 41 10.2 6084
J1420–6048 $1.04\times10^{37}$ 106 442 3.7 3253
J1531–5610 $9.09\times10^{35}$ 81 136 2.3 4018
J1535–4114 $1.98\times10^{33}$ 474 235 2.6 20654
J1637–4553 $7.51\times10^{34}$ 392 62 7.9 5667
J1701–3726 $2.97\times10^{31}$ 184 984 0.2 117118
J1705–3950 $7.37\times10^{34}$ -120 690 -0.9 15218
J1709–4429 $3.41\times10^{36}$ 563 326 13.2 4889
J1733–3716 $1.54\times10^{34}$ 932 1969 6.6 16107
J1740–3015 $8.24\times10^{34}$ 818 31 3.2 28952
J1835–1106 $1.78\times10^{35}$ 445 93 6.4 7916
J1841–0345 $2.69\times10^{35}$ 353 415 4.2 9737
------------ --------------------- ----------------- ---------- --------- -----------------
: \[emissionheights90\]The emission height $h_\mathrm{PA}$ is derived from the offset $\Delta\phi$ between the inflection point of the PA-swing and the centre of the pulse profile. The emission height $h_{90}$ is derived from the pulse width assuming an orthogonal rotator ($\alpha={\ensuremath{90^\circ}}$) and a line of sight which makes a central cut through the emission beam ($\beta={\ensuremath{0^\circ}}$), which is the emission height for a typical random geometry. The last column is the light cylinder radius. The first five pulsars are the pulsars which we have classified as energetic wide beam pulsars.
Wider beams are more likely to produce wide profiles, although the observed pulse width also depends on the orientation of the magnetic axis and the line of sight with respect to the rotation axis. The relevant parameters are the angle $\alpha$ between the magnetic axis and the rotation axis and the angle $\zeta$ between the line of sight and the rotation axis. A related angle is the impact parameter $\beta = \zeta-\alpha$, which is the angle between the line of sight and the magnetic axis at its closest approach. For most pulsars it is extremely difficult to obtain reliable values for these angles, which makes it hard to derive the emission height from $W_{10}$.
For a sample of pulsars with a random orientation of the magnetic axis and the line of sight both the $\alpha$ and $\zeta$ distributions are sinusoidal. Simulations using the model described in [@wj08a] show that the pulse width distribution for such a sample of pulsars peaks at $2\rho$. Some pulsars will have wider profiles because the pulsar beam is more aligned with the rotation axis, while others will have narrower profiles because the line of sight grazes the beam. This implies that the typical pulse width of a large sample of pulsars which have random orientations of their spin and magnetic axes and have similar opening angles $\rho$ should be equal to $2\rho$. In other words, a typical profile width is equal to that expected for an orthogonal rotator ($\alpha={\ensuremath{90^\circ}}$) and a line of sight which makes a central cut through the emission beam ($\beta={\ensuremath{0^\circ}}$). For such a geometry Eq. \[EqH\] can be rewritten as $$\label{EqH90}
h_{90} = \frac{cP\left(W_{10}\right)^2}{18\pi},$$ which is the emission height for a typical random geometry assuming a magnetic dipole field and an active area of the polar cap which is set by the last open field lines.
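Expressed in convenient units ($P$ in seconds, $W_{10}$ in degrees) this relation gives heights of a few hundred km for typical widths; a short sketch with hypothetical numbers:

```python
import numpy as np

C_KM_S = 2.998e5  # speed of light in km/s

def h90(P, w10_deg):
    """Emission height h_90 = c P W10^2 / (18 pi) in km, for the
    'typical' geometry (alpha = 90 deg, beta = 0 deg)."""
    w10 = np.radians(w10_deg)
    return C_KM_S * P * w10**2 / (18.0 * np.pi)

# hypothetical example: P = 0.5 s and W10 = 20 deg give ~320 km
print(h90(0.5, 20.0))
```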
### The emission height derived from the PA-swing
An independent way to estimate the emission height is by measuring the shift of the PA-swing caused by the co-rotation of the emission region with the neutron star. In this method it is assumed that the PA-swing is described by the rotating vector model (RVM; @rc69a). The position angle $\psi$ is then predicted to depend on the pulse longitude $\phi$ as $$\tan\left(\psi-\psi_0\right)=\frac{\sin\alpha\;\sin\left(\phi-\phi_0\right)}{\sin\zeta\;\cos\alpha-\cos\zeta\;\sin\alpha\;\cos\left(\phi-\phi_0\right)},$$ where $\psi_0$ and $\phi_0$ are the PA and the pulse longitude corresponding to the intersection of the line of sight with the fiducial plane (the plane containing the rotation and magnetic axes). The PA-swing is an S-shaped curve and its inflection point occurs at $\phi_0$. The RVM fit is shown in the figures of Appendix A and B for the pulsars which have a roughly S-shaped PA-swing.
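For reference, a direct evaluation of this PA curve (useful when overlaying RVM fits on measured PAs) could look as follows; the returned angle should be interpreted modulo [$180^\circ$]{}, since the PA carries that ambiguity.

```python
import numpy as np

def rvm_pa(phi, alpha, zeta, phi0, psi0):
    """Position angle predicted by the rotating vector model; all angles
    in radians. The result is only meaningful modulo pi."""
    num = np.sin(alpha) * np.sin(phi - phi0)
    den = (np.sin(zeta) * np.cos(alpha)
           - np.cos(zeta) * np.sin(alpha) * np.cos(phi - phi0))
    return psi0 + np.arctan2(num, den)
```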
If the emission profile is symmetric around the magnetic axis, then one could expect the inflection point to coincide with the middle of the pulse profile. However, co-rotation causes the inflection point to be delayed with respect to the pulse profile. The pulse longitude difference $\Delta\phi$ between the middle of the profile and the inflection point of the PA-swing can be used to derive the emission height (@bcw91) $$\label{heighBCW92}
h_\mathrm{PA} = \frac{P\,c\,\Delta\phi}{8\pi }.$$ The relative shift of the PA-swing with respect to the profile is independent of $\alpha$ and $\zeta$ [@drh04]. If the emission height is too large it could be difficult to measure $\Delta\phi$ because the inflection point of the PA-swing is shifted beyond the edge of the pulse profile.
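In convenient units ($P$ in seconds, $\Delta\phi$ in degrees) equation (\[heighBCW92\]) gives heights of order 100 km for offsets of a few degrees; a short sketch with hypothetical numbers:

```python
import numpy as np

C_KM_S = 2.998e5  # speed of light in km/s

def h_pa(P, delta_phi_deg):
    """Emission height h_PA = c P dphi / (8 pi) in km, where dphi is the
    offset of the PA inflection point behind the profile centre."""
    return C_KM_S * P * np.radians(delta_phi_deg) / (8.0 * np.pi)

# hypothetical example: P = 0.1 s and dphi = 5 deg give ~100 km
print(h_pa(0.1, 5.0))
```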
### The derived emission heights
![\[Fig\_height\]The emission height as derived using two independent methods, one using the pulse widths ($h_{90}$) and the other using the shift of the PA-swing with respect to the profile ($h_\mathrm{PA}$). The solid circles indicate profiles with a clear double structure. The points should lie on the line if the emission heights are consistent with each other.](heightheight.ps){height="0.95\hsize"}
The emission heights derived using the PA-swing ($h_\mathrm{PA}$) and using the pulse width ($h_\mathrm{90}$) are both listed in Table \[emissionheights90\]. This only includes the pulsars which have a clear S-shaped PA-swing at 20 cm. The typical emission height is a few hundred km, which is similar to the emission heights found by other authors (e.g. @bcw91 [@mr01]). One can see that for some pulsars $h_\mathrm{PA}$ is negative, which is obviously impossible. This is a clear warning that the emission height for an individual source could be completely wrong, but one can nevertheless hope that the heights are meaningful in a statistical sense. In order to test this we calculated the Spearman rank-order correlation coefficient between $h_\mathrm{PA}$ and $h_\mathrm{90}$, which shows that there is no evidence for any correlation between these parameters. This is also evident from Fig. \[Fig\_height\], where these quantities are plotted against each other. We are therefore forced to accept that even in a statistical sense the calculated emission heights are inconsistent, supporting the same conclusion reached by [@ml04] based on six pulsars.
There are a number of reasons why the heights derived using the two methods could be inconsistent. If the beams are significantly patchy, then the centroid of the profile is not related to the position of the magnetic axis and both methods to derive emission heights will fail. We therefore made a distinction in Fig. \[Fig\_height\] between the profiles which are clear doubles and other profiles, because the double structure could indicate that the pulsar beam is roughly symmetric around the magnetic axis. As one can see there is no noticeable difference in the distributions. Another effect that could be important is the effect of sweepback of the magnetic field lines. [@dh04] derived that the effect of sweepback can dominate over other effects of co-rotation at low altitudes, making it possible for the inflection point of the PA-curve to precede the profile centre. The PA-curve can also precede the profile in case of inward directed emission (@dfs+05).
Despite the inconsistency between the derived emission heights using both methods, it is not true that the emission heights are entirely random. Most pulsars show a positive emission height $h_\mathrm{PA}$, indicating that the steepest slope of the PA-swing trails the centroid of the profile in most cases. In fact, Fig. \[Fig\_height\] appears to show evidence that it is unlikely that both $h_{90}$ and $h_\mathrm{PA}$ are large. In Table \[emissionheights90\] one can see that the emission of the energetic wide beam pulsars should come from near the light cylinder in order to explain the width of the pulse profiles ($h_\mathrm{90}\sim R_\mathrm{LC}$). However, the derived emission heights from the PA-swing fits are not unusually large. To this list one could add the emission height of PSR J1015–5719, which is estimated by [@jw06] to be 380 km.
All the energetic wide beam pulsars with a derived emission height from the PA-swing can be found below the solid line in Fig. \[Fig\_height\], as well as PSRs J1705–3950 and J1733–3716 which have similar profile shapes. It seems unlikely that they all have beams which are close to alignment with the rotation axis, which suggests a different reason for the large widths of the profiles of the energetic wide beam pulsars. Apparently the emission heights which are derived from the PA-swings of the energetic wide beam pulsars are systematically underestimated, or the heights derived from the profile widths are overestimated. The first case could be explained by magnetic field line sweepback when the emission height is low (@dh04). The second case implies that the beams of these pulsars are wider than could be expected from the divergence of the dipole field lines. The widening of the pulsar beam could, at least in principle, be caused by propagation effects in the magnetosphere.
Another explanation for the deviation of the energetic wide beam pulsars from the line in Fig. \[Fig\_height\] could be that the two methods estimate the emission heights at different locations in the magnetosphere. The method based on the profile width estimates the emission height at the edge of the beam, while the method using the PA-swing estimates the emission height of the more central regions of the beam. For the energetic wide beam pulsars $h_{90}$ was found to be systematically larger than $h_\mathrm{PA}$, which can be interpreted as evidence for an increase in the emission height at the edge of the beam. This interpretation will be discussed in more detail in the following section.
Fig. \[Fig\_height\] shows that besides the group of pulsars which have relatively large $h_{90}$ compared to $h_\mathrm{PA}$, there is also a group where the opposite is seen. An explanation could be that for those pulsars only a fraction of the polar cap is active (e.g. @kg97). Support for this interpretation is that the profiles of a number of pulsars in this group are argued to be produced by partial cones, including PSR J0543+2329 (@wck+04), J0614+2229 (@jkk+07) and J0659+1414 (@ew01).
Circular polarization
---------------------
![\[Fig\_circularpol\]The degree of circular polarization (the absolute value of Stokes V) versus $\dot{E}$ of all pulsars observed at 20 cm for which a significant degree of circular polarization was measured. Pulsars which show evidence for scatter broadening were excluded. ](circularpol.ps){height="0.95\hsize"}
Unlike the degree of linear polarization, the degree of circular polarization appears to be unaffected by $\dot{E}$. Also the fraction of pulsars which are left- and right-hand circularly polarized is about 50 per-cent for both the high and low $\dot{E}$ pulsars.
[@jw06] noted that, besides the total intensity, also the degree of circular polarization usually dominates in the trailing components of high $\dot{E}$ pulsars with well separated double profiles. This correlation is also clearly confirmed in our data for all pulsars with an $\dot{E}>10^{34}$ erg s$^{-1}$. The only clear counter example in our data-set could be PSR J1705–1906 ($\dot{E}=7\times10^{34}$ erg s$^{-1}$), which has a high degree of circular polarization in the leading half of the profile. However, it must be noted that the single pulse modulation properties of this pulsar suggest that the leading component is not the leading component of a double, but rather a precursor to a blended double which forms the trailing half of the profile (@wws07).
Remarkable correlations have been reported between the sign of the circular polarization and the sign of the slope of the PA-swing. According to [@rr90] the sign of the slope of the PA-swing is correlated with the sign of the circular polarization for pulsars which are cone dominated and for which the sign of the circular polarization is the opposite in the two components. But [@hmxq98] did not confirm this correlation. Instead they propose that there is a correlation between the sign of the circular polarization and the sign of the slope of the PA-swing for cone dominated pulsars which have the same sign of circular polarization. Our data does not show much evidence for either correlation. Compare for instance the plots for PSR J1826–1334 (positive circular polarization, decreasing PA-swing) with PSR J1733–3716 (negative circular polarization, decreasing PA-swing).
Discussion
==========
Extended radio emission regions?
--------------------------------
We concluded that the differences in the pulse profile morphology of the high and low $\dot{E}$ pulsars are in general rather subtle, without an objectively measurable discriminator between them. An exception is what we call the group of energetic wide beam pulsars, which do have distinct profile properties and which will be discussed separately below. The measured slope of the $W_{10}-P$ correlation appears to be flatter than the theoretical $P^{-1/2}$ slope. It is far from straightforward to link the deviation of the slope to a physical mechanism. For example, as explained in [@wj08a], the measured slope depends on the details of the evolution of the pulsar spin-down and the alignment of the magnetic axis with the rotation axis. If the active area of the polar cap is influenced by factors other than just the opening angle of the last open field lines, or if the emission height varies from pulsar to pulsar, one can expect to observe its effects in the $W_{10}-P$ correlation. There is some marginal evidence that the slope of the correlation is steeper for high $\dot{E}$ pulsars, suggesting that for high $\dot{E}$ pulsars these other factors are less important.
If one believes that the emission geometry is simpler for high $\dot{E}$ pulsars, one can ask the question of what is causing this. One factor that could affect the complexity of profiles is the emission height. Complexity could arise because of multiple distinct emission heights within the beam (@kj07). In their model high $\dot{E}$ pulsars have only one emission height which is similar for different pulsars. However, one could also make the argument for an opposite effect. Maybe the emission of high $\dot{E}$ pulsars does not come from a well-defined height, but rather from an extended height range. This would mean that the observed profiles of high $\dot{E}$ pulsars are a superposition of profiles emitted from a continuum of heights. The observed sum of those profiles (shifted with respect to each other by aberration and retardation) will have less complexity because they are blurred out. Not only are the profiles expected to be less complex in this scenario, but there is also not much room to vary the emission height from pulsar to pulsar if the height range is large. This would make the $W_{10}-P$ correlation follow the prediction more closely. Large emission height ranges are typical for high energy models, such as the slot gap models (e.g. @mh04) or the two pole caustic models (@dr03), hence there could be parallels with the radio emission for high $\dot{E}$ pulsars. These parallels could be even more relevant for the energetic wide beam pulsars.
The energetic wide beam pulsars
-------------------------------
As discussed by e.g. [@man96], there is a group of young pulsars which can be found among the highest $\dot{E}$ pulsars which have very wide profiles with often steep edges. The profiles are clearly mirror symmetric, suggesting that the components are the two sides of a single beam rather than two beams from opposing magnetic poles. This interpretation is also suggested by the frequency evolution of PSR B1259–63 (@mj95). Because these objects are young, the typical orientation of the magnetic axis is not expected to be very different for the highest $\dot{E}$ pulsars and those with intermediate values, which suggests that some pulsars with high $\dot{E}$ values can have very different beams compared with other pulsars.
An interesting analogue can be drawn between the radio profiles of energetic wide beam pulsars and the high energy profiles of pulsars. High energy profiles can also be wide doubles which often have sharp edges (e.g. @tho04). The pulsars which produce high energy emission are the pulsars with high $\dot{E}$ values, so there could be a direct link between the high energy pulsars and the energetic wide beam pulsars. Maybe the radio emission and the high energy emission are produced at the same location in the magnetosphere. The sharp edges of the high energy profiles are often explained by caustics which form because of the combined effect of field line curvature, aberration and retardation (e.g. @mor83). These caustics occur when the emission is produced high in the magnetosphere over a large altitude range and if the magnetic axis is not aligned to the rotation axis (e.g. @dr03). If the radio emission and $\gamma$-ray emission would come from similar locations, one would expect the radio and $\gamma$-ray profiles to look alike. Hopefully the Fermi satellite will find high energy counterparts for these pulsars, which would allow a test of this hypothesis.
Another way to produce profiles with sharp edges could be refraction of radio waves in pulsar magnetospheres in combination with an emission height which is different for different field lines (@wsv+03). Only the ordinary wave mode is refracted (e.g. @ba86) or scattered (@pet08a) in the magnetosphere. The profiles of the energetic wide beam pulsars can therefore be expected to be dominated by one polarization mode, which could potentially also explain their high degree of linear polarization. The unpolarized bump which is observed in the middle of some of these profiles could be the un-refracted part of the beam, which is depolarized because of the presence of the extraordinary mode. These central components are strongest at lower frequencies, consistent with the steeper spectral index which is often observed for the central components of pulse profiles (e.g. @ran83). However, it remains to be seen if propagation effects can be strong enough to explain the extreme pulse widths which are observed.
The emission geometry appears to be different for high $\dot{E}$ pulsars and is possibly more similar to that of the high energy emission. However, not all pulsars with high $\dot{E}$ values produce these extremely wide profiles. Apparently only a subset of the high $\dot{E}$ pulsars have emission geometries which are very different from normal radio pulsars. A high $\dot{E}$ is therefore an important parameter required for the energetic wide beam pulsars, but not the only one. For instance, maybe only certain configurations of the plasma distributions enlarge the beam via propagation effects or maybe not all pulsars have a slot gap which produces radio emission. It must also be noted that, like the high $\dot{E}$ radio pulsars, not all the high energy pulse profiles of pulsars are doubles (e.g. PSR B1706–44).
Emission heights
----------------
There is no evidence that pulsars with large emission heights (derived from their PA-swing) have wider profiles. It is therefore not clear what the physical meaning of these emission heights is. There are many reasons why the derived emission heights could be wrong, including asymmetric beams, partially active polar caps or sweepback of the magnetic field lines. Also, for emission produced far out in the magnetosphere, the PA-swing can be expected to deviate from the rotating vector model. For instance, the PA-swing for the outer gap model is predicted to have the steepest slope near the edges of the profile (@ry95), rather than at the pulse longitude corresponding to the location of the magnetic axis. This would complicate the calculation of emission heights from the observed PA-swing considerably. Nevertheless, the fact that most PA-swings trail the centroid of the profile suggests that the derived emission heights do carry some information.
The emission of the energetic wide beam pulsars should come from near the light cylinder in order to explain the width of the pulse profiles. However, the derived emission heights from the PA-swing fits seem to suggest that the emission heights are not unusually large. This can be seen as support that the beams of energetic wide beam pulsars are wide because of propagation effects rather than because of a large emission height. An alternative interpretation is that the emission height at the edge of the beam is much larger than in the centre of the beam. This would fit in nicely with the result of [@gg03], who concluded that the outer components of PSR B0329+54 are emitted from higher in the magnetosphere. It also fits in nicely with the hypothesis that the emission of the energetic wide beam pulsars comes from an extended emission height range, making the emission geometry very similar to the slot gap model.
Interpulse problem?
-------------------
The conclusion that the beams of energetic wide beam pulsars are large appears to be unavoidable. If this is the case, then one would expect that it is very likely for the line of sight to intersect the beams of both poles of the pulsar. Using the model described by [@wj08a] the probability for the line of sight to intersect both beams is predicted to be 64%, assuming $\rho={\ensuremath{75^\circ}}$ and a random orientation of the magnetic axis and the line of sight. However, there is no clear example of an energetic wide beam pulsar which has a (double peaked) interpulse. The “interpulse problem” is then why we do not observe the interpulses of the energetic wide beam pulsars.
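The quoted 64% follows from the specific model of [@wj08a]; the rough Monte Carlo below (our own simplified geometry with isotropically distributed magnetic and viewing axes, a uniform beam of half-opening angle $\rho={\ensuremath{75^\circ}}$, and no luminosity weighting) merely illustrates the type of estimate involved and should not be expected to reproduce that number exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
rho = np.deg2rad(75.0)   # assumed beam half-opening angle
N = 1_000_000

# Isotropic magnetic inclination alpha and viewing angle zeta, both in [0, pi]
alpha = np.arccos(rng.uniform(-1.0, 1.0, N))
zeta = np.arccos(rng.uniform(-1.0, 1.0, N))

sees_pole_1 = np.abs(zeta - alpha) < rho            # beam centred on the magnetic axis
sees_pole_2 = np.abs(zeta - (np.pi - alpha)) < rho  # beam centred on the opposite pole

# Fraction of pulsars detected through one beam for which the other pole is also seen
frac_both = np.mean(sees_pole_1 & sees_pole_2) / np.mean(sees_pole_1)
print(f"fraction showing both poles ~ {frac_both:.2f}")
```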
It is argued by [@man96] that the steep edges form the outer edge of an extremely wide beam, which would make the peak-to-peak separation of the profiles wider than [$180^\circ$]{}. In that case the weak bumps observed for some of these pulsars are then separated by half a rotational phase from the centre of the profile, which would make them interpulses. However, because the bumps fill in the region in between the sharp edges, it seems more likely that the sharp edges form the inner edges of a wide beam. In that case the profiles are less wide and the bump forms the centre of the same wide beam.
The interpulse problem suggests that the beam sizes are different for the magnetic poles of the energetic wide beam pulsars. As discussed above, not all profiles of high $\dot{E}$ pulsars have wide components. This implies that other criteria have to be met in order to make the beams wide. These criteria are not necessarily met simultaneously for both poles, which would reduce the fraction of pulsars with interpulses. The very different shapes of the main- and interpulse of PSR B1055–52 show that interpulse beams can have very different shapes, hence possibly also very different sizes. Only 5 of the 26 pulsars with an $\dot{E}>5\times10^{35}$ erg s$^{-1}$ in Fig. \[Fig\_pulsewidths\_edot\] are classified to be energetic wide beam pulsars. Therefore the chance that both poles produce a wide beam is expected to be only $\sim4\%$ ($(5/26)^2\simeq0.04$) if the chance of producing a wide beam is independent for each pole.
A more extreme point of view to solve the interpulse problem is put forward by [@man96] who argues, following [@ml77], that all pulsars only have one active wide beam. Although this would trivially solve the interpulse problem, it does not explain the concentration of main–interpulse separations near [$180^\circ$]{} (see Fig. \[Fig\_pulsewidths\_edot\]).
Polarization
------------
The degree of linear polarization is found in several studies to increase with $\dot{E}$. Such behaviour is predicted for the natural wave modes in the cold plasma approximation (@hlk98). One of the most surprising results of this paper is the sudden increase in the degree of linear polarization with $\dot{E}$. This suggests that pulsars can be separated into two groups which have distinct physical properties. This could either be in the structure of the magnetosphere or the physics of the emission mechanism itself. It is remarkable that over 7 orders of magnitude in $\dot{E}$ the degree of linear polarization is the only thing that is clearly changing.
It has been shown that the degree of polarization is clearly related to the presence of OPM transitions in the PA-swing. The two plasma modes (X-mode and O-mode) can be expected to be separated more in pulse longitude for high $\dot{E}$ pulsars, because the difference in their refractive indices is larger (e.g. @hlk98). This could prevent the modes from mixing, and therefore prevent depolarization. However, the fact that high $\dot{E}$ pulsars are less likely to show jumps in their PA-swing suggests that they only effectively generate one of the modes. [@jhv+05] found that the velocity vectors of most pulsars make an angle close to either [$0^\circ$]{} or [$90^\circ$]{} with the PA of the linear polarization (measured at the inflection point). This is interpreted as evidence for alignment of the rotation axis of the star with its proper motion vector and the bimodal nature of the distribution of angles is interpreted to be due to the domination of different plasma modes for different stars. This result therefore suggests that if the emission of high $\dot{E}$ pulsars is dominated by one mode, it could be either of the two for different pulsars. If the profiles of the energetic wide beam pulsars are widened by refraction, then their emission should be dominated by the O-mode which can be refracted in the pulsar magnetosphere.
A very different interpretation of the sudden increase of the degree of linear polarization with $\dot{E}$ is based on the fact that the $\dot{E}$ at the transition is very similar to the death line for curvature radiation (@hm02). This death line could potentially cause a sudden change in, for instance, the plasma distribution in the magnetosphere (which is responsible for the refraction of the plasma waves), or it could possibly change the emission mechanism which is responsible for the radio emission. The possible link between the degree of linear polarization of the radio emission and the mechanism for the production of the high energy emission could therefore suggest that the $\gamma$-ray efficiency is correlated with the degree of linear polarization in the radio band. It would therefore be extremely interesting to find out if a pulsar like PSR J0108–1431, which is highly polarized with a low $\dot{E}$, can be detected by the Fermi satellite.
Circular polarization
---------------------
The trend noted by [@jw06] that the degree of circular polarization is usually higher in the second component of doubles is clearly present in our data as well. It is a possibility that this is a result of the co-rotation velocity of the emission region. As shown by, for instance, [@dyk08], particles travelling along the magnetic field lines of a rotating dipole will follow more strongly curved paths (in the inertial observer frame) at the leading half of the pulse profiles compared to the trailing half of the pulse profile. This is the reason why the observed PA-swing appears to be shifted with respect to the pulse profile (equation \[heighBCW92\]). The degree of circular polarization is in general highest in the central parts of the pulse profile, where the curvature of the field lines is weakest. This could therefore suggest that the location of the highest degree of circular polarization is, like the PA-swing, shifted to later times by co-rotation.
Conclusions
===========
In this paper we present and discuss the polarization profiles of a large sample of young, highly energetic pulsars which are regularly observed with the Parkes telescope. This sample is compared with a sample of a similar number of low $\dot{E}$ objects in order to draw general conclusions about their differences.
There is some evidence that the total intensity profiles of high $\dot{E}$ pulsars are slightly simpler based on a classification by eye. However, there is no difference in the complexity of the mathematical decomposition of the profiles, the amount of overlap between the components of doubles or the degree of profile symmetry. We therefore conclude that differences in the total intensity pulse morphology between high and low $\dot{E}$ pulsars are in general rather subtle. High $\dot{E}$ pulsars appear to show a stronger $W_{10}-P$ correlation which is closer to the theoretical expectation, suggesting that for high $\dot{E}$ pulsars there are less complicating factors in the emission geometry.
A much more pronounced difference between high and low $\dot{E}$ pulsars is the degree of polarization. The degree of polarization was already known to increase with $\dot{E}$, but our data shows there is a rapid transition between relatively unpolarized low $\dot{E}$ pulsars and highly polarized high $\dot{E}$ pulsars. The increase in the degree of polarization is related to the absence of OPM jumps. Refraction of the radio emission is expected to be more effective in the magnetosphere of high $\dot{E}$ pulsars, which could prevent depolarization because of mixing of the plasma modes. The absence of OPM jumps suggests that one of the two modes (not necessarily the same for different pulsars) dominates over the other. The $\dot{E}$ of the transition is very similar to the death line for curvature radiation, which could be the reason why the transition is relatively sharp. This potential link between the high energy radiation and the radio emission could mean that the $\gamma$-ray efficiency is correlated with the degree of linear polarization in the radio band.
The degree of circular polarization is in general higher in the second component of doubles. This remarkable correlation is possibly caused by the effect of co-rotation on the curvature of the field lines in the inertial observer frame, making this effect very similar to the shift of the PA-swing predicted for a finite emission height. In addition, the trailing component usually dominates in total power.
The $W_{10}-\dot{E}$ distribution clearly shows sub-groups which are not visible in the pulse $W_{10}-P$ distribution, suggesting that $\dot{E}$ is an important physical parameter for pulsar magnetospheres. Besides a group of pulsars which probably have beams aligned with the rotation axis and a group of pulsars with interpulses which are probably orthogonal rotators there is a group of energetic wide beam pulsars. These young pulsars have very wide profiles with often steep edges which are likely to be emitted from a single pole.
The profile properties of the energetic wide beam pulsars are similar to those of the high energy profiles, suggesting another possible link with the high energy emission. We therefore propose that the emission of these pulsars could come, like the high energies, from extended parts of the magnetosphere. The extended height range from which the emission originates will smear out the complex features of the profiles. A large height range could also prevent the emission height from varying much from pulsar to pulsar, which would result in a stronger $W_{10}-P$ correlation for high $\dot{E}$ pulsars, as is indeed observed. If the radio emission and $\gamma$-ray emission of these pulsars indeed come from similar locations in the magnetosphere, one would expect the radio and $\gamma$-ray profiles to look alike, something that potentially can be tested by the Fermi satellite.
An alternative mechanism to produce the profiles of the energetic wide beam pulsars could be the combination of refraction (or scattering) of radio waves in pulsar magnetospheres with an emission height which is different for different field lines (@wsv+03). Refraction (and scattering) is most severe for the ordinary wave mode, suggesting that these profiles are dominated by one polarization mode. This would be consistent with the high degree of linear polarization observed for these pulsars. The unpolarized bump in the middle of the profiles of the energetic wide beam pulsars could be the un-refracted part of the beam, which is depolarized because of the mixing of the plasma modes.
Measurements of the emission height could potentially discriminate between the refraction model and the extended emission height model for the energetic wide beam pulsars. There is no evidence that pulsars with large emission heights (derived from their PA-swing) have wider profiles. It is therefore not clear what the physical meaning of these emission heights is. This could support the idea that the beams of energetic wide beam pulsars are wide because of refraction rather than because of a large emission height. However, it could also mean that the emission height of the outer parts of the beam is much larger than for the central parts, making the emission geometry similar to that of a slot gap.
Acknowledgments {#acknowledgments .unnumbered}
===============
The authors would like to thank the referee, Axel Jessner, for his useful comments on this paper. The Australia Telescope is funded by the Commonwealth of Australia for operation as a National Facility managed by the CSIRO.
\[lastpage\]
Polarization figures at both 10 and 20 cm
=========================================
[**The astro-ph version is missing 528 figures due to file size restrictions. Please download the paper including the appendices from http://www.atnf.csiro.au/people/pulsar/wj08b.pdf.**]{}
Polarization figures only at 20 cm
==================================
[**The astro-ph version is missing 528 figures due to file size restrictions. Please download the paper including the appendices from http://www.atnf.csiro.au/people/pulsar/wj08b.pdf.**]{}
Table with the profile properties
=================================
[^1]: http://www.atnf.csiro.au/research/pulsar/psrcat
[^2]: http://www.atnf.csiro.au/people/joh414/ppdata/index.html
[^3]: This table is also available in electronic form at the CDS via anonymous ftp to [cdsarc.u-strasbg.fr (130.79.128.5)]{} or via [http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/MNRAS/]{}
[^4]: The correlation is slightly different, although within the error identical to the value quoted in [@wj08a]. This is because since the publication of that paper more timing observations have been added.
|
---
abstract: 'We investigate the Zel’dovich effect in the context of ultra-cold, harmonically trapped quantum gases. We suggest that currently available experimental techniques in cold-atoms research offer an exciting opportunity for a direct observation of the Zel’dovich effect without the difficulties imposed by conventional condensed matter and nuclear physics studies. We also demonstrate an interesting scaling symmetry in the level rearrangements which has heretofore gone unnoticed.'
author:
- 'Aaron Farrell, Zachary MacDonald and Brandon P. van Zyl'
title: 'The Zel’dovich effect in harmonically trapped, ultra-cold quantum gases'
---
Introduction
============
The Zel’dovich effect (ZE) [@Zel] occurs in [*any*]{} quantum two-body system for which the constituent particles are under the influence of a long range attractive potential, supplemented by a short-range attractive two-body interaction, which dominates at short distances. The system first found to exhibit the ZE consists of an electron experiencing an attractive long-range Coulomb potential, which, at short distances, is modified by a short-range interaction. [@Zel]
The characterizing feature of the ZE in this scenario is that as the strength of the attractive two-body interaction reaches a critical value ([*i.e.,*]{} when a two-body bound state is supported in the short-range potential alone), the $S$-wave spectrum of the distorted Coulomb problem evolves such that the ground state $1S$ level plunges down to large negative energies while simultaneously, the first radially excited $2S$ state rapidly falls to fill in the “hole” left by the ground state level. This also occurs for higher levels in which, generally, the $(n+1)S$ level replaces the $nS$ level. This so-called level rearrangement [*is*]{} the signature of the ZE, and continues as the strength of the two-body interaction is further increased to support additional low-energy scattering resonances (see e.g., Fig. 2 of Ref. \[\]).
Recently, Combescure [*et. al*]{} have revisited the ZE in the context of “exotic atoms”, [@comb1; @comb2] where a negatively charged hadron replaces the electron, and the short-range interaction is provided by the strong nuclear force. However, tuning the short-range interaction in exotic atoms implies that one must be able to adjust the nuclear force in the laboratory, which is a formidable task. Indeed, while it is theoretically easy to adjust the strength of the short-range interaction between the particles in any of the systems above, the experimental reality is very different. As a result, direct experimental observation of the ZE has been lacking, in spite of suggestions for its observation in quantum dots, [@comb1] Rydberg atoms, [@Kolo] and atoms in strong magnetic fields. [@karnakov]
In this article, we explore the possibility for a direct observation of the ZE in harmonically trapped, charge neutral, ultra-cold atomic gases. The charge neutrality of the atoms ensures that the long-range attractive potential is provided solely by the isotropic harmonic oscillator trap, while the short-range two-body interaction is naturally present owing to the two-body $s$-wave scattering, which is known to dominate at ultra-cold temperatures. Moreover, the short-range interaction between the atoms is completely tuneable in the laboratory [*via*]{} the Feshbach resonance. [@cohen] The multi-channel Feshbach resonance can be treated in a simpler single-channel model by a finite-range, attractive two-body interaction, supporting scattering resonances. Thus ultra-cold atoms, at least in principle, provide all of the necessary ingredients for the experimental observation of the Zel’dovich effect.
The plan for the remainder of this paper is as follows. In Section II, we establish a deep connection between the level rearrangements and the two-body energy spectrum as characterized by the $s$-wave scattering length, $a$. This connection allows us to make contact with recent experimental results on ultra-cold two-body systems, [@stoferle] from which we suggest that a direct observation of the ZE is possible. Then, in Section III, we investigate the influence of the range of the two-body interaction on the level rearrangements examined in Section II. In particular, we reveal an interesting scaling symmetry of the two-body energy spectrum which has not been noticed before. Finally, in Section IV, we present our concluding remarks.
Zel’dovich Effect in Ultra-Cold Atoms
=====================================
Universal two-body energy spectrum
----------------------------------
The two-body spectrum for a pair of harmonically trapped ultra-cold atoms is obtained from the following Hamiltonian, $$H = \frac{{\bf p}_1^2}{2M} +\frac{{\bf p}_2^2}{2M}+ \frac{1}{2}M\omega^2\rv_1^2+ \frac{1}{2}M\omega^2\rv_2^2+V_{SR}(|\rv_1-\rv_2|)~,$$ where each atom has a mass of $M$ and $V_{SR}(|\rv_1-\rv_2|)$ is a short-range potential. Introducing the usual relative, $\rv = \rv_1-\rv_2$, and centre of mass, ${\bf R} = (\rv_1+\rv_2)/2$ coordinates, and noting that the centre of mass motion may be separated out, the associated Schrödinger equation in the $s$-wave channel reads [@farrell2] $$-\frac{\hbar^2}{M} u''(r)+ \frac{1}{4}M\omega^2r^2u(r)+V_{SR}(r)u(r)+\frac{\hbar^2}{M}\frac{(d-1)(d-3)}{4r^2}u(r)=Eu(r)~,$$ where $u(r)=r^{(d-1)/2}\psi(r)$ is the reduced radial two-body wave function, primes denote derivatives, $d$ is the dimension of the space, and $E$ is the relative energy of the two-body system. Defining the dimensionless variables $\eta =2E/\hbar\omega$, $\el = \sqrt{\hbar/M\omega}$ and $x= r/\sqrt{2}\el$, Eq. (2) may be written as $$\label{TISE}
-u''(x)+x^2u(x)+ \tilde{V}_{SR}(x)u(x)+\frac{(d-1)(d-3)}{4x^2}u(x)-\eta u(x)=0,$$ where $\tilde{V}_{SR}(x) = 2V_{SR}/\hbar\omega$.
Exact analytical solutions to (\[TISE\]) exist $\forall d$ if the potential is taken to be an appropriately regularized zero-range contact interaction. For $d=3$ the spectrum is described by[@busch; @shea2008] $$\label{otd}
\frac{a}{\el} = \frac{\Gamma(1/4-E/(2\hbar\omega))}{\sqrt{2}\Gamma(3/4-E/(2\hbar\omega))},$$ for $d=1$ we have, [@farrell2] $$\label{1d}
\frac{\el}{a} = \frac{\sqrt{2}\Gamma(3/4-E/(2\hbar\omega))}{\Gamma(1/4-E/(2\hbar\omega))},$$ and for $d=2$, [@farrell2] $$\label{td}
\tilde{\psi}(1/2-E/(2\hbar\omega)) = \ln{\frac{\el^2}{2a^2}}+2\ln{2}-2\gamma~.$$ In the above, $a$ is the $s$-wave scattering length in $V_{SR}$ alone, $\Gamma(\cdot)$ is the gamma function, $\tilde{\psi}(\cdot)$ is the digamma function and $\gamma=0.577215665...$ is the Euler constant. [@handbook] Note that in any dimension, the two-body spectrum is universal in the sense that the relative energy, $E$, is determined entirely by the scattering length. Thus, even for a two-body potential with [*finite*]{} range, $b$, it has been shown that provided $b\ll\el$ (practically speaking, $b/\el \lesssim 0.01$), the [*same*]{} two-body energy spectrum as described above will be obtained for an arbitrary two-body interaction evaluated at the same scattering length. [@farrell2]
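As an illustration of how the relative energy follows from the scattering length, the sketch below (our own Python, in units $\hbar=\omega=1$) solves Eq. (\[otd\]) for $E$ on a chosen branch by root finding; the bracketing intervals must avoid the poles of the gamma functions at $E=1/2, 5/2, \ldots$, where $a=\pm\infty$.

```python
import numpy as np
from scipy.special import gamma
from scipy.optimize import brentq

def a_over_ell(E):
    """Right-hand side of Eq. (4) with hbar = omega = 1: a/ell as a function of E."""
    return gamma(0.25 - 0.5 * E) / (np.sqrt(2.0) * gamma(0.75 - 0.5 * E))

def energy_on_branch(a, E_lo, E_hi):
    """Solve a_over_ell(E) = a for E on a branch bracketed by (E_lo, E_hi);
    the bracket must not contain E = 1/2, 5/2, ... where the expression diverges."""
    return brentq(lambda E: a_over_ell(E) - a, E_lo, E_hi)

# Ground ("molecular") branch and first trap branch for a/ell = 1 (illustrative value)
print(energy_on_branch(1.0, -10.0, 0.499))   # E below 1/2, diving to -infinity as a -> 0+
print(energy_on_branch(1.0, 0.501, 2.499))   # E between 1/2 and 5/2
```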
![Left panel: $s$-wave two-body energy spectrum versus strength for all 3 model potentials with fixed $b/\el=0.01$. Open circles, triangles and squares represent the numerical integration of Eq. (3) for the FSW, Pöschl-Teller and exponential potentials, respectively. The vertical dashed line represents the critical strength, $g_c$, of all three potentials while the horizontal dashed lines are energy values at $a=\pm\infty$. The strength axis is scaled so that the critical strength values lie along the same vertical dashed line. Right panel: Energy versus scattering length for all 3 model potentials. The same symbols as the plots in the left panel are used. The vertical dot-dash line indicates $a=0$. In all 4 plots, the solid black line is the exact expression obtained from Eq. (\[otd\]). Units are scaled as discussed in the text.](Fig1.pdf)
Level rearrangements
--------------------
In this section, the level rearrangements ([*i.e.,*]{} the ZE) exhibited by the two-body energy spectrum are investigated. The results presented here are strictly for three dimensions (3D), although analogous findings are also observed in other dimensions. We will focus on three different interaction potentials, [*viz.,*]{} a finite square well (FSW), the modified Pöschl-Teller potential[@poschl] and an exponential potential[@exp] $$\label{srv}
V_{SR}(r) =
\begin{cases}
-V_0\Theta(b-r)\\
-V_0\text{sech}^2{(r/b)}\\
-V_0\exp{(-r/b)}~,
\end{cases}$$ respectively. In the above, $\Theta(\cdot)$ is the Heaviside step function, and $V_0$ is the depth of the potential. The 3D $s$-wave scattering lengths for the potentials are given by (in the same order as the potentials listed above) [@shea2008; @CJP; @poschl; @Flugge; @exp; @ahmed] $$\label{scatlength}
a=
\begin{cases}
b\left(1-\frac{\tan{\sqrt{g}}}{\sqrt{g}}\right)\\
b\left(\gamma + \tilde{\psi}(\lambda)+\frac{\pi}{2}\cot{\pi\lambda/2}\right)\\
b\left(2\gamma + \ln{g}-\frac{\pi Y_0(2\sqrt{g})}{J_0(2\sqrt{g})}\right)~,
\end{cases}$$ where $g\equiv MV_0b^2/\hbar^2$ is the dimensionless strength of the potential, $\lambda \equiv (1-\sqrt{1+4g})/2$, and $J_0(\cdot)$ and $Y_0(\cdot)$ are the zeroth order Bessel functions of the first and second kind, respectively. [@handbook]
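A direct transcription of Eq. (\[scatlength\]) into Python (our own helper functions, using scipy's special functions) reads as follows; note that each scattering length diverges at the critical strengths where a new bound state appears in $V_{SR}$ alone.

```python
import numpy as np
from scipy.special import digamma, j0, y0

EULER_GAMMA = 0.5772156649015329

def a_fsw(g, b):
    """Finite square well: a = b (1 - tan(sqrt(g)) / sqrt(g))."""
    return b * (1.0 - np.tan(np.sqrt(g)) / np.sqrt(g))

def a_poschl_teller(g, b):
    """Modified Poschl-Teller well, with lambda = (1 - sqrt(1 + 4g)) / 2."""
    lam = 0.5 * (1.0 - np.sqrt(1.0 + 4.0 * g))
    return b * (EULER_GAMMA + digamma(lam) + 0.5 * np.pi / np.tan(0.5 * np.pi * lam))

def a_exponential(g, b):
    """Exponential well, using the Bessel functions J0 and Y0."""
    x = 2.0 * np.sqrt(g)
    return b * (2.0 * EULER_GAMMA + np.log(g) - np.pi * y0(x) / j0(x))

# Near the first FSW resonance, g_c = (pi/2)^2, the scattering length is large and negative
print(a_fsw(0.9 * (np.pi / 2.0) ** 2, b=0.01))
```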
We proceed by numerically integrating Eq. (\[TISE\]) for each of the three potentials. For our numerics, we have set $\hbar=\omega=1$ and $M=2$ to be consistent with the numerical results in Refs. \[,\]. We plot our numerical results (open symbols) for the relative energy, $E$ (in units of $\hbar\omega$), as characterized by both the strength, $g$, and the $s$-wave scattering length, $a$, in Figure 1. The left panel illustrates the level rearrangements as the strength, $g$, is increased beyond the first scattering resonance, whereas the right panel illustrates the relative energy, $E$, as determined by the $s$-wave scattering length. The level rearrangements shown in the left panels illustrate the $2S$ level replacing the $1S$ level at the first scattering resonance, while the $1S$ level dives down to large negative values.
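A minimal version of such a calculation, written by us purely for illustration (it is not the authors' code), is sketched below for the FSW in $d=3$, where the centrifugal term in Eq. (\[TISE\]) vanishes. It uses the same conventions $\hbar=\omega=1$ and $M=2$, so that $\el=1/\sqrt{2}$ and $x=r$, and determines $\eta=2E$ by a simple shooting procedure.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

ell = 1.0 / np.sqrt(2.0)            # oscillator length for hbar = omega = 1, M = 2
b = 0.01 * ell                      # range of the FSW
g = 0.9 * (np.pi / 2.0) ** 2        # strength, just below the first critical value

def deriv(x, y, eta):
    """y = (u, u'); s-wave (d = 3) form of Eq. (3): -u'' + (x^2 + Vtilde) u = eta u."""
    u, up = y
    vtilde = -g / b**2 if x < b else 0.0   # Vtilde = 2 V_SR, with g = 2 V0 b^2
    return [up, (x**2 + vtilde - eta) * u]

def u_at_xmax(eta, xmax=6.0):
    """Integrate the regular solution (u ~ x near the origin) out to xmax."""
    x0 = 1e-6
    sol = solve_ivp(deriv, (x0, b), [x0, 1.0], args=(eta,), rtol=1e-10, atol=1e-12)
    sol = solve_ivp(deriv, (b, xmax), sol.y[:, -1], args=(eta,), rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

# The lowest eigenvalue is the eta at which u(xmax) changes sign; for this g it
# lies slightly below the unperturbed value eta = 3 (i.e. E = 3/2).
eta_0 = brentq(u_at_xmax, 0.5, 5.0)
print("E =", eta_0 / 2.0)
```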
A further examination of Fig. 1 reveals that while $b/\el = 0.01$ for all three potentials, the level rearrangements displayed in the left panels exhibit noticeable differences. In particular, we see that the FSW has a much sharper drop at $g=g_c$ than the Pöschl-Teller or exponential potentials. These level repulsions, or “anticrossings”, are known to be a result of the levels belonging to the same $SO(2)$ symmetry of the Hamiltonian, while the mixing of the levels is dependent on how rapidly the short-range potential “shuts off”. [@comb1]
The underlying message here is as follows. While all three plots on the left of Fig. 1 display the Zel’dovich effect, namely they all undergo level rearrangement at some value of the strength parameter $g$, all three [*different*]{} potentials map on to the same $E$ vs. $a$ curve, as illustrated in the right panel of Figure 1. This reaffirms that while the details of the ZE are sensitive to the form of the two-body interaction, the energy dependence on the scattering length, $a$, is indeed universal. It is also worthwhile pointing out that the solid curves in the left panels of Fig. 1 are obtained from substituting the expressions for the scattering length, Eq. (8), into the Eq. (4), which is exact [*only*]{} for a zero-range interaction. However, it is clear that the numerically obtained open symbols closely follow the solid curve derived from Equation (4). Thus, for $b/\el \ll 1$, the level rearrangements in harmonically trapped two-body systems interacting [*via*]{} a finite, short-range potential, are all equivalent to a zero-range interaction. Viewed another way, given a set of data for $E$ vs. $a$, there must exist [*some*]{} quantum two-body system ([*i.e.,*]{} the two-body potential need not be known explicitly) whose $E$ vs. $g$ dependence exhibits the Zel’dovich effect. This observation has some interesting implications, which we further explore in the next subsection.
Flow of the Spectrum
--------------------
In order to make the connection between the $E$ vs. $g$ and $E$ vs. $a$ curves more apparent, we now study the “flow” of the two-body energy spectrum. Although we focus our attention to the FSW, the same analysis holds for any other potential.
$\begin{array}{cc} \includegraphics[scale=.35]{fig2_1.pdf} & \includegraphics[scale=.35]{fig2_2.pdf} \end{array}$
In the left panel of Fig. 2, we note that as $g$ is increased from zero, the energy only slightly varies from the unperturbed energy, until the critical strength, $g_c$, is reached at which point the Zel’dovich effect occurs. In the right panel of Fig. 2, the same flow is illustrated, but this time in terms of the scattering length. The lower flow in the left panel (red online) illustrates that the trajectory of the ground state $A \to B \to C \to D$ is continuous through the resonance at $g=g_c$. However, as we follow the same path in $E$ vs. $a$, the point $B$ flows out to $a \to -\infty$ while $C$ and $D$ flow in from $a \to +\infty$ and then to $a \to 0$. Thus, while the flow for the energy spectrum in $g$-space is continuous, the flow in $a$-space appears to be disconnected. Similarly for the first excited state (green online) where the $B'$ and $C'$ flow is continuous in $g$-space, but rapidly branches off to $a \to -\infty$ and $a \to 0$, in $a$-space, respectively. The continuous flow in $g$-space suggests that the $a$-space spectrum is more appropriately viewed on the topology of a cylinder, where $a=\pm \infty$ may be identified.
### Cylindrical Mapping
The observations made above suggest that we map the $E$ vs. $a$ spectrum onto the surface of a cylinder. The details of this mapping are closely related to the mapping of the real line (in our case, the scattering length) onto the unit circle, $S^1$, followed by constructing the Cartesian product, $\mathbb{R}\times S^1$, with $\mathbb{R}$ identified with the energy, $E$. The essential point of this mapping is to provide a more natural interpretation for the two-body $E$ vs. $a$ spectrum.
To this end, Fig. 3 illustrates a series of “snapshots” which show how the original $E$ vs. $a$ spectrum is mapped onto the surface of a cylinder.
![The $E$ vs. $a$ spectrum being rolled onto a cylinder. Top from left to right: $a=-10$ to $a=10$, $a=-30$ to $a=30$, $a=-35$ to $a=35$. Bottom from left to right: $a=-40$ to $a=40$, $a=-90$ to $a=90$, $a=-\infty$ to $a=\infty$. The thick vertical line (red online) represents $a=0$. The symmetry axis of the cylinder is the $E$ axis while the azimuthal angle is connected to the scattering length, $a$.](Fig3.pdf)
Each of the 6 panels in Fig. 3 should be viewed as an intermediate step in taking the $E$ vs. $a$ spectrum and rolling it onto a cylinder. In Fig. 4, we present the complete mapping of the $E$ vs. $a$ spectrum up to the $4$-th excited state of the bare harmonic trap. This “Zel’dovich spiral” (ZS) may now be explicitly connected to the level rearrangements discussed in the left panel of Fig. 1 above.
Indeed, we observe that the flow of the spectrum shown in the left panels of Fig. 2 correspond to clockwise (CW) rotations about the Zel’dovich spiral. That is, increasing the strength of the two-body interaction corresponds to moving along the ZS in a CW direction, with the starting point ([*i.e.,*]{} the front of the cylinder) along the thick vertical line (red online) in Figure 4.
![The complete mapping of the $E$ vs. $a$ spectrum onto the surface of a cylinder illustrating the “Zel’dovich spiral”. The solid vertical line (red online) identifies the unperturbed system for which $a=0$. Every $2\pi$ winding along the spiral corresponds to a complete level rearrangement [*e.g.,*]{} $2S \to 1S$ after $2\pi$ rotations. The lower solid circle indicates the unperturbed $1S$ level, whereas the upper solid circle corresponds to the $2S$ level.](Fig4.pdf)
A CW rotation of $\pi$ puts us on the back of the cylinder, or $a=-\infty$, whereas a counter-CW rotation of $\pi$ takes us to $a=+\infty$ ([*i.e.,*]{} the azimuthal angle $|\phi| = \pi$ is a branch point).
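The construction only requires that the scattering-length axis be wrapped monotonically around the cylinder, with $a=0$ at the front and $a=\pm\infty$ identified at the back; the particular compactification used in the sketch below is our own choice, made only to produce coordinates for a plot such as Fig. 4.

```python
import numpy as np

def cylinder_coords(a_over_ell, E, radius=1.0):
    """Map a spectrum point (a/ell, E) onto the surface of a cylinder.

    We take phi = 2*arctan(a/ell) as one possible compactification: a = 0 sits at
    phi = 0 (the front of the cylinder) and a -> +/- infinity both approach
    |phi| = pi (the back), where the two ends of the real line are identified.
    The energy E runs along the symmetry axis of the cylinder.
    """
    phi = 2.0 * np.arctan(np.asarray(a_over_ell, dtype=float))
    return radius * np.cos(phi), radius * np.sin(phi), np.asarray(E, dtype=float)

# The unperturbed 1S point (a = 0, E = 3/2) lands on the front line of the cylinder
print(cylinder_coords(0.0, 1.5))
```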
To see how the ZS naturally contains the level rearrangements, let us first begin at $E=3/2$, which in Fig. 4 is represented by the lower solid circle along the vertical line. As we move in a CW rotation along the spiral, the $E=3/2$ ($a=0$, $\phi=0$) goes to $E\to 1/2^{+}$ for large negative values of $a$, and finally to $E=1/2$ at $a=-\infty$ (lower dotted curve in the right panel of Figure 2). A further infinitesimal rotation takes us to $E=1/2^{-}$ at large positive values of $a$, and finally to $E \to -\infty$ at $a = 0$ after a full $2\pi$ rotation; we have just followed the flow of the $1S$ level in the left panel of Fig. 2, [*viz.,*]{} $A \to B \to C \to D\to \cdot \cdot \cdot$. Similarly, the upper solid circle in Fig. 4 corresponds to $E=7/2$, which, as we rotate CW, evolves to $E\to5/2^{+}$ for large negative values of $a$, $E=5/2$ at $|a|=\infty$ (upper dotted curve in the right panel of Figure 2), and subsequently to $E=3/2$ at $\phi=2\pi$; this description is precisely the flow of the $2S$ level in the left panel of Figure 2. If we were to continue with our CW rotation ([*i.e.,*]{} continue increasing the strength, $g$), we would then evolve from $E=3/2 \to 1/2$ at $\phi = 3 \pi$ followed by $E \to -\infty$ at $\phi = 4 \pi$.
In our opinion, viewing level rearrangements in this way is more natural than the original $E$ vs. $a$ spectrum in $\mathbb{R}^2$. We see that critical strengths, $g_c$, correspond to CW rotations of odd multiples of $\pi$ whereas a complete level rearrangement occurs for even multiples of $\pi$. In general, the $(n+1)S$ level, with $E_n = 2n+3/2$ ($n=0,1,2,...$) will eventually evolve to the $1S$ level after $2n\pi$ CW rotations along the spiral.
The Zel’dovich spiral also helps to clarify several misconceptions about the $E$ vs. $a$ spectrum in the literature. The spectrum is typically understood by taking $a=\pm \infty$ separately, and assigning different interpretations to $a\to0^+$ and $a\to0^-$. An example of this is a recent contribution by Shea [*et al*]{}, [@shea2008] where the authors describe the spectrum by first “starting from the far left" and making the interaction weaker and weaker as $a\to0^-$ and then [*independently*]{} “starting from the right" and making the interaction stronger and stronger as $a\to0^+$. On the ZS, nothing is ambiguous, since one always moves in a CW rotation along the spiral, corresponding to increasing the strength, $g$, of the interaction; a complete level rearrangement occurs after we undergo an even multiple of $\pi$ CW rotations. Furthermore, the “counter-intuitive” properties of the $E$ vs. $a$ spectrum discussed in Ref. \[\] are now seen to be nothing more than a manifestation of the onset of the Zel’dovich effect. We find it rather surprising that the ZE has been present in the two-body $E$ vs. $a$ spectrum all along, but until now, has gone unnoticed.
### Experimental Observations
In a recent work, Stöferle [*et. al*]{}, [@stoferle] have experimentally measured the binding energy as a function of the $s$-wave scattering length between two interacting particles in a harmonic trap. This experiment highlights the versatility of trapped, ultra-cold atomic systems, in which an analytically solvable model, once only the purview of theoretical physics, has now been realized in the laboratory. Remarkably, the experimental results for the $E$ vs. $a$ spectrum are in excellent agreement with theory, (see Fig. 2 in Ref. \[\]), even though the two-body interaction in the experiments is most certainly [*not*]{} a zero range interaction. Thus, the theoretical prediction that the $E$ vs. $a$ spectrum is universal has been confirmed experimentally.
What has not been appreciated until now, however, is that the experimental $E$ vs. $a$ spectrum obtained in Ref. \[\] is exactly equivalent to obtaining the ground state branch in the left panel of Fig. 2 (single arrows, red online). In other words, the work of Stöferle [*et. al*]{}, has already been a direct [*experimental observation*]{} of the ground state branch of the two-body system exhibiting the Zel’dovich effect. We therefore suggest that further experiments along the lines of Ref. \[\] be performed so that data corresponding to the double arrows and primed letters (green branch online) in the right panel of Fig. 2 may be obtained. If such an extension to the experiments in Ref. \[\] is viable, then according to our analysis, this data would be exactly equivalent to the $2S$ branch (double arrows, green online, in the left panel of Fig. 2) undergoing the Zel’dovich effect. Therefore, just a few additional data points in the $E$ vs. $a$ spectrum, would provide for a direct experimental confirmation of the ZE for two interacting particles confined in a harmonic trap.
Level rearrangements in the zero-range limit
============================================
We close this work with a discussion of an interesting scaling symmetry present in the level rearrangements. Specifically, we show that in the $b \to 0$ limit, the [*entire*]{} two-body $E$ vs. $g$ spectrum is determined by only the first level rearrangement.
Figure 5 illustrates the spectrum through the first three low-energy scattering resonances, [*viz.,*]{} $g=g_0, g_1, g_2$ for the FSW.
![Several level rearrangements for a FSW interaction with $b/\el = 0.01$. Dotted horizontal lines (green online) are unperturbed energy values while dotted vertical lines (red online) are values at which $a=0$. Each subsequent region corresponds to a CW rotation of $2\pi$ along the Zel’dovich spiral. The solid black circle is a representative data point which we wish to map back into the $g_0$ region, as schematically illustrated by the open circle in $g_0$. Single arrows on the $g$-axis indicate the various critical values, $g_{c,n}$, in the $n^{th}$ region, while double arrows indicate the $g_0^{(n)}$ for which the scattering length vanishes.](Fig5.pdf)
From this figure, we immediately notice the similarities between the level rearrangements as we move from one region to the next, along a fixed value for the energy, $E$. Each region begins at $a=0$, undergoes a level rearrangement, and then returns to $a=0$. In the language of the ZS, each region corresponds to one complete $2\pi$ CW rotation along the spiral. Figure 5 suggests that it may be possible to map every $g_n$ ($n\neq0$) region onto $g_0$ by some appropriate scaling of the $g$-axis. This mapping should ensure that $a=0$ in, say, $g_1$, matches with $a=0$ in $g_0$, and that the critical $g_1$ value in the first region overlaps with the critical $g_0$ value in the zeroth-region, and so on.
Let us first define some useful nomenclature. We define $g_0^{(n)}$ as the $n^{th}$ value of $g$ for which $a=0$ (double arrows in Fig. 5). Next, $g_{c,n}$ is defined as the $n^{th}$ critical $g$ value; that is the $g$ value at which the $n^{th}$ level rearrangement occurs (single arrows in Fig. 5). Lastly, $\tilde{g}$ is the value of $g$ outside the region $g_0$ which we intend to map back into $g_0$. For example, consider the point labeled by the solid dot in the $g_2$ region of Figure 5. Considering this point, which we wish to map back into $g_0$ (represented by the open circle in Fig. 5), we have $\tilde{g}=75$, $g_{c,2}=25\pi^2/4$ and $g_0^{(2)} \simeq 59.67951594410...$. The mapping that takes this $\tilde{g}$ back into $g_0$ is $$g = \frac{(\tilde{g}-g_0^{(2)})g_{c,0}}{g_{c,2}-g_0^{(2)}}\simeq 18.84894603...,$$ where $g_{c,0}=\pi^2/4$ is the zeroth critical $g$ value. We may generalize this example to [*any*]{} region by employing the following prescription $$g = \frac{(\tilde{g}-g_0^{(n)})g_{c,0}}{g_{c,n}-g_0^{(n)}}~,~~~~(n\neq 0).$$ With this remapped value of $g$, we also have the associated energy, $E$. If the mapping is indeed exact, the energy $E$ of the remapped point [*should*]{} be identical to the the energy for the same $g$ value in $g_0$.
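The scaling map of Eq. (10) is straightforward to implement; the sketch below (our own code) reproduces the worked example quoted above for the $g_2$ region of the FSW.

```python
import numpy as np

def remap_to_g0(g_tilde, g0_n, gc_n, gc_0=np.pi**2 / 4.0):
    """Map a strength value from the n-th region back into the g_0 region, Eq. (10)."""
    return (g_tilde - g0_n) * gc_0 / (gc_n - g0_n)

# Worked example from the text: a point in the g_2 region of the FSW
g_tilde = 75.0
gc_2 = 25.0 * np.pi**2 / 4.0
g0_2 = 59.67951594410
print(remap_to_g0(g_tilde, g0_2, gc_2))   # ~ 18.8489..., as quoted in the text
```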
In Fig. 6 we study this mapping for the FSW with $b/\el = 0.01$. At first glance, the left panel of Fig. 6 appears to show that the mapping is exact, but a closer examination of the spectrum for values of $g$ near the resonance (right panel in Fig. 6) reveals noticeable discrepancies between data in the $g_0$ and $g_n>g_0$ regions. Remarkably, even for $b/\el = 0.01$, the mapping of the data from $g_1$ and $g_2$ into $g_0$ agrees almost perfectly ([*i.e.,*]{} the dashed (red online) and the dot-dashed (green online) curves, respectively). Regardless, the mapping given by Eq. (10) is [*not exact*]{} for any finite range, $b$.
![Left: Equation (10) as applied to data from the FSW to different regions with $b/\el=0.01$. Right: The same data as in the left-panel, but with $g$ values surrounding the critical value, $g_{c,0}$. In both panels, the solid (blue online), dashed (red online) and dot-dashed (green online) curves correspond to the $g_0, g_1$ and $g_2$ regions, respectively. In these figures, the remapped data from $g_1$ and $g_2$ are indistinguishable on the scale of the plots.](Fig6.pdf)
We can, however, show that this mapping becomes exact in the $b\to0$ limit by considering Equation (\[otd\]). We write this expression in the notationally convenient form $\tilde{a}(g)=R(E)$ where $R(E)=\frac{\sqrt{2}\,\Gamma(1/4-E/(2\hbar\omega))}{\Gamma(3/4-E/(2\hbar\omega))}$ and $\tilde{a}(g)=2a(g)/\el$. Our goal is now to map $g$ values in two different regions, $g_k$ and $g_{k'}$, onto a value in the region $g_0$ and investigate the difference in their energy values. We denote $\delta E = E_{k'}-E_k$ and note that we have the two expressions for the spectrum $\tilde{a}(g_k)=R(E_k)$ and $\tilde{a}(g_{k'})=R(E_{k'})$. The difference between these two expressions is $\tilde{a}(g_{k'})-\tilde{a}(g_k) = R(E_{k'})-R(E_{k})= R(\delta E +E_k)-R(E_k)$. Taylor expanding up to first order in $\delta E$ gives $$\delta E = \frac{\delta a}{R'(E_k)},$$ where $\delta a = \tilde{a}(g_{k'})-\tilde{a}(g_k)$ and $R'(E_k) = \frac{d R(E_k)}{d E_k}$. From Eq. (\[otd\]) we may re-express Eq. (11) as $$\delta E = \frac{\delta a}{\tilde{a}'(g_k)}\frac{d E_k}{d g_k},$$ which upon noting that $E_k = R^{-1}(a(g_k))$, becomes the implicit expression $$\label{LHSRHS}
R'(a(g_k)) = R\left(\frac{\delta a}{\delta E}\right).$$ Assuming $\delta E$ to be small compared to $\delta a$, we seek an asymptotic expression for $$R\left(\frac{\delta a}{\delta E}\right) = \frac{\sqrt{2}\Gamma(1/4-\frac{\delta a}{2\delta E})}{\Gamma(3/4-\frac{\delta a}{2\delta E})}.$$ An application of Euler’s reflection formula[@handbook] and Stirling’s approximation to the above gives the approximate expression $$R\left(\frac{\delta a}{\delta E}\right) \simeq 2\sqrt{\frac{\delta E}{\delta a}} \tan{\left(\frac{\pi \delta a}{2\delta E}-\frac{3\pi}{4}\right)}.$$ Equation (15), along with Eq. (\[LHSRHS\]) gives $$\frac{\delta a }{\delta E} \simeq \frac{2}{\pi} \cot^{-1}\left( 2 \sqrt{\frac{\delta E}{\delta a}}\frac{1 }{R'(a(g_k))} \right)+\frac{3}{2}.$$ With the approximation $\cot^{-1}(x)\simeq \pi/2$, $x\ll1$ the difference in the energies becomes $$\delta E = \frac{2}{5} \delta a,$$ or, defining $a(g)=bA(g)$, $$\label{errE}
\delta E = \frac{4b}{5\el} \left(A(g_{k'})-A(g_k)\right).$$ Equation (\[errE\]) analytically shows that $\delta E\to0$ as $b\to0$, and the $g$ values in the two different regions get mapped back into the $g_0$ region at the [*exact*]{} same energy. Therefore, the mapping of all subsequent regions, $g_n$ $(n\ne 0)$ back onto $g_0$ is exact in the zero-range limit. It is important to note that our analysis has not relied upon specifying the details of the interaction, and so is equally valid for [*any*]{} short-range two-body interaction supporting bound states.
It is also instructive to consider how the shape of the level rearrangements curves evolve as $b \to 0$. The shape-dependence of the curves can be established by expanding the right hand side of Eq. (\[otd\]) about $g=g_c$ (some resonant strength value) and the left hand side about $E=E_c$ where, in 3D, $E_c= 1/2, 5/2, 9/2$,.....([*i.e.,*]{} the energies at the back of the Zel’dovich spiral). The result is $$\label{expand}
\frac{c_Lb}{g/g_c-1} = \frac{ c_R\el}{1-E/E_c},$$ where $c_L$ and $c_R$ are constants unimportant to our overall discussion. Choosing two points equally spaced away from $g_c$, call these $g_1=g_c-\Delta g$, $g_2=g_c+\Delta g$, and their corresponding energies $E_1=E_c+\Delta E$, $E_2=E_c-\Delta E$ we may use two versions of the approximation in Eq. (\[expand\]) to write $$\frac{\Delta g}{g_c} = \frac{b}{\el} \frac{c_L \Delta E}{c_R E_c}.$$ The important point to take away from this analysis is that the width of the rearrangement region, [*i.e.*]{} the range in $g$ over which the rearrangement occurs, is $\frac{\Delta g}{g_c} \sim b/\el$. An analogous result to Eq. (20) is briefly discussed in Ref. \[\] in the context of exotic atoms. There, the width is stated to be $\sim b/a_B$, where $a_B$ is the Bohr radius. We see that our Eq. (20) is consistent with the result for exotic atoms, in that the width of the rearrangement region is of the order of the range of the potential over the characteristic length of the problem.
In Fig. 7, we numerically verify our analytical expression, [*viz.,*]{} Eq. (20), by plotting the lowest two branches of the FSW for decreasing values of the range, $b$, of the potential. It is evident that as $b \to 0$, the level rearrangement curves evolve to a series of staircase functions, which is entirely expected given the collective results of Equations (18) and (20).
![(Color online) Level rearrangement spectrum for several different values of the range $b$ for the FSW. The solid line (orange online) is $b/\el=0.01$, the dashed line (black online) is $b/\el=0.001$, the dot-dot-dash line (red online) is $b/\el=0.0001$ and the dot-dash line (green online) is $b/\el=0.00001$. Inset: A magnification of the data in the main figure near $g_{c,0}$, further illustrating how the level rearrangements evolve to the staircase profile as $b \to 0$.](Fig7.pdf)
This staircase property of the spectrum has also been discussed in Refs. \[,\] in the context of the quantum defect of atomic physics, but using an entirely different approach to the one presented here. Note that in the inset to Fig. 7, all four curves intersect at a common point, namely, at $g=g_c$ which corresponds to the back ([*i.e.,*]{} $|\phi| = \pi$) of the Zel’dovich spiral.
There are two noteworthy points to be taken from this staircase-like behaviour. The first is that any other panel of the spectrum, [*e.g.*]{} $g_1$, $g_2$ [*etc.*]{} in Fig. 5 (provided $b/\el \ll 1$), can be obtained by simply applying the scaling transformation, Eq. (10), to the data in the $g_0$ region. In addition, the staircase property of the level rearrangements as $b \to 0$ is not specific to the FSW, which implies that for [*any*]{} short-range two-body potential, the $E$ vs. $g$ curves will exhibit the same scaling symmetry provided the critical values $g_{c,n}$ are properly scaled as $b \to 0$. It is also important to realize that even the staircase level rearrangements are mapped onto the universal $E$ vs. $a$ spectrum, just as with the other potentials listed in Eq. (7) with $b/\el \ll 1$ in the right panel of Figure 1.
Conclusions
===========
In this paper, we have examined the two-body problem of ultra-cold harmonically trapped interacting atoms and its relation to the Zel’dovich effect. We have shown, through our construction of the “Zel’dovich spiral”, that the universal spectrum in terms of the scattering length is exactly equivalent to the Zel’dovich effect. This non-trivial observation has been used to motivate further experimental studies in order to provide additional data for the $E$ vs. $a$ spectrum, which may then be used to establish the first direct experimental observation of the Zel’dovich effect. Finally, we have shown that in the $b \to 0$ limit, the level rearrangement spectrum exhibits an exact scaling symmetry, which has, until now, gone unnoticed. The exact mapping means that the [*entire*]{} $E$ vs. $g$ spectrum (and therefore the $E$ vs. $a$ spectrum) may be obtained solely from knowledge of the $g_0$ region as $b \to 0$.
acknowledgements
================
Z. MacDonald would like to acknowledge the Natural Sciences and Engineering Research Council of Canada (NSERC) USRA program for financial support. B. P. van Zyl and A. Farrell would also like to acknowledge the NSERC Discovery Grant program for additional financial support.
[99]{}
Y. B. Zel’dovich, Sov. J. Solid State [**1**]{}, 1497 (1960).
M. Combescure, A. Khare, A. Raina, J. M. Richard and C. Weydert, Int. J. Mod. Phys. B [**21**]{}, 3765 (2007).
M. Combescure, C. Fayard, A. Khare, and J. M. Richard, J. Phys. A: Math. Theor. [**44**]{}, 275302 (2011).
E. B. Kolomeisky and M. Timmins, Phys. Rev. A [**72**]{}, 022721 (2005).
B. M. Karnakov and V. S. Popov, JETP [**97**]{}, 890 (2003).
The reader will find a clear description of the underlying physics of the Feshbach resonance in Cohen-Tannoudji’s lecture notes “Atom-atom interactions in ultra-cold quantum gases”, in [*Lectures on Quantum Gases*]{}, Institut Henri Poincaré, Paris, April 2007.
T. Stöferle [*et al.*]{}, Phys. Rev. Lett. [**96**]{}, 030401 (2006).
A. Farrell and B. P. van Zyl, J. Phys. A: Math. Theor. [**43**]{}, 015302 (2010).
T. Busch, B.-G. Englert, K. Rzazewski and M. Wilkens, Foundations of Physics [**28**]{}, 549 (1998).
P. Shea, B. P. van Zyl and R. K. Bhaduri, Am. J. Phys. [**77**]{}, 511 (2008).
M. Abramowitz and I. A. Stegun, [*Handbook of Mathematical Functions*]{} (Dover, New York, 1970).
G. Pöschl and E. Teller, Z. Phys. [**83**]{}, 143 (1933).
J. Shapiro and M. A. Preston, Can. J. Phys. [**34**]{}, 451 (1956).
S. Flügge, [*Practical Quantum Mechanics*]{} (Springer-Verlag, Berlin-Heidelberg-New York, 1971).
Z. Ahmed, Am. J. Phys. [**78**]{}, 418 (2010).
A. Farrell and B. P. van Zyl, Can. J. Phys. [**88**]{}, 817 (2010).
V. N. Ostrovsky, Phys. Rev. A [**74**]{}, 012714 (2006).
---
abstract: 'In 2005, Boman et al. introduced the concept of factor width for a real symmetric positive semidefinite matrix. This is the smallest positive integer $k$ for which the matrix, call it $A$, can be written as $A=VV^T$ with each column of $V$ containing at most $k$ non-zeros. The cones of matrices of bounded factor width give a hierarchy of inner approximations to the PSD cone. In the polynomial optimization context, a Gram matrix of a polynomial having factor width $k$ corresponds to the polynomial being a sum of squares of polynomials of support at most $k$. Recently, Ahmadi and Majumdar [@Ahm14] explored this connection for the case $k=2$ and proposed to relax the reliance on sum of squares polynomials in semidefinite programming to sums of binomial squares (sobs; which they call sdsos), for which semidefinite programming can be reduced to second-order cone programming to gain scalability at the cost of some tolerable loss of precision. In fact, the study of sobs goes back to Reznick [@reznick1987; @reznick1989] and Hurwitz [@Hurwitz]. In this paper, we will prove some results on the geometry of the cones of matrices with bounded factor width and their duals, and use them to derive new results on the limitations of certificates of nonnegativity of polynomials by sums of $k$-nomial squares using standard multipliers.'
author:
- João Gouveia
- Alexander Kovačec
- 'Mina Saee [^1]'
title: 'On sums of squares of $k$-nomials'
---
Motivation and introduction
===========================
Ahmadi and Majumdar in their recent paper [@Ahm14] propose a new subclass of polynomials for semidefinite programming. They note that although semidefinite programming has been highly successful in providing good approximations even to NP-complete or NP-hard optimization problems, it lacks good scalability; that is, programs tend to grow rapidly in size as we attempt better approximations. They further observe that, in many practical problems, resorting to the full power of semidefinite programming is unnecessarily time- or memory-consuming, and polynomial optimization problems involving polynomials of degree four to six in more than a dozen variables are currently impractical to tackle with standard sums of squares techniques. To obviate these shortcomings, instead of working with the full class of sum-of-squares polynomials they propose to work with polynomials they call diagonally dominant (dsos) and scaled diagonally dominant (sdsos) sums of squares, obtaining problems that are linear programs (LP) and second-order cone programs (SOCP), respectively. As proven in [@boman2005factor], scaled diagonally dominant (sdd) matrices are precisely the matrices with factor width at most two. In their paper Ahmadi and Majumdar already point out that a natural generalization would be to study certificates given by matrices with factor width greater than or equal to $2$. In this paper we advance in that direction, studying the geometry of the cones of matrices of bounded factor width and using the fact that these cones provide a hierarchy of inner approximations to the PSD cone to establish new certificates for checking nonnegativity of a polynomial, while simultaneously showing their limitations.
We organize the paper as follows: In Section \[2\] we give some basic definitions and notations that will be used throughout the paper. In Section \[3\] we present the concept of factor width for positive semidefinite matrices. Then in Section \[4\] we give some geometric properties of the cone of bounded factor width matrices. In particular we characterize some of the extreme rays of their duals, which will be used later to derive the main results of the paper. Section \[5\] follows up on the study of an example given by Ahmadi and Majumdar in [@Ahm14]. They considered the polynomial $p_n^a=(\sum_{i=1}^n x_{i})^2+(a-1)\sum_{i=1}^n x_{i}^2$ and proved that for $n=3$, if $a<2$, then no nonnegative integer $r$ can be chosen so that $(x_1^2+x_2^2+x_3^2)^r p_3^a$ is a sum of squares of binomials (sobs or so2s), although it is clearly nonnegative for $a\geq 1$. In other words, $p_3^a$ is not $r$-so2s for any $r$. We complete the study of this example for the strengthened certificates proposed, obtaining further negative results in the same direction. We first characterize when $p_n^a$ is a sum of $k$-nomial squares (soks), then we show that $p_{n,r}^a,$ that is, the product of $p_n^a$ and $(\sum_{i=1}^n x_{i}^2)^r$, is a sum of $k$-nomial squares ($r$-so$k$s) if and only if this is the case for $r=0$. In the following sections, we show that the behaviour found in Ahmadi and Majumdar’s example is actually the rule in many cases. More precisely, in Section \[6\], we prove that if a quadratic form is not sobs, then it is not $r$-sobs for any $r$, and in Section \[7\] we show that if a 4-variable quadratic form is not so$3$s, then it is not $r$-so$3$s for any $r$. Finally, in Section \[8\] we give an example which shows that our results are complete, in the sense that they cannot be extended in the most natural way to five or more variables. To that end, we give a quadratic form in five variables which is not so$4$s but which becomes so$4$s after multiplication with $\sum_{i=1}^5 x_{i}^2$.
Definitions and notations {#2}
=========================
All our matrices are understood to be real. We denote by ${\mathcal{S}}^n,$ the $n\times n$ (real) symmetric matrices. A symmetric matrix $A$ is positive semidefinite (psd) if $x^T Ax\geq 0$ for all $x\in {\mathbb{R}}^n.$ This property will be denoted by the standard notation $A\succeq 0.$ By ${\mathcal{S}}^n_+ $ we denote the subset of real symmetric positive semidefinite matrices. The Frobenius inner product for matrices $A,B\in {\mathcal{S}}^n$ is given by $\langle A, B\rangle ={\rm trace}(AB^\top)=\sum_{i,j} A_{ij} B_{ij}.$ For a cone $K$ of matrices in ${\mathcal{S}}^n$, we define its dual cone $K^*$ as $\{Y\in {\mathcal{S}}^n :\langle Y, X\rangle \geq 0,\; \forall X\in K\}$.
If $X=(x_{ij})$ is an $n\times n$ matrix and $K\subseteq \{1,2,...,n\},$ then $X_K$ denotes the (principal) submatrix of $X$ composed from rows and columns of $X$ with indices in $K;$ $\text{supp}(X)=\{(i,j)\in \{1,2,...,n\}^2: x_{ij}\neq 0\}$ is the support of $X.$
If $B$ is a $k\times k$ matrix and $K$ is a $k$-element subset of $\{1,2,...,n\},$ then $\iota_K(B)$ denotes the $n\times n$ matrix $X$ which has zeros everywhere, except that $X_K=B.$
We denote by ${\mathbb{R}}[x_{1:n}]={\mathbb{R}}[x_1,...,x_n]$ the algebra of polynomials in $n$ variables $x_1,x_2,\ldots,x_n$ over ${\mathbb{R}}.$ A [*monomial*]{} in ${\mathbb{R}}[x_{1:n}]$ is an expression of the form $x^{\alpha}=x_1^{\alpha_1}x_2^{\alpha_2}\cdots x_n^{\alpha_n}$ and a polynomial $p$ in ${\mathbb{R}}[x_{1:n}]$ is a finite linear combination of monomials; so $p=\sum_{\alpha}c_{\alpha}x^{\alpha}$. A polynomial $p\in {\mathbb{R}}[x_{1:n}]$ is nonnegative if it takes only nonnegative values, i.e., $p(x)\geq 0,$ for all $x\in {\mathbb{R}}^n$ and a polynomial $p\in {\mathbb{R}}[x_{1:n}] $ is a [*sum of squares (sos)*]{} polynomial, if it has a representation $p=\sum_{i=1}^m q_i^2$ with polynomials $q_i\in {\mathbb{R}}[x_{1:n}]. $ Of course every sum-of-squares polynomial is nonnegative and every nonnegative polynomial has necessarily even degree, $2d,$ say. A useful introduction to polynomial optimization using sums of squares is found in [@SDOCAG].
A polynomial $p$ is called a [*scaled diagonally dominant sum of squares*]{} (sdsos) if it can be written as a nonnegative linear combination of squares of monomials and binomials; that is, $p$ is a sum of expressions of the form $\alpha m^2$ and $\alpha (\beta_1 m_1+\beta_2 m_2)^2$ with all the $\alpha$'s $> 0$ and the $\beta$'s real. A polynomial $p$ is called a [*diagonally dominant sum of squares*]{} (dsos) if it can be written in this form using only the combinations $\beta_1=\beta_2=1$ or $\beta_1=1=-\beta_2.$ A [*$k$-nomial*]{} is an expression of the form $\alpha_1 m_1+\cdots +\alpha_k m_k$ with $\alpha_1,...,\alpha_k$ reals and $m_1,...,m_k$ monomials. Note that every $(k-1)$-nomial is also a $k$-nomial. We call a sum of squares of $k$-nomials a [*so$k$s*]{}-expression. A polynomial $p\in {\mathbb{R}}[x_{1:n}]$ is then called $r$-[*so$k$s*]{} if $(\sum_{i=1}^nx_i^2)^r p$ is so$k$s.
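As a simple illustration of these notions (the example is ours and is not taken from [@Ahm14]), the quadratic $x_1^2-x_1x_2+x_2^2$ is dsos, since $$x_1^2-x_1x_2+x_2^2=\tfrac{1}{2}(x_1-x_2)^2+\tfrac{1}{2}x_1^2+\tfrac{1}{2}x_2^2$$ is a nonnegative combination of squares of monomials and of a binomial with $\beta_1=1=-\beta_2$; in particular it is also sdsos, so$2$s and, trivially, $r$-so$2$s for every $r\geq 0$.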
For smooth reading the reader should keep in mind the following basic facts found in texts about convex sets, for example in [@Johnson], or in [@Ramana95somegeometric Sections 1.3 and 1.4].\
$\cdot$ If $C$ is a closed convex cone then $C=C^{**}.$\
$\cdot$ $\langle A, S_1 BS_2\rangle= \langle S_1^\top A S_2^\top, B\rangle,$ whenever the matrix products are defined.\
$\cdot$ The cone of real symmetric psd matrices is selfdual, i.e. ${\mathcal{S}}_+^n=({\mathcal{S}}_+^n)^*.$\
$\cdot$ If $A\in {\mathcal{S}}^n_+$ and for some $x\in {\mathbb{R}}^n,$ $x^\top Ax=0,$ then $Ax=0.$ See [@Johnson p. 463].\
$\cdot$ If $A\in {\mathcal{S}}^n$ then $A$ is psd iff for all psd matrices $B,$ $\langle A,B\rangle \geq 0.$\
$-$ In particular if $A,B\succeq 0,$ then $\langle A,B\rangle \geq 0.$\
$\cdot$ If $A,B\succeq 0,$ then $\langle A,B\rangle=0$ iff $AB=0.$
On the factor width of a matrix {#3}
===============================
The concept of [*factor width*]{} of a real symmetric positive semidefinite matrix $A$ was introduced by Boman et al. in [@boman2005factor] as the smallest integer $k$ such that there exists a real (rectangular) matrix $V$ such that $A=VV^\top$ and each column of $V$ contains at most $k$ non-zeros. We let $$FW^n_k=\{\text{symmetric positive semidefinite}\; n\times n\; \text{matrices of factor width}\leq k.\}.$$ We have of course $$FW_1^n\subset FW_2^n\subset FW_3^n\subset \cdots \subset FW^n_n={\mathcal{S}}_+^n.$$ Next assume $A=VV^T$ is a symmetric positive semidefinite matrix where each column of $V$ has at most $k$ nonzero entries. By the rules of matrix multiplication, for any $i,j \in\{1,...,n\},$ and writing $V_{*\nu}$ and $V_{\nu *}$ for the $\nu$-th column or row of a matrix $V,$ respectively, we have $$(VV^T)_{ij}= \sum_{\nu=1}^m V_{i\nu}(V^T)_{\nu j}
= \sum_{\nu=1}^m (V_{*\nu}V^T_{{\hspace*{2.5mm}}\nu *})_{ij} = \sum_{\nu=1}^m (V_{*\nu}V_{ * \nu}^{{\hspace*{2.5mm}}T} )_{ij}.$$ Write $A=\sum_{\nu=1}^m (V_{*\nu}V_{ * \nu}^{{\hspace*{2.5mm}}T} ).$ Note that each $V_{*\nu}V_{ * \nu}^{{\hspace*{2.5mm}}T} $ is a symmetric positive semidefinite $n\times n$ rank $1$ matrix whose support lies within a cartesian product $K^2=K\times K$ for some $K\subseteq \{1,2,...,n\}$ of cardinality $k.$ Since every $n\times n$ matrix with the latter properties can be written as $vv^T $ for some $v$ with at most $k$ nonzero entries, we have the following
\[factor\] Let $A$ be an $n\times n$ symmetric positive semidefinite matrix, and assume $k\in \mathbb{Z}_{\geq 1}.$ Then $A\in FW^n_k $ if and only if $A$ is the sum of a finite family of symmetric positive semidefinite $n\times n$ matrices whose supports are all contained in sets $K\times K$ with $|K| = k.$
From this proposition it follows immediately that each set $FW_k^n$ is a convex closed subcone of ${\mathcal{S}}^n_+.$ We will now focus on the dual cone of $FW_k^n.$ From [@Permenter2017 Lemma 5 + Subsection 3.2.5] we have the following result.
\[dualfactork\] The dual of $FW_k^n$ is given by $$(FW_k^n)^* = \{X\in {\mathcal{S}}^n \; |\; X_K\in {\mathcal{S}}^k_+
\text{ for all}\; K\subseteq \{1,2,...,n\} \text{ with } |K| =k \}.$$ Furthermore the following inclusions and identity hold $$FW_k^n \subseteq {\mathcal{S}}_+^n \subseteq (FW_k^n)^* \text{ and } FW_k^n=(FW_k^n)^{**}.$$
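Proposition \[dualfactork\] reduces membership in $(FW_k^n)^*$ to a finite family of small semidefiniteness tests, which is straightforward to check numerically. The following Python sketch is an illustration only (it is ours, not code from [@Permenter2017], and the helper names are arbitrary); it tests all $k\times k$ principal submatrices of a given symmetric matrix.

```python
import itertools
import numpy as np

def is_psd(M, tol=1e-9):
    """A symmetric matrix is psd iff its smallest eigenvalue is (numerically) nonnegative."""
    return np.linalg.eigvalsh(M).min() >= -tol

def in_FWk_dual(X, k, tol=1e-9):
    """X lies in (FW_k^n)^* iff every k x k principal submatrix of X is psd."""
    n = X.shape[0]
    return all(is_psd(X[np.ix_(K, K)], tol)
               for K in itertools.combinations(range(n), k))

# A matrix lying in (FW_2^3)^* that is not itself psd:
X = np.array([[ 1.0,  0.9, -0.9],
              [ 0.9,  1.0,  0.9],
              [-0.9,  0.9,  1.0]])
print(in_FWk_dual(X, 2))   # True: every 2x2 principal minor equals 1 - 0.81 >= 0
print(is_psd(X))           # False: X itself has a negative eigenvalue
```

Matrices of this kind, lying in the dual cone but outside ${\mathcal{S}}_+^n$, play a central role in the extreme ray analysis of the next section.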
On the geometry of bounded factor width matrices {#4}
================================================
In this section, we give some geometric properties of the cone of bounded factor width matrices. In particular, we characterize some of the extreme rays of their duals.
We start with the following lemma about exposedness of the extreme rays of $(FW_{k}^n)^*$.
\[exposed\] The cone $(FW_{k}^n)^*$ is (linearly equivalent to) a spectrahedron. Therefore a matrix in $(FW_{k}^n)^*$ which spans an extreme ray is an exposed ray.
Let $E_{\{i,j\}}$ be the symmetric $n\times n$ matrix which has zeros everywhere except at the entries $(i,j)$ and $(j,i)$ where it has $1$s. Denote by $I_1,I_2,\ldots,I_{\binom{n}{k}}$ the $\binom{n}{k}$ distinct $k$ element subsets of $\{1,2,\ldots,n\}$ and define for $l=1,2,\ldots,\binom{n}{k}$ the matrix $$E_{\{i,j\}}^l=
\left\{ \begin{array}{ cl }
E_{\{i,j\}} & \mbox{ if $i,j$ are both contained in the $l$th of the sets $I_1,I_2,\ldots,I_{\binom{n}{k}}.$ } \\
0 & \mbox{ otherwise. }
\end{array} \right .$$ Consider now the condition $$\sum_{1\leq i\leq j\leq n} b_{ij} (E_{\{i,j\}}^{1}\oplus E_{\{i,j\}}^{2}\oplus \cdots\oplus E_{\{i,j\}}^{\binom{n}{k}})\succeq 0.$$ Since a direct sum of matrices is positive semidefinite if and only if each of its summands is positive semidefinite, the attentive reader finds that this condition expresses precisely that the submatrices $B_{I_r}$, $r=1,\ldots,\binom{n}{k}$ with $|I_r|=k,I_r\subseteq\{1,\ldots,n\}$ of $B=(b_{ij})\in {\mathcal{S}}^n$ should be positive semidefinite. Since this is the defining property for $B$ to be in $(FW_{k}^n)^*$, we find that $(FW_{k}^n)^*$ is a spectrahedron. The second part is a consequence of the theorem that every face of a spectrahedron is exposed. This is proved in [@Ramana95somegeometric p.11].
Our first result about the extreme rays of the cone $(FW_{k}^n)^*$ is as follows.
\[extrmerayrank1\] The matrix $A\in {\mathcal{S}}_{+}^n$ spans an extreme ray of $(FW_{k}^n)^*$ if and only if it has rank $1$.
Let $A\in {\mathcal{S}}_{+}^n$ span an extreme ray of $(FW_{k}^n)^*$ and assume $\text{rank}(A)=r\geq 2.$ Then, as $A\in {\mathcal{S}}_+^n,$ one can write $A=x_1 x_1^T + \cdots + x_r x_r^T$ with real pairwise orthogonal $x_i$. Since $x_ix_i^T\in {\mathcal{S}}^n_+,$ $i=1,...,r,$ these $x_ix_i^T$ are elements of $(FW_{k}^n)^*$ - recall $FW_{k}^n\subset{\mathcal{S}}_{+}^n\subset(FW_{k}^n)^*$ - and since they are not multiples of each other, $A$ does not span an extreme ray. So for extremality of $A$ rank equal to 1 is necessary.
Now we prove that if the matrix $A$ has rank 1, then it spans an extreme ray of $(FW_{k}^n)^*$. So let $A=xx^T$ for some $x\in{\mathbb{R}}^n.$ Assume now $A=X+Y$ with some $X,Y\in (FW_{k}^n)^*.$ Then for any $k$ element subset $I\subseteq \{1,2,\ldots,n\}$, $x_Ix_I^T =X_I+Y_I.$ By the characterization of $(FW_{k}^n)^*,$ $X_I,Y_I$ are positive semidefinite; that is, we have found in ${\mathcal{S}}_{+}^k$ a representation of a matrix of rank at most $1$ as a sum of two other psd matrices. Since the null space of a sum of two psd matrices is contained in the nullspace of each, we infer that $X_I$, $Y_I$ are multiples of $x_Ix_I^T$: for some real $\lambda_I$, $X_I=\lambda_I x_Ix_I^T,$ $Y_I=(1-\lambda_I) x_Ix_I^T.$ Now, considering any two $k\times k$ principal submatrices of $X$ indexed by $I$ and $J$, we have that if $i\in I\cap J$, then $X_{ii}=\lambda_I x_i^2=\lambda_J x_i^2$, so if $X_{ii}\neq 0$ then $\lambda_I=\lambda_J$. Note that if $X_{ii}=0$, the entire $i$-th row and column of $X$ must be zero. For any $I$ and $J$ such that $i\in I$ and $j\in J$ with $X_{ii}\neq 0$ and $X_{jj}\neq 0$, we can pick a $k$-element set $K$ such that $i,j\in K$ and the above argument gives $\lambda _I=\lambda_J=\lambda_K$. So all are equal to some $\lambda$ and $X=\lambda x x^T.$
Next, we present a simple fact which will help us in the next theorem to characterize the extreme rays of $(FW_{n-1}^n)^*$.
\[rankpsd\] Assume that $A\in (FW_{n-1}^n)^*$ and let $A_I$ be an $(n-1)\times (n-1)$ principal submatrix of $A$ for some $I\subseteq\{1,2,\ldots,n\}.$ If $\text{rank}(A_I)\leq n-3$, then $A$ is psd.
Since $A\in (FW_{n-1}^n)^*$, all its proper principal minors are nonnegative. So $A$ is psd if and only if $\det(A)\geq 0$. But by Cauchy’s interlacing theorem, see [@Johnson p. 185], if $\beta_1,\ldots, \beta _{n-1}$ are the (nonnegative) eigenvalues of $A_I$ and $\gamma_1,\ldots, \gamma_n$ are the eigenvalues of $A$, then $$\gamma_1\leq \beta_1\leq \gamma_2\leq \beta_2\leq\ldots\leq \beta_{n-1}\leq \gamma_n.$$ Now, since $\text{rank}(A_I)\leq n-3$, $\beta_1$ and $\beta_2$ should be zero which leads to $\gamma_2=0$ and so $\det(A)=0$, hence $A$ is psd.
\[extrmeraynotpsd\] If the matrix $A\in (FW_{n-1}^n)^*$ is not psd, then $A$ spans an extreme ray of $(FW_{n-1}^n)^*$ if and only if all of its $(n-1)\times(n-1)$ principal submatrices have rank $n-2$.
We first prove that if the matrix $A$ spans an extreme ray of $(FW_{n-1}^n)^*$, then all of its $(n-1)\times(n-1)$ principal submatrices have rank $n-2$. Assume that this does not happen. Then there is one $(n-1)\times(n-1)$ principal submatrix of full rank, since otherwise some principal submatrix would have rank at most $n-3$ and, by Lemma \[rankpsd\], $A$ would be psd, contradicting our hypothesis. Suppose $A_{\{1,2,\ldots,n-1\}}$ is such a principal submatrix of full rank. Since the cone $(FW_{n-1}^n)^*$ is a spectrahedron, by Lemma \[exposed\] all of its faces are exposed. Hence $A$ spans an exposed extreme ray of $(FW_{n-1}^n)^*$. So, there exists a $B\in (FW_{n-1}^n)^{**}=FW_{n-1}^n$ such that $\langle B,A\rangle=0$ and $\langle B,X\rangle>0$ for all $X\in (FW_{n-1}^n)^*\setminus\{\lambda A\;|\; \lambda \geq 0\}$.
This $B\in FW_{n-1}^n$, and so it can be written as $$B=\sum_{I\subseteq \{1,2,\ldots,n\}, |I|=n-1 }\iota_{I}(B_I),\quad \text{for}\; B_I\in {\mathcal{S}}_+^{n-1}.$$
We thus get $$0=\langle B,A\rangle=\sum_{I\subseteq \{1,2,\ldots,n\}, |I|=n-1 }\langle\iota_{I}(B_I),A\rangle=\sum_{I\subseteq \{1,2,\ldots,n\}, |I|=n-1 }\langle B_I,A_I\rangle.$$ Since the $(n-1)\times (n-1)$ principal submatrices of $A$ are all positive semidefinite, we get that all the inner products are nonnegative and hence must be $0$. Which means $\langle B_I,A_I\rangle=0$ for all $I.$
Under the current supposition that $A_{\{1,2,\ldots,n-1\}}$ is not singular, we conclude that $B_{\{1,2,\ldots,n-1\}}=0$.
Let now $a$ be the $n$-th column of $A$ and let $\tilde A= aa^T.$ Of course $\tilde A\in {\mathcal{S}}_+^n$ and so $\tilde A\in (FW_{n-1}^n)^*.$ We have $$\langle \iota_{I}(B_I), \tilde A\rangle = \langle \iota_{I}(B_I),aa^T \rangle = \langle B_I, a_I a_I^T\rangle.$$ But note that $a_I$ is a column of $A_I$ for $I\neq\{1,2,\ldots,n-1\}$; since $B_I,A_I\succeq 0$ and $\langle B_I, A_I\rangle=0$, we have $B_IA_I=0$, hence $B_Ia_I=0$ and so $\langle B_I, a_I a_I^T\rangle=a_I^TB_Ia_I=0$. Since we know already $B_{\{1,2,\ldots,n-1\}}=0$ we get $\langle B,\tilde A \rangle=0$. Now evidently $\tilde A$ is not a multiple of $A$ (note that $a\neq 0$, since otherwise $A=A_{\{1,\ldots,n-1\}}\oplus 0$ would be psd; thus $\tilde A$ is psd and nonzero while $A$ is not psd), so it does not span the same ray and we have a contradiction to our assumption that $A_{\{1,2,...,n-1\} }$ has full rank. Therefore $A_{\{1,2,...,n-1\} }$ has rank at most $n-2$, and similarly any other principal $(n-1)\times (n-1)$ submatrix has rank at most $n-2.$ Since $A$ is not psd, Lemma \[rankpsd\] excludes rank at most $n-3$, so all these submatrices have rank exactly $n-2$.
For the reverse direction, assume that $A$ does not span an extreme ray of $(FW_{n-1}^n)^*.$ This means that we can write it as $$A=\gamma X+(1-\gamma)Y\ \text{for some}\ X,Y \in (\widetilde{FW}_{n-1}^n)^*\ \text{with}\ X\neq A\neq Y\ \text{and}\; \gamma\in ]0,1[,$$ where $(\widetilde{FW}_{n-1}^n)^*$ is the compact section of the cone $(FW_{n-1}^n)^*$ consisting of the matrices that have the same trace as the matrix $A$.
Let $X_{\lambda}=\lambda X+(1-\lambda)Y,\; \lambda\in {\mathbb{R}}$. Given some $I$, we know that $(X_\lambda)_I$ has rank at most $n-2$: in fact, the $1$-dimensional space $\ker(A_I)$ is always contained in $\ker((X_\lambda)_I)$, since $A_Iv=0$ forces $X_Iv=Y_Iv=0$ for the psd matrices $X_I,Y_I$. The set $L=\{\lambda\;|\;X_{\lambda}\in (FW_{n-1}^n)^*\}$ is a closed interval $[\lambda_{min},\lambda_{max}]$, since $\{X_\lambda\;|\;\lambda\in L\}\subseteq(\widetilde{FW}_{n-1}^n)^*$, which is compact, and $X\neq Y$. The eigenvalues and eigenvectors of $(X_\lambda)_I$ change continuously with $\lambda$. Since the zero eigenvalue coming from $\ker(A_I)$ corresponds to a fixed eigenvector, the only way for $(X_\lambda)_I$ to stop being psd is for a second eigenvalue to switch from positive to negative, which implies that for some $I$, $\text{rank}((X_{\lambda_{max}})_I)\leq n-3$, and the same for $(X_{\lambda_{min}})_I$; this means by Lemma \[rankpsd\] that both $X_{\lambda_{max}}$ and $X_{\lambda_{min}}$ are psd. Hence $A$ is psd since it is a convex combination of both. This is a contradiction to the hypothesis.
In the following observation, we note that conjugating a matrix by a permutation or by a positive diagonal scaling does not affect extreme rays.
[**Observation.**]{} Let $D$ be a positive definite $n\times n$ diagonal matrix and $P$ an $n\times n$ permutation matrix. Then
- The operation $\bullet \mapsto D\bullet D$ defines a bijection from ${\mathbb{R}}^{n\times n}$ onto itself which also induces bijections from ${\mathcal{S}}^n$ onto itself and from ${\mathcal{S}}^n_+$ onto itself, and similarly bijections of the families of extreme rays of these cones onto themselves.
- The cones $FW_{k}^n$ and $(FW_k^n)^*$ are by $\bullet \mapsto D\bullet D$ also bijectively mapped onto themselves and analogous claims are true for the families of respective extreme rays.
- The claims of parts a and b remain literally true if we replace in them the corresponding operation by $\bullet \mapsto P^T \bullet P.$
Note that the operations $\bullet \mapsto D \bullet D$ and $\bullet \mapsto P^T \bullet P$ are clearly linear maps and since $D$ and $P$ are invertible, they are bijections. This means that they map extreme rays to extreme rays, and we just have to show that they leave the cones of interest invariant.
Note that $A\in {\mathcal{S}}_+^n$ if and only if we can write it as $VV^T$, and both $DVV^TD^T$ and $P^TVV^TP$ are directly seen to be still positive semidefinite. Moreover, if $V$ has at most $k$ nonzero entries per column, so do $DV$ and $P^TV$, so the operations also preserve $FW_k^n$. To see that they preserve $(FW_k^n)^*$, just note that the $k\times k$ principal submatrices of the image of $A$ are just images of the $k\times k$ principal submatrices of $A$ by maps of these types, so if all were positive semidefinite in $A$ they will all be positive semidefinite in the image of $A$, showing invariance of $(FW_k^n)^*$.
Characterizing extreme rays of $(FW_3^4)^*$
-------------------------------------------
We start this section with the following lemma.
\[dualB\] If $Q$ is a positive semidefinite $4\times 4$ matrix and $Q\not \in FW_3^4$, then there exist a symmetric $4\times 4$ matrix $B$ and a positive definite diagonal matrix $D$ such that
- $B$ spans an extreme ray in $(FW_{3}^4)^*$;
- $B$ has the diagonal entries all equal to $1$;
- $\langle DQD, B \rangle <0.$
Suppose first that for all $B\in (FW_3^4)^*$ we had $\langle Q, B\rangle \geq 0.$ This would show by definition of dual cones, that $Q\in (FW_3^4)^{**}.$ But we know by Proposition \[dualfactork\] that $(FW_3^4)^{**}=FW_3^4.$ So we get a contradiction. So there exists a matrix $B\in (FW_3^4)^*$ such that $\langle Q,B\rangle <0.$ Now every matrix in $(FW_3^4)^*$ is a finite positive linear combination of some matrices that span extreme rays of $(FW_3^4)^*.$ Hence for at least one of these extreme-ray-defining matrices we again must have the inequality. We call this extremal matrix now $B.$
By hypothesis $Q\in {\mathcal{S}}_+^4; $ so $\langle Q,B \rangle <0 ,$ implies $B\not\in {\mathcal{S}}_+^4.$ Since every diagonal entry of $B$ is a diagonal entry of some principal $3\times 3$ submatrix of $B,$ and these are positive semidefinite, the diagonal entries of $B$ are all nonnegative. Assume now that some diagonal entry, say $b_{11}= 0.$ Then by a standard argument, see e.g. [@Johnson p. 400], all the entries of column 1 and row 1 would be 0. The nonzero entries of $B$ are thus found in $B_{234},$ which is positive semidefinite. Hence $B$ is psd, a contradiction.
Thus we have $b_{11},b_{22},b_{33},b_{44} >0$ and the diagonal matrix $D=\Diag(b_{11}^{-1/2},b_{22}^{-1/2}, \linebreak b_{33}^{-1/2},b_{44}^{-1/2})$ is well defined. By the observation before, the matrix $B'=DBD$ will again span an extreme ray of $(FW_3^4)^*$ and it is clear that $B'= (b_{ii}^{-1/2} b_{ij} b_{jj}^{-1/2})_{i,j=1}^4$ is a matrix which has only ones on the diagonal. Finally $\langle D^{-1}Q D^{-1}, B' \rangle= \langle Q , B \rangle <0.$ Thus renaming $D^{-1}, B'$ to $D,B,$ respectively, we get the claim.
Based on the results that we have proven so far, we can fully characterize the extreme rays of $(FW_3^4)^*$.
\[matrixform\] Let $B$ be a symmetric $4\times 4$ matrix, not positive semidefinite, which spans an extreme ray of $(FW_3^4)^*$. Then for some $a,c \in ]-\pi,\pi[\setminus\{0\}$, some permutation matrix $P$ and some nonsingular diagonal matrix $D$, the matrix $B$ has the following form $$DPBP^{T}D^{T}= \begin{bmatrix}
1 &\cos(a) &\cos(a-c) &\cos(c)\\
\cos(a)& 1 &\cos(c) & \cos(a- c)\\
\cos(a-c)& \cos(c) &1 &\cos(a) \\
\cos(c)& \cos(a- c) &\cos(a) &1
\end{bmatrix}.$$
First note that by the considerations of the previous lemma, we can always assume a scaling that takes all diagonal entries of $B$ to $1$. Furthermore, by assumption, $B\in (FW_3^4)^*$ which means all of its $3\times 3$ and accordingly its $2\times 2$ principal submatrices are psd, hence for all $i,j \in \{1,2,3,4\}$, $0\leq b_{ii}b_{jj}-b_{ij}^2 =1 -b_{ij}^2$ and hence $b_{ij}^2\leq 1$ for all pairs $(i,j)$. Therefore, using that the image of the cosine function is $[-1,1],$ we can write $B$ as $$B=\begin{bmatrix}
1 & \cos(a)& \cos(b) & \cos(c) \\
\cos(a)& 1 & b_{23}& b_{24}\\
\cos(b)& b_{23}& 1 & b_{34} \\
\cos(c)& b_{24}& b_{34}& 1
\end{bmatrix},$$ for some $a,b,c \in [-\pi,\pi]$. The possibilities $a,b,c\in \{-\pi,0,\pi\}$ will be excluded below. Now since $B$ spans an extreme ray of $(FW_3^4)^*$, by Theorem \[extrmeraynotpsd\] all of its $3\times 3$ principal submatrices have rank $2$ and hence have zero determinant. Starting with the principal submatrix $B_{123}$, we have $$0=\text{det}\left(\begin{bmatrix}
1&\cos(a) &\cos(b)\\
\cos(a)& 1& b_{23} \\
\cos(b)&b_{23}&1
\end{bmatrix}\right)= 1 - b_{23}^2 - \cos(a)^2 + 2b_{23}\cos(a)\cos(b) - \cos(b)^2.$$ By solving this quadratic equation for $b_{23}$ one finds
$$\begin{array}{rcl}
b_{23}&\in &\{\cos(a)\cos(b) \pm \sqrt{1-\cos(a)^2-\cos(b)^2+\cos(a)^2\cos(b)^2} \}\\
&=& \{\cos(a)\cos(b) \pm \sqrt{(1-\cos(a)^2) (1-\cos(b)^2)} \}\\
&=& \{\cos(a)\cos(b) \pm \sin(a)\sin(b) \}\\
&=&\{\cos(a\mp b)\}.
\end{array}$$
We do completely analogous calculations for the principal submatrices $B_{134}$ and $B_{124}$ and obtain $b_{34}\in \{\cos(b\pm c)\}$ and $b_{24}\in \{\cos(a\pm c)\}$, respectively. Now we have eight matrices that emerge from choosing one of the symbols $+$ or $-$ in each of the patterns $a\pm b, a\pm c, b\pm c$ appearing in the matrix below, taking care that the symmetry of the matrix is preserved. $$\begin{bmatrix}
1 &\cos(a) &\cos(b) &\cos(c)\\
\cos(a)& 1 &\cos(a\pm b) & \cos(a\pm c)\\
\cos(b)& \cos(a\pm b) &1 &\cos(b\pm c) \\
\cos(c)& \cos(a\pm c) &\cos(b\pm c) &1
\end{bmatrix}.$$ The following table indicates in the first column the possible selections of signs in $ a\pm b, a\pm c, b \pm c,$ respectively; and in the second column and the third column the determinants of the respective matrices $ B_{234}$ and $ B.$ $$\begin{array}{ccc}
x\pm y& \det( B_{234}) & \det(B) \\
+, +, +& 4\sin(a) \sin(b) \sin(c) \sin(a + b + c)& -4 \sin(a)^2 \sin(b)^2 \sin(c)^2\\
+, +, -& 0 & 0\\
+, -, +& 0 & 0\\
+, -, -& -4\sin(a) \sin(b) \sin(a + b - c) \sin(c)& -4\sin(a)^2\sin(b)^2\sin(c)^2\\
-, +, +& 0& 0\\
-, +, -& -4\sin(a) \sin(b) \sin(c) \sin(a - b + c)& -4\sin(a)^2 \sin(b)^2 \sin(c)^2\\
-, -, +& 4 \sin(a) \sin(b) \sin(a - b - c) \sin(c) & -4\sin(a)^2 \sin(b)^2 \sin(c)^2\\
-, -, -& 0& 0
\end{array}$$ Now assume one of the reals $a,b,c$ is $0$ or $\pm\pi.$ Then the table shows that all entries in columns two and three vanish. Hence the matrix $B$ in this case is positive semidefinite. Thus in order that $B$, as required, is not positive semidefinite it is necessary that $a,b,c\notin\{-\pi,0,\pi\}$. In this case column three guarantees that we get a matrix $B$ which is not positive semidefinite exactly for the sign choices $+++, +--,-+-, --+$ for $ a\pm b, a\pm c, b \pm c,$ respectively. The matrices corresponding to rows 2, 3, 5, 8 of the table are positive semidefinite independently of the choices of $a,b,c.$ Explicitly this means that $B$ must be one of the following four matrices
$$\hspace*{-1cm}\begin{bmatrix}
1 &\cos(a) &\cos(b) &\cos(c)\\
\cos(a)& 1 &\cos(a+ b) & \cos(a+ c)\\
\cos(b)& \cos(a+ b) &1 &\cos(b+ c) \\
\cos(c)& \cos(a+ c) &\cos(b + c) &1
\end{bmatrix},
\begin{bmatrix}
1 &\cos(a) &\cos(b) &\cos(c)\\
\cos(a)& 1 &\cos(a+ b) & \cos(a- c)\\
\cos(b)& \cos(a+ b) &1 &\cos(b- c) \\
\cos(c)& \cos(a- c) &\cos(b- c) &1
\end{bmatrix},$$ $$\hspace*{-1cm} \begin{bmatrix}
1 &\cos(a) &\cos(b) &\cos(c)\\
\cos(a)& 1 &\cos(a- b) & \cos(a+ c)\\
\cos(b)& \cos(a- b) &1 &\cos(b- c) \\
\cos(c)& \cos(a+ c) &\cos(b- c) &1
\end{bmatrix}, \begin{bmatrix}
1 &\cos(a) &\cos(b) &\cos(c)\\
\cos(a)& 1 &\cos(a- b) & \cos(a- c)\\
\cos(b)& \cos(a- b) &1 &\cos(b+ c) \\
\cos(c)& \cos(a- c) &\cos(b+ c) &1
\end{bmatrix}.$$
Note by substituting the letter $c$ by $-c$ in the left upper matrix we get the right upper matrix because $\cos(-c)=\cos( c).$ Exactly the same remark leads from the left lower matrix to the right lower matrix. Finally note that after doing the transpositions of rows and columns $3,4$, the upper left matrix shown takes the form $$\begin{bmatrix}
1 &\cos(a) &\cos(c) &\cos(b)\\
\cos(a)& 1 &\cos(a+ c) & \cos(a+ b)\\
\cos(c)& \cos(a+ c) &1 &\cos(b+ c) \\
\cos(b)& \cos(a+ b) &\cos(b + c) &1
\end{bmatrix}$$ and after changing the name of variable $c$ to $-b$ and of variable $b$ to $c$ and noting that $\cos(b-c)=\cos(c-b)$ we see we have obtained the following matrix, which is the left lower matrix. $$\begin{bmatrix}
1 &\cos(a) &\cos(b) &\cos(c)\\
\cos(a)& 1 &\cos(a- b) & \cos(a+ c)\\
\cos(b)& \cos(a- b) &1 &\cos(c- b) \\
\cos(c)& \cos(a+ c) &\cos(c -b) &1
\end{bmatrix}.$$ Hence we have one form and its possible permutations. We focus on the lower right matrix as the standard one. Now we know that the determinant of the submatrix $B_{234}$ is $ 4\sin(a)\sin(b)\sin(a - b - c)\sin(c). $ We know by Theorem \[extrmeraynotpsd\] that all $3\times 3$ principal minors must vanish, so $\det(B_{234})=0$, which happens if and only if $b=a-c+k\pi$ for some integer $k$. Substituting this in the starting matrix $B$ we get the following two forms $$\begin{bmatrix}
1 &\cos(a) &\delta\cos(a-c) &\cos(c)\\
\cos(a)& 1 &\delta\cos(c) & \cos(a- c)\\
\delta\cos(a-c)& \delta\cos(c) &1 &\delta\cos(a) \\
\cos(c)& \cos(a- c) &\delta\cos(a) &1
\end{bmatrix},$$ with $\delta =\pm 1$. But note that these are the same up to scaling by the diagonal matrix $\Diag(1,1,-1,1).$ So we may assume $\delta=1$, finishing the proof.
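The conclusion of Theorem \[matrixform\] is easy to probe numerically. The Python sketch below is ours and purely illustrative (the parameter values $a=1.0$, $c=0.4$ are arbitrary admissible choices); it confirms that every $3\times 3$ principal submatrix of the cosine matrix is psd with vanishing determinant (rank $2$), while the full determinant is negative, so the matrix is not psd.

```python
import itertools
import numpy as np

def cosine_matrix(a, c):
    """The 4x4 matrix appearing in Theorem [matrixform] for parameters a and c."""
    ang = np.array([[0.0, a, a - c, c],
                    [a, 0.0, c, a - c],
                    [a - c, c, 0.0, a],
                    [c, a - c, a, 0.0]])
    return np.cos(ang)

B = cosine_matrix(1.0, 0.4)
for K in itertools.combinations(range(4), 3):
    sub = B[np.ix_(K, K)]
    # each 3x3 principal submatrix is psd with zero determinant (rank 2)
    print(K, np.linalg.eigvalsh(sub).min() >= -1e-9, round(np.linalg.det(sub), 12))
print("det(B) =", np.linalg.det(B))   # negative, so B is not psd
```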
Factor width $k$ matrices and sums of $k$-nomial squares polynomials {#5}
====================================================================
Ahmadi and Majumdar in [@Ahm14] considered the polynomial $$p_n^a=(\sum_{i=1}^n x_{i})^2+(a-1)\sum_{i=1}^n x_{i}^2$$ when $n=3$ and proved that if $a<2$ then no nonnegative integer $r$ can be chosen so that $(x_1^2+x_2^2+x_3^2)^r p_3^a$ is a sum of squares of binomials, although it is clearly nonnegative for $a\geq 1$.
In this section, we give negative results along the same lines. We first characterize when $p_n^a$ is a sum of $k$-nomial squares, and then we show that $p_{n,r}^a,$ that is, the product of $p_n^a$ and $(\sum_{i=1}^n x_{i}^2)^r$, is a sum of $k$-nomial squares if and only if this is the case for $r=0$. Before presenting our proof, we make the connection between factor width $k$ matrices and sums of $k$-nomial squares, which will be used throughout the proof. In the following proposition $z(x)_d$ is the vector of all monomials of degree $d$, arranged in some order, in the variables figuring in $x.$
\[factork\] A multivariate homogeneous polynomial $p(x)$ of degree $2d$ is a sum of $k$-nomial squares (so$k$s) if and only if it can be written in the form $ p(x)=z(x)_d^T Q z(x)_d$ with matrix $Q\in FW^{n+d-1 \choose d}_k.$
Consider an expression $a_1 m_1+\cdots + a_k m_k$ with reals $a_1,\ldots,a_k$ and monomials $m_1,\ldots,m_k$ of degree $d$. Note that monomials $m_1,\ldots,m_k$ occur necessarily in the column $z(x)_d$ at positions $i_1,\ldots,i_k,$ say. Construct a column $q$ of size ${n+d-1 \choose d}$ by putting into positions $i_1,\ldots,i_k$ respectively the reals $a_1,\ldots,a_k,$ and into all other positions 0s. Then evidently $z(x)_d^T q= a_1 m_1+\cdots + a_k m_k,$ and consequently $z(x)_d^T q q^T z(x)_d= (a_1 m_1+\cdots + a_k m_k)^2.$ Consequently, a polynomial which is a sum of, say, $t$ squares of $k$-nomials can be written as $z(x)_d^T Q z(x)_d ,$ where $Q=\sum_{\nu=1}^t q_\nu q_\nu^T ,$ with suitable columns $q_1,\ldots,q_t$ of size ${n+d-1 \choose d}$ each of which has at most $k$ nonzero entries. It follows that $Q$ is a matrix of factor width at most $k.$ Conversely if $Q$ is of factor width at most $k,$ then we already know from the beginning of Section 3 that we can write $Q=\sum_{\nu=1}^t q_\nu q_\nu^T$ where each column $q_\nu$ has at most $k$ nonzero real entries. Clearly it now follows from the arguments above that $z(x)_d^T Q z(x)_d$ yields a polynomial which is a finite sum of $k$-nomial squares.
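To make the correspondence of Proposition \[factork\] concrete, the following small sympy computation (ours, purely illustrative) starts from a matrix $V$ whose columns have at most two nonzero entries, so that $Q=VV^T$ has factor width at most $2$, and expands $z(x)_1^T Q z(x)_1$ into the corresponding sum of binomial squares.

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
z = sp.Matrix([x1, x2, x3])              # z(x)_1: the monomials of degree 1

# each column of V has at most 2 nonzero entries, so Q = V V^T has factor width <= 2
V = sp.Matrix([[1, 0, 1],
               [1, 1, 0],
               [0, 1, 1]])
Q = V * V.T
p = sp.expand((z.T * Q * z)[0])

print(Q)                                 # Matrix([[2, 1, 1], [1, 2, 1], [1, 1, 2]])
print(p)                                 # 2*x1**2 + 2*x1*x2 + 2*x1*x3 + 2*x2**2 + 2*x2*x3 + 2*x3**2
sobs = (x1 + x2)**2 + (x2 + x3)**2 + (x1 + x3)**2
print(sp.expand(p - sobs))               # 0: p is the sum of the binomial squares read off from V
```

The polynomial produced here is $p_3^a$ for $a=2$ (in the notation of [@Ahm14] recalled above), a case to which we return below.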
We shall also need the following lemma.
\[coefficients\] Consider a quadratic form $q(x)=x^{T}Qx$ and a polynomial $p$ related to $q$ by $p=(\sum_{i=1}^n (\lambda_i x_i)^2)^r\,q.$ Then every monomial of $p$ has at most two odd degree variables and we have $p_{(i,j)}=2 (\sum_{i=1}^n \lambda_i^2)^r q_{ij}$ and $p_0=(\sum_{i=1}^n \lambda_i^2)^r {\rm trace}(Q)$ where $p_{(i,j)}$ is the sum of coefficients of the monomials in which $x_{i}$ and $x_{j}$ have odd degree, $p_0$ is the sum of coefficients of even monomials of $p$ and $q_{ij}$ is the entry $(i,j)$ of $Q$.
The quadratic form is $$q(x)=\sum_{1\leq i,j\leq n}x_{i}q_{ij}x_{j}=\sum_{i=1}^{n}q_{ii}x_{i}^{2}+\sum_{1\leq i<j\leq n} 2q_{ij}x_{i}x_{j},$$ while by the multinomial theorem we have $$( (\lambda_1 x_{1})^2+\cdots + (\lambda_n x_{n})^2)^r=
\sum_{i_{1}+\cdots+i_{n}=r} \binom{r}{i_{1},\ldots,i_{n}}
(\lambda_1 x_{1})^{2i_{1}}(\lambda_2 x_{2})^{2i_{2}}\ldots (\lambda_n x_{n})^{2i_{n}}.$$ Thus, putting the $\lambda$s into evidence, by definition of $p,$ we get $$\begin{aligned}
p&=\sum_{(i,\underline{i})\in J_{1}} q_{ii}
\binom{r}{\underline{i}} \lambda_{1}^{2i_{1}}\cdots \lambda_{n}^{2i_{n}} \cdot
x_{1}^{2i_{1}}\cdots x_{i}^{2i_{i}+2}\cdots x_{n}^{2i_{n}}\\
&+\sum_{((i,j),\underline{i})\in J_{2}} 2q_{ij} \binom{r}{\underline{i}}
\lambda_{1}^{2i_{1}}\cdots \lambda_{n}^{2i_{n}}\cdot
x_{1}^{2i_{1}}\ldots x_{i}^{2i_{i}+1}\ldots x_{j}^{2i_{j}+1}\ldots x_{n}^{2i_{n}},
\end{aligned}$$ where $\underline{i}=(i_{1},\ldots,i_{n})$ and, with $|\underline{i}|=i_{1}+\cdots+i_{n}$, $$\begin{aligned}
J_{1}&=\{(i,\underline{i}): i\in \{1,\ldots,n\},\underline{i}\in \mathbb{Z}_{\geq 0}^n,|\underline{i}|=r\},\\
J_{2}&=\{((i,j),\underline{i}): 1\leq i<j\leq n,\underline{i}\in \mathbb{Z}_{\geq 0}^n,|\underline{i}|=r\}.
\end{aligned}$$ From the above equation for $p$ , we recognize that $$p_{(i,j)}=2q_{ij}\sum_{i_{1}+\cdots+i_{n}=r} \binom{r}{i_{1},\ldots,i_{n}}
\lambda_{1}^{2i_{1}}\cdots \lambda_{n}^{2i_{n}}
=2 q_{ij} (\lambda_1^2+\cdots +\lambda_n^2)^r ,$$ again by the multinomial theorem; and similarly we have $${\hspace*{-2.5mm}}{\hspace*{-2.5mm}}{\hspace*{-2.5mm}}p_0=\sum_{i=1}^{n}\sum_{i_{1}+\cdots+i_{n}=r}
q_{ii} \binom{r}{i_{1},\ldots,i_{n}} \lambda_{1}^{2i_{1}} \cdots \lambda_{n}^{2i_{n}} =
\sum_{i=1}^{n}q_{ii}\sum_{i_{1}+\cdots+i_{n}=r} \binom{r}{i_{1},\ldots,i_{n}}
\lambda_{1}^{2i_{1}} \cdots \lambda_{n}^{2i_{n}}$$ $$=(\lambda_1^2+\cdots+\lambda_n^2 )^r {\rm trace}(Q).$$
In addition, we will make use of the following fact proved in Muir’s treatise [@muir1933treatise p 61].
\[determinant\] For the determinant at the left hand side below which has only letters $a$ except on the diagonal, we have $$\left|\begin{array}{cccc}
b_1& a & ... & a \\
a & b_2 & ... & a \\
\vdots & \vdots &\ddots & \vdots \\
a & a & ... & b_n
\end{array}\right| = \prod_{i=1}^n (b_i-a)+ a \sum_{j=1}^n \prod_{i:i\neq j}^n (b_i-a)$$
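For the reader who wishes to double-check the identity, a quick symbolic verification for small $n$ (ours; Muir's treatise contains a full proof) can be carried out as follows.

```python
import sympy as sp
from functools import reduce
from operator import mul

n = 4
a = sp.symbols('a')
b = sp.symbols('b1:5')                                   # b1, b2, b3, b4

M = sp.Matrix(n, n, lambda i, j: b[i] if i == j else a)  # diagonal b_i, off-diagonal a

prod_all = reduce(mul, (bi - a for bi in b), sp.Integer(1))
rhs = prod_all + a * sum(reduce(mul, (bi - a for bi in b if bi != bj), sp.Integer(1))
                         for bj in b)
print(sp.expand(M.det() - rhs))                          # 0: the two sides agree
```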
Now we are ready to prove our results regarding Ahmadi and Majumdar’s example.
If $a\geq \frac{n-1}{k-1},$ then $p_n^a$ is a sum of $k$-nomial squares.
The quadratic form $p_n^a$ can be written as $a \sum_{i=1}^n x_i^2 +2\sum_{i<j}x_ix_j$, so $p_n^a= z(x)_1^T Q z(x)_1$ by means of the $n\times n$ matrix $Q$ shown below. $$Q=\begin{bmatrix}
a & 1 & \cdots &1 & 1 \\
1 & a & \cdots &1 & 1 \\
\vdots& & \ddots & &\vdots \\
1 & & & & 1 \\
1 & 1 & \cdots & 1 & a
\end{bmatrix}.$$ Now there exist ${n\choose k}$ subsets $K$ of cardinality $k$ of the set $\{1,2,\ldots,n\}.$ Let $i,j\in \{1,2,\ldots,n\}.$ A pair $(i,i)$ lies in exactly ${n-1 \choose k-1}$ of the sets $K\times K$ while a pair $(i,j)$ with $i\neq j$ lies in $K\times K$ if and only if $\{i,j\}\subseteq K.$ It hence lies in exactly ${n-2 \choose k-2}$ sets $K\times K.$ Consider the $k\times k$ matrix $B$ as following $$B={n-2\choose k-2}^{-1}
\begin{bmatrix}
\frac{(k-1) a}{n-1} & 1 & \cdots &1 & 1 \\
1 & \frac{(k-1) a}{n-1} & \cdots &1 & 1 \\
\vdots& & \ddots & &\vdots \\
1 & & & & 1 \\
1 & 1 & \cdots & 1 & \frac{(k-1) a}{n-1}
\end{bmatrix},$$ and define $\iota_K(B)$ to be the $n\times n$ matrix of support $K\times K$ which carries on it the matrix $B.$ Then our arguments yield that $\sum_{K: |K|=k} \iota_K(B) =Q.$
Take an arbitrary $l\times l$ principal submatrix of the matrix factor of $B.$ By the previous lemma, this submatrix has determinant $\left( \frac{(k-1)a}{n-1} -1\right)^{l-1}\left( \frac{(k-1)a}{n-1}-1+l\right).$ It follows from the hypothesis on $a$ that this determinant is nonnegative. Since a symmetric matrix all of whose principal minors are nonnegative is positive semidefinite, $B,$ and thus $\iota_K(B),$ is a positive semidefinite matrix, and $Q$ is hence a matrix of factor width $\leq k$ by Proposition \[factor\]. This means by Proposition \[factork\] that $p_n^a$ is a sum of $k$-nomial squares.
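For instance (an illustration added here, not taken from [@Ahm14]), for $n=3$ and $k=2$ the threshold is $a=\frac{n-1}{k-1}=2$ and the construction above gives $$p_3^2=(x_1+x_2)^2+(x_2+x_3)^2+(x_1+x_3)^2,$$ while for $n=5$ and $k=3$ the threshold is again $a=2$ and we obtain $$p_5^2=\frac{1}{3}\sum_{1\leq i<j<l\leq 5}(x_i+x_j+x_l)^2,$$ since every square $x_i^2$ occurs in $\binom{4}{2}=6$ of the $\binom{5}{3}=10$ triples and every product $x_ix_j$ occurs in $\binom{3}{1}=3$ of them.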
\[thm:generalAhmadiMajumdar\] For integers $n\geq 1$ and $r\geq 0,$ define
$$p_{n,r}^a=(\sum_{i=1}^n x_{i}^2)^r\,p_{n}^a
=(\sum_{i=1}^n x_{i}^2)^r \cdot \left( (\sum_{i=1}^n x_{i})^2+(a-1)\sum_{i=1}^n x_{i}^2\right).$$ Then $p_{n,r}^a$ is a sum of $k$-nomial squares if and only if $p_n^a=p_{n,0}^a$ is a sum of $k$-nomial squares.
Clearly, if $p_n^a$ is a so$k$s then $p_{n,r}^a$ is a so$k$s. So we need to show the converse. Assume that the degree $2(r+1)$ polynomial $p_{n,r}^a$ is a so$k$s. Let $I_{n,r+1}=\{(i_1,\ldots,i_n)\ s.t.\ i_j\in \mathbb{N}_0, \sum_{j=1}^n i_j=r+1\}$ be the set of vectors of exponents in $\mathbb{Z}_{\geq 0}^n$ that occur in the family of monomials of a homogeneous polynomial of degree $r+1$ in variables $x_1,...,x_n.$ Let this family of monomials be also the one that occurs in $z(x)_{r+1}$.
By Proposition \[factork\], we can write $$p_{n,r}^a=z(x)_{r+1}^T H_{n,r} z(x)_{r+1}\;\text{for some}\; H_{n,r} \in FW_{k}^{ {n+r \choose r +1} }.$$ Call an $i\in \mathbb{Z}_{\geq 0}^n$ [*even*]{} if it has only even entries and consider now the matrix $B_{n,r}\in {\mathbb{R}}^{{I_{n,r+1}}\times{I_{n,r+1}}}$ given by
$$(B_{n,r})_{ij} = \left\{ \begin{array}{cl}
k-1 & \text{ if $i+j$ is even, } \\
-1 & \mbox{ otherwise. }
\end{array} \right .$$ We will show now that $B_{n,r}\in (FW_{k}^{{n+r \choose r+1}})^{*};$ that is we shall prove that every $k\times k$ principal submatrix of $B_{n,r}$ is positive semidefinite, see Proposition \[dualfactork\]. Since $n,r$ are fixed, we write $B$ and $H$ for matrices $B_{n,r}, H_{n,r}$ respectively.
Note that a sum $i+j$ of such $n$-uples is even if and only if the sets of positions in $i$ where odd entries occur equals the corresponding set in $j.$ (Example: The 5-uple $i=(1,0,0,3,2)$ has $\{1,4\}$ as the set of positions of odd entries.)
So take a $k\times k$ principal submatrix $M$ of $B$ with rows and columns indexed by the $n$-uples $i_{1},\ldots,i_{k},$ say. Determine for each $n$-uple its set of positions of odd entries. Let $S_1,\ldots,S_l$ $(l\leq k)$ be the distinct sets of such positions that occur (the empty set included, corresponding to $n$-uples with only even entries). Now rearrange the $n$-uples so that the first few $n$-uples each have $S_1$ as set of positions of odd entries, the next few have $S_2$ as such set of positions, etc. Let $s_1,\ldots,s_l$ be the sizes of these groups of $n$-uples. To the rearrangement of the $n$-uples corresponds a $k\times k$ permutation matrix $P$ such that $PMP^T$ is ‘a direct sum of blocks of sizes $s_1\times s_1,\ldots,s_l\times s_l$ with entries $k-1$ over a background of $-1$s’. Formally, for suitable $P$ we can express this as
$$PMP^T = (-1)J_k + k(J_{s_1}\oplus J_{s_2}\oplus\cdots\oplus J_{s_l})$$ This same matrix can be produced as follows. Define $l\times l$ matrix $N$ and $l\times k$ matrix $C$ by $$N=(-1)J_l+kI_l=\begin{bmatrix}
k-1& -1&\ldots & -1\\
-1& k-1& \ldots & -1\\
\vdots&\ & \ddots&\\
-1&\quad&\ldots & k-1
\end{bmatrix},$$ $$C=\left(\begin{array}{cccccccccccccc}
1&1&\cdots &1& & & & & & & & & & \\
& & & &1&1&\cdots &1& & & & & & \\
& & & & & & & & .... & & & & & \\
& & & & & & & & &1& 1&\cdots &1&
\end{array}\right)$$ where rows, $1,2,...,l$ of $C$ have, respectively, $s_1,s_2,...,s_l$ entries equal to 1. Check that then $PMP^T = C^T N C.$ Now, again by Lemma \[determinant\], $N$ is positive semidefinite, Hence $M$ will be psd. Since the $k\times k$ submatrix $M$ of $B$ was arbitrary, we are done with proving that $B\in (FW_k^{n+r\choose r+1})^*.$ By definition of the concept of a dual cone, we have $\langle B, H\rangle \geq 0,$ and by the definitions of $\langle, \rangle,$ $H,$ and $B,$ hence $$\langle B, H\rangle = (k-1)\sum_{i,j: i+j\; \text{even}} h_{ij}+ (-1) \sum_{i,j: i+j\; \text{non-even}} h_{ij} \geq 0.$$ Since the quadratic form underlying our construction of $p_{n,r}^a$ is $p_n^a=a \sum_{i=1}^n x_i^2 +2\sum_{i<j}x_ix_j,$ and it has the defining matrix $Q$ mentioned in the previous proposition, we get by Lemma \[coefficients\] that $$\begin{aligned}
\sum_{i,j: i+j\; \rm even}h_{ij}&=&n^r {\rm trace}\, Q = n^{r+1} a; \\ \sum_{i,j: i+j\: \rm non-even} h_{ij} &= & 2 n^r \times \sum_{1\leq i<j \leq n} q_{ij} = 2n^r \frac{1}{2}n(n-1)=n^{r+1} (n-1).\end{aligned}$$ Hence the inequality above reads $(k-1) n^{r+1} a \geq n^{r+1}(n-1)$ or $a\geq \frac{n-1}{k-1},$ which means by the previous proposition that $p_n^a$ is a sum of $k$-nomial squares.
A quadratic form that is not sum of squares of binomials (so2s) is not $r$-so2s for any $r$ {#6}
===========================================================================================
For the case of $k=2$, sums of squares of $k$-nomials are also known as sums of binomial squares [@Fidalgo2011] or scaled diagonally dominant sums of squares (SDSOS) [@Ahm14]. In this section we will try to generalize Ahmadi and Majumdar’s counterexample in this setting. More concretely, we will prove that the standard multipliers are useless for certifying nonnegativity of quadratics using sobs, by showing that a quadratic form is $r$-sobs if and only if it is sobs. But before we proceed further, we shall need the following proposition.
[*[@Fidalgo2011 Corollary 2.8].*]{}\[dmtsobsprop\] Given a quadratic form $q(x)=\sum_{i=1}^{n}q_ix_i^{2}+\sum_{i<j}q_{ij}x_i x_j$, if $\hat{q}(x)=\sum_{i=1}^{n}q_ix_i^{2}-\sum_{i<j}|q_{ij}|x_i x_j$ is nonnegative, then $q(x)$ is a sum of binomial squares.
This is enough to show the previously announced result.
\[Quadraticsobs\] Let $q(x)=q(x_1,\ldots,x_n)$ be a real quadratic form and let $r\in \mathbb{Z}_{\geq 0}$. Then if $q(x)(x_1^2+\cdots + x_n^2)^r$ is a sum of binomial squares, so is $q(x)$ itself.
Assume that $q(x)(x_{1}^2+\cdots+ x_{n}^2)^r$ is a sum of binomial squares. We will prove that $q(x)$ is a sum of binomial squares. Write $q(x)=\sum_{i=1}^{n}a_ix_i^2+\sum_{1\leq i<j\leq n} d_{ij}x_ix_j,$ say. Then considerations as in the proof of Lemma \[coefficients\] yield $$\begin{aligned}
q(x)\,(x_{1}^2+\cdots + x_{n}^2)^r
&=\sum_{(i,\underline{i})\in J_1}
a_i \binom{r}{\underline{i}} x_{1}^{2i_{1}} \cdots x_{i}^{2i_{i}+2} \cdots x_n^{2i_n}\\
&+\sum_{((i,j),\underline{i})\in J_2} d_{ij} \binom{r}{\underline{i}} x_1^{2i_1}\cdots x_i^{2i_i+1}\cdots x_j^{2i_j+1}\cdots x_n^{2i_n},
\end{aligned}$$ where again, $$J_{1}=\{(i,\underline{i}): i\in \{1,\ldots,n\},\underline{i}\in \mathbb{Z}_{\geq 0}^n,|\underline{i}|=r\},$$ $$J_{2}=\{((i,j),\underline{i}): 1\leq i<j\leq n,\underline{i}\in \mathbb{Z}_{\geq 0}^n,|\underline{i}|=r\}.$$ Since $q(x)(x_{1}^2+\cdots+x_{n}^2)^r$ is homogeneous of degree $2(r+1)$, the binomials in any representation of it as a sum of binomial squares may be taken to involve only monomials of degree $r+1$. Now the monomials of degree $r+1$ are of the form $x_1^{i_1}x_2^{i_2}\ldots x_n^{i_n}$ with $i_1+\cdots+i_n=r+1$. There are, as we know, $L=\binom{n+r}{r+1}$ such monomials. We order these and denote them by $m_1,\ldots,m_L$. Every binomial is of the form $(\alpha_{ij}m_i+\beta_{ij}m_j)$ with some selection of $i,j$ with $1\leq i<j\leq L$. By defining suitable $\alpha_{ii}$, we can thus assume the binomials are of the form $\alpha_{ii}m_i$, $1\leq i\leq L$ or $(\alpha_{ij}m_i+\beta_{ij}m_j)$ with $1\leq i<j\leq L$. A sum of binomial squares is thus given as
$$\begin{aligned}
\lefteqn{ \sum_{i=1}^L\alpha_{ii}^2m_i^2+\sum_{1\leq i<j\leq L}(\alpha_{ij}m_i+\beta_{ij}m_j)^2}\\
&=&\sum_{i=1}^{L}\alpha_{ii}^{2}m_{i}^{2}+\sum_{ i<j}\alpha_{ij}^{2}m_{i}^{2}+\sum_{ i<j}\beta_{ij}^{2}m_{j}^2 + \sum_{ 1\leq i<j\leq L}
2\alpha_{ij}\beta_{ij}m_{i}m_{j}\\
&=&\sum_{i=1}^{L}(\alpha_{ii}^{2}+\alpha_{i,i+1}^{2}+\ldots+\alpha_{iL}^{2}+
\beta_{1i}^2+\ldots+\beta_{i-1,i}^2)m_{i}^2+\sum_{ 1\leq i<j\leq L}2\alpha_{ij}\beta_{ij}m_{i}m_{j}.\end{aligned}$$
Now assuming, as we do, that $q(x)(x_{1}^2+\cdots + x_{n}^2)^r$ is a sobs, by means of comparison of coefficients, we get a system of $|J_{1}|+|J_{2}|$ equations between reals. It is easily seen that these equations can be obtained as follows: For each $(i,\underline{i})\in J_{1}$ define $$T(i,\underline{i})=\{\text{indices}\ t\in \{1,\ldots,L\}\ \text{for which}\ m_{t}^2=
x_{1}^{2i_{1}}\cdots x_{i}^{2i_{i}+2}\cdots x_{n}^{2i_{n}} \},$$ $$S(i,\underline{i})=\{\text{pairs}\ s_{1}<s_{2}\ \text{so that}\ m_{s_1}m_{s_2}=
x_1^{2i_1}\cdots x_i^{2i_i+2}\cdots x_n^{2i_n} \}$$ and write the equation $$a_{i} {r\choose \underline i} =\sum_{ t\in T(i,\underline{i})} (\alpha_{tt}^{2}+\ldots+\alpha_{tL}^{2}+\beta_{1t}^{2}+\ldots+\beta_{t-1,t}^{2})+\sum_{(s_{1},s_{2})\in S(i,\underline{i})}2\alpha_{s_{1}s_{2}}\beta_{s_{1}s_{2}}.$$ For each $((i,j),\underline{i})\in J_{2}$, let $$S^{\prime}((i,j),\underline{i})=\{\text{pairs}\ s_{1}^{\prime} < s_{2}^{\prime}\ \text{ so that }\ m_{s_{1}^{\prime}}m_{s_{2}^{\prime}}=x_{1}^{2i_{1}}\ldots x_{i}^{2i_{i}+1}\ldots x_{j}^{2i_{j}+1}\ldots x_{n}^{2i_{n}}\},$$ and write the equation $$d_{ij} {r\choose \underline i} =\sum_{ (s_{1}^{\prime}, s_{2}^{\prime})\in S^{\prime}((i,j),\underline{i})}2\alpha_{s_{1}^{\prime} s_{2}^{\prime}}\beta_{s_{1}^{\prime} s_{2}^{\prime}}.$$ Every system of reals $(\{a_{i}\}_{i=1}^n,\{d_{ij}\}_{i,j=1}^n,\{\alpha_{ij}\}_{1\leq i\leq j\leq L},\{\beta_{ij}\}_{1\leq i < j\leq L})$ which satisfies the system of equations gives rise to a quadratic form $q$ and binomials so that $ q(x)(x_1^2+\cdots+x_n^2)^r$ is a sum of squares of these binomials. Now if we have a system of reals satisfying the system, then we can find a particular new solution by replacing the $d_{ij}$ which are positive by $-d_{ij}$ and simultaneously, for those $(i,j)$, replacing the $\beta_{s_1^{\prime} s_2^{\prime}}$ for which $(s_1^\prime, s_2^\prime)\in S^\prime((i,j),\underline{i})$ by $-\beta_{s_1^\prime s_2^\prime}$. Indeed note that the sets $S^\prime((i,j),\underline{i})$ are disjoint from the sets $S(i,\underline{i})$ and $(-\beta_{s_1^\prime s_2^\prime})^2=(\beta_{s_1^\prime s_2^\prime})^2$, hence the first set of $|J_{1}|$ equations will again be satisfied. As concerns the second set of equations, we note that the sets $S^{\prime}((i,j),\underline{i})$ are also mutually disjoint, because a choice $(s_{1}^{\prime}, s_{2}^{\prime})$ defines via forming $ m_{s_{1}^{\prime}}m_{s_{2}^{\prime}}$ a unique power product $x_{1}^{2i_{1}}\ldots x_{i}^{2i_{i}+1}\ldots x_{j}^{2i_{j}+1}\ldots x_{n}^{2i_{n}}$ with exactly two odd exponents determining $i,j$ and then $\underline{i}$. In other words, $(s_{1}^{\prime}, s_{2}^{\prime})$ lives in only one of the sets $S^{\prime}((i,j),\underline{i})$; hence, carrying through the replacements indicated, we change the sign of the left hand side of an equation if and only if we change the sign of the corresponding right hand side. We therefore satisfy also the second group of equations.
The new solution tells us that $\hat{q}(x) (x_1^2+\cdots+x_n^2)^r$ is a sum of squares of binomials, where $\hat{q}(x)=\sum_{i=1}^{n}a_{i}x_{i}^{2}-\sum_{1\leq i<j\leq n} |d_{ij}|x_{i}x_{j}$. In particular $\hat{q}(x) (x_1^2+\cdots+x_n^2)^r$ is nonnegative, and since the multiplier is evidently positive away from the origin, $\hat{q}$ is nonnegative. Hence by Proposition \[dmtsobsprop\], $q$ is a sum of squares of binomials.
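As an immediate illustration (this merely restates the example of [@Ahm14] in the present language), take $q=p_3^a$ with $1\leq a<2$. The associated matrix $Q=(a-1)I_3+J_3$ has eigenvalues $a+2$ and $a-1\geq 0$, so $q$ is nonnegative (indeed sos); but by [@Ahm14] it is not sobs, and Theorem \[Quadraticsobs\] then confirms that no power of the standard multiplier $x_1^2+x_2^2+x_3^2$ can ever turn it into a sum of binomial squares.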
Factor width 3 matrices and sums of trinomial squares {#7}
=====================================================
The purpose of this section is to show that if a quaternary quadratic form $q(w,x,y,z)$ is not a sum of squares of trinomials then, given any positive integer $r$, the form $(w^2+x^2+y^2+z^2)^r\cdot q $ is not a sum of squares of trinomials. In fact it will be necessary to show, more generally, that for nonzero reals $\lambda_{1},\lambda_{2},\lambda_{3},\lambda_{4}$, the form $(\lambda_{1}^2w^2+\lambda_{2}^2 x^2+\lambda_{3}^2 y^2+\lambda_{4}^2 z^2)^r\cdot q $ is not a sum of squares of trinomials.
\[scaledtrinomial\] Let $\underline{x}=[w,x,y,z]^\top$ and let $q= \underline{x}^T Q
\underline{x}$ be a psd quadratic form and $B$ a matrix such that $B$ spans an extreme ray in $(FW_3^4)^*$ and $\langle Q, B\rangle <0.$ Then the degree $(2r+2)$ form $p=(\lambda_1^2 w^2+\lambda_2^2 x^2+\lambda_3^2 y^2+\lambda_4^2 z^2)^r q$ is not a sum of trinomial squares, for any, not all zero, reals $\lambda_1,\lambda_2,\lambda_3,\lambda_4$.
The inequality in the hypothesis implies that $B$ is not psd. In addition, it spans an extreme ray, hence by Proposition \[matrixform\], for some permutation $P$ and non singular matrix $D$ and some $a,c\in ]-\pi,\pi[\setminus\{0\}$ it has the following form $$B_2=DPBP^{T}D^{T}= \begin{bmatrix}
1 &\cos(a) & \cos(a-c) &\cos(c)\\
\cos(a)& 1 & \cos(c) & \cos(a- c)\\
\cos(a-c)& \cos(c) &1 & \cos(a) \\
\cos(c)& \cos(a- c) & \cos(a) &1
\end{bmatrix} .$$ We now have the inequality $0> \langle Q,B \rangle= \langle P^T D^{T} Q D P, B_2 \rangle. $ We work with the new quadratic form $q_{\rm new}$ defined by $q_{\rm new}=\underline{x}^T P^T D^{T} Q D P\, \underline{x}$ and show that, given any $\lambda\in ({\mathbb{R}}^*)^4,$ the associated degree $(2r+2)$ form $p_{\rm new}=(\lambda_1^2 w^2+\lambda_2^2 x^2+\lambda_3^2 y^2+\lambda_4^2 z^2)^r q_{\rm new}$ is not a sum of trinomial squares. Since the property ‘not being a sum of trinomial squares for any $\lambda$’ is invariant under permutations and scalings of the variables in $q_{\rm new}$, we shall get the claim concerning the original $p,q.$ For simplicity of notation be aware that [*we redefine*]{} $(Q,B):=(P^T D^{T} Q D P, B_2)$ and $(p,q):=(p_{\rm new},q_{\rm new}).$ The original $Q,B,p,q$ will not play any further role in this proof.
The polynomial $p$ is of degree $2r+2.$ From Proposition \[factork\] we know that $p$ has a, usually nonunique, representation $p=z(x)_{r+1}^T Q' z(x)_{r+1},$ where $z(x)_{r+1}$ collects all monomials of degree $r+1$ and hence $Q'$ is an ${r+4 \choose 3}\times {r+4 \choose 3}$ matrix. We define the matrix $B'=(b_{ij}')$ as follows (where we use, for the moment, as the most natural indexation the one given by the vectors of exponents of the monomials), where $i,j\in \mathbb{Z}_{\geq 0}^4$ are $4$-uples with $|i|=|j|=r+1$, so that $B'$ is also an ${r+4 \choose 3}\times {r+4 \choose 3}$ matrix:
$$b_{ij}'= \left\{ \begin{array}{rl}
b_{kl} & \mbox{ iff $i+j$ has two odd entries exactly in positions $k\neq l$ } \\
1 & \mbox{ iff $i+j$ has only even entries } \\
0 & \mbox{ iff $i+j$ has 1 or 3 odd entries } \\
\omega & \mbox{ iff $i+j$ has only odd entries }
\end{array} \right .$$ (The case that $i+j$ has exactly 1 or 3 odd entries can actually not happen in case $|i|=|j|$, but we will need the given rules below also in cases where $|i|\neq |j|$.) We will show that $B' \in (FW_3^{{r+4 \choose 3}})^*,$ and then that $\langle B',Q'\rangle <0,$ thus showing $Q'\not\in FW_3^{{r+4 \choose 3}},$ and hence showing by Propositions \[dualfactork\] and \[factork\] that $p$ is not a sum of squares of trinomials. We will then see from the fact that being a sum of squares of trinomials is invariant under permutations, that the original $p$ is also not a sum of squares of trinomials.
To any string of exponents $i=(i_1,i_2,i_3,i_4)\in {\mathbb{Z}}_{\geq 0}^4$ we can associate a unique 4-uple ${\varepsilon}={\varepsilon}(i)=({\varepsilon}_1,{\varepsilon}_2,{\varepsilon}_3,{\varepsilon}_4)\in \{0,1\}^4$ defined by $i_\nu \equiv {\varepsilon}_\nu \mod 2.$
To prove that $B' \in (FW_3^{{r+4 \choose 3}})^*,$ note that its entries depend only on ${\varepsilon}(i+j)={\varepsilon}(i)+{\varepsilon}(j)$ (computed in $\mathbb{Z}_2$).
If $|i|$ is even then the only 4-uples possible for ${\varepsilon}(i)$ are:
$0000,1100,1010,1001,0110,0101,0011,1111.$
If $|i|$ is odd then the only 4-uples possible for ${\varepsilon}(i)$ are:
$1000,0100,0010,0001,1110,1101,1011,0111.$
The table below is the modulo 2 addition table for the 4-uples ${\varepsilon}(i)$ with $| i |$ even (for example $1100+1001= 0101$). The reader may verify that precisely the same addition table is obtained if the first row and the first column are replaced by the 4-uples ${\varepsilon}(i)$ for which $|i|$ is odd. If we replace the 4-uples in the inner part of this table according to the rules given for the construction of the matrix $B'$, we get the matrix that follows the table; for example, to $0101$ corresponds $b_{24}.$ That matrix can serve as a look-up table for the construction of (sub)matrices of $B'.$ $$\begin{array}{c|cccccccc}
+ & 0000 &1100&1010&1001&0110&0101&0011&1111 \\ \hline
0000 & 0000 &1100&1010&1001&0110&0101&0011&1111 \\
1100 & 1100&0000&0110&0101&1010&1001&1111&0011 \\
1010 & 1010&0110&0000&0011&1100&1111&1001&0101\\
1001 & 1001&0101 &0011& 0000&1111&1100&1010&0110\\
0110 &0110&1010&1100 &1111& 0000&0011&0101&1001 \\
0101 & 0101& 1001&1111& 1100& 0011 & 0000 &0110 &1010\\
0011 & 0011& 1111 &1001 & 1010 & 0101 & 0110 & 0000& 1100\\
1111 & 1111& 0011 & 0101& 0110 & 1001 & 1010 &1100 & 0000
\end{array}$$ $$\begin{array}{cccccccc}
1 &b_{12} &b_{13} &b_{14} &b_{23} &b_{24} &b_{34} &\omega \\
b_{12} &1 &b_{23} &b_{24} &b_{13} &b_{14} &\omega & b_{34} \\
b_{13} &b_{23} &1 &b_{34} &b_{12} & \omega &b_{14} &b_{24} \\
b_{14} &b_{24} &b_{34} &1 & \omega &b_{12} &b_{13} &b_{23} \\
b_{23} &b_{13} &b_{12} &\omega &1 &b_{34} &b_{24} &b_{14} \\
b_{24} &b_{14} &\omega &b_{12} &b_{34} &1 &b_{23} &b_{13} \\
b_{34} &\omega &b_{14} &b_{13} &b_{24} &b_{23} &1 &b_{12} \\
\omega &b_{34} &b_{24} &b_{23} &b_{14} &b_{13} &b_{12} & 1
\end{array}$$
After having imposed some order on the set of $4$-uples $i$ of 1-norm $|i|=1+r$, one can construct the matrix $B'.$ Consider now selecting three distinct 4-uples $i,j,k$ of 1-norm $1+r$ and selecting in the matrix $B'$ the $3\times 3$ submatrix determined by this selection. If $i$ precedes $j$, and $j$ precedes $k$, in the ordering of the 4-uples, the obtained $3\times 3$ matrix is the matrix on the left. Its entries are, as mentioned, completely determined by the matrix on the right
$\begin{pmatrix}
b_{ii}' & b_{ij}' & b_{ik}' \\
b_{ji}' & b_{jj}' & b_{jk}' \\
b_{ki}' & b_{kj}' & b_{kk}'
\end{pmatrix}$ $\begin{pmatrix}
{\varepsilon}(i+i) & {\varepsilon}(i+j) & {\varepsilon}(i+k) \\
{\varepsilon}(j+i) & {\varepsilon}(j+j) & {\varepsilon}(j+k) \\
{\varepsilon}(k+i) & {\varepsilon}(k+j) & {\varepsilon}(k+k)
\end{pmatrix},$
from which it can be constructed using the above look-up table. Hence the $3\times 3$ submatrix of $B'$ is simply permutation equivalent to a principal $3\times 3$ submatrix of the look-up table, possibly with repeated indices (and duplicating a row together with the corresponding column of a psd matrix preserves positive semidefiniteness), so it is sufficient to show that all principal $3\times 3$ submatrices of the look-up table are positive semidefinite. To see this, note first that the upper left $4\times 4$ block of the look-up table coincides with $B.$ More generally, all principal $3\times 3$ submatrices of the look-up table which do not contain an $\omega$ are permutation equivalent to $3\times 3$ principal submatrices of $B$ and hence are automatically positive semidefinite. The $3\times 3$ principal submatrices containing $\omega$ stem from selecting sets of three row indices which contain one of the sets $\{1,8\},\{2,7\},\{3,6\},\{4,5\}.$ These matrices are permutation equivalent to one of the following matrices:
$$\begin{bmatrix}
1&\omega & b_{12}\\
\omega& 1& b_{34} \\
b_{12}&b_{34}&1
\end{bmatrix},{\hspace*{2.5mm}}\begin{bmatrix}
1&\omega & b_{14}\\
\omega& 1& b_{23} \\
b_{14}&b_{23}&1
\end{bmatrix},{\hspace*{2.5mm}}\begin{bmatrix}
1&\omega & b_{13}\\
\omega& 1& b_{24} \\
b_{13}&b_{24}&1
\end{bmatrix}.$$
So it is sufficient to find an $\omega\in \mathbb{R}$ such that these matrices are positive semidefinite. The easiest choice is to put $\omega=1.$ This is a universal choice, valid for all $0<a,c<\pi$, and it results in determinants equal to $0$. If one is given explicit real numbers for $a,c,$ then putting $\omega=1-{\varepsilon}$ for sufficiently small ${\varepsilon}>0,$ one will obtain strictly positive (sub)determinants. With these checks we have proved that $B' \in (FW_3^{{r+4 \choose 3}})^*.$
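The choice $\omega=1$ is also easy to test numerically. In $B_2$ one has $b_{12}=b_{34}=\cos(a)$, $b_{13}=b_{24}=\cos(a-c)$ and $b_{14}=b_{23}=\cos(c)$, so each of the three matrices above has its two remaining off-diagonal entries equal; the following sketch (ours, assuming numpy) samples $a,c$ and checks positive semidefiniteness.

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    a, c = rng.uniform(-np.pi, np.pi, size=2)
    for t in (np.cos(a), np.cos(a - c), np.cos(c)):
        m = np.array([[1.0, 1.0, t],
                      [1.0, 1.0, t],
                      [t,   t,   1.0]])      # the pattern of the three matrices with omega = 1
        assert np.linalg.eigvalsh(m).min() > -1e-12
print("all sampled matrices are numerically psd with omega = 1")
```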
We now show the other claim we made for $B'.$
Claim: There holds $\langle B',Q'\rangle = (\sum_{i=1}^4 \lambda_i^2 )^r \, \langle B,Q\rangle.$ Thus $\langle Q', B' \rangle <0.$
By the definition of the inner product in matrix space, we have to show
${\displaystyle}\sum \{b_{ij}' q_{ij}': \; i,j\in \mathbb Z_{\geq 0}^4, |i|=|j|=1+r \}
=(\sum_{i=1}^4 \lambda_i^2)^r \sum_{i,j=1}^4 b_{ij}q_{ij}.$
Now, given $ i,j\in \mathbb Z_{\geq 0}^4, |i|=|j|=1+r,$ we have of course $|i+j|=2r+2.$ Furthermore for any such sum $s=i+j$ we have [*a priori*]{} exactly one of the following possibilities: all entries are even; exactly two entries are odd; one or three entries are odd; all entries are odd.
Since for an $s\in \mathbb Z_{\geq 0}^4$ for which $|s|$ is even it is impossible that $s$ has exactly one or three odd entries, we can write the left side above as follows:
${\displaystyle}\sum_{\substack{|s|=2r+2\\ s \text{ has four}\\ \text{even entries}}}\ \sum_{\substack{|i|=|j|=r+1\\ i+j=s}} b_{ij}' q_{ij}' \;+\; \sum_{\substack{|s|=2r+2\\ s \text{ has two}\\ \text{odd entries}}}\ \sum_{\substack{|i|=|j|=r+1\\ i+j=s}} b_{ij}' q_{ij}' \;+\; \sum_{\substack{|s|=2r+2\\ s \text{ has four}\\ \text{odd entries}}}\ \sum_{\substack{|i|=|j|=r+1\\ i+j=s}} b_{ij}' q_{ij}' .$
By the definition of $B'$ given, this is equal to
${\displaystyle}\sum_{\substack{|s|=2r+2\\ s \text{ has four}\\ \text{even entries}}}\ \sum_{\substack{|i|=|j|=r+1\\ i+j=s}} q_{ij}' \;+\; \sum_{1\leq k<l\leq 4}\ \sum_{\substack{|s|=2r+2\\ s \text{ has odd}\\ \text{entries at } k,l}}\ \sum_{\substack{|i|=|j|=r+1\\ i+j=s}} b_{kl}\, q_{ij}' \;+\; \sum_{\substack{|s|=2r+2\\ s \text{ has four}\\ \text{odd entries}}}\ \sum_{\substack{|i|=|j|=r+1\\ i+j=s}} \omega\, q_{ij}' .$
Now we remember that, by its construction, the polynomial $p$ cannot have a monomial in which all exponents are odd, so the third sum is $0$. The first sum is the sum of the coefficients of those monomials of $p$ in which every variable occurs with an even power, and by Lemma \[coefficients\] it equals
${\displaystyle}(\sum_{i=1}^4 \lambda_i^2 )^r (q_{11}+q_{22}+q_{33}+q_{44});$
while the second sum is
${\displaystyle}\sum_{1\leq k<l\leq 4} b_{kl} \sum_{\substack{|s|=2r+2\\ s \text{ has odd}\\ \text{entries at } k,l}}\ \sum_{\substack{|i|=|j|=r+1\\ i+j=s}} q_{ij}'.$
The inner double sum here is exactly the sum of the coefficients of those monomials of $p$ whose exponents are odd precisely at the two distinct positions $k,l.$ Hence, again by Lemma \[coefficients\], the inner double sum is equal to $2(\sum \lambda_i^2 )^r q_{kl}$, and so the second sum is
$2 {\displaystyle}( \sum_{i=1}^4 \lambda_i^2)^r \sum_{1\leq k<l\leq 4} b_{kl} q_{kl} =
( \sum_{i=1}^4 \lambda_i^2)^r \sum_{\scriptsize \begin{array}{c} 1\leq k,l \leq 4,\\ k\neq l \end{array} } b_{kl} q_{kl}.$
The claim now follows because $\sum_{i=1}^4 \lambda_i^2>0.$
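The two consequences of Lemma \[coefficients\] used in this computation can be verified symbolically for small $r$. The following sketch (our own, assuming sympy, with a generic symmetric matrix $Q=(q_{kl})$) checks that the coefficients of the monomials of $p$ with all exponents even sum to $(\sum_i\lambda_i^2)^r(q_{11}+\cdots+q_{44})$, and that the coefficients of the monomials with odd exponents exactly at positions $k,l$ sum to $2(\sum_i\lambda_i^2)^r q_{kl}$.

```python
import itertools
import sympy as sp

w, x, y, z = sp.symbols('w x y z')
X = [w, x, y, z]
lam = sp.symbols('l1:5')

# generic symmetric matrix Q = (q_kl)
Q = sp.Matrix(4, 4, lambda i, j: sp.Symbol(f'q{min(i, j) + 1}{max(i, j) + 1}'))
q = sum(Q[i, j] * X[i] * X[j] for i in range(4) for j in range(4))

r = 2
p = sp.expand(sum(lam[i] ** 2 * X[i] ** 2 for i in range(4)) ** r * q)
coeff = sp.Poly(p, *X).as_dict()           # exponent 4-uple -> coefficient
s = sum(li ** 2 for li in lam)

even = sum(c for e, c in coeff.items() if all(ei % 2 == 0 for ei in e))
print(sp.expand(even - s ** r * sum(Q[i, i] for i in range(4))))        # 0

for k, l in itertools.combinations(range(4), 2):
    part = sum(c for e, c in coeff.items()
               if all((ei % 2 == 1) == (i in (k, l)) for i, ei in enumerate(e)))
    print(sp.expand(part - 2 * s ** r * Q[k, l]))                        # 0
```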
To conclude the proof we detail an idea we mentioned at the beginning. We have shown so far that, whatever the reals $\lambda_1,...,\lambda_4$ (not all zero) are, the polynomial $p_{\rm new}=(\lambda_1^2 w^2+\lambda_2^2 x^2+\lambda_3^2 y^2+\lambda_4^2 z^2)^r q_{\rm new}$, where $q_{\rm new}=x^\top P'^\top Q P' x$ (with $Q$ satisfying the hypotheses), is not a sum of trinomial squares. Now by its definition $q_{\rm new}(w,x,y,z)=q(\pi(w),\pi(x),\pi(y),\pi(z))$, where $\pi$ is the permutation of the variables given by the permutation matrix $P'.$ Since the property ‘to be a sum of squares of trinomials’ is evidently invariant under permutations of the variables, it follows that $(\lambda_1^2 \pi^{-1}(w)^2+\lambda_2^2 \pi^{-1}(x)^2+\lambda_3^2 \pi^{-1}(y)^2+\lambda_4^2 \pi^{-1}(z)^2)^r q(w,x,y,z)$ is not a sum of trinomial squares for any $\lambda_1,...,\lambda_4$, not all zero. Since $\{\pi^{-1}(w), \pi^{-1}(x), \pi^{-1}(y), \pi^{-1}(z)\}=\{w,x,y,z\}$, it follows that $(\lambda_1^2 w^2+\lambda_2^2 x^2+\lambda_3^2 y^2+\lambda_4^2 z^2)^r q(w,x,y,z)$ is not a sum of trinomial squares.
We can now extract from the previous result a new theorem of the same general form as Theorems \[thm:generalAhmadiMajumdar\] and \[Quadraticsobs\].
\[thm:trinomialson4variables\] Assume $r\in {\mathbb{Z}}_{\geq 0}.$ If the quadratic form $q(\underline x)=q(w,x,y,z)$ is not a sum of squares of trinomials, then the quaternary form $(w^2+x^2+y^2+z^2)^r q(\underline x)$ is not a sum of squares of trinomials.
If the quadratic form is not positive semidefinite then the claim is trivial. So assume now that $q$ is positive semidefinite and let it be written as $q=x^T Q x.$ Then $Q$ is positive semidefinite and, by Proposition \[factork\], $Q\not\in FW_3^4.$ So there exists $B\in (FW_3^4)^*$ spanning an extreme ray such that $\langle B,Q \rangle <0.$ By Proposition \[scaledtrinomial\] it follows that $(\lambda_1^2 w^2+\lambda_2^2 x^2+\lambda_3^2 y^2+\lambda_4^2 z^2)^r q(\underline x)$ is not a sum of squares of trinomials for any $\lambda_1,\lambda_2,\lambda_3,\lambda_4$, not all zero. In particular, taking $\lambda_1=\lambda_2=\lambda_3=\lambda_4=1$, the form $(w^2+x^2+y^2+z^2)^r q(\underline x)$ is not a sum of squares of trinomials.
A counterexample {#8}
================
Up to now we have established three results (Theorems \[thm:generalAhmadiMajumdar\], \[Quadraticsobs\] and \[thm:trinomialson4variables\]) showing that quadratics in $n$ variables are $r$-so$k$s if and only if they are so$k$s under certain assumptions, namely that they are symmetric, that $k=2$, or that $n \leq 4$. One might naturally believe that the same holds without such assumptions. In this section we give a counterexample to that natural conjecture: a quadratic form in $5$ variables which is not so$4$s but becomes so$4$s after multiplication by $x_1^2+\cdots+x_5^2$.
*Consider the matrix $M$ given by $$M=\left[
\begin{array}{lllll}
49 & -21 & 37 & -37 & -21 \\
-21 & 17 & -21 & 21 & 29 \\
37 & -21 & 41 & -25 & -33 \\
-37 & 21 & -25 & 41 & 33 \\
-21 & 29 & -33 & 33 & 73
\end{array}
\right].$$ This matrix is not in $FW_4^5$. To see this just verify that the matrix $$A=\begin{bmatrix}
3&1& -2& 2& -1\\
1& 3& 0& 0& -1\\
-2& 0& 2& -1& 1\\
2& 0& -1& 2& -1\\
-1& -1& 1& -1& 1
\end{bmatrix}$$ is in $(FW_4^5)^*$, by checking that all its $4 \times 4$ principal submatrices are psd, and note that $\langle A, M \rangle = -1 < 0$.*
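Both verifications are straightforward to reproduce numerically; a small sketch (ours, assuming numpy) computes the trace inner product $\langle A,M\rangle$ and the smallest eigenvalue of every $4\times 4$ principal submatrix of $A$.

```python
import itertools
import numpy as np

M = np.array([[ 49, -21,  37, -37, -21],
              [-21,  17, -21,  21,  29],
              [ 37, -21,  41, -25, -33],
              [-37,  21, -25,  41,  33],
              [-21,  29, -33,  33,  73]], dtype=float)

A = np.array([[ 3,  1, -2,  2, -1],
              [ 1,  3,  0,  0, -1],
              [-2,  0,  2, -1,  1],
              [ 2,  0, -1,  2, -1],
              [-1, -1,  1, -1,  1]], dtype=float)

print("<A, M> =", np.sum(A * M))            # trace inner product; the text gives -1
for K in itertools.combinations(range(5), 4):
    sub = A[np.ix_(K, K)]
    print(K, "min eigenvalue:", np.linalg.eigvalsh(sub).min())   # should be >= 0 (psd)
```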
Consider the quadratic form $q_M = x^T M x$. By our previous observation, $q_M$ is not so$4$s. Let then $p_M= (x_1^2+x_2^2+x_3^2+x_4^2+x_5^2)\cdot q_M $. We claim that $p_M$ is so$4$s, hence, $q_M$ is $1$-so$4$s. To prove it one would have to provide an exact certificate. One can easily check that $p_M=z(x)_2^T Q z(x)_2$ where
$${\hspace*{-2.5mm}}{\hspace*{-2.5mm}}{\hspace*{-2.5mm}}{\hspace*{-2.5mm}}Q=
\begin{bmatrix}
49 & -21 & 0 & 37 & 0 & 0 & -37 & 0 & -5 & 0 & -21 & 0 & 0 & 0 & 0 \\
-21 & 66 & -21 & -21 & 37 & -11/5 & 21 & -37 & 0 & -17/5 & 29 & -21 & 0 & 0 & 0 \\
0 & -21 & 17 & 0 & -21 & 0 & 0 & 21 & 0 & 0 & 0 & 29 & 0 & 0 & 0 \\
37 & -21 & 0 & 90 & -94/5 & 37 & -20 & 0 & -37 & 0 & -33 & 0 & -14 & 0 & 0 \\
0 & 37 & -21 & -94/5 & 58 & -21 & 0 & -25 & 21 & 0 & 0 & -33 & 29 & 0 & -4 \\
0 & -11/5 & 0 & 37 & -21 & 41 & 0 & 0 & -25 & 0 & -7 & 0 & -33 & 0 & 0 \\
-37 & 21 & 0 & -20 & 0 & 0 & 90 & -88/5 & 37 & -37 & 33 & 0 & 0 & 12 & 0 \\
0 & -37 & 21 & 0 & -25 & 0 & -88/5 & 58 & -21 & 21 & 0 & 33 & 0 & 29 & 17/5 \\
-5 & 0 & 0 & -37 & 21 & -25 & 37 & -21 & 82 & -25 & 0 & 0 & 33 & -33 & -23/5 \\
0 & -17/5 & 0 & 0 & 0 & 0 & -37 & 21 & -25 & 41 & -9 & 0 & 0 & 33 & 0 \\
-21 & 29 & 0 & -33 & 0 & -7 & 33 & 0 & 0 & -9 & 122 & -21 & 37 & -37 & -21 \\
0 & -21 & 29 & 0 & -33 & 0 & 0 & 33 & 0 & 0 & -21 & 90 & -17 & 88/5 & 29 \\
0 & 0 & 0 & -14 & 29 & -33 & 0 & 0 & 33 & 0 & 37 & -17 & 114 &-102/5& -33\\
0 & 0 & 0 & 0 & 0 & 0 & -12 & 29 & -33 & 33 & -37 & 88/5 &-102/5& 114 & 33 \\
0 & 0 & 0 & 0 & -4 & 0 & 0 & 17/5 &-23/5& 0 & -21 & 29 & -33 & 33 & 73
\end{bmatrix}.$$
It remains to show that this matrix is in fact in $FW_4^{15}$. In general, such matrices are sums of up to $\binom{15}{4}=1365$ matrices with $4\times 4$ support, and generating rational decompositions is certainly not trivial. In this case the example was chosen in such a way that numerically we can do it using only $27$ such matrices (in fact possibly all with rank one) with supports $K\times K$ with $K$ as follows; we write $1,2,4,7$ instead of $\{1,2,4,7\},$ etc.:
--------------- --------------- --------------- -------------- -------------- ---------------
1,2,4,7 $1,2,4,11$ $1,2,7,11$ $1,4,7,9$ $2,3,5,8$ $2,3,5,12$
$ 2,3,8,12$ 2,4,5,6 $2,5,8,12$ $2,7,8,10$ $3,5,8,12$ $4,5,6,9$
$4,5,6,13$ $4,5,9,13$ 4,6,11,13 $5,6,9,13$ $5,12,13,15$ $7,8,9,10$
$ 7,8,9,14$ $7,10,11,14$ $8,9,10,14$ $8,12,14,15$ $9,13,14,15$ $11,12,13,14$
$11,12,13,15$ $11,12,14,15$ $11,13,14,15$
--------------- --------------- --------------- -------------- -------------- ---------------
Since printing the 27 matrices with their floating point entries here would be too space consuming, the reader interested in checking the example can obtain them from the first author upon request.
We have carried out only a numerical verification, but due to the small size of the computation we are confident in the example. Further work would involve rationalizing this certificate in order to eliminate any remaining doubt.
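For readers who wish to reproduce the numerical verification with off-the-shelf tools, the following sketch is our own and makes several assumptions: Python with sympy and cvxpy (plus an SDP-capable solver such as SCS) is available, and the degree-2 monomials are ordered $x_1^2, x_1x_2, x_2^2, x_1x_3, \ldots, x_5^2$, the ordering underlying the matrix $Q$ displayed above. It searches for positive semidefinite $4\times 4$ blocks with the $27$ listed supports whose scattered sum is a Gram matrix of $p_M$; as stressed above, a reported feasible status is only a floating point certificate, not an exact one.

```python
import numpy as np
import sympy as sp
import cvxpy as cp

M = np.array([[ 49, -21,  37, -37, -21],
              [-21,  17, -21,  21,  29],
              [ 37, -21,  41, -25, -33],
              [-37,  21, -25,  41,  33],
              [-21,  29, -33,  33,  73]])

x = sp.symbols('x1:6')
mons = [x[i] * x[k] for k in range(5) for i in range(k + 1)]          # the assumed ordering
q_M = sum(int(M[i, j]) * x[i] * x[j] for i in range(5) for j in range(5))
p_M = sp.expand(sum(xi ** 2 for xi in x) * q_M)
coeff = sp.Poly(p_M, *x).as_dict()                                    # exponent tuple -> coefficient

# group Gram positions (i, j) by the quartic monomial mons[i] * mons[j]
groups = {}
for i in range(15):
    for j in range(15):
        key = sp.Poly(mons[i] * mons[j], *x).monoms()[0]
        groups.setdefault(key, []).append((i, j))

# the 27 supports listed above (1-based indices into the monomial list)
supports = [(1,2,4,7),(1,2,4,11),(1,2,7,11),(1,4,7,9),(2,3,5,8),(2,3,5,12),
            (2,3,8,12),(2,4,5,6),(2,5,8,12),(2,7,8,10),(3,5,8,12),(4,5,6,9),
            (4,5,6,13),(4,5,9,13),(4,6,11,13),(5,6,9,13),(5,12,13,15),(7,8,9,10),
            (7,8,9,14),(7,10,11,14),(8,9,10,14),(8,12,14,15),(9,13,14,15),
            (11,12,13,14),(11,12,13,15),(11,12,14,15),(11,13,14,15)]

blocks = [cp.Variable((4, 4), PSD=True) for _ in supports]
terms = []
for K, Xb in zip(supports, blocks):
    E = np.zeros((15, 4))
    for col, idx in enumerate(K):
        E[idx - 1, col] = 1.0
    terms.append(E @ Xb @ E.T)              # scatter the 4x4 block into a 15x15 matrix
total = terms[0]
for t in terms[1:]:
    total = total + t

constraints = [sum(total[i, j] for (i, j) in pos) == float(coeff.get(key, 0))
               for key, pos in groups.items()]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print(prob.status)                          # a feasible ('optimal') status is a numerical certificate
```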
[^1]: All authors were supported by Centro de Matemática da Universidade de Coimbra – UID/MAT/00324/2019, funded by the Portuguese Government through FCT/MEC and co-funded by the European Regional Development Fund through the Partnership Agreement PT2020. JG was partially supported by FCT under Grant P2020 SAICTPAC/0011/2015. MS was supported by a PhD scholarship from FCT, grant PD/BD/128060/2016.
---
abstract: |
We classify possible finite groups of symplectic automorphisms of K3 surfaces of order divisible by 11. The characteristic of the ground field must be equal to 11. The complete list of such groups consists of five groups: the cyclic group of order 11, $11\rtimes
5$, $L_2(11)$ and the Mathieu groups $M_{11}$, $M_{22}$. We also show that a surface $X$ admitting an automorphism $g$ of order 11 admits a $g$-invariant elliptic fibration with the Jacobian fibration isomorphic to one of explicitly given elliptic K3 surfaces.
address:
- 'Department of Mathematics, University of Michigan, Ann Arbor, MI 48109, USA'
- 'School of Mathematics, Korea Institute for Advanced Study, Seoul 130-722, Korea '
author:
- 'Igor V. Dolgachev'
- JongHae Keum
title: K3 surfaces with a symplectic automorphism of order 11
---
[^1]
[^2]
Introduction
============
Let $X$ be a K3 surface over an algebraically closed field $k$ of characteristic $p\ge 0$. An automorphism $g$ of $X$ is called *symplectic* if it preserves a regular 2-form of $X$. In positive characteristic $p$, an automorphism of order a power of $p$ is called *wild*. A wild automorphism is symplectic. A subgroup $G$ of the automorphism group ${\textup{Aut}}(X)$ is called *symplectic* if all elements of $G$ are symplectic, and *wild* if it contains a wild automorphism.
It is a well-known result of V. Nikulin that the order of a symplectic automorphism of finite order of a complex K3 surface takes value in the set $\{1,2,3,4,5,6,7,8\}$. This result is true over an algebraically closed field $k$ of positive characteristic $p$ if the order is coprime to $p$. The latter condition is automatically satisfied if $p > 11$ [@DK2]. If $p = 11$, a K3 surface $X_\varepsilon$ defined by the equation of degree 12 in ${\mathbb{P}}(1,1,4,6)$ $$\label{formula}
y^2+x^3+\varepsilon x^2t_0^4+t_1^{11}t_0-t_0^{11}t_1 = 0, \quad
\varepsilon\in k,$$ admits a symplectic automorphism of order 11 $$\label{form2}
g_{\varepsilon}:(t_0,t_1,x,y) \mapsto (t_0,t_0+t_1,x,y).$$
The main result of the paper is the following.
\[main\] Let $G$ be a finite group of symplectic automorphisms of a K3 surface $X$ over an algebraically closed field of characteristic $p\ge 0$. Assume that the order of $G$ is divisible by $11$. Then $p = 11$ and $G$ is isomorphic to one of the following five groups $$C_{11},\, 11:5=11\rtimes 5,\, L_2(11) = {\textup{PSL}}_2({\mathbb{F}}_{11}),\, M_{11},\, M_{22}.$$ Moreover, the following assertions are true.
- For any element $g\in G$ of order $11$, $X$ admits a $(g)$-invariant elliptic pencil $|F|$ and $X$ is $C_{11}$-equivariantly isomorphic to a torsor of one of the surfaces $X_{\varepsilon}$ equipped with its standard elliptic fibration.
- If $X = X_\varepsilon$ and $G$ contains an element of order $11$ leaving invariant both the standard elliptic fibration and a section, then $G\cong C_{11}$ if $\varepsilon\neq 0$ and $G$ is isomorphic to a subgroup of $L_2(11)$ if $\varepsilon=0$.
The surface $X_0$ is a supersingular K3 surface with Artin invariant 1 isomorphic to the Fermat surface $$x_0^4+x_1^4+x_2^4+x_3^4 = 0.$$ In a recent paper of Kondō [@Ko] it is proven that both $M_{11}$ and $M_{22}$ appear as symplectic automorphism groups of $X_0$. An element $g$ of order $p=11$ in these groups leaves invariant an elliptic pencil with no $g$-invariant section, and we do not know whether the $g$-invariant elliptic pencil has no sections or has a section but no $g$-invariant section. Thus the surface $X_0$ admits three maximal finite simple symplectic groups of automorphisms isomorphic to $L_2(11), M_{11}$ and $M_{22}$.
A finite group $G$ acts symplectically and wildly on a K3 surface over an algebraically closed field of characteristic $11$ if and only if $G$ is isomorphic to a subgroup of $M_{23}$ of order divisible by $11$ and having $3$ or $4$ orbits in its natural action on a set of $24$ elements.
The authors are grateful to S. Kondō for many fruitful discussions.
For an automorphism group $G$ or an automorphism $g$ of $X$, we denote by $X^g$ [*the fixed locus*]{} with reduced structure, i.e. the set of fixed points of $g$.
A subset $T$ of $X$ is [*$G$-invariant*]{} if $g(T)=T$ for all $g\in G$. In this case we say $G$ leaves $T$ invariant.
An elliptic pencil $|E|$ on $X$ is [*$G$-invariant*]{} if $g(E)\in |E|$ for all $g\in G$. In this case we say $G$ leaves $|E|$ invariant.
We also use the following notations for groups:
$C_n$ the cyclic group of order $n$, sometimes denoted by $n$,
$m:n=m\rtimes n$ the semi-direct product of cyclic groups $C_m$ and $C_n$,
$M_n$ the Mathieu group of degree $n$,
$\#G$ the cardinality of $G$,
$V^g$ the subspace of $g$-invariant vectors of $V$.
The surfaces $X_0$ and $X_1$
============================
Let $p = 11$ and $X_\varepsilon$ be the K3 surface from \[formula\]. The surface $X_\varepsilon$ has an elliptic pencil defined by the projection to the $t_0,t_1$ coordinates $$f_\varepsilon:X_\varepsilon\to {\mathbb{P}}^1.$$ We will refer to it as the [*standard elliptic fibration*]{}. Its zero section, the section at infinity, will be denoted by $S_\varepsilon$. It is immediately checked that the surface $X_\varepsilon$ is nonsingular. Computing the discriminant $\Delta_\varepsilon$ of the Weierstrass equation of the general fibre of the elliptic fibration on $X_\varepsilon$, we find that $$\label{for2}
\Delta_\varepsilon =
-t_0^2(t_1^{11}-t_1t_0^{10})(5t_1^{11}-5t_1t_0^{10}+4\varepsilon^3
t_0^{11}).$$ This shows that the set of singular fibres of the elliptic fibration on $X_0$ (resp. $X_{\varepsilon}, \varepsilon\neq 0$) consists of 12 irreducible cuspidal curves (resp. one cuspidal fibre and 22 nodal fibres). The automorphism $g_\varepsilon$ given by \[form2\] is symplectic and of order $11$. It fixes pointwise the cuspidal fibre over the point $\infty = (0,1)$ and has 1 orbit (resp. 2 orbits) on the set of remaining singular fibres. It leaves invariant the zero section $S_\varepsilon$. The quotient surface $X_{\varepsilon}/(g_{\varepsilon})$ is a rational elliptic surface with a double rational point of type $E_8$ equal to the image of the singular point of the fixed fibre. A minimal resolution of the surface has one reducible non-multiple fibre of type $\tilde{E}_8$ and one irreducible singular cuspidal fibre (resp. 2 nodal fibres).
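The expression can be checked mechanically; the following sketch (ours, assuming sympy) computes the discriminant of the cubic in $x$ and compares it, over ${\mathbb{F}}_{11}$ and up to a nonzero constant, with the right hand side of \[for2\].

```python
import sympy as sp

t0, t1, x, eps = sp.symbols('t0 t1 x epsilon')

f = x**3 + eps * x**2 * t0**4 + t1**11 * t0 - t0**11 * t1
disc = sp.discriminant(f, x)

delta = -t0**2 * (t1**11 - t1 * t0**10) * (5 * t1**11 - 5 * t1 * t0**10 + 4 * eps**3 * t0**11)

P1 = sp.Poly(disc, t0, t1, eps, modulus=11)
P2 = sp.Poly(delta, t0, t1, eps, modulus=11)
print(P1.monic() == P2.monic())    # expected True: the two agree up to a unit of GF(11)
```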
\[surface\] Let $X$ be a K3 surface over an algebraically closed field $k$ of characteristic $11$. Assume that $X$ admits an automorphism $g$ of order $11$. Assume also that $X$ admits a $(g)$-invariant elliptic fibration $f:X\to {\mathbb{P}}^1$ with a section $S$. Then there exists an isomorphism $\phi:X\to X_\varepsilon$ of elliptic surfaces such that $\phi g \phi^{-1} = \tau g_\varepsilon$ for some translation automorphism $\tau$ of $X_\varepsilon$. In particular, if $g(S)=S$ then $\phi g \phi^{-1}=g_\varepsilon$.
Let $$y^2+x^3+A(t_0,t_1)x+B(t_0,t_1) = 0$$ be the Weierstrass equation of the $g$-invariant elliptic pencil, where $A$ (resp. B) is a binary form of degree $8$ (resp. $12$). Since $f$ does not admit a non-trivial $11$-torsion section ([@DK2], Proposition 2.11), $g$ acts non-trivially on the base of the fibration. After a linear change of the coordinates $(t_0,t_1)$ we may assume that $g$ acts on the base by $$g:(t_0,t_1)\mapsto
(t_0,t_1+t_0).$$ We know that a $g$-invariant elliptic fibration has one $g$-invariant irreducible cuspidal fibre $F_0$ and either 22 irreducible nodal fibres forming two orbits, or 11 irreducible cuspidal fibres forming one orbit ([@DK1], p.124). Thus the discriminant polynomial $\Delta = -4A^3-27B^2$ must have one double root (corresponding to the fibre $F_0$) and either one orbit of double roots or two orbits of simple roots. We know that the zeros of $A$ correspond to either cuspidal fibres or nonsingular fibres with “complex multiplication” automorphism of order 6. Since this set is invariant with respect to our automorphism of order 11 acting on the base, we see that the only possibility is $A = ct_0^8$ for some constant $c\in k$. We obtain $\Delta = -4c^3t_0^{24}-27B^2.$ Again this uniquely determines $B$ and hence the surface. Since $B$ is of degree 12 and invariant under the action of $g$ on the base, it must be of the form $$B=a(t_1^{11}-t_1t_0^{10})t_0+bt_0^{12},$$ for some constants $a$, $b$. One can rewrite the above Weierstrass equation in the form $$y^2+x^3+\varepsilon
x^2t_0^4+a(t_1^{11}t_0-t_0^{11}t_1)+b't_0^{12} = 0.$$ A suitable linear change of variables $u_0=t_0,\, u_1=t_1+dt_0$ makes $b'=0$ without changing the action of $g$ on the base. Thus $X \cong
X_\varepsilon$ as an elliptic surface. Let $\phi:X\to
X_\varepsilon$ be the isomorphism. The composite $$\phi g \phi^{-1}g_\varepsilon^{-1}:X_\varepsilon\to
X_\varepsilon$$ acts trivially on the base, hence must be a translation automorphism. Since $\phi$ maps the zero section $S$ of $f:X\to {\mathbb{P}}^1$ to the zero section $S_\varepsilon$ of $f_\varepsilon:X_\varepsilon\to {\mathbb{P}}^1$ and $g_\varepsilon(S_\varepsilon)=(S_\varepsilon)$, the last assertion follows.
\[taug\] Let $\varepsilon=0$. For any translation automorphism $\tau$ of $X_0$, the composite automorphisms $\tau g_0$ and $g_0\tau$ are of order $11$.
Let $f:X\to B$ be any elliptic surface with a section $S$. Recall that its Mordell-Weil group ${\text{MW}}(f)$ is isomorphic to the quotient of the Neron-Severi group by the subgroup generated by the divisor classes of $S$ and the components of fibres. Thus any automorphism $g$ of $X$ which preserves the class of a fibre and the section $S$ acts linearly on the group ${\text{MW}}(f)$. Assume ${\text{MW}}(f)$ is torsion free. Suppose $g$ is of finite order $n$ with ${\textup{rank}}~{\text{MW}}(f)^g=0$ and let $\tau$ be a translation automorphism identified with an element of ${\text{MW}}(f)$. Then, for any $s\in
{\text{MW}}(f)$ we have $$\tau g(s) = g(s)+\tau, \quad (\tau g)^n(s) = g^n(s)+g^{n-1}(\tau)+\ldots +g(\tau)+\tau = s.$$ The last equality follows from that the linear action of $g-1_X$ on ${\text{MW}}(f)$ is invertible. This shows that $(\tau g)^n$ acts identically on ${\text{MW}}(f)$. It also acts identically on the class of a fibre. Thus $(\tau g)^n$ acts identically on the Neron-Severi lattice.
Apply this to our case $\varepsilon=0$, when $g = g_0$ is a symplectic automorphism of order 11 of $X_0$. We will see in the proof of Proposition \[Hmax\] that ${\text{MW}}(f_0)$ is torsion free. By Lemma \[lem1\](iii) below, ${\textup{rank}}~{\text{MW}}(f_0)^g=0$. Since the surface $X_0$ is supersingular (see Remark \[goto\]), by a theorem of Ogus [@Ogus], an automorphism acting identically on the Picard group must be the identity. Thus $\tau g_0$ is a symplectic automorphism of order 11 for any section $\tau$.
An interesting question: Is there a $\tau$ such that the fixed locus $X_0^{\tau g_0}$ consists of an isolated point, the cusp of a cuspidal curve fixed pointwisely by $g_0$? We do not know any example of a symplectic automorphism of order 11 with an isolated fixed point.
\[lem1\] Let $X$ be a K3 surface over an algebraically closed field $k$ of characteristic $11$. Assume that $X$ admits an automorphism $g$ of order $11$. Then the following assertions are true.
- $X$ admits a $(g)$-invariant elliptic pencil $|F|$;
- ${\textup{rank}}~{\textup{Pic}}(X/(g))=2$;
- for any $l\ne 11$, $\dim H^2_{\rm et}(X,{{\mathbb{Q}}}_l)^g={\textup{rank}}~{\textup{Pic}}(X)^g=2;$
- ${\textup{rank}}~{\textup{Pic}}(X)=2$, $12$ or $22$.
To prove (i), assume first that $X$ does not admit a $(g)$-invariant elliptic pencil and $X^g$ is a point. This case could happen only if the sublattice $N$ of the Picard group of a minimal resolution of $X/(g)$ generated by irreducible components of exceptional curves is $11$-elementary, and $N^\perp$ is an even lattice of rank 2. This is contained in the proof of Proposition 2.9 of [@DK2]. The intersection matrix of $N^\perp$ is of the form $$\begin{pmatrix}2a&c\\
c&2b\end{pmatrix}$$ Since $N^\perp$ is indefinite and $11$-elementary, $$\det N^\perp = 4ab-c^2=-1, \quad -11\quad {\rm or}\quad -121.$$ In the first case, $N^\perp\cong U$, where $U$ is an even indefinite unimodular lattice. The second case cannot occur, since no square of an integer is congruent to 3 modulo 4. Assume the third case. Since $N^\perp$ is $11$-elementary, all of the coefficients of the matrix must be divisible by 11, and hence $N^\perp\cong U(11)$. Therefore, in any case $N^\perp$ contains an isotropic vector. This is enough to deduce that $X$ admits a $(g)$-invariant elliptic pencil by the same proof as in Proposition 2.9 of [@DK2].
Let $|F|$ be a $(g)$-invariant elliptic pencil. It follows from [@DK1], p. 124, that the elliptic fibration has one cuspidal fibre and 22 nodal fibres, or 12 cuspidal fibres. The automorphism $g$ leaves one cuspidal fibre $F_0$ over a point $s_0\in {\mathbb{P}}^1$ invariant.
Assertion (ii) follows from [@DK1], where we proved that $X/(g)$ is a rational elliptic surface with no reducible fibres, and its minimal resolution is an extremal elliptic surface, i.e. the sublattice of the Picard group generated by irreducible components of fibres is of corank 1.
It is proven in [@HN], Proposition 3.2.1, that for any $l\ne
p$ coprime with the order of $g$ $$\dim H^2_{\rm et}(X,{{\mathbb{Q}}}_l)^g=\dim H^2_{\rm et}(X/(g),{{\mathbb{Q}}}_l).$$ In fact it is true for all $l\ne p$ because of the invariance of the characteristic polynomial of an endomorphism of a smooth algebraic variety. Now by (ii), $$\dim H^2_{\rm et}(X,{{\mathbb{Q}}}_l)^g=\dim H^2_{\rm
et}(X/(g),{{\mathbb{Q}}}_l)={\textup{rank}}~ {\textup{Pic}}(X/(g))=2.$$ Since $g$ fixes the class of a fibre and an ample divisor, ${\textup{rank}}~{\textup{Pic}}(X)^g\ge 2.$ This proves (iii).
Considering the ${\mathbb{Q}}$-representation of the cyclic group $(g)$ of order 11 on ${\textup{Pic}}(X)\otimes {\mathbb{Q}}$, we get (iv) from (iii).
\[jac\] Let $X$ be a K3 surface over an algebraically closed field $k$ of characteristic $11$. Assume that $X$ admits an automorphism $g$ of order $11$. Then $X$ is isomorphic to a torsor of one of the elliptic surfaces $X_\varepsilon$. The order of this torsor in the Shafarevich-Tate group of torsors is equal to $1$ or $11$.
Let $f_J:J\to {\mathbb{P}}^1$ be the Jacobian fibration of the elliptic fibration $f:X\to {\mathbb{P}}^1$ defined by the $g$-invariant elliptic pencil. Let $J^o$ be the open subset of $J$ whose complement is the set of singular fibres of $f_J$. We know that the fibres of $f$ are irreducible. By a result of M. Raynaud, this allows us to identify $J^o$ with the component $\textbf{Pic}_{X/{\mathbb{P}}^1}^0$ of the relative Picard scheme of invertible sheaves of degree 0 (see [@CD], Proposition 5.2.2). The automorphism $g$ acts naturally on the Picard functor and hence on $J^o$. Since $J$ is minimal, it acts biregularly on $J$. This action preserves the elliptic fibration on $J$ and defines an automorphism of order $11$ on the base. This implies that there exists an $C_{11}$-equivariant isomorphism of elliptic surfaces $J$ and $X_\varepsilon$.
The assertion about the order of the torsor follows from the existence of a section or an 11-section of $f$. In fact, let $Y$ be a nonsingular relatively minimal model of the elliptic surface $X/(g)$ with the elliptic fibration induced by $f$. It is a rational elliptic surface. Let $F_0$ be the $g$-invariant fibre of $f$ over a point $s_0\in {\mathbb{P}}^1$. The singular fibres of the elliptic fibration $f':Y\to {\mathbb{P}}^1$ over ${\mathbb{P}}^1\setminus\{s_0\}$ are either two irreducible nodal fibres ($\varepsilon\neq 0$) or one cuspidal irreducible fibre ($\varepsilon = 0$). The standard argument in the theory of elliptic surfaces shows that the fibre of $f'$ over $s_0$ is either of type $\tilde{E}_8$ or $\tilde{D}_8$. This fibre is not multiple if and only if $f'$ has a section. The pre-image of this section is a section of $f$ making $X$ the trivial torsor. A singular fibre of additive type can be multiple only if the characteristic is positive, and the multiplicity $m$ must be equal to the characteristic (see [@CD], Proposition 5.1.5). In this case an exceptional curve of the first kind on $Y$ is a $m$-section. The pre-image of this multi-section on $X$ is a $m$-section, in our case an $11$-section.
Note that, even in the case $X = X_\varepsilon$, the $g$-invariant fibration may be different from the standard elliptic fibration. In other words a non-trivial torsor of an elliptic surface could be isomorphic to the same surface. This strange phenomenon could happen only in positive characteristic and only for torsors of order divisible by the characteristic. We do not know an example where this strange phenomenon really occurs. In Kondō’s example, the $g$-invariant elliptic fibration for an element $g$ of order 11 in $G=M_{11}$ or $M_{22}$ may have a section (but no $g$-invariant section!). If this happens, it is isomorphic to the standard elliptic fibration and hence $g$ is conjugate to $\tau
g_\varepsilon$ as we have seen in Proposition \[surface\].
\[lemma\] Suppose $p = 11$. Then there is a finite subgroup $K_\varepsilon$ of ${\textup{Aut}}({X_\varepsilon})$ satisfying the following property:
- $K_\varepsilon$ leaves invariant both the standard elliptic fibration of ${X_\varepsilon}$ and the zero section $S_\varepsilon$ which is the section at infinity.
- $K_0\cong \GU_2(11)/(\pm I)\cong L_2(11):12$ and $K_1\cong 11:4$, where the first factor in the semi-direct product is a symplectic subgroup and the second factor a non-symplectic subgroup.
- The image of $K_\varepsilon$ in ${\textup{Aut}}({\mathbb{P}}^1)$ is equal to the subgroup ${\textup{Aut}}({\mathbb{P}}^1, V(\Delta_\varepsilon))$ which leaves the set $V(\Delta_\varepsilon)$ invariant.
- ${\textup{Aut}}({\mathbb{P}}^1, V(\Delta_0)) \cong
\PGU_2(11) \cong L_2(11).2$ and ${\textup{Aut}}({\mathbb{P}}^1,
V(\Delta_\varepsilon)) \cong 11:2$ if $\varepsilon\neq 0$.
Assume $\varepsilon = 0$. After a linear change of variables $$t_0 = \alpha^{11}t'_0+\alpha t'_1,\quad t_1 = t'_0+t'_1,$$ where $\alpha\in {\mathbb{F}}_{{11}^2}\setminus {\mathbb{F}}_{11}\subset k^*$, we can transform the polynomial $t_0t_1^{11}-t_0^{11}t_1$ to the form $\lambda t_0^{12}+\mu t_1^{12}$. After scaling, it becomes of the form $f = t_0^{12}+ t_1^{12}$. Now notice that this equation represents a hermitian form over the field ${\mathbb{F}}_{{11}^2}$, hence the finite unitary group $\GU_2(11)$ leaves the polynomial $f$ invariant. The group $\GU_2(11)$ acts on the surface $$\label{neweq}
X_0 \cong V(y^2+x^3+t_0^{12}+t_1^{12})$$ in an obvious way, by acting on the variables $t_0,t_1$ and identically on the variables $x,y$. Note that $$(t_0, t_1, x, y)=(\lambda t_0, \lambda t_1, \lambda^4 x,
\lambda^6 y)$$ in ${\mathbb{P}}(1,1,4,6)$ for all $\lambda \in k^*$. In particular $(t_0, t_1, x, y)=(-t_0, -t_1, x, y)$, so $-I\in
\GU_2(11)$ acts trivially on $X_0$. Note also that ${\textup{SU}}_2(11)$ and hence $\PSU_2(11)$ acts symplectically on $X_0$. The action of $\PSU_2(11)$ is faithful because it is a simple group. Take $K_0=\GU_2(11)/(\pm I)$ and consider the homomorphism $$\det: K_0\to ({\mathbb{F}}_{{11}^2})^*.$$ It is known that $$U_2(11) = \PSU_2(11)\cong {\textup{PSL}}_2({\mathbb{F}}_{11})=L_2(11).$$ If $A\in \GU_2(11)$, then $(\det A)^{12}=(\det A)(\overline{\det A})=\det A^t\overline{A}
=\det I=1$, so the image of det is a cyclic group of order dividing 12. On the other hand, if $\zeta\in
{\mathbb{F}}_{{11}^2}$ is a 12-th root of unity, the unitary matrix $$\begin{pmatrix}1&0\\
0&\zeta\end{pmatrix}$$ generates an order 12 subgroup of $K_0$, which acts on $X_0$ non-symplectically. This proves (i) and (ii).
We know that the group $\GU_2(11)$ leaves the polynomial $f$ invariant. Thus its image $\PGU_2(11)$ in ${\textup{Aut}}({\mathbb{P}}^1)$ must coincide with ${\textup{Aut}}({\mathbb{P}}^1, V(\Delta_0))$. It is known that $\PGU_2(11)$ is a maximal subgroup in the permutation group ${\mathfrak{S}}_{12}$ and $\PGU_2(11) \cong {\textup{PGL}}_2({\mathbb{F}}_{11})\cong L_2(11).2$, a non-split extension. The quotient group is generated by the image of the automorphism $(t_0, t_1)\to (t_0,
\zeta t_1)$, where $\zeta\in{\mathbb{F}}_{{11}^2}$ is a 12-th root of unity. This proves (iii) and (iv).
Assume $\varepsilon \neq 0$. An element of ${\textup{PGL}}_2(k)$ leaving $V(\Delta_\varepsilon)$ invariant must either leave all factors of $\Delta_\varepsilon$ from invariant or interchange the second and the third factors. It can be seen by computation that the group ${\textup{Aut}}({\mathbb{P}}^1,
V(\Delta_\varepsilon))$ is generated by the following 2 automorphisms $$e(t_0, t_1)=(t_0, t_1+t_0), \quad i(t_0, t_1)=(t_0, -t_1+bt_0)$$ where $b$ is a root of $b^{11}-b+3\varepsilon^3=0$. The order of $e$ (resp. $i$) is 11 (resp. 2) and $i$ normalizes $e$. We see that they lift to automorphisms of $X_\varepsilon$ $$\tilde{e}(t_0, t_1, x, y)=(t_0, t_1+t_0, x, y), \,
\tilde{i}(t_0, t_1, x, y)=(t_0, -t_1+bt_0, -x+3\varepsilon t_0^4,
\sqrt{-1}y)$$ and we take $K_\varepsilon=(\tilde{e}, \tilde{i})$. Clearly $\tilde{i}$ is non-symplectic of order 4 and normalizes $\tilde{e}$ which is symplectic of order 11, and both leave invariant the zero section $S_\varepsilon$.
\[goto\] The equation \[neweq\] makes $X_0$ a weighted Delsarte surface according to the definition in [@Goto]. It follows from loc.cit. that $X_0$ is a supersingular surface with Artin invariant $\sigma = 1$. It follows from the uniqueness of such a surface that $X_0$ is also isomorphic to the Fermat quartic $$x_0^4+x_1^4+x_2^4+x_3^4 = 0,$$ the Kummer surface associated to the product of supersingular elliptic curves, and the modular elliptic surface of level 4 (see [@Shioda]). We do not know whether the surface $X_\varepsilon$, $\varepsilon\neq 0$, is supersingular. By Lemma \[lem1\], we know that ${\textup{rank}}~{\textup{Pic}}(X_\varepsilon)=2$, 12 or 22.
\[H\] The subgroup $K_\varepsilon\subset{\textup{Aut}}({X_\varepsilon})$ from Lemma \[lemma\] contains a symplectic subgroup leaving invariant the standard elliptic fibration of ${X_\varepsilon}$, isomorphic to $L_2(11)$ if $\varepsilon = 0$ and to $C_{11}$ if $\varepsilon = 1$. Denote this subgroup by $H_{\varepsilon}$. It leaves invariant the zero section $S_\varepsilon$ of the elliptic fibration.
The group $H_{\varepsilon}$ acts on the base curve ${\mathbb{P}}^1$ and we have a homomorphism $$\pi: H_{\varepsilon}\to {\textup{Aut}}({\mathbb{P}}^1, V(\Delta_\varepsilon)),$$ which is an embedding. The image $\pi(H_{\varepsilon})$ is equal to the unique index 2 subgroup of ${\textup{Aut}}({\mathbb{P}}^1, V(\Delta_\varepsilon))$.
\[Hmax\] Let $G$ be a finite group of symplectic automorphisms of the surface ${X_\varepsilon}$ leaving invariant the standard elliptic fibration of ${X_\varepsilon}$. Let $$\psi: G\to {\textup{Aut}}({\mathbb{P}}^1, V(\Delta_\varepsilon))$$ be the natural homomorphism. Then $\psi$ is an embedding. If in addition $G$ is wild and leaves invariant the zero section $S_\varepsilon$, then $G$ is contained in $H_{\varepsilon}$.
Let $\alpha\in \Ker(\psi)$. Then $\alpha$ acts trivially on the base curve. Since $p>3$, $\alpha$ being symplectic must be a translation by a torsion section. It is known that there is no $p$-torsion in the Mordell-Weil group of an elliptic K3 surface if the characteristic $p>7$ ([@DK2]), and there are no other torsion sections because no symplectic automorphism of order coprime to $p$ can have more than 8 fixed points (Theorem 3.3 [@DK2]), while the fibration has 12 or 23 singular fibres. Hence $\alpha$ must be the identity automorphism. This proves that $\psi$ is an embedding.
If $\psi$ is surjective, then $\#G=2.\#L_2(11)$ or $2.11$, which cannot be the order of a wild symplectic group in characteristic 11, by Proposition \[mathieu\] and Lemma \[orders\]. Thus $\psi$ is not surjective. From this we see that if $G$ is wild, then $\psi(G)$ is contained in the unique index 2 subgroup $\pi(H_{\varepsilon})$ of ${\textup{Aut}}({\mathbb{P}}^1, V(\Delta_\varepsilon))$. If an element $\alpha\in G$ and an element $h\in H_\varepsilon$ have the same image in ${\textup{Aut}}({\mathbb{P}}^1, V(\Delta_\varepsilon))$, then $\alpha h^{-1}$ is a translation by a section. If $\alpha$ leaves invariant the zero section $S_\varepsilon$, so does $\alpha
h^{-1}$, hence $\alpha h^{-1}$ is the identity automorphism. This proves the second assertion.
A Mathieu representation
========================
From now on $X$ is a K3 surface over an algebraically closed field of characteristic $p = 11$ and $G$ a group of symplectic automorphisms of $X$ of order divisible by $11$.
\[lemm2\] Let $S$ be a normal projective rational surface with an isolated singularity $s$. Then $$e_c(S\setminus \{s\}) \ge 2,$$ where $e_c$ denotes the l-adic Euler-Poincaré characteristic with compact support.
Let $f:S'\to S$ be a minimal resolution of $S$. Let $E$ be the reduced exceptional divisor. Then $e_c(E) = 1-b_1(E)+b_2(E)
\le 1+b_2(E)$. Since the intersection matrix of irreducible components of $E$ is negative definite, we have $b_2(S') \ge 1+b_2(E)$. Since $S'$ is a smooth projective rational surface, $e_c(S')=2+b_2(S')$, and this gives $$e_c(S\setminus \{s\}) = e_c(S'\setminus E) = e_c(S')-e_c(E) \ge 2+b_2(S')-(1+b_2(E)) \ge 2.$$
\[lemm3\] Let $g$ be an automorphism of $X$ of order $11$. Assume that $X^g$ is a point. Then the cyclic group $(g)$ is not contained in a larger symplectic cyclic subgroup of ${\textup{Aut}}(X)$.
Let $H=(h)$ be a symplectic cyclic subgroup of ${\textup{Aut}}(X)$ containing $(g)$. Write $\#H=11r$ and $g=h^r$. Without loss of generality, we may assume that $r$ is a prime, and by Theorem 3.3 of [@DK2], may further assume that $r=2,3,5,7$, or $11$.
Assume $r\neq 11$. Let $f=h^{11}$. Then $f$ is symplectic of order $r=2,3,5$, or $7$. By Theorem 3.3 of [@DK2], $X^f$ is a finite set of points of cardinality $< 11$. The order 11 automorphism $g$ acts on $X^f$, hence acts trivially. Thus $X^f\subset X^g$, but $\#X^f\ge 3$, a contradiction. Thus $r=11$.
Let $x\in X$ be the fixed point of $g$, and $y\in X/(g)$ be its image. Let $V = X/(g)\setminus \{y\}$. We claim that the quotient group $\bar{H}=H/(g)$ acts freely on $V$. To see this, suppose that $h(z)=g^i(z)$ for some point $z\in X$, some $g^i\in (g)$. Then $g(z)=h^{11}(z)=g^{11i}(z)=1_X(z)=z$, so $z=x$. This proves the claim.
By Lemma \[lem1\], for any $l\ne 11$, $\dim H^2_{\rm
et}(X,{{\mathbb{Q}}}_l)^g=2$. This implies that $${\textup{Tr}}(g^*|H^2_{\rm
et}(X,{{\mathbb{Q}}}_l))=0.$$ By the Trace formula of S. Saito [@Saito], $$l_x(g)={\textup{Tr}}(g^*|H^*_{\rm et}(X,{{\mathbb{Q}}}_l))=2,$$ where $l_x(g)$ is the intersection index of the graph of $g$ with the diagonal at the point $(x,x)$. The formula of Saito ([@Saito], Theorem 7.4, or [@DK2], Lemma 2.8) gives $e_c(V)= 3$. Since the group $H/(g)$ acts freely on $V$, $e_c(V/\bar{H}) = 3/\#\bar{H}.$ Applying Lemma \[lemm2\] to the surface $S = X/H$, we obtain that $\bar{H}$ is trivial.
\[order\] Let $G$ be a finite group of symplectic automorphisms of a K3 surface $X$ over an algebraically closed field of characteristic $p=11$. Then $${\rm ord}(g) \in
\{1,2,3,4,5,6,7,8,11\}$$ for all $g\in G$.
If the order ${\rm ord}(g)$ of $g\in G$ is coprime to the characteristic $p=11$, then by Theorem 3.3 of [@DK2] $${\rm
ord}(g)\in \{1,\ldots,8\}.$$ It remains to show that $G$ cannot contain any element of order $11r, r>1$. Assume the contrary, and let $h\in G$ be an element of order $11r$. We may assume that $r$ is a prime and hence $r=2,3,5,7$, or $11$. Let $g=h^r$ and $f =
h^{11}$. We see that $g$ is of order 11. By Lemma \[lemm3\], $X^g$ cannot be a point, hence must be a cuspidal curve ([@DK1]). Denote this curve by $F$. It is easy to see that $F$ is $h$-invariant, i.e. $h(F)=F$.
Assume $r=11$. Then $h$ acts on the base curve ${\mathbb{P}}^1$ of the pencil $|F|$ faithfully, however, using the Jordan canonical form we see that ${\mathbb{P}}^1$ does not admit an automorphism of order $11^2$.
Next, assume that $r=2,3,5,7$. By Theorem 3.3 of [@DK2], $$3\le \#X^{f}\le 8.$$ Since $r$ is prime to $11$, $$X^h=X^{f}\cap X^{g}.$$ Clearly $g$ acts on the finite set $X^{f}$, and this action cannot be of order $11$. This means that $g$ acts trivially on $X^{f}$, i.e. $X^{f}\subset X^{g}=F$. Thus $$X^h= X^{f}.$$ This means that $h$ acts on $F$ with $\#X^f$ fixed points. But no nontrivial action on a rational curve can fix more than 2 points. A contradiction.
A Mathieu representation of a finite group $G$ is a 24-dimensional representation on a vector space $V$ over a field of characteristic zero with character $$\chi(g)=\epsilon({\rm ord}(g)),$$ where $$\label{mumu}
\epsilon(n)=24(n\prod_{p|n}(1+{1\over p}))^{-1}, \quad \epsilon(1)=24.$$ The number $$\label{mu}
\mu(G) = \frac{1}{\# G}\sum_{g\in G}\epsilon({\text{ord}}(g))$$ is equal to the dimension of the subspace $V^G$ of $V$. Here $V^G$ is the linear subspace of vectors fixed by $G$. The natural action of a finite group $G$ of symplectic automorphisms of a complex K3 surface on the singular cohomology $$H^*(X,{{\mathbb{Q}}})=\oplus_{i=0}^4H^i(X,{{\mathbb{Q}}})\cong {{\mathbb{Q}}}^{24}$$ is a Mathieu representation with $$\mu(G) = \dim H^*(X,{{\mathbb{Q}}})^G \ge 5.$$ From this Mukai deduces that $G$ is isomorphic to a subgroup of $M_{23}$ with at least 5 orbits. In positive characteristic, if $G$ is wild, then the formula for the number of fixed points is no longer true and the representation of $G$ on the $l$-adic cohomology, $l\ne p$, $$H^*_{\rm et}(X,{{\mathbb{Q}}}_l)=\bigoplus_{i=0}^4H^i_{\rm
et}(X,{{\mathbb{Q}}}_l)\cong {{\mathbb{Q}}}_l^{24}$$ is not Mathieu in general. However, in our case we have the following:
\[mathieu\] Let $G $ be a finite group acting symplectically on a K3 surface $X$ over a field of characteristic $11$. Then the representation of $G$ on $H^*_{\rm
et}(X,{\mathbb{Q}}_l)$, $l\ne 11$, is a Mathieu representation with $\dim H^*_{\rm et}(X,{\mathbb{Q}}_l)^G \ge 3$.
Note that ${\textup{rank}}~{\textup{Pic}}(X)^G\ge 1$, and the second assertion follows. It remains to prove that the representation is Mathieu. By Lemma \[order\], it is enough to show this for automorphisms of order $11$. Let $g\in G$ be an element of order $11$. We need to show that the character $\chi(g)$ of the representation on the $l$-adic cohomology $H^*_{\rm
et}(X,{{\mathbb{Q}}}_l)$ is equal to $\epsilon(11)=2$. Since $$\chi(g)={\textup{Tr}}(g^*|H^*_{\rm et}(X,{{\mathbb{Q}}}_l)),$$ it suffices to show that ${\textup{Tr}}(g^*|H^2_{\rm et}(X,{{\mathbb{Q}}}_l))=0,$ or $\dim H^2_{\rm et}(X,{{\mathbb{Q}}}_l)^g=2$. Now the result follows from Lemma \[lem1\].
Determination of Groups
=======================
In this section we determine all possible finite groups which may act symplectically and wildly on a K3 surface in characteristic 11. We use only purely group theoretic arguments.
\[bd\] Let $G$ be a finite group having a Mathieu representation over ${\mathbb{Q}}$ or over ${\mathbb{Q}}_l$ for all prime $l\ne 11$. Then $$\#G=2^a.3^b.5^c.7^d.11^e.23^f$$ for $a\le 7, b\le 2, c\le 1, d\le 1,
e\le 1, f\le 1$.
If the representation is over ${\mathbb{Q}}$, this is the theorem of Mukai ([@Mukai] (Theorem (3.22)). In his proof, Mukai uses at several places the fact that the representation is over ${\mathbb{Q}}$. The only essential place where he uses that the representation is over ${\mathbb{Q}}$ is Proposition (3.21), where $G$ is assumed to be a 2-group containing a maximal normal abelian subgroup $A$ and the case of $A =({\mathbb{Z}}/4)^2$ with $\#(G/A)\ge 2^4$ is excluded by using that a certain 2-dimensional representation of the quaternion group $Q_8$ cannot be defined over ${\mathbb{Q}}$. We use that $G$ also admits a Mathieu representation on 2-adic cohomology, and it is easy to see that the representation of $Q_8$ cannot be defined over ${\mathbb{Q}}_2$.
The following lemma is of purely group theoretic nature and its proof follows an argument employed by S. Mukai [@Mukai].
\[orders\] Let $G$ be a finite group having a Mathieu representation over ${\mathbb{Q}}$ or over ${\mathbb{Q}}_l$ for all prime $l\ne 11$. Assume $\mu(G)\ge 3$. Assume that $G$ contains an element of order $11$, but no elements of order $>11$. Then the order of $G$ is equal to one of the following: $$11,\quad 5.11, \quad 2^2.3.5.11, \quad 2^4.3^2.5.11,
\quad 2^7.3^2.5.7.11.$$
Since $G$ has no elements of order 23, by Proposition \[bd\], we have $$\#G=2^a.3^b.5^c.7^d.11, \, (a\le 7, b\le 2, c\le 1, d\le 1).$$ Let $S_q$ be a $q$-Sylow subgroup of $G$ for $q=5,7$ or $11$. Then $S_q$ is cyclic and its centralizer coincides with $S_q$. Let $N_q$ be the normalizer of $S_q$. Since $G$ does not contain elements of order $5k, 7k, 11k$ with $k > 1$, the index $m_q:=[N_q:S_q]$ is a divisor of $q-1$. Since it is known that the dihedral groups $D_{14}$ and $D_{22}$ do not admit a Mathieu representation, we have $m_7=1$ or 3, and $m_{11}=1$ or 5. Let $a_n$ be the number of elements of order $n$ in $G$. Then $a_q=\frac{\#G(q-1)}{qm_q}.$ As in [@Mukai], we have $$\label{muG}
\mu(G)=\frac{1}{\#G}\sum\epsilon(n)a_n
=8+\frac{1}{\#G}(16-2a_3-4a_4-4a_5-6a_6-5a_7-6a_8-6a_{11}).$$
Case 1. The order of $G$ is divisible by 7.
The formula gives $$\label{muG2}
\mu(G)\le 8+\frac{16}{\#G}-\frac{30}{7m_7}-\frac{60}{11m_{11}}.$$ Since $\mu(G) \ge 3$, the numbers $m_{11}$ and $m_{7}$ must be greater than 1.
Assume $m_{11}=5, m_{7}=3$. Then $\#G$ is divisible by 5, and the formula gives $$\label{muG3}
\mu(G)\le 8+\frac{16}{\#G}-\frac{16}{5m_5}-\frac{10}{7}-\frac{12}{11}.$$ If $m_5=1$, then this inequality gives $\mu(G)<3$. If $m_5=2$, then the number of $q$-Sylow subgroups is equal to $2^{a-1}.3^b.7.11$, $2^{a}.3^{b-1}.5.11$, $2^{a}.3^b.5.7$ for $q=5,7,11$ respectively. Taking $q = 5$ and applying Sylow’s theorem, we get $a-b\equiv 0 \mod 4$. Since $1\le a\le 7$, $1\le b\le 2$, the only solutions are $(a,b) = (5,1),
(6,2)$. However, neither $2^5.5.11$ nor $2^6.3.5.11$ is congruent to 1 modulo 7.
If $m_5=4$, then the number of $q$-Sylow subgroups is equal to $2^{a-2}.3^b.7.11$, $2^{a}.3^{b-1}.5.11$, $2^{a}.3^b.5.7$ for $q=5,7,11$ respectively. A similar argument shows that $a-b \equiv 1 \mod 4$ and the possible order is $2^7.3^2.5.7.11$.
Case 2. The order of $G$ is divisible by 5, but not by 7.
The formula gives $$\label{muG4}
\mu(G)\le 8+\frac{16}{\#G}-\frac{16}{5m_5}-\frac{60}{11m_{11}}.$$ Assume that $m_{11}=1$. Then this inequality gives $\mu(G)<3$.
Assume $m_{11}=5$. If $m_5=1$, then the number of $q$-Sylow subgroups is equal to $2^{a}.3^b.11$, $2^{a}.3^b$ for $q=5, 11$ respectively. By Sylow’s theorem, $a-b\equiv 0 \mod 4$, $a+8b \equiv 0 \mod 10$. This system of congruences has only one solution $a=b=0$ in the range $a\le 7$, $b\le
2$. This gives the possible order $5.11$.
If $m_5=2$, then the number of $q$-Sylow subgroups is equal to $2^{a-1}.3^b.11$, $2^{a}.3^b$ for $q=5, 11$ respectively. By Sylow’s theorem, $a-b\equiv 1 \mod 4$, $a+8b \equiv 0 \mod 10$. This system has only one solution $a=2, b=1$ in the range $1\le a\le
7$, $b\le 2$. This gives the possible order $2^2.3.5.11$.
If $m_5=4$, then the number of $q$-Sylow subgroups is equal to $2^{a-2}.3^b.11$, $2^{a}.3^b$ for $q=5, 11$ respectively. By Sylow’s theorem, $a-b\equiv 2 \mod 4$, $a+8b \equiv 0 \mod 10$. This system has only one solution $a=4, b=2$ in the range $2\le a\le 7$, $b\le 2$. This gives the possible order $2^4.3^2.5.11$.
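The three congruence systems above are easily confirmed by brute force; a short sketch (ours) lists, for each value of $m_5$, the pairs $(a,b)$ with $0\le a\le 7$, $0\le b\le 2$ satisfying the quoted conditions.

```python
for m5, res in ((1, 0), (2, 1), (4, 2)):        # m_5 and the required residue of a - b mod 4
    sols = [(a, b) for a in range(8) for b in range(3)
            if (a - b) % 4 == res and (a + 8 * b) % 10 == 0]
    print("m_5 =", m5, "->", sols)              # expected: [(0, 0)], [(2, 1)], [(4, 2)]
```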
Case 3. The order of $G$ is divisible by neither 5 nor 7.
In this case $m_{11}\ne 5$, and hence $m_{11}=1$. Thus the formula gives $$\label{muG5}
\mu(G)\le 8+\frac{16}{\#G}-\frac{60}{11}.$$ The number of $11$-Sylow subgroups is equal to $2^{a}.3^b$. By Sylow’s theorem, $a+8b \equiv 0 \mod 10$. This congruence has 3 solutions $(a,b)=(0,0)$, $(2, 1)$, $(4, 2)$ in the range $a\le 7$, $b\le
2$. The first gives the possible order $11$. In the second and the third case, the inequality gives $\mu(G)<3$.
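The bookkeeping behind the formula for $\mu(G)$ used at the beginning of this proof can be reproduced mechanically. The following sketch is our own; it assumes sympy for exact rational arithmetic, and the element-order counts for $11{:}5$ and $L_2(11)$ are standard group-theoretic data quoted here as an assumption. It compares the definition $\mu(G)=\frac{1}{\#G}\sum_{g\in G}\epsilon({\rm ord}(g))$ with the rearranged expression used above.

```python
from sympy import primefactors, Rational

def eps(n):
    """epsilon(n) as defined in the previous section."""
    d = Rational(n)
    for p in primefactors(n):
        d *= Rational(p + 1, p)
    return Rational(24) / d

def mu(order_counts):                      # {element order: number of such elements}
    N = sum(order_counts.values())
    return sum(eps(n) * a for n, a in order_counts.items()) / N

def mu_rearranged(order_counts):
    N = sum(order_counts.values())
    a = lambda n: order_counts.get(n, 0)
    return 8 + Rational(16 - 2*a(3) - 4*a(4) - 4*a(5) - 6*a(6) - 5*a(7) - 6*a(8) - 6*a(11), N)

groups = {
    "C_11":    {1: 1, 11: 10},
    "11:5":    {1: 1, 5: 44, 11: 10},
    "L_2(11)": {1: 1, 2: 55, 3: 110, 5: 264, 6: 110, 11: 120},
}
for name, counts in groups.items():
    print(name, mu(counts), mu_rearranged(counts))    # the two columns agree (the value is 4 here)
```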
\[simple\] In the situation of the previous lemma, $G$ is isomorphic to one of the following groups: $$C_{11},\quad 11:5, \quad L_2(11), \quad M_{11}, \quad M_{22}.$$
By Lemma \[orders\], there are five possible orders for $G$ $$\label{five}
11,\quad 5.11, \quad 2^2.3.5.11, \quad 2^4.3^2.5.11, \quad
2^7.3^2.5.7.11.$$ In the first two cases, the assertion is obvious. The remaining possible 3 orders are exactly the orders of the 3 simple groups given in the assertion. The theory of finite simple groups shows that there is only one simple group of the order in each of these cases.
Assume now that we are in one of the last three cases. It suffices to show that $G$ is simple.
Let $K$ be a proper normal subgroup of $G$ such that $G/K$ is simple. If $\#K$ is not divisible by 11, then an order 11 element of $G$ acts on the set ${\text{Syl}}_q(K)$ of $q$-Sylow subgroups of $K$. Since $\#{\text{Syl}}_q(K)$ is not divisible by 11 for any prime $q$ dividing $\#K$, the order 11 element $g$ must normalize a $q$-Sylow subgroup of $K$. If one of the numbers $q = 3,5,$ or $7$ divides $\#K$, then $g$ centralizes an element of one of these orders. This contradicts the assumption that $G$ does not contain an element of order $>11$. If $q =2$ divides $\#K$, then a $2$-Sylow subgroup of $K$ is of order $\le 2^7$, and hence $g$ centralizes an element of order 2, again a contradiction. So, we may assume that $11|\#K$. If $\#K=11$, then an order 2 element of $G$ normalizes $K$. Neither a cyclic group of order 22 nor a dihedral group of order 22 has a Mathieu representation, so $\#K>11$. If $K\cong 11:5$, then an order 2 element of $G$ normalizes the unique $11$-Sylow subgroup of $K$, again a contradiction. If $\#K$ is one of the remaining three possibilities, then the group $G/K$ is of order $2^5.3.7$ or $2^3.7$ or $2^2.3$. In the first case an order 7 element of $G$ normalizes, hence centralizes a Sylow 11-subgroup of $K$, again a contradiction. Obviously in the other two cases $G/K$ cannot be simple. This proves that $G$ is simple.
\[normalizer\] Let $G $ be a finite group acting symplectically and wildly on a K3 surface $X$ over a field of characteristic $11$. Let $g$ be an element of order $11$ in $G$. Then the normalizer of $(g)$ in $G$ must be isomorphic to $11:5$ if $\#G>11$.
Proof of the Main Theorem
=========================
In this section we complete the proof of Theorem \[main\] announced in Introduction. It remains to prove the assertion (ii).
\[varneq0\] Assume $\varepsilon\neq 0$. Let $G\subset {\textup{Aut}}(X_\varepsilon)$ be a finite wild symplectic subgroup. If an element $g\in G$ of order $11$ leaves invariant the standard elliptic fibration with a $g$-invariant section, then $G=(g)\cong C_{11}$ and $G$ is conjugate to $H_\varepsilon=(g_\varepsilon)$. In particular, $H_\varepsilon$ is a maximal finite wild symplectic subgroup of ${\textup{Aut}}(X_\varepsilon)$.
Since $g$ leaves a section invariant, it must be conjugate to $g_\varepsilon$. So, up to conjugation, we may assume that $g$ leaves the zero section $S_\varepsilon$ invariant. Thus $g=g_\varepsilon$ by Proposition \[Hmax\].
Suppose $G>(g)$. Let $N$ be the normalizer of $(g)$ in $G$. Then $N \cong 11:5$ by Corollary \[normalizer\].
We claim that $N$ leaves invariant the standard elliptic pencil $|F|$. It is enough to show that $h(F_0)=F_0$ for any $h\in N$, where $F_0=X^g$ is a cuspidal curve in $|F|$. In fact, for any $x\in F_0$, we have $h(x)=hg(x) = g^ih(x)$ for some $i$, so $h(x)\in X^{(g)}=F_0$, which proves the claim.
Next, we claim that $N$ leaves invariant the zero section $S_\varepsilon$. In fact, $h(S_\varepsilon)=hg(S_\varepsilon)=g^ih(S_\varepsilon)$, so $(g)$ leaves invariant $h(S_\varepsilon)$, and hence $h(S_\varepsilon)=S_\varepsilon$, as $g$ cannot leave invariant two distinct sections by Lemma \[lem1\] (iii).
Now Proposition \[Hmax\] gives a contradiction. Hence, $G=(g)$.
\[var0\] Let $G\subset {\textup{Aut}}(X_0)$ be a finite wild symplectic subgroup, isomorphic to $L_2(11)$. If an element $g\in G$ of order $11$ leaves invariant both the standard elliptic fibration and a section, then $G$ is conjugate to $H_0$. In particular, if $G$ contains $g_0$ then $G=H_0$.
Replacing $G$ by a conjugate subgroup in ${\textup{Aut}}(X_0)$, we may assume that $g$ leaves invariant both the standard elliptic fibration and the zero section $S_0$, i.e. $g=g_0$. We need to prove that $G=H_0$.
Let $|F|$ be the standard elliptic fibration. Then $g(S_0)=S_0$ and $X^g=F_0$, a cuspidal curve in $|F|$.
Let $N$ be the normalizer of $(g)$ in $G$. Then $N \cong 11:5$. The same argument as in the proof of Lemma \[varneq0\] shows that $N$ leaves invariant both the standard elliptic pencil $|F|$ and the zero section $S_0$. By Proposition \[Hmax\], $N\subset
H_0$.
We have $N\subset G\cap H_0$. Suppose $G\cap H_0=N$. Consider the $G$-orbit of the divisor class $[F]\in{\textup{Pic}}(X_0)$, $$G([F])=\{ h([F])\in{\textup{Pic}}(X_0)| h\in G\}.$$ Clearly $N$ acts on it. Note $$\#G([F])=[G:N]=12.$$ Thus $G([F])$ is the set of 12 different elliptic fibrations with a section. The automorphism $g$ cannot leave invariant an elliptic fibration other than $|F|$, hence fixes $[F]$ and has 1 orbit on the remaining 11 elliptic fibrations, which we denote by $[F_1],\cdots, [F_{11}]$.
Recall that $H_0$ leaves invariant the zero section $S_0$. The three divisor classes $$[F], \quad \sum_{j=1}^{11}[F_j], \quad [S_0]$$ are $N$-invariant, and their intersection matrix is given as follows: $$\begin{pmatrix}0&11m&1\\
11m&110m&11b\\
1&11b&-2\end{pmatrix}$$ where $m = F\cdot F_i, \ b = S_0\cdot F_i,
i \ge 1$. Its determinant is equal to $$242(m^2+bm)-110m,$$ which cannot be 0 for any positive integers $m$ and $b$. This implies that $$\mu(N) = 2+{\textup{rank}}~{\textup{Pic}}(X_0)^{N}\ge 5,$$ a contradiction to the equality $\mu(N)=4$. This proves that $N$ is a proper subgroup of $G\cap H_0$. Since $N$ is a maximal subgroup of $G$, we have $G=H_0$.
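As an elementary check (not part of the original argument), the non-vanishing of the determinant above is immediate from the factorization $$242(m^2+bm)-110m \;=\; 2m\,(121m+121b-55)\;\ge\; 2\,(121+121-55)\;=\;374\;>\;0$$ for all integers $m,b\ge 1$.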
Note that $\mu(M_{11})=\mu(M_{22})=3$ and $\mu(L_2(11))=4$. Note also that $L_2(11)$ is isomorphic to a maximal subgroup of both $M_{11}$ and $M_{22}$.
The following proposition completes the proof of Theorem \[main\] (ii).
\[fin\] Let $G\subset {\textup{Aut}}(X_0)$ be a finite wild symplectic subgroup. Assume that $G\cong M_{11}$ or $M_{22}$. Then no conjugate of $G$ in ${\textup{Aut}}(X_0)$ contains the automorphism $g_0$ given by . In other words, no element of $G$ of order $11$ can leave invariant both the standard elliptic fibration and a section. In particular, $H_0$ is a maximal finite wild symplectic subgroup of ${\textup{Aut}}(X_0)$.
Suppose that a conjugate of $G$ contains $g_0$. Replacing $G$ by the conjugate, we may assume that $g_0\in
G$.
Let $K$ be a subgroup of $G$ such that $g_0\in K\subset G$ and $K\cong L_2(11)$. Then by Lemma \[var0\], $K=H_0$. Thus $$g_0\in
H_0\subset G.$$ Since $H_0\cong L_2(11)$ is a maximal subgroup of $G$, its normalizer subgroup $N_G(H_0)$ coincides with $H_0$.
Let $|F|$ be the standard elliptic fibration on $X_0$, and $S_0$ the zero section. Then $g_0(S_0)=S_0$ and $X^{g_0}=F_0$, a cuspidal curve in $|F|$. Furthermore, both the section $S_0$ and the elliptic pencil $|F|$ are $H_0$-invariant (see Definition \[H\]).
Consider the $G$-orbit of the divisor class $[F]$, $$G([F])=\{ h([F])\in{\textup{Pic}}(X_0)| h\in G\}.$$ Consider the action of $H_0$ on it. By Proposition \[Hmax\], the stabilizer subgroup $G_{[F]}$ of $[F]$ coincides with $H_0$. The automorphism $g_0$ cannot leave invariant two different elliptic fibrations, hence fixes $[F]$ and has orbits on the set $G([F])\setminus \{[F]\}$ of cardinality divisible by 11. This implies that $H_0$ fixes $[F]$ and has orbits on the set $G([F])\setminus \{[F]\}$ of cardinality divisible by 11. Write $$G([F])=\{[F]=[F_0],\, [F_1],\, [F_2],\, ...,\, [F_{r-1}]\}$$ where $r=\#G([F])=[G:H_0]$. Let $${\mathcal{O}}_1\cup {\mathcal{O}}_2\cup
...\cup{\mathcal{O}}_s$$ be the orbit decomposition of the index set $\{1,\, 2,\, ...,\, r-1\}$ by the action of $H_0$. Since $H_0$ fixes $[F]$ and acts transitively on each ${\mathcal{O}}_i$, the intersection number $F\cdot F_t$ is constant on the orbit ${\mathcal{O}}_i$ containing $t$, i.e. $F\cdot F_t=m_i$ for all $t\in
{\mathcal{O}}_i$. Note that the divisor $${\mathcal{F}}=\sum_{j=0}^{r-1}F_j$$ is $G$-invariant, and $$\label{comp1}
{\mathcal{F}}^2=(\sum_{j=0}^{r-1}F_j)^2=rF_0\cdot\sum_{j=0}^{r-1}F_j=r\sum_{i=1}^sm_i\#{\mathcal{O}}_i.$$
Next recall that $H_0$ leaves invariant the zero section $S_0$. Similarly we consider the $G$-orbit of the divisor class $[S_0]$ $$G([S_0])=\{ h([S_0])\in{\textup{Pic}}(X_0)| h\in G\}.$$ Let $G_0$ be the stabilizer subgroup of $[S_0]$. Since it contains $H_0$ and $H_0$ is maximal in $G$, we obtain that $G_0 = H_0$ or $G_0 = G$.
Assume $G_0 = H_0$. Then all stabilizers are conjugate to $H_0$. As above, we claim that $g_0\in H_0$ fixes no elements of $G([S_0])$ other than $[S_0]$. If $g_0h(S_0)=h(S_0)$ for some $h\in G$, then $g_0\in hH_0h^{-1}$, and since all cyclic subgroups of order $11$ in $H_0$ are conjugate inside $H_0$ we can write $(g_0) = hh'(g_0)h'^{-1}h^{-1}$ for some $h'\in H_0$. This implies $hh'\in N_G((g_0))$. Since $\#N_G((g_0))=\#N_{H_0}((g_0))=55$ (see the proof of Lemma \[orders\]), we obtain that $N_G((g_0))=N_{H_0}((g_0))\subset H_0$, hence $h\in H_0$ and $h(S_0) = S_0$. This proves the claim and shows that $H_0$ has orbits on the set $G([S_0])\setminus \{[S_0]\}$ of cardinality divisible by 11. Write $$G([S_0])=\{[S_0],\, [S_1],\, [S_2],\, ...,\, [S_{r-1}]\}.$$ It is clear that the divisor $${\mathcal{S}}=\sum_{j=0}^{r-1}S_j$$ is $G$-invariant. Let $S_0\cdot F_t = b_i$ for $t\in {\mathcal{O}}_i$. Then we have $$\label{comp2}
{\mathcal{F}}\cdot {\mathcal{S}}=(\sum_{j=0}^{r-1}F_j)\cdot (\sum_{j=0}^{r-1}S_j)=rS_0\cdot\sum_{j=0}^{r-1}F_j
=r(1+\sum_{i=1}^{s}b_i\#{\mathcal{O}}_i).$$ In either case, $G\cong M_{11}$ or $M_{22}$, we know $\mu(G)=3$, and hence the two divisors ${\mathcal{F}}$ and ${\mathcal{S}}$ are linearly dependent in ${\textup{Pic}}(X_0)$. This implies $${\mathcal{F}}^2{\mathcal{S}}^2 = ({\mathcal{F}}\cdot {\mathcal{S}})^2.$$ Substituting from (\[comp1\]) and (\[comp2\]), we get $$\label{comp3}
r(\sum_{i=1}^sm_i\#{\mathcal{O}}_i){\mathcal{S}}^2
=r^2(1+\sum_{i=1}^{s}b_i\#{\mathcal{O}}_i)^2.$$ Since $\#{\mathcal{O}}_i\equiv 0$ mod 11 for all $i$ and $r\equiv
1$ mod 11, the left hand side $\equiv 0 \mod 11$, but the right hand side $\equiv 1 \mod 11$, a contradiction.
Assume $G_0 = G$. Then the divisor ${\mathcal{S}}=S_0$ is $G$-invariant, and we have a simpler equality $$\label{comp4}
r(\sum_{i=1}^sm_i\#{\mathcal{O}}_i){\mathcal{S}}^2
=(1+\sum_{i=1}^{s}b_i\#{\mathcal{O}}_i)^2,$$ again a contradiction.
In [@Ko] Kondo proves that the unique supersingular K3 surface $X$ with Artin invariant 1 admits symplectic automorphism groups $G\cong M_{11}$ or $G\cong M_{22}$. It follows from the previous results that any element $g\in G$ of order 11 leaves invariant an elliptic pencil without a $g$-invariant section. In fact, according to his construction of $G$ on $X$, one can show that ${\textup{Pic}}(X)^g\cong U(11)$, hence a $(g)$-invariant elliptic pencil has only an 11-section.
It is known that the Brauer group of a supersingular K3 surface is isomorphic to the additive group of the field $k$ [@Artin]. It is well-known that the group of torsors of an elliptic fibration with a section is isomorphic to the Brauer group. We do not know which torsors admit a non-trivial automorphism of order $p$ (maybe all?). We also do not know whether they define elliptic fibrations on the same surface $X_0$. Note that the latter could happen only for torsors of order divisible by $p =\textup{char}(k)$. It would be very interesting to see how the three groups $L_2(11)$, $M_{11}$ and $M_{22}$ sit inside the infinite group ${\textup{Aut}}(X_0)$.
It follows from Lemma \[lemma\] that our surface $X_0$ admits a non-symplectic automorphism of order $12$. By Remark \[goto\], $X_0$ is supersingular with Artin invariant $\sigma = 1$. It follows from [@Ny] that the maximal order of a non-symplectic automorphism of a supersingular surface with Artin invariant $\sigma$ divides $1+p^\sigma$. Thus $12$ is the maximum possible order. What is the maximum possible non-symplectic extension of $M_{11}$ or $M_{22}$?
A K3 surface may admit a non-symplectic automorphism of order 11 over any field of characteristic $0$ or $p \ne 2,3,11$. The well-known example is the surface $V(x^2+y^3+z^{11}+w^{66})$ in ${\mathbb{P}}(1,6,22,33)$. It is interesting to know whether there exists a supersingular K3 surface $X$ which admits a non-symplectic automorphism of order 11. It follows from [@Ny] that, if $p \ne 2$, then 11 must divide $1+p^\sigma$, where $\sigma$ is the Artin invariant of $X$.
[\[BPV\]]{}
M. Artin, *Supersingular K3 surfaces*, Ann. Ec. Norm. Sup., 4-e Serie, [**7**]{} (1974), 543–567.
F. Cossec, I. Dolgachev, *Enriques surfaces I*, Birkhäuser, 1989.
I. Dolgachev, J. Keum, *Wild $p$-cyclic actions on K3 surfaces*, J. Algebraic Geometry, [**10**]{} (2001), 101–131.
I. Dolgachev, J. Keum, *Finite groups of symplectic automorphisms of K3 surfaces in positive characteristic*, math.AG/0403478, to appear in Ann. Math.
Y. Goto, *The Artin invariant of supersingular weighted Delsarte surfaces*, J. Math. Kyoto Univ., [ **36**]{} (1996), 359–363.
G. Harder, M.S. Narasimhan, *On the cohomology groups of moduli spaces of vector bundles on curves*, Math. Ann. [**212**]{} (1975), 215–248.
S. Kondō, *Maximal subgroups of the Mathieu group $M_{23}$ and symplectic automorphisms of supersingular K3 surfaces*, math.AG/0511286.
S. Mukai, *Finite groups of automorphisms of $K3$ surfaces and the Mathieu group*, Invent. Math. [**94**]{} (1988), 183–221.
N. Nygaard, *Higher de Rham-Witt complexes on supersingular K3 surfaces*, Comp. Math. [**42**]{} (1980/81), 245–271.
A. Ogus, *Supersingular K3 crystals*, in “Journées de Géométrie Algébrique de Rennes”, Astérisque, vol. 64 (1979), pp. 3–86.
S. Saito, *General fixed point formula for an algebraic surface and the theory of Swan representations for two-dimensional local rings*, Amer. J. Math. [**109**]{} (1987), 1009–1042.
T. Shioda, *Supersingular K3 surfaces*, Algebraic geometry (Proc. Summer Meeting, Univ. Copenhagen, Copenhagen, 1978), Lecture Notes in Math., 732, Springer, Berlin, 1979, pp. 564–591.
[^1]: Research of the first named author is partially supported by NSF grant DMS-0245203
[^2]: Research of the second named author is supported by KOSEF grant R01-2003-000-11634-0
---
author:
- |
\
Institut für Theoretische Physik, Universität Regensburg, D-93040 Regensburg, Germany\
Institute of Theoretical and Experimental Physics, 117259 Moscow, Russia\
E-mail:
- |
V. G. Bornyakov [^1]\
Institute for High Energy Physics, 142281 Protvino, Russia\
Institute of Theoretical and Experimental Physics, 117259 Moscow, Russia\
School of Biomedicine, Far Eastern Federal University, 690950 Vladivostok, Russia
- |
P. V. Buividovich [^2]\
Institut für Theoretische Physik, Universität Regensburg, D-93040 Regensburg, Germany
- |
N. Cundy [^3]\
Lattice Gauge Theory Research Center, FPRD, and CTP\
Department of Physics and Astronomy, Seoul National University,Seoul, 151-747, South Korea
title: 'Deconfinement transition in two-flavour lattice QCD with dynamical overlap fermions.'
---
Introduction and numerical setup {#sec:intro}
================================
The study of QCD thermodynamics, and in particular of the confinement-deconfinement phase transition, is one of the main applications of lattice QCD. Since the parameters of the deconfinement transition are of utmost importance for the interpretation of experimental data from heavy-ion colliders, one should reduce any systematic errors when studying them numerically. For this reason lattice QCD simulations with chirally invariant overlap fermions are now state of the art [@Fodor:04; @Fodor:12; @Cossu:13]. Unfortunately, these simulations are computationally very expensive and are almost always restricted to a single topological sector. Several important improvements in the Hybrid Monte-Carlo algorithm [@Arnold:2003sx; @Cundy:2004pza; @Cundy:06; @Cundy:09:1; @Cundy:09:2; @Cundy:11:2] have allowed for large-scale simulations with dynamical overlap fermions without any restriction to a fixed topological sector. While in general one cannot expect that the use of dynamical overlap fermions will result in a strong modification of thermodynamic properties, they can be very useful, e.g., for studying fluctuations of topology at finite temperature.
Another important application of the algorithms of [@Arnold:2003sx; @Cundy:2004pza; @Cundy:06; @Cundy:09:1; @Cundy:09:2; @Cundy:11:2] is the study of the deconfinement phase transition in an external magnetic field, which has attracted a lot of attention recently and was intensively investigated both theoretically [@Agasian:08:1; @Fraga:08; @Simonov:14] and in lattice simulations [@D'Elia:10; @Endrodi:12:jhep; @Muller-Preussker:13]. Lattice simulations [@D'Elia:10; @Endrodi:12:jhep] have revealed a strong dependence of the sign of the shift of the deconfinement temperature in external magnetic field on the pion mass. In [@D'Elia:10] it was found that at sufficiently large pion masses the magnetic field increases the chiral condensate by the conventional “magnetic catalysis” mechanism [@Gusynin:94:1; @Smilga:97:1] and hence increases the deconfinement temperature. On the other hand, the simulations of [@Endrodi:12:jhep] were performed at physical pion mass and revealed an unexpected decrease of the chiral condensate with magnetic field in the vicinity of the deconfinement transition, which results in a decrease of deconfinement temperature in magnetic field (“inverse magnetic catalysis”). At the same time, in recent theoretical works [@Chao:13; @Yu:14:1] it was suggested that the fluctuations of chirality and topology can play an important role in the inverse magnetic catalysis. Thus the use of chiral lattice fermions with unrestricted topology can be advantageous for numerical studies of the inverse magnetic catalysis. Indeed, in our recent work [@Kochetkov:14] it was found that in HMC simulations with dynamical overlap fermions [@Arnold:2003sx; @Cundy:2004pza; @Cundy:06; @Cundy:09:1; @Cundy:09:2; @Cundy:11:2] inverse magnetic catalysis is observed for pion masses as large as $500 \, {\rm MeV}$.
Unfortunately, up to now the exact location of the phase transition for the lattice action used in our dynamical overlap simulations [@Kochetkov:14] is not known. Since knowledge of the critical temperature is an essential prerequisite for any further finite-temperature simulations with the algorithms of [@Arnold:2003sx; @Cundy:2004pza; @Cundy:06; @Cundy:09:1; @Cundy:09:2; @Cundy:11:2], in these Proceedings we report on our preliminary studies of the deconfinement phase transition for dynamical overlap fermions. We are able to identify the temperatures which certainly correspond to the confinement and the deconfinement regimes; however, the precise location of the phase transition remains elusive with our present statistics.
We consider lattice QCD with $N_f = 2$ flavours of dynamical overlap fermions with equal masses. We use the massive overlap Dirac operator, $$\begin{aligned}
\label{overlap_definition}
D{ \left[ \mu \right] } = 1 + \mu/2 + \gamma_5 { \left( 1 - \mu/2 \right) } { {\rm sign} \, }{ \left( K \right) } ,\end{aligned}$$ where $K = \gamma_5 { \left( D_W - \rho \right) }$ and $D_W$ is the Wilson-Dirac operator with one level of over-improved stout smearing [@Moran:2008ra; @Morningstar:2003gk]. In order to ensure that lattice gauge fields are sufficiently smooth, we use the tadpole improved Lüscher-Weisz gauge action [@TILW; @TILW2]. The temperature $T = 1/(N_t a)$ is changed by varying the inverse gauge coupling $\beta$ and thus the lattice spacing $a$. The pion mass and lattice spacing were determined using independent runs on $12^3\times 24$ lattices for $\beta=7.5$ and $\beta=8.3$. We have performed measurements at $\beta=7.5$, which corresponds to $a=0.15{\, {\rm fm}}$ and $T=220{\, {\rm MeV}}$ and at $\beta=8.3$, for which $a=0.12{\, {\rm fm}}$ and $T=280{\, {\rm MeV}}$ as well as at the intermediate values of $\beta = 7.6, 7.7, 7.8, 7.9$ and $8.1$. For these intermediate values of $\beta$ the scale setting has not been performed yet. For every value of $\beta$ we have between $500$ and $1000$ successively generated configurations. The correlations between the configurations are taken into account using the Jackknife method.
Numerical results {#sec:num_res}
=================
*Polyakov loop.* The expectation values of the Polyakov loop are shown in Fig. \[fig:pl\_condensate\] at different inverse gauge couplings. As expected, the Polyakov loop increases at larger temperatures. Unfortunately, from our data it is difficult to identify the inflection point of the Polyakov loop. Also, the points at $\beta = 7.7$ and $7.8$ deviate somewhat from the smooth behavior, which might be the result of some long-range correlation in our simulations.
*Chiral condensate.* The chiral condensate in lattice units is shown in Fig. \[fig:pl\_condensate\] on the right. Again, the condensate gradually decreases towards larger inverse gauge couplings, except for some deviation at points $\beta = 7.7$ and $\beta = 7.8$. There is also no clearly defined inflection point.
*Chiral susceptibility.* A quantity which is typically more sensitive to the (partial) restoration of chiral symmetry than the expectation value of the chiral condensate is the chiral susceptibility $$\label{susc_def}
\chi_c(T)= -\left.
\frac{\partial \langle {\bar u}u\rangle}{\partial m_q} \right|_{m_q=0} .$$ At the transition point, $\chi_c{ \left( T \right) }$ usually has a characteristic peak whose height increases with volume for a first- or second-order phase transition and stays constant for a crossover. In Fig. \[fig:susc\_vev\] we show the connected part of the chiral susceptibility (in lattice units) as a function of $\beta$. Unfortunately, we do not see a pronounced peak, but rather some sort of plateau starting from $\beta = 7.6$. There is only a slight hint of a peak at $\beta = 7.8$.
![Histograms of the eigenvalues $\lambda$ of the massless overlap Dirac operator in lattice units at different values of the inverse coupling constant $\beta$ which correspond to different temperatures in the range $220 {\, {\rm MeV}}< T < 280 {\, {\rm MeV}}$.[]{data-label="fig:hist"}](histograms3x3.eps){width="14cm"}
*Low-lying Dirac eigenvalues.* As a more sensitive test of the temperature at which the chiral symmetry is restored, we consider the statistical distributions of the low-lying eigenvalues $\lambda$ of the projected massless Dirac operator $$\begin{aligned}
\label{massless_dirac}
\tilde{D}_0 = \frac{2 \rho D_0}{2 - D_0}, \quad D_0 = 1 + \gamma_5 { {\rm sign} \, }{ \left( K \right) } .\end{aligned}$$ The eigenvalues $\lambda$ of $\tilde{D}_0$ are purely imaginary and are related to the chiral condensate on the lattice exactly in the same way as in the continuum theory: $$\begin{aligned}
\label{lattice_condensate}
\Sigma = \sum\limits_i \frac{1}{m_q + \lambda_i} = \sum\limits_{{ {\rm Im} \, }\lambda_i > 0} \frac{2 m_q}{m_q^2 + |\lambda_i|^2},\end{aligned}$$ where $\Sigma = \frac{1}{V} \frac{\partial}{\partial \, m_q} \mathcal{Z}{ \left( m_q \right) }$ and $\mathcal{Z}{ \left( m_q \right) }$ is the lattice partition function with the Dirac operator (\[overlap\_definition\]). By virtue of the relation (\[lattice\_condensate\]), which implies that the condensate is mostly saturated by Dirac eigenmodes with $|\lambda_i| \lesssim m_q$, effective restoration of chiral symmetry should result in a significant widening of the gap in the spectrum of $\tilde{D}_0$.
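For illustration only, a minimal sketch of evaluating the sum in eq. (\[lattice\_condensate\]) from a measured set of low-lying eigenvalues; the eigenvalue list and the bare quark mass below are placeholder inputs, not actual data.

```python
import numpy as np

def condensate_from_spectrum(lam_im, m_q):
    """Chiral condensate from the purely imaginary eigenvalues of the projected
    massless operator: Sigma = sum_{Im lambda > 0} 2 m_q / (m_q^2 + |lambda|^2)."""
    lam_im = np.asarray(lam_im, dtype=float)
    lam_im = lam_im[lam_im > 0.0]        # one eigenvalue per conjugate pair
    return np.sum(2.0 * m_q / (m_q**2 + lam_im**2))

# placeholder spectrum in lattice units
sigma = condensate_from_spectrum([0.003, -0.003, 0.02, -0.02, 0.15, -0.15], m_q=0.01)
```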
The histograms of ${ {\rm Im} \, }\lambda$ (plotted in lattice units) are shown in Figure \[fig:hist\]. At $\beta=7.5$ and $\beta=7.6$ one can see a lot of near-zero eigenvalues which indicates that these two points are still in the phase with broken chiral symmetry. For $\beta = 7.7$ and $\beta = 7.8$, it seems that the gap already starts to appear. However, at $\beta = 7.9$ some near-zero eigenvalues appear again. This non-monotonic behavior of the gap agrees with the non-monotonic behavior of the Polyakov loop and the chiral condensate at $\beta = 7.7$ and $\beta = 7.8$. Finally, at $\beta=8.1$ and $\beta=8.3$ the gap in the spectrum of $\lambda$ is well pronounced, as expected for the phase with restored chiral symmetry.
*Topological charge fluctuations.* Yet another possible way to distinguish the confinement and the deconfinement phases is to consider the fluctuations of topological charge, which should be strongly suppressed above the deconfinement temperature [@Shuryak:98:1]. Monte-Carlo histories of the topological charge (defined from the number of exact zero modes of the massless overlap Dirac operator (\[massless\_dirac\])) are shown in Figure \[fig:mc\_histories\](left) for different values of $\beta$. There are significant fluctuations of topological charge at $\beta=7.5,7.6$ and very few fluctuations at $\beta = 7.7, \, 7.8, \, 8.1, \, 8.3$. Strangely, at $\beta = 7.9$ strong fluctuations reappear. Again we see that the points with $\beta = 7.7$ and $\beta = 7.8$ somehow deviate from the general trend.
The autocorrelation time of the topological charge in our simulations is of the order of several hundred HMC trajectories. Of course, this is only a rough estimate based on our sets of no more than a thousand configurations. Let us also note that long autocorrelations of the topological charge might significantly affect the autocorrelation time of the chiral susceptibility. A close look at the Monte-Carlo histories of the connected chiral susceptibility (see Fig. \[fig:mc\_histories\] on the right) reveals that it is strongly correlated with the topological charge. When the topological charge changes, the value of the susceptibility also shifts to some “plateau” and then fluctuates around this “plateau” (see e.g. the HMC histories for $\beta = 7.9$ near configurations $100$ and $200$). The resulting large autocorrelations of the chiral susceptibility might explain the absence of a well-defined peak in the chiral susceptibility in Fig. \[fig:susc\_vev\].
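A rough estimate of this kind can be obtained, for example, with the following minimal sketch of the integrated autocorrelation time of a Monte-Carlo history; the naive window cutoff is an assumption of the sketch, and this is not the analysis code used for the figures.

```python
import numpy as np

def integrated_autocorrelation_time(history, w_max=None):
    """Naive integrated autocorrelation time tau_int = 1/2 + sum_t rho(t) of a
    Monte-Carlo history (e.g. the topological charge), summed up to the first
    negative value of the normalized autocorrelation function rho(t)."""
    x = np.asarray(history, dtype=float)
    x = x - x.mean()
    n = len(x)
    if w_max is None:
        w_max = n // 4
    c0 = np.mean(x * x)
    tau = 0.5
    for t in range(1, w_max):
        rho = np.mean(x[:-t] * x[t:]) / c0
        if rho < 0.0:
            break
        tau += rho
    return tau
```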
Discussion and conclusions. {#sec:conclusions}
===========================
The preliminary data presented here certainly rule out a phase transition (or crossover) at $\beta \leq 7.6$ and $\beta \geq 8.1$; thus the transition point in our simulations with dynamical overlap fermions should be somewhere in the range $7.6 < \beta < 8.1$. Within this range of $\beta$, however, the precision of our measurements is insufficient to determine the transition point. The fact that for $\beta = 7.7$ and $\beta = 7.8$ all our observables deviate from the smooth behavior suggests that autocorrelation times might be larger for these two points, which can in turn hint at the proximity of the phase transition. We therefore plan to investigate this region of $\beta$ values using larger numbers of configurations. We should also note that at present we perform simulations at fixed bare quark mass; thus changing the temperature by varying the inverse gauge coupling $\beta$ also results in some variation of the pion mass. Therefore in our simulations the physical pion mass is not fixed. A possible improvement of our simulation strategy would then be either to keep the physical pion mass constant or to vary the temperature by changing the temporal extent of the lattice.
[99]{}
H. Neuberger, Phys.Lett.B [**417**]{} (1998), 141. [\[[hep-lat/9707022]{}\]](http://arxiv.org/abs/hep-lat/9707022)
G. Arnold, N. Cundy, J. [van den Eshof]{}, A. Frommer, S. Krieg, [*Numerical methods for the [QCD]{} overlap operator. 2. [O]{}ptimal [K]{}rylov subspace methods*]{} (2003). [\[[hep-lat/0311025]{}\]](http://arxiv.org/abs/hep-lat/0311025)
N. Cundy, J. [van den Eshof]{}, A. Frommer, S. Krieg, T. Lippert, [*Numerical methods for the [QCD]{} overlap operator. 3. [N]{}ested iterations*]{}, Comput.Phys.Commun. [**165**]{} (2005), 221–242. [\[[hep-lat/0311025]{}\]](http://arxiv.org/abs/hep-lat/0311025)
N. Cundy, [*Current status of [D]{}ynamical [O]{}verlap project*]{}, Nucl.Phys.Proc.Suppl. [**153**]{} (2006), 54–61. [\[[hep-lat/0511047]{}\]](http://arxiv.org/abs/hep-lat/0511047)
N. Cundy, S. Krieg, G. Arnold, A. Frommer, T. Lippert, K. Schilling, [ *Numerical [M]{}ethods for the [QCD]{} [O]{}verlap [O]{}perator [IV]{}: [H]{}ybrid [M]{}onte [C]{}arlo*]{}, Comput.Phys.Commun. [**180**]{} (2009), 26–54. [\[[hep-lat/0502007]{}\]](http://arxiv.org/abs/hep-lat/0502007)
N. Cundy, [*Low-lying [W]{}ilson [D]{}irac operator eigenvector mixing in dynamical overlap [H]{}ybrid [M]{}onte-[C]{}arlo*]{}, Comput.Phys.Commun [**180**]{} (2009), 180–191. [\[[0706.1971]{}\]](http://arxiv.org/abs/0706.1971)
N. Cundy, W. Lee, [*Modifying the molecular dynamics action to increase topological tunnelling rate for dynamical overlap fermions*]{} (2011). [\[[1110.1948]{}\]](http://arxiv.org/abs/1110.1948)
N. O. Agasian, S. M. Fedorov, [*Quark-hadron phase transition in a magnetic field*]{}, Phys. Lett. B [**663**]{} (2008), 445 – 449. [\[[0803.3156]{}\]](http://arxiv.org/abs/0803.3156)
E. S. Fraga, A. Mizher, [*Chiral transition in a strong magnetic background*]{}, Phys. Rev. D [**78**]{} (2008), 025016. [\[[0804.1452]{}\]](http://arxiv.org/abs/0804.1452)
V. D. Orlovsky, Y. A. Simonov, [*The quark-hadron thermodynamics in magnetic field*]{}, Phys. Rev. D [**89**]{} (2014), 054012. [\[[1311.1087]{}\]](http://arxiv.org/abs/1311.1087)
M. D’Elia, S. Mukherjee, F. Sanfilippo, [*[QCD]{} phase transition in a strong magnetic background*]{}, Phys. Rev. D [**82**]{} (2010), 051501. [\[[1005.5365]{}\]](http://arxiv.org/abs/1005.5365)
G. S. Bali, F. Bruckmann, G. Endrodi, Z. Fodor, S. D. Katz, S. Krieg, A. Schäfer, K. K. Szabo, [*The [QCD]{} phase diagram for external magnetic fields*]{}, JHEP [**02**]{} (2012), 044. [\[[1111.4956]{}\]](http://arxiv.org/abs/1111.4956)
E. Ilgenfritz, M. Müller-Preussker, B. Petersson, A. Schreiber, [ *Magnetic catalysis (and inverse catalysis) at finite temperature in two-color lattice [QCD]{}*]{}, Phys. Rev. D [**89**]{} (2014), 054512. [\[[1310.7876]{}\]](http://arxiv.org/abs/1310.7876)
V. P. Gusynin, V. A. Miransky, I. A. Shovkovy, [*Catalysis of dynamical flavor symmetry breaking by a magnetic field in 2 + 1 dimensions*]{}, [Phys.Rev.Lett. [**73**]{} (1994), 3499](http://link.aps.org/doi/10.1103/PhysRevLett.73.3499).
I. A. Shushpanov, A. V. Smilga, [*Quark condensate in a magnetic field*]{}, Phys. Lett. B [**402**]{} (1997), 351. [\[[hep-ph/9703201]{}\]](http://arxiv.org/abs/hep-ph/9703201)
J. Chao, P. Chu, M. Huang, [*Inverse magnetic catalysis induced by sphalerons*]{}, Phys. Rev. D [**88**]{} (2013), 054009. [\[[1305.1100]{}\]](http://arxiv.org/abs/1305.1100)
L. Yu, H. Liu, M. Huang, [*Spontaneous generation of local [CP]{} violation and inverse magnetic catalysis*]{} (2014). [\[[1404.6969]{}\]](http://arxiv.org/abs/1404.6969)
V. G. Bornyakov, P. V. Buividovich, N. Cundy, O. A. Kochetkov, A. Schaefer, [*Deconfinement transition in two-flavour lattice qcd with dynamical overlap fermions in an external magnetic field*]{}, Phys. Rev. D [**90**]{} (2014), 034501. [\[[1312.5628]{}\]](http://arxiv.org/abs/1312.5628)
T. Schaefer, E. Shuryak, [*Instantons in [QCD]{}*]{}, Rev.Mod.Phys. [**70**]{} (1998), 323 – 426. [\[[hep-ph/9610451]{}\]](http://arxiv.org/abs/hep-ph/9610451)
M. L[ü]{}scher, P. Weisz, [*On-shell improved lattice gauge theories*]{}, [Commun. Math. Phys. [**97**]{} (1985), 59](http://dx.doi.org/10.1007/BF01206178).
G. P. Lepage, P. B. Mackenzie, [*Viability of lattice perturbation theory*]{}, Phys. Rev. D [**48**]{} (1993), 2250–2264. [\[[hep-lat/9209022]{}\]](http://arxiv.org/abs/hep-lat/9209022)
C. Morningstar, M. Peardon, [*Analytic smearing of [SU]{}(3) link variables in lattice [QCD]{}*]{}, Phys. Rev. D [**69**]{} (2004), 054501. [\[[hep-lat/0311018]{}\]](http://arxiv.org/abs/hep-lat/0311018)
P. J. Moran, D. B. Leinweber, [*Over-[I]{}mproved [S]{}tout-[L]{}ink [S]{}mearing*]{}, Phys. Rev. D [**77**]{} (2008), 094501. [\[[0801.1165]{}\]](http://arxiv.org/abs/0801.1165)
S. Borsanyi, Y. Delgado, S. Durr, Z. Fodor, S. D. Katz, S. Krieg, T. Lippert, D. Nogradi, K. K. Szabo, [*QCD thermodynamics with dynamical overlap fermions*]{}, Phys. Lett. B [**713**]{} (2012), 342 – 346. [\[[1204.4089]{}\]](http://arxiv.org/abs/1204.4089)
Z. Fodor, S.D. Katz, K.K. Szabo,[*Dynamical overlap fermions, results with hybrid Monte-Carlo algorithm*]{}, JHEP [**08**]{} (2004), 003. [\[[hep-lat/0311010]{}\]](http://arxiv.org/abs/hep-lat/0311010)
Guido Cossu, Sinya Aoki, Hidenori Fukaya, Shoji Hashimoto, Takashi Kaneko, Hideo Matsufuru, Jun-Ichi Noaki, [*Finite temperature study of the axial U(1) symmetry on the lattice with overlap fermion formulation*]{}, Phys. Rev. D [**87**]{} (2013), 114514. [\[[1304.6145]{}\]](http://arxiv.org/abs/1304.6145)
[^1]: Computations were performed on the “Lomonosov” supercomputer at the supercomputing center of MSU. V.B. is supported by the RFBR grant 13-02-01387-a.
[^2]: The work of P.B. was supported by the S. Kowalewskaja award from the Alexander von Humboldt foundation.
[^3]: NC thanks the “BK21 Plus Frontier Physics Research Division, Department of Physics and Astronomy, Seoul National University, Seoul, South Korea” for financial support. This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2013057640).
---
abstract: 'Radiative effects in the electroproduction of photons in polarized $ep$-scattering are calculated in the leading log approximation and analyzed numerically for the kinematical conditions of current measurements at Jefferson Lab. Radiative corrections to the cross sections, their azimuthal distributions, and their Fourier coefficients are the particular focus. Kinematical regions where the radiative corrections are considerable are identified.'
author:
- Igor Akushevich
- Alexander Ilyichev
bibliography:
- 'dvcsll.bib'
title: Radiative effects in the processes of exclusive photon electroproduction from polarized protons
---
\[Intro\]Introduction
=====================
The processes of photon electroproduction are intensively investigated both theoretically [@BelitskyMuller2009PRD; @BeMu2010PRD] and experimentally [@Camacho_etal_2006_PRL; @Mazouz_etal_2007_PRL; @Girod_etal_2008_PRL]. The cross section of the process is sensitive to the deeply virtual Compton scattering (DVCS) amplitude, which is of great interest due to its connection to generalized parton distributions. The Bethe-Heitler (BH) process is not distinguishable from DVCS in the measurements and therefore it is the basic background contribution to the observed cross section. One obstacle in the analysis of the vast data on DVCS collected in JLab experiments is the deficit of comprehensive theoretical calculations of radiative corrections (RC), including the effects of hard photon emission, with controlled accuracy. The available calculations of QED radiative effects in [@AkBaSh1986YP; @Vanderhaeghen2000; @ByKuTo2008PRC; @AkKuSh2000PRDtail; @AkKuSh2001PRDDVCS] have certain limitations and cannot cover all modern requirements of experimental data analysis on photon electroproduction. In this paper we present the calculation of radiative corrections to the BH cross section in the leading approximation. The main process contributing to the RC is two-photon emission, i.e., $e+p\rightarrow e'+p'+2\gamma$. Another contribution is due to one-loop effects in $e+p\rightarrow e'+p'+\gamma$. In this approximation only the leading term containing $L=\log(Q^2/m^2)$ ($m$ is the electron mass) is kept. For JLab kinematics $L\sim 15$ and therefore the approximation used keeps the major part of the RC.
The paper is organized as follows. The BH cross section is calculated in Section \[Born\]. Specific attention is paid to the explicit representation of the BH cross section, including the polarization part of the cross section and mass corrections, as well as to the angular structure of the BH cross section. The RC calculation is performed in Section \[RC\]. First we calculate the matrix element squared and trace all sources of the electron mass dependence. Second, we represent the phase space of the two final photons, introduce the so-called shifted kinematics, and calculate the integrals over the additional photon phase space. Third, we add the contribution of loops and calculate the lowest order RC to the BH cross section. Fourth, we generalize the result for the RC to the BH cross section to represent the higher order corrections. Section \[SectNumeric\] presents the numeric estimates of the radiative effects in current experiments at JLab, focusing on the RC to the cross section in a wide kinematic region and on the angular structure (i.e., the respective Fourier coefficients), with specific attention to the coefficient that does not appear in the BH cross section but is generated by the RC. Finally, in Section \[SectDiscussion\] we discuss the most interesting features of our findings in the theoretical calculation and numeric results (e.g., the importance of mass corrections and the kinematical regions with large effects generated by the RC), briefly describe the state of the art in calculations of RC to exclusive photon electroproduction processes and the place of our calculation among other calculations, and comment on perspectives for further theoretical development of RC in exclusive photon electroproduction measurements.
\[Born\]The BH cross sections
=============================
The BH process $$\label{BHprocess}
e(k_1)+p(p)\longrightarrow e'(k_2)+p'(p')+\gamma(k),$$ is traditionally described using four kinematical variables: $Q^2=-(k_1-k_2)^2$, $x=Q^2/(2p(k_1-k_2))$, $t=(p-p')^2$, and $\phi$, the angle between planes $({\bf k_1},{\bf k_2})$ and $({\bf q},{\bf p'})$ ($q=k_1-k_2$).
The BH matrix element is ${\cal M}_{BH}=e^3t^{-1}J^h_\mu J_{\mu}^{BH}$ with $$J^h_\mu={\bar u}(p')\biggl(\gamma_{\mu}F_1+i\sigma_{\mu\nu}\frac{p_\nu '-p_\nu}{2M}F_2\biggr)u(p)$$ and $$\begin{aligned}
J_{\mu}^{BH}&=&
{\bar u}_2\Biggl [
\gamma_\mu \frac{{\hat k}_1-{\hat k}+m}{-2kk_1}{\hat \epsilon}
+ {\hat \epsilon} \frac{{\hat k}_2+{\hat k}+m}{2kk_2}\gamma_\mu
\Biggr ]u_1
\nonumber \\&=& -
{\bar u}_2\Biggl [\left(\frac {k_1\epsilon}{kk_1}-\frac {k_2\epsilon}{kk_2}\right)\gamma_\mu
-\frac{\gamma_\mu \hat{k}\hat{\epsilon}}{2kk_1}
-\frac{\hat{\epsilon}\hat{k}\gamma_\mu }{2kk_2}
\Biggr ]u_1, \end{aligned}$$ where ${\bar u}_2\equiv {\bar u}(k_2)$, ${u}_1\equiv {u}(k_1)$, and $\epsilon$ is the photon polarization vector. The matrix element ${\cal M}_{BH}$ corresponds to the graphs in Figure \[BHgraphs\]a and \[BHgraphs\]b.
The cross section of the BH process is $$\begin{aligned}
\label{dGamma}
d\sigma_0&=&\frac 1{2S} {\cal M}_{BH}^2 d\Gamma_0
%\nonumber \\ &=&
=
\frac{32\pi^3\alpha^3}{St^2} \bigl(J^h_\mu J^{BH}_{\mu}\bigr)^2 d\Gamma_0,\end{aligned}$$ where $S=2ME_1$, $E_1$ is the beam energy in the lab. system, and $M$ is the proton mass.
Phase space for the BH cross section is parametrized as $$\begin{aligned}
d\Gamma_0&=&\frac 1{(2\pi)^5}
\frac{d^3k_2}{2E_2}
\frac{d^3p'}{2p'_0}
\frac{d^3k}{2\omega}
\delta^4(k_1+p-k_2-p'-k)
\nonumber \\&=&
\frac{Q^2dQ^2dxdtd\phi}{(4\pi)^4x^2 S \sqrt{\lambda _Y}}\end{aligned}$$ with $\lambda _Y=S_x^2+4 M^2Q^2$ and $S_x=S-X=Q^2/x$. Kinematical limits on $t$ are defined as $$\begin{aligned}
t_{2,1}=\frac{-1}{2W^2}\bigl((S_x-Q^2)(S_x\pm\sqrt{\lambda_Y})+2M^2Q^2\bigr),\end{aligned}$$ where $W^2=S_x-Q^2+M^2$.
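For orientation, a minimal numerical sketch of these limits (the proton mass value and the sample kinematic point are placeholder assumptions, with all quantities in GeV$^2$):

```python
import numpy as np

def t_limits(S, x, Q2, M=0.938272):
    """Kinematical limits t_{2,1} of the momentum transfer t for given S, x, Q^2;
    t lies between the two returned values."""
    Sx = Q2 / x
    lam_Y = Sx**2 + 4.0 * M**2 * Q2
    W2 = Sx - Q2 + M**2
    t2 = -((Sx - Q2) * (Sx + np.sqrt(lam_Y)) + 2.0 * M**2 * Q2) / (2.0 * W2)
    t1 = -((Sx - Q2) * (Sx - np.sqrt(lam_Y)) + 2.0 * M**2 * Q2) / (2.0 * W2)
    return t2, t1

# example point: S = 2 M E_1 with E_1 close to a JLab beam energy
print(t_limits(S=10.8, x=0.35, Q2=2.3))
```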
The 4-fold BH cross section ($\sigma_{BH}\equiv d\sigma_0/dQ^2dxdtd\phi$) including both unpolarized and spin dependent parts is $$\label{BHCS}
\sigma_{BH}= { -\alpha^3 Q^2 \over 4\pi S^2 x^2 t \sqrt{\lambda_Y}}\sum_{i=1}^4 (T_i+2m^2\hat{T}_i){\cal F}_i.$$ Only those terms proportional to the lepton mass squared ($m^2$) are kept that give a finite (i.e., non-vanishing for $m\rightarrow 0$) contribution after integration over $\phi$. The first two terms, $(T_1+2m^2\hat{T}_1){\cal F}_1$ and $(T_2+2m^2\hat{T}_2){\cal F}_2$, describe the unpolarized cross section, and the last two terms, $(T_3+2m^2\hat{T}_3){\cal F}_3$ and $(T_4+2m^2\hat{T}_4){\cal F}_4$, correspond to the spin dependent part of the cross section. The quantities ${\cal F}_i$ are squared combinations of the nucleon form factors: $$\begin{aligned}
{\cal F}_1&=&{\cal F}_3=(F_1(t)+F_2(t))^2,
\nonumber \\
{\cal F}_2&=&\frac{4}{t}(F_1^2(t)+\frac{-t}{4M^2}F_2^2(t)),
\nonumber \\
{\cal F}_4&=&\frac{4}{t}(F_1(t)+F_2(t))(F_1(t)+\frac{t}{4M^2}F_2(t)).\end{aligned}$$ The quantities $T_i$ are $$\begin{aligned}
T_1&=&\frac{2}{u_0w_0}(u_0^2 + w_0^2 - 2Q^2t ),
\nonumber \\
T_2&=&\frac{M^2}{2}T_1+\frac{t}{u_0w_0}(S^2+X^2 \nonumber\\&&\qquad
-Q^2S_x-Sw_0-Xu_0),
\nonumber \\
T_3&=&\frac{4Mt(\eta p')}{\lambda_tu_0w_0}
(2X(u_0-Q^2)-2S(w_0+Q^2)
\nonumber\\&&\qquad +(u_0 + w_0)(Q^2-t)),
\nonumber \\
T_4&=&-M^2T_3+\frac{2M}{u_0w_0}\Bigl(
(Q^2-u_0)\bigl(t(\eta k_2)+X(\eta p')\bigr) \nonumber\\&&
+(Q^2+w_0)\bigl(t(\eta k_1)+S(\eta p')\bigr)\Bigr),\end{aligned}$$ where $\lambda_t=t(t-4M^2)$ and $\eta$ is the target polarization vector. The quantities representing the lepton mass corrections are $$\begin{aligned}
\hat{T}_1&=&\frac{2t}{u_0^2} + \frac{2t}{w_0^2},
\nonumber \\
\hat{T}_2&=&\frac{M^2}{2}\hat{T}_1+\frac{S^2+St}{u_0^2}+\frac{X^2+Xt}{ w_0^2},
\nonumber \\
\hat{T}_3&=&\frac{4M(\eta p')}{t-4M^2}\biggl(
(2S+t)\Bigl(\frac{1}{u_0^2}
+ \frac{S_x}{Sw_0^2}\Bigr)
\nonumber\\&&
+
\frac{1}{w_0^2}\Bigl(-Q^2+\frac{1}{S}(X^2+(X-t)^2)\Bigr) \biggr),
\nonumber \\
\hat{T}_4&=&-M^2\hat{T}_3+2M\Bigl(
\frac{S_{x}+t}{Sw_0^2}\bigl(t(\eta k_2)+X(\eta p')\bigr) \nonumber\\&&
-\Bigl( \frac{1}{u_0^2} + \frac{1}{w_0^2}\Bigr)\bigl(t(\eta k_1)+S(\eta p')\bigr)\Bigr).\end{aligned}$$
All variables including the scalar products $\eta k_1$, $\eta k_2$, and $\eta p'$ are ultimately expressed in terms of 5 kinematical variables: $S$, $t$, $Q^2$, $x$ and $\phi$, e.g., $$\begin{aligned}
w_0&=& 2kk_1=-\frac{1}{2}(Q^2+t)
+\frac{S_p}{2\lambda_Y}\bigl(S_x(Q^2-t)+2tQ^2\bigr)
\nonumber\\&& \qquad \qquad
+\frac{\sqrt{\lambda_{uw}}}{\lambda_{Y}}\cos\phi ,
\nonumber \\
u_0&=& 2kk_2=w_0+Q^2+t\end{aligned}$$ with $S_p=S+X$ and $$\label{lambdauw}
\lambda_{uw}=4Q^2W^2(SX-M^2Q^2-m^2\lambda_Y)(t-t_1)(t_2-t).$$ An additional azimuthal angle is required to describe the case of transversely polarized target (see eq. (\[etatrans\])). Note that in massless approximation (for $m\rightarrow 0$) the BH cross section exactly coincides with results of [@BKM2002]. The following equations relating our notation to the notation of ref. [@BKM2002] are valid: $u={\cal P}_2Q^2$, $w=-{\cal P}_1Q^2$, and (for $m\rightarrow 0$) $\lambda_{uw}=4Q^4S^2S_x^2K^2$.
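As a numerical illustration of this $\phi$-dependence (again only a sketch, with placeholder values for the masses and the kinematic point):

```python
import numpy as np

def w0_u0(S, x, Q2, t, phi, M=0.938272, m=0.000511):
    """Scalar products w_0 = 2 k k_1 and u_0 = 2 k k_2 as functions of the
    azimuthal angle phi, following the formulae above (GeV^2 units)."""
    Sx = Q2 / x
    X = S - Sx
    Sp = S + X
    lam_Y = Sx**2 + 4.0 * M**2 * Q2
    W2 = Sx - Q2 + M**2
    t2 = -((Sx - Q2) * (Sx + np.sqrt(lam_Y)) + 2.0 * M**2 * Q2) / (2.0 * W2)
    t1 = -((Sx - Q2) * (Sx - np.sqrt(lam_Y)) + 2.0 * M**2 * Q2) / (2.0 * W2)
    lam_uw = 4.0 * Q2 * W2 * (S * X - M**2 * Q2 - m**2 * lam_Y) * (t - t1) * (t2 - t)
    w0 = (-0.5 * (Q2 + t)
          + Sp * (Sx * (Q2 - t) + 2.0 * t * Q2) / (2.0 * lam_Y)
          + np.sqrt(lam_uw) * np.cos(phi) / lam_Y)
    return w0, w0 + Q2 + t

print(w0_u0(S=10.8, x=0.35, Q2=2.3, t=-0.3, phi=np.pi / 3))
```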
Explicit expressions for the scalar products of momenta with target polarization are $$\begin{aligned}
\eta k_1&=&-{SS_x+2M^2Q^2 \over 2M\sqrt{\lambda_Y}},
\nonumber \\
\eta k_2&=&-{XS_x-2M^2Q^2 \over 2M\sqrt{\lambda_Y}},
\nonumber \\
\eta p'&=&-{-tS_x+2M^2(Q^2-t) \over 2M\sqrt{\lambda_Y}}\end{aligned}$$ for longitudinal part of proton polarization vector (i.e., $\boldsymbol \eta$ $|| \bf q$) and $$\begin{aligned}
\label{etatrans}
\eta k_1&=&-\sqrt{\lambda_{SX} \over \lambda_Y}\cos (\varphi +\phi ),
\nonumber \\
\eta k_2&=&-\sqrt{\lambda_{SX} \over \lambda_Y}\cos (\varphi +\phi ),
\nonumber \\
\eta p'&=&-{Q^2SS_xK \over \sqrt{\lambda_{SX}\lambda_Y}} \cos \varphi\end{aligned}$$ for its transverse part (i.e., ${\boldsymbol \eta } \perp {\bf q}$). Here $\lambda_{SX}=SXQ^2-M^2Q^4$ and $\varphi$ is the angle between polarization and production planes, i.e., planes defined as $({\bf q},{ {\boldsymbol \eta }_T})$ and $({\bf q},{\bf p'})$). In the general case ($\eta=(0,\eta_x,\eta_y,\eta_z)$) the scalar products are $$\begin{aligned}
\label{etaxyz}
\eta k_1&=&-\sqrt{\lambda_{SX} \over \lambda_Y}\;\eta_x-{SS_x+2M^2Q^2 \over 2M\sqrt{\lambda_Y}}\;\eta_z,
\nonumber \\
%\\\label{etaxyz2}
\eta k_2&=&-\sqrt{\lambda_{SX} \over \lambda_Y}\;\eta_x-{XS_x-2M^2Q^2 \over 2M\sqrt{\lambda_Y}}\;\eta_z,
\nonumber \\
%\\\label{etaxyz3}
\eta p'&=&-{Q^2SS_xK \over \sqrt{\lambda_{SX}\lambda_Y}} (\eta_x\cos\phi+\eta_y\sin\phi)
\nonumber \\&&\quad
-{-tS_x+2M^2(Q^2-t) \over 2M\sqrt{\lambda_Y}}\;\eta_z.\end{aligned}$$
The cross section $\sigma_{BH}$ is defined as the 4-dimensional cross section in eq. (\[dGamma\]). This means that the integration over $\varphi$ is assumed to be performed, resulting in an additional factor of $2\pi$; this is possible because of the corresponding symmetry of the process for an unpolarized or longitudinally polarized target. The cross section for a transversely polarized target explicitly depends on $\varphi$ because of (\[etatrans\]). Thus, in the case of a transversely polarized target, $\sigma$ denotes the 5-dimensional cross section with the additional factor $d\varphi/2\pi$.
\[polar\]Angular structure of the BH cross section
--------------------------------------------------
The azimuthal structure of the BH cross section is often of interest both theoretically and experimentally. The dependence of the BH cross section (\[dGamma\]) on the angle $\phi$ appears through $u_0$ and $w_0$ in the numerator and denominator of the BH cross section and through the scalar products $\eta k_1$ and $\eta k_2$ for a transversely polarized proton (the angle $\varphi$ is assumed to be fixed, i.e., $\phi$-independent). The unpolarized and spin-dependent parts of the BH cross section can be presented in the form: $$\begin{aligned}
\label{scc0}
\sigma_{BH}^{unp}&=&\frac{f}{{\cal P}_1{\cal P}_2}(c_{0,unp}+c_{1,unp}\cos\phi+c_{2,unp}\cos 2\phi),
\nonumber \\
\sigma_{BH}^{LP}&=&\frac{f}{{\cal P}_1{\cal P}_2}(c_{0,LP}+c_{1,LP}\cos\phi),
\nonumber \\\label{ccctp}
\sigma_{BH}^{TP}&=&\frac{f}{{\cal P}_1{\cal P}_2}(c_{0,TP}+c_{1,TP}\cos\phi+s_{1,TP}\sin\phi),\end{aligned}$$ where $f=\alpha^3 S_x^3/(8\pi x^3t\lambda_Y^{5/2})$. The Fourier coefficients are expressed as $$\begin{aligned}
\label{cc0}
c_0&=&\frac{1}{2\pi f}\int_0^{2\pi}d\phi {\cal P}_1{\cal P}_2\;\;\sigma_{BH},
\nonumber \\
c_1&=&\frac{1}{\pi f}\int_0^{2\pi}d\phi \cos\phi\;\;{\cal P}_1{\cal P}_2\;\;\sigma_{BH},
\nonumber \\
c_2&=&\frac{1}{\pi f}\int_0^{2\pi}d\phi \cos2\phi\;\;{\cal P}_1{\cal P}_2\;\;\sigma_{BH},
\nonumber \\
s_1&=&\frac{1}{\pi f}\int_0^{2\pi}d\phi \sin\phi\;\;{\cal P}_1{\cal P}_2\;\;\sigma_{BH} \label{ss1}.\end{aligned}$$ Only the terms $T_1-T_4$ contribute to eq. (\[cc0\]), while the mass corrections represented by ${\hat T}_1-{\hat T}_4$ can be neglected. This is because the integration over $\phi$ in (\[cc0\]) is performed with the weights ${\cal P}_1{\cal P}_2$, which reduce the singularity level in these terms, resulting in their vanishing contribution to (\[cc0\]) in the massless approximation. The Fourier coefficients (\[cc0\]) are defined in exactly the same way as those given in eqs. (35-42) of [@BKM2002]. Our analytic calculation of the Fourier coefficients for the unpolarized, longitudinally and transversely polarized cross sections and their subsequent analytical comparison with the expressions of [@BKM2002] show that both sets of formulae are identical. Therefore, we do not show the explicit expressions for the Fourier coefficients here. Note, however, that in the transversely polarized case the Fourier coefficients still depend on $\varphi$ (defined after eq. (\[etatrans\])), which must be assumed to be fixed for eqs. (\[ccctp\]) and (\[cc0\]) to be valid. Alternatively, one can assume that the angle between the scattering and polarization planes, $\bar\varphi=\phi+\varphi$, is fixed. In this case the $\varphi$-dependence for the transversely polarized target needs to be moved out of the expressions for the Fourier coefficients in Eq. (\[ccctp\]), and that equation needs to be rewritten as $$\begin{aligned}
\label{barc}
&&\sigma_{BH}^{TP}=\frac{f}{{\cal P}_1{\cal P}_2}({c}^\prime_{0,TP}\cos\varphi+{c}^\prime_{1,TP}\cos\varphi\cos\phi
\\&&\quad
+{s}^\prime_{1,TP}\sin\varphi\sin\phi)=\frac{f}{2{\cal P}_1{\cal P}_2}(
({c}^\prime_{1,TP}-{s}^\prime_{1,TP})\cos{\bar\varphi}
\nonumber\\&&\quad
+2{c}^\prime_{0,TP}\cos(\phi-\bar\varphi)
+({c}^\prime_{1,TP}+{s}^\prime_{1,TP})\cos(2\phi-\bar\varphi)
)=\nonumber
\\&&\quad
=\frac{f}{{\cal P}_1{\cal P}_2}({\bar c}^{}_{0,TP}\cos{\bar\varphi}+{\bar c}^{}_{1,TP}\cos(\phi-\bar\varphi)
\nonumber
\\&&\qquad \qquad \qquad \qquad \qquad\qquad \qquad
+{\bar c}^{}_{2,TP}\cos(2\phi-\bar\varphi)).\nonumber \end{aligned}$$
Important research questions are what the magnitude of the RC to the Fourier coefficients is, and whether the RC can generate new harmonics (e.g., involving $\cos 3\phi$ or $\sin 2\phi$) that vanish at the level of the BH cross section.
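These questions can be addressed numerically; the sketch below projects a given (weighted) $\phi$-dependent cross section onto its Fourier harmonics in the spirit of eq. (\[cc0\]). The weighting by ${\cal P}_1{\cal P}_2/f$ is assumed to be included in the user-supplied function, and the toy distribution at the end is purely illustrative.

```python
import numpy as np

def fourier_coefficients(sigma_weighted, n_max=3, n_phi=720):
    """Harmonics c_n, s_n of a weighted cross section sigma_weighted(phi),
    obtained by numerical projection onto cos(n phi) and sin(n phi)."""
    phi = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
    vals = np.array([sigma_weighted(p) for p in phi])
    dphi = 2.0 * np.pi / n_phi
    c = {0: np.sum(vals) * dphi / (2.0 * np.pi)}
    s = {}
    for n in range(1, n_max + 1):
        c[n] = np.sum(vals * np.cos(n * phi)) * dphi / np.pi
        s[n] = np.sum(vals * np.sin(n * phi)) * dphi / np.pi
    return c, s

# toy distribution with c_0 = 1, c_1 = 0.3, s_1 = 0.1 and no higher harmonics
c, s = fourier_coefficients(lambda p: 1.0 + 0.3 * np.cos(p) + 0.1 * np.sin(p))
```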
\[RC\]RC cross section
======================
The cross section of two photon emission, i.e., the process $$\label{twogammaprocess}
e(k_1)+p(p)\longrightarrow e'(k_2)+p'(p')+\gamma(\kappa_1)+\gamma(\kappa_2),$$ is $$\begin{aligned}
d\sigma&=&\frac{1}{4S} \biggl(\sum_{i=1}^6{\cal M}_i\biggr)^2 d\Gamma,\end{aligned}$$ where additional factor 2 in the denominator is because there are two identical particles (photons) in the final state. Phase space is parametrized as: $$\begin{aligned}
d\Gamma&=&\frac 1{(2\pi )^8}
\frac{d^3k_2}{2E_2}
\frac{d^3p'}{2p'_0}
\frac{d^3\kappa_1}{2\omega_1}
\frac{d^3\kappa_2}{2\omega_2}
\nonumber \\&&\times
\delta^4(k_1+p-k_2-p'-\kappa_1-\kappa_2).\end{aligned}$$
The six matrix elements of the process with emission of an additional photon, corresponding to the graphs in Figure \[Twoggraphs\], are denoted ${\cal M}_{1-6}=e^4t^{-1}J_\mu^h J_{1-6,\mu}$. The quantities $J_{1-6,\mu}$, proportional to the leptonic currents, are: $$\begin{aligned}
J_{1\mu}&=&
{\bar u}_2
\gamma_\mu
\frac{{\hat k}_1-{\hat \kappa}+m}{-2\kappa k_1+V^2}
{\hat \epsilon}_2
\frac{{\hat k}_1-{\hat \kappa}_1+m}{-2k_1\kappa_1}
{\hat \epsilon}_1
u_1,
\nonumber \\
J_{2\mu}&=&
{\bar u}_2
\gamma_\mu
\frac{{\hat k}_1-{\hat \kappa}+m}{-2\kappa k_1+V^2}
{\hat \epsilon}_1
\frac{{\hat k}_1-{\hat \kappa}_2+m}
{-2k_1\kappa_2}{\hat \epsilon}_2
u_1,
\nonumber \\
J_{3\mu}&=&
{\bar u}_2
{\hat \epsilon}_2
\frac{{\hat k}_2+{\hat \kappa_2}+m}{2k_2\kappa_2}
{\hat \epsilon}_1
\frac{{\hat k}_2+{\hat \kappa }+m}{2\kappa k_2+V^2}
\gamma_\mu
u_1,
\nonumber \\
J_{4\mu}&=&
{\bar u}_2
{\hat \epsilon}_1
\frac{{\hat k}_2+{\hat \kappa_1}+m}{2k_2\kappa_1}
{\hat \epsilon}_2
\frac{{\hat k}_2+{\hat \kappa }+m}{2\kappa k_2+V^2}
\gamma_\mu
u_1,
\nonumber \\
J_{5\mu}&=&
{\bar u}_2
{\hat \epsilon}_1
\frac{{\hat k}_2+{\hat \kappa_1}+m}{2k_2\kappa_1}
\gamma_\mu
\frac{{\hat k}_1-{\hat \kappa}_2+m}{-2k_1\kappa_2}
{\hat \epsilon}_2
u_1,
\nonumber \\
J_{6\mu}&=&
{\bar u}_2
{\hat \epsilon}_2
\frac{{\hat k}_2+{\hat \kappa_2}+m}{2k_2\kappa_2}
\gamma_\mu
\frac{{\hat k}_1-{\hat \kappa}_1+m}{-2k_1\kappa_1}
{\hat \epsilon}_1
u_1,\end{aligned}$$ where $V^2=\kappa^2=(\kappa_1+\kappa_2)^2$.
Matrix elements in leading approximation {#matrixel}
----------------------------------------
There are four kinematical regions contributing to the cross section in the leading approximation: one of the photons is observed while the other lies in one of the so-called $s$- or $p$-peaks. For the $s$-peak ($p$-peak) the additional unobserved photon is emitted in the direction of the initial (final) lepton. Therefore, $$\begin{aligned}
\bigr(\sum_{i=1}^6{\cal M}_i\bigr)^2 ={\cal M}_{1s}^2+{\cal M}_{1p}^2+{\cal M}_{2s}^2+{\cal M}_{2p}^2,\end{aligned}$$ where indices correspond to the unobserved photon, e.g., $1s$ means that the photon with momentum $\kappa_1$ is unobserved and in the $s$-peak.
The squared matrix element ${\cal M}_{1s}^2$ in the leading approximation is calculated assuming that the momentum $\kappa_1$ of the unobserved photon is approximated as $$\label{sappro}
\kappa_1=(1-z_1)k_1.$$ However, this approximation has to be applied carefully, after analyzing the structure of the poles, i.e., the powers of $k_1\kappa_1$ in the denominators. Only terms with the first-order pole ($1/k_1\kappa_1$) contribute to the cross section in the leading approximation. The second-order poles appear in the form of $m^2/(k_1\kappa_1)^2$ and do not contain the leading log after integration and taking the limit $m\rightarrow 0$. Only $J_{1\mu}$ and $J_{6\mu}$ have the pole, $$\begin{aligned}
J_{1\mu}&\approx &\frac{k_1\epsilon_1}{2\;\; k_1\kappa_2\;\; k_1\kappa_1}
{\bar u}_2 \gamma_\mu (z_1 {\hat k}_1-{\hat \kappa}_2){\hat \epsilon}_2
u_1
\nonumber \\\label{J6app}
J_{6\mu}&\approx &\frac{z_1\;\;k_1\epsilon_1}{2\;\; k_1\kappa_2\;\; k_1\kappa_1}
{\bar u}_2 {\hat \epsilon}_2({\hat k}_2+{\hat \kappa}_2)\gamma_\mu
u_1 ,\end{aligned}$$ while $J_{2\mu}$, $J_{3\mu}$, $J_{4\mu}$, and $J_{5\mu}$ do not. This means that the interference term $(J_{1\mu}+J_{6\mu})(J_{2\nu}+J_{3\nu}+J_{4\nu}+J_{5\nu})^{\dagger}$ has the pole and therefore contributes in the leading log approximation, and that $J_{1\mu}+J_{6\mu}$ squared can have a pole of the second order.
Calculating the interference, the equations (\[J6app\]) can be applied, and the substitutions (\[sappro\]) and $m=0$ can be used everywhere except in $k_1\kappa_1$ in the denominator. Both $J_{1\mu}$ and $J_{6\mu}$ are proportional to $k_1\epsilon_1$. Therefore, to calculate the interference one needs to calculate $\sum J_{i\mu} k_1\epsilon_1$ ($i=2,\dots ,5$) by averaging over unobserved photon polarization states. This results in $$\begin{aligned}
\sum k_1\epsilon_1 (J_{3\mu}+J_{4\mu}) &= &
-\frac{{\bar u}_2 {\hat \epsilon}_2({\hat k}_2+{\hat \kappa}_2)\gamma_\mu
u_1}{2(1-z_1)\;k_2\kappa_2},
\nonumber \\
\sum k_1\epsilon_1 (J_{2\mu}+J_{5\mu}) &= &
\frac{
{\bar u}_2 \gamma_\mu (z_1 {\hat k}_1-{\hat \kappa}_2){\hat \epsilon}_2
u_1}{2\; k_1\kappa_2\;z_1(1-z_1)}. \end{aligned}$$ and therefore $$\begin{aligned}
&&(J_{1\mu}+J_{6\mu})(J_{2\nu}+J_{3\nu}+J_{4\nu}+J_{5\nu})^{\dagger}+h.c.=
\nonumber \\&&
=J^{BH}_\mu(z_1k_1,k_2)(J^{BH}_\nu(z_1k_1,k_2))^\dagger\frac{2}{k_1\kappa_1\;(1-z_1)}.\end{aligned}$$
Calculating the term with $J_{1\mu}+J_{6\mu}$ squared, the poles have to be extracted in the form of $1/k_1\kappa_1$ and $m^2/(k_1\kappa_1)^2$, and only then the substitutions (\[sappro\]) and $m=0$ can be used everywhere except in $k_1\kappa_1$ in the denominator. This results in $$\begin{aligned}
&&(J_{1\mu }+J_{6\mu })
(J_{1\nu }+J_{6\nu })^{\dagger}|_{\kappa_1\to (1-z_1)k_1}
\nonumber \\&&\qquad
=J^{BH}_\mu(z_1k_1,k_2)(J^{BH}_\nu(z_1k_1,k_2))^\dagger\frac{1-z_1}{z_1\;k_1\kappa_1}.\end{aligned}$$ We finally have in leading approximation: $$\begin{aligned}
\label{Jspeak}
&&M_{1s}^2
=\frac{4\pi\alpha }{k_1\kappa_1}{\cal M}^2_{BH}(z_1k_1,k_2)\frac{1+z_1^2}{z_1(1-z_1)}.\end{aligned}$$
In the case when the unobserved photon is emitted parallel to the final electron, the scalar product $k_2\kappa_1$ is small. In this case it is assumed that $\kappa_1=(z_2^{-1}-1)k_2$, resulting in $$\begin{aligned}
\label{Jppeak}
&&M_{1p}^2
=\frac{4\pi\alpha }{k_2\kappa_1}{\cal M}^2_{BH}\left(k_1,\frac{k_2}{z_2}\right)\frac{1+z_2^2}{1-z_2}.\end{aligned}$$
Phase space and shifted kinematics {#phasespace}
----------------------------------
The photon four-vectors appear in denominators of (\[Jspeak\]) and (\[Jppeak\]) in the form of scalar products $k_1\kappa_1$ and $k_2\kappa_1$. Two integrals over phase space of two photons are: $$\begin{aligned}
&&\int \frac{d^3\kappa_1}{2\omega_1}\frac{d^3\kappa_2}{2\omega_2}
\frac{\delta(\Lambda-\kappa_1-\kappa_2)}{k_1\kappa_1}=
\nonumber\\&&
\qquad\qquad=\int \frac{d^3\kappa_1}{2\omega_1}
\frac{\delta(\Lambda^2-2\Lambda\kappa_1)}{k_1\kappa_1}
=\frac{\pi L}{w},
\nonumber\\\label{intL2}
&&\int \frac{d^3\kappa_1}{2\omega_1}\frac{d^3\kappa_2}{2\omega_2}
\frac{\delta(\Lambda-\kappa_1-\kappa_2)}{k_2\kappa_1}
\nonumber\\&&
\qquad\qquad=\int \frac{d^3\kappa_1}{2\omega_1}
\frac{\delta(\Lambda^2-2\Lambda\kappa_1)}{k_2\kappa_1}
=\frac{\pi L}{u},\end{aligned}$$ where $\Lambda=k_1+p-k_2-p'$, $w=2k_1\Lambda$, $u=2k_2\Lambda$. Only terms containing the large (or leading) logarithm $L$ are kept. The results (\[intL2\]) are immediately obtained by working in the center-of-mass system of the two photons (${\boldsymbol\Lambda}=0$) with the z-axis directed along ${\bf k}_1$ and ${\bf k}_2$, respectively.
The phase space of the final proton is parametrized as: $$\frac{d^3p'}{2p'_0}=\frac{\sqrt{\lambda_t}}{8M^2}dtd\phi d\cos\theta '=
\frac{dt d\phi dV^2}{4\sqrt{\lambda _Y}},$$ where $V^2=\Lambda^2$ and $\theta '$ is the angle between $\bf q$ and $\bf p'$ . The relation $$\label{VV2}
V^2=\frac{tS_x}{2M^2}+t-Q^2+
{\sqrt{\lambda_Y}\sqrt{\lambda_t}\over 2M^2}\cos\theta '$$ was used to obtain the parametrization in terms of $V^2$. Finally, the integration over $d\Gamma$ gives $$\begin{aligned}
\int \frac{d\Gamma}{k_1\kappa_1}&=&d\Gamma_0\frac{L}{8\pi^2w}dV^2,
\nonumber \\
\int \frac{d\Gamma}{k_2\kappa_1}&=&d\Gamma_0\frac{L}{8\pi^2u}dV^2. \end{aligned}$$
The squared matrix elements for the $s$- (and $p$-) peak contributions in eqs. (\[Jspeak\]) and (\[Jppeak\]) are expressed in terms of $z_1$ and $z_2$; therefore the variable $V^2$ (and $\cos \theta '$) has to be related to these variables. The equation establishing this relation is obtained from the condition in the $\delta$-function argument of the intermediate expressions in (\[intL2\]) if one uses the representation for $\kappa_1$ introduced in subsection \[matrixel\], i.e., $\kappa_1=(1-z_1)k_1$ for the $s$-peak and $\kappa_1=(z_2^{-1}-1)k_2$ for the $p$-peak. Below, for the representation of this equation and its solution, we use a generalized notation including both $z_1$ and $z_2$. The substitution $z_2=1$ ($z_1=1$) has to be used to formally extract the $s$-peak ($p$-peak) contribution. We also define the 4-vector $q_z$: $q_z=z_1k_1-k_2$ for the $s$-peak, $q_z=k_1-z_2^{-1}k_2$ for the $p$-peak, or $q_z=z_1k_1-z_2^{-1}k_2$ in the generalized notation. The meaning of these vectors is clarified in Figure \[figshifted\]. The vector $q_z$ has the meaning of the “true” transferred momentum in the case of additional photon emission. The vector lies in the plane OXZ, its projections onto the OX and OZ axes are always negative and positive, respectively, and its magnitude is always less than that of $q$. The equation establishing the relation between $z_1$, $z_2$, and $V^2$ in terms of the introduced notation reads: $$\begin{aligned}
\Lambda^2-2\Lambda (q-q_z)
&=&\frac{\sqrt{\lambda_t\lambda_{Yz}}}{2M^2}
({\cos\bar\theta}-A)=0,
\label{Eqdelta}\end{aligned}$$ where ${\cos\bar\theta}$ is the angle between ${\bf q}_z$ and $\bf p'$. It is expressed in terms of the angle between $\bf q$ and ${\bf q}_z$ (denoted by $\theta_z$) as $$\label{thetabar}
{\cos\bar\theta}=\cos\theta '\cos\theta_z-\sin\theta '\sin\theta_z\cos\phi ,$$ where sinus and cosine of $\theta_z$ and the quantity $A$ are defined by kinematics in terms of $z_{1,2}$ and measured quantities: $$\begin{aligned}
\cos\theta_z&=&{S_x(z_1S-z_2^{-1}X)+2(z_2^{-1}+z_1)M^2Q^2 \over \sqrt{\lambda_Y}\sqrt{\lambda_{Yz}}},
\nonumber \\
\sin\theta_z&=&{2(z_2^{-1}-z_1)MQ(SX-M^2Q^2)^{1/2} \over \sqrt{\lambda_Y}\sqrt{\lambda_{Yz}}} ,
\nonumber \\
A&=&-{(z_1S-z_2^{-1}X)t+2M^2(t-z_1z_2^{-1}Q^2)\over \sqrt{\lambda_t}\sqrt{\lambda_{Yz}}} ,
\nonumber \\\label{lyzz1z2}
\lambda_{Yz}&=&(z_1S-z_2^{-1}X)^2+4M^2z_1z_2^{-1}Q^2.\end{aligned}$$ Eq. (\[Eqdelta\]) has a unique solution in the kinematically allowed region: $$\begin{aligned}
\cos\theta '&=&\frac{A\cos\theta_z+\sqrt{{\cal D}_0}\sin\theta_z\cos\phi}{\cos^2\theta_z+\sin^2\theta_z\cos^2\phi},
\nonumber \\
\sin\theta '&=&\frac{\cos\theta_z\sqrt{{\cal D}_0}-A\sin\theta_z\cos\phi}{\cos^2\theta_z+\sin^2\theta_z\cos^2\phi},
\nonumber \\
\label{DD0}
{\cal D}_0&=&\cos^2\theta_z+\sin^2\theta_z\cos^2\phi-A^2.\end{aligned}$$ The direction of ${\bf q}_z$ defines new polar ($\bar\theta$) and azimuthal ($\bar\phi$) angles of the final proton and thus generates so-called shifted kinematics. The angle ($\bar\theta$) was defined in (\[thetabar\]) and the angle $\bar\phi$ is related to measured $\phi$ as $$\label{cosphiz}
\cos{\bar\phi}\sin{\bar \theta }=\cos\theta_z\sin\theta '\cos\phi+\sin\theta_z\cos\theta '$$ and $$\label{sinphiz}
{\sin{\bar\phi}}\sin{\bar \theta }=\sin\theta '\sin\phi .$$ The origin of equation (\[sinphiz\]) is clear because the projection of $\bf p'$ onto the OY axis is the same for the original and the shifted kinematics. Recall that equations (\[Eqdelta\])-(\[sinphiz\]) are used for both the $s$- and $p$-peaks: in the first case one sets $z_2=1$, and in the second case $z_1=1$.
The target polarization in shifted kinematics is calculated using the orthogonal transformation: $$\begin{aligned}
{\bar\eta}_x&=&\cos\theta_z\eta_x+\sin\theta_z\eta_z,
\nonumber \\
{\bar\eta}_y&=&\eta_y,
\nonumber \\
{\bar\eta}_z&=&-\sin\theta_z\eta_x+\cos\theta_z\eta_z.\end{aligned}$$ Thus, in the shifted kinematics the target polarization is no longer purely longitudinal or transverse; therefore, the scalar products in the shifted kinematics are calculated using eqs. (\[etaxyz\]).
The variable $V^2$ is related to $z_{1,2}$ through eq. (\[VV2\]) where $\cos\theta '$ is given by (\[DD0\]) and quantities in the R.H.S. of (\[DD0\]) depend on $z_{1,2}$ in (\[lyzz1z2\]). Tedious, but straightforward calculation gives $$\begin{aligned}
\frac{1}{w}\frac{dV^2}{dz_1}&=&-\frac{\sqrt{\lambda_Y}}{\sqrt{\lambda_{Yz}}} \frac{\sin\theta '}{\sqrt{{\cal D}_0}} ,
\nonumber \\ \label{dVV22}
\frac{1}{u}\frac{dV^2}{dz_2}&=&-\frac{\sqrt{\lambda_Y}}
{z_2^2\sqrt{\lambda_{Yz}}}
\frac{\sin\theta '}{\sqrt{{\cal D}_0}}.\end{aligned}$$ The R.H.S. of equations (\[sinphiz\]) and (\[dVV22\]) are evaluated using the respective peak kinematics.
The equation for the minimal value of $z_1$ (denoted by $z_{1}^m$) allowed by kinematics is $\cos\theta_z=A$. It follows from (\[VV2\]) that $V^2_{max}=tS_x/2M^2+t-Q^2+\sqrt{\lambda_Y}\sqrt{\lambda_t}/2M^2$ (which corresponds to $\cos\theta '=1$). The solution is $$z_{1}^m={Xt-2M^2t+\xi(XS_x-2M^2Q^2) \over St-2M^2Q^2+\xi(SS_x+2M^2Q^2)},$$ where $\xi^2=\lambda_t/\lambda_Y$. Similarly, for the kinematics of the $p$-peak we obtain $$z_{2}^m={Xt+2M^2Q^2+\xi(XS_x-2M^2Q^2) \over St+2M^2t+\xi(SS_x+2M^2Q^2)}.$$
Neither $z_{1}^m$ nor $z_{2}^m$ depends on $\phi$.
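For completeness, a minimal sketch of these kinematical limits in the same (illustrative) notation:

```python
import numpy as np

def z_minima(S, X, Q2, t, M, lam_Y, lam_t):
    """Lower limits z_1^m and z_2^m from the equations above; xi^2 = lambda_t/lambda_Y."""
    Sx = S - X                                   # assumed: S_x = S - X
    xi = np.sqrt(lam_t / lam_Y)
    z1m = (X * t - 2 * M**2 * t + xi * (X * Sx - 2 * M**2 * Q2)) \
        / (S * t - 2 * M**2 * Q2 + xi * (S * Sx + 2 * M**2 * Q2))
    z2m = (X * t + 2 * M**2 * Q2 + xi * (X * Sx - 2 * M**2 * Q2)) \
        / (S * t + 2 * M**2 * t + xi * (S * Sx + 2 * M**2 * Q2))
    return z1m, z2m
```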
The lowest order RC to BH cross section
---------------------------------------
[Figure \[VVgraphs\]: diagrams **a)**–**j)**; panels *a*–*h* show the loop contributions and panels *i*, *j* the vacuum polarization by leptons and hadrons.]
Combining the results obtained in Sections \[matrixel\] and \[phasespace\], we find the cross section of two-photon emission: $$\begin{aligned}
&&\sigma _{s}(S,x,Q^2,t,\phi)=\frac{\alpha}{2\pi }L \times
\nonumber\\ && \qquad
\int\limits_{z_{1}^m}^1 dz_1\frac {1+z_1^2}{1-z_1}
{\sin\theta_s ' \over {{\cal D}_{0s}^{1/2}}}
\left(\frac{ x_s}{x}\right)^2\sigma _{BH}(z_1S,x_s,z_1Q^2,t,\bar\phi_s),
\nonumber\\&&\label{rvkladp}
\sigma _{p}(s,x,Q^2,t,\phi)=\frac{\alpha}{2\pi }L \times
\\&&
\int\limits_{z_{2}^m}^1 dz_2\frac {1+z_2^2}{z_2(1-z_2)}
{\sin\theta_p ' \over {{\cal D}_{0p}^{1/2}}}
\left( \frac{ x_p}{x}\right )^2\sigma _{BH}(S,x_p,z_2^{-1}Q^2,t,{\bar \phi}_p),\nonumber\end{aligned}$$ where $x_s=z_1Q^2/(z_1S-X)$ and $x_p=Q^2/(z_2S-X)$ are the Bjorken $x$ in the shifted kinematics; $\sin\theta '$ and $\bar\phi$ are given by (\[DD0\]) and (\[sinphiz\]), where the subscript explicitly indicates the type of kinematics for which these quantities have to be calculated.
The integrals in (\[rvkladp\]) are divergent at the upper integration limit; therefore they are regularized using a parameter $\omega_{min}$ that separates the integration region into parts corresponding to the emission of soft and hard photons. In terms of $z_{1,2}$ the regulating parameter $\Delta$ is $\Delta=\Delta_1=2M\omega_{min}/S$ for the $s$-peak and $\Delta=\Delta_2=2M\omega_{min}/X$ for the $p$-peak.
The contributions of loops (Fig. \[VVgraphs\][*a-h*]{}) and soft photon emission are known [@ByKuTo2008PRC]. Their sum is proportional to the BH cross section $$\sigma_V=\frac{\alpha}{\pi}\biggl(\log\frac{4M^2\omega_{min}^2}{SX}+\frac{3}{2}\biggr)
L\sigma_{BH}$$ and can be presented as $$\label{vvklad}
\sigma_V=-\frac{\alpha L}{2\pi}\sigma_{BH}\biggl(
\int\limits_{0}^{1-\Delta_1} dz_1 \frac{1+z_1^2}{1-z_1}
+\int\limits_{0}^{1-\Delta_2} dz_2 \frac{1+z_2^2}{1-z_2}\biggr).$$ The sum of (\[rvkladp\]) and (\[vvklad\]) is infrared free and the regularization can be removed: $\Delta_{1,2}=0$. The result for the observed cross section is:
$$\begin{aligned}
&&\sigma _{obs}^{1-loop}(S,x,Q^2,t,\phi)=(1 + 2\Pi (t))\sigma _{BH}(S,x,Q^2,t,\phi)
+ \frac{\alpha}{2\pi }L
\Biggl [\;
\nonumber
\\[0.1cm]&&
\quad\int\limits_{0}^1 dz_1\left (\frac {1+z_1^2}{1-z_1} \right )
\left(
{\sin\theta_s ' \over {{\cal D}_{0s}^{1/2}} }
\theta(z_1-z_1^m) \left(\frac{x_s}{x}\right )^2\sigma _{BH}(z_1S,x_s,z_1Q^2,t,{\bar\phi}_s)
-
\sigma _{BH}(S,x,Q^2,t,\phi)\right)
\nonumber\\[0.1cm]&& +
\int\limits_{0}^1 dz_2\left (\frac {1+z_2^2}{1-z_2} \right )
\left(
{\sin\theta_p ' \over {{\cal D}_{0p}^{1/2}}}
\theta(z_2-z_2^m)\frac 1{z_2}\left (\frac{x_p}{x}\right )^2
\sigma _{BH}(S,x_p,z_2^{-1}Q^2,t,{\bar\phi}_p)-
\sigma _{BH}(S,x,Q^2,t,\phi)\right)
\Biggr ].
\label{1l}\end{aligned}$$
Here $ \Pi (t)=\alpha/(2\pi)\delta_{vac}$ and $\delta_{vac}$ is the contribution of vacuum polarization by leptons and hadrons (Fig. \[VVgraphs\][*i,j*]{}) calculated as in [@AKSh1994] (see eq. (21) and discussion before eq. (20)).
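Numerically, eq. (\[1l\]) relies on the cancellation between the shifted-kinematics term and the subtracted Born term at $z_{1,2}\to 1$, so the integrands are regular at the upper limit. A schematic quadrature of the $s$-peak term could look as follows; `sigma_shifted`, standing for the whole shifted-kinematics factor in the first bracket, is a placeholder callable, and all names are ours:

```python
import numpy as np
from scipy import integrate

def s_peak_correction(sigma_shifted, sigma_bh, z1m, L, alpha=1.0/137.035999):
    """Subtracted s-peak integral of eq. (1l), schematic.

    sigma_shifted(z1) must return the full factor
    (sin(theta_s')/sqrt(D0s)) * (x_s/x)^2 * sigma_BH(z1*S, x_s, z1*Q^2, t, bar_phi_s),
    and sigma_bh is the BH cross section at the measured kinematics.
    """
    def integrand(z1):
        radiator = (1.0 + z1**2) / (1.0 - z1)
        hard = sigma_shifted(z1) if z1 > z1m else 0.0   # theta(z1 - z1^m)
        return radiator * (hard - sigma_bh)
    val, _ = integrate.quad(integrand, 0.0, 1.0, points=[z1m], limit=200)
    return alpha / (2.0 * np.pi) * L * val
```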
The behavior of the cross section for $t$ close to the kinematical bounds (i.e., in the region where $t\sim t_1$ or $t\sim t_2$) deserves special attention. The integrals in (\[1l\]) become infinite when $t \rightarrow t_1$ or $t \rightarrow t_2$. In this limit $z_1^m=1$ and $z_2^m=1$. To extract the divergence, the parts of the integrals in (\[1l\]) from 0 to $z_1^m$ or $z_2^m$ need to be calculated analytically, resulting in: $$\sigma _{obs}^{1-loop}=\Bigl( 1 + \frac{\alpha}{\pi}\bigl( \delta_{vac}+\delta_{inf}+\delta_{fin}\bigr)\Bigr) \sigma _{BH} + \sigma_{F},$$ where $\sigma_{F}$ is the non-divergent contribution of the remaining integrals (i.e., as in (\[1l\]), but with lower limits $z_1^m$ and $z_2^m$). The correction terms $$\begin{aligned}
\delta_{fin}&=&\frac{L}{4}\bigl(z_1^m(2+z_1^m)+z_2^m(2+z_2^m)\bigr), \\
\delta_{inf}&=&L\bigl( \log(1-z_1^m)+\log(1-z_2^m)\bigr) \nonumber\end{aligned}$$ represent the finite and infinite parts of the results of the analytical integration. The origin of the divergence is well known [@YennieFrautschiSuura1961]. The divergence is canceled by taking into account multiple soft photon emission. We follow the so-called exponentiation procedure suggested in [@Shumeiko]: $$\begin{aligned}
\label{exponentiation}
&&\Bigl( 1 + \frac{\alpha}{\pi}\bigl( \delta_{vac}+\delta_{inf}+\delta_{fin}\bigr)\Bigr)
\rightarrow \\
&&\qquad \qquad \exp\bigl(\frac{\alpha}{\pi} \delta_{inf}\bigr)\Bigl( 1 + \frac{\alpha}{\pi}\bigl( \delta_{vac}+\delta_{fin}\bigr)\Bigr). \nonumber\end{aligned}$$ After this procedure the observed cross section vanishes at the kinematical bounds on $t$.
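The replacement (\[exponentiation\]) exponentiates only the infrared-sensitive part of the correction, so that the factor (and hence the observed cross section) vanishes as $z_{1,2}^m\to 1$. A minimal sketch, with the $\delta$'s computed as above (names ours):

```python
import numpy as np

def exponentiated_factor(delta_vac, delta_fin, delta_inf, alpha=1.0/137.035999):
    """Soft-photon exponentiation, eq. (exponentiation).

    delta_inf = L*(log(1-z1m) + log(1-z2m)) -> -infinity at the t-bounds,
    so the returned factor (and the observed cross section) vanishes there.
    """
    a = alpha / np.pi
    return np.exp(a * delta_inf) * (1.0 + a * (delta_vac + delta_fin))
```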
Higher order corrections
------------------------
In the previous section we found the RC to the BH cross section in the leading approximation, induced by the lepton leg, in the lowest order in $\alpha$. The generalization of eq. (\[1l\]) to higher orders in $\alpha$ using the electron structure function method suggested in [@KurFad85] (see also [@ESFRAD; @ESF]) has the form: $$\begin{aligned}
&&\sigma _{obs}(S,x,Q^2,t,\phi)=
\int\limits_{z_1^m}^1 dz_1
\int\limits_{z_{2,1}^m}^1 \frac{dz_2}{z_2}
D(z_1,Q^2)D(z_2,Q^2)
\nonumber \\&&\qquad\times
\left (\frac {x_{sp}}x\right )^2
{\sin \theta ' \over {{\cal D}_{0}^{1/2}}}
\hat{\sigma } _{BH}(z_1S,x_{sp},z_2^{-1}z_1Q^2,t,{\bar\phi}),
\label{hl}\end{aligned}$$ where ${\hat \sigma} _{BH}=
\sigma _{BH}[\alpha^3 \to \alpha^3/(1-\Pi(t))^2]$, $x_{sp}=z_1 Q^2/(z_1 z_2S-X)$ and $$z_{2,1}^m={Xt+2z_1M^2Q^2+\xi(XS_x-2M^2Q^2) \over z_1 St+2M^2t+z_1\xi(SS_x+2M^2Q^2)}.$$
The electron structure function $D(z,L)$ includes contributions due to photon emission and pair production $$\label{4, unpol. EST}
D = D^{\gamma} + D^{e^+e^-}_N + D^{e^+e^-}_S \ ,$$ where $D^{\gamma}$ describes photon radiation and $D^{e^+e^-}_N$ and $D^{e^+e^-}_S$ describe pair production in the non-singlet (single-photon mechanism) and singlet (double-photon mechanism) channels, respectively. The explicit expressions for $D(z,L)$ are given by eqs. (5-7) of ref. [@ESFRAD].
Notice that equation (\[1l\]) can be reproduced by expanding (\[hl\]) in $\alpha$ and keeping only the zeroth and first orders.
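For orientation, the structure of the double convolution (\[hl\]) is sketched below. In practice the exact $D(z,L)$ of eqs. (5-7) of [@ESFRAD] should be used; here a common smoothed leading-log photonic form is inserted only as a stand-in, and `sigma_hat`, `z2m_of_z1`, and all other names are ours:

```python
import numpy as np
from scipy import integrate

def D_gamma(z, L, alpha=1.0/137.035999):
    """Smoothed leading-log photonic part of the electron structure function.
    Stand-in only: the full D of eqs. (5-7) of [ESFRAD] also has pair terms."""
    b = 2.0 * alpha / np.pi * (L - 1.0)
    return 0.5 * b * (1.0 - z)**(0.5 * b - 1.0) * (1.0 + 3.0 * b / 8.0) \
        - 0.25 * b * (1.0 + z)

def sigma_observed(sigma_hat, z1m, z2m_of_z1, L):
    """Schematic double convolution of eq. (hl); sigma_hat(z1, z2) must return
    (x_sp/x)^2 sin(theta')/sqrt(D0) * hat-sigma_BH at the shifted kinematics,
    and z2m_of_z1(z1) the z1-dependent lower limit z_{2,1}^m."""
    def inner(z1):
        f = lambda z2: D_gamma(z2, L) / z2 * sigma_hat(z1, z2)
        return integrate.quad(f, z2m_of_z1(z1), 1.0, limit=200)[0]
    outer = lambda z1: D_gamma(z1, L) * inner(z1)
    return integrate.quad(outer, z1m, 1.0, limit=200)[0]
```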
Numerical estimates {#SectNumeric}
===================
The numerical analysis is designed to evaluate the RC to the cross section and to the Fourier coefficients in the kinematics of modern measurements at JLab [@Camacho_etal_2006_PRL; @Mazouz_etal_2007_PRL; @Girod_etal_2008_PRL]. The specific focus of this analysis is on i) the $t$- and $\phi$-dependences of the magnitude of the RC factor and ii) the investigation of the RC to the Fourier coefficients, both those non-vanishing and those vanishing at the level of the BH cross section.
Cross section
-------------
The $t$-distribution of the BH cross section has two sharp peaks that correspond to collinear radiation. The typical shapes of the $t$-dependence of the BH cross section with RC are represented in Figure \[RCF\]a. Figure \[RCF\]b gives the $t$-dependence of the RC factor for the given kinematical points. The plots for the spin-dependent parts look similar for both longitudinal and transverse polarizations (not shown). In this analysis the cross section integrated over $\phi$ is considered.
The analysis of the $t$-dependence presented in Figure \[RCF\] reveals three specific regions in which the shape of the RC deserves attention and further clarification: i) the region close to the bounds on $t$, where the RC factor drops rapidly; ii) the region close to the collinear peaks; and iii) the region between the peaks, where the RC factor can reach large values that can, however, be suppressed by a cut on the missing energy (also shown in Figure \[RCF\] by the line without dots).
The decrease of the RC factor in the region close to the bounds (i.e., $t\sim t_1$ or $t\sim t_2$) simply reflects the fact that the observed cross section after the exponentiation procedure (\[exponentiation\]), as well as the observed cross section (\[hl\]) that includes higher order corrections, goes to zero at these kinematical bounds.
In the region close to the $s$- and $p$-peaks, i.e., when $$t=t_s=-{Q^2 X \over S-Q^2}, \qquad
t=t_p=-{Q^2 S \over X+Q^2},$$ the RC factor slowly decreases (when $-t$ approaches the $s$-peak from the left or the $p$-peak from the right, see Figure \[RCF\]), reaches its minimum at $t=t_s$ or $t=t_p$, and then rapidly increases, reaching its maximum at $t=-Q^2$. Analysis of the integrand shows that the region around the point $\phi=\pi$ is responsible for this difference. Therefore the $\phi$-dependence of the RC factor (Figure \[RCFphidep\]) was analyzed. The RC factor typically has a flat behavior except near the point $\phi=\pi$, which corresponds to the situation when the scattering and production planes coincide. In this case the RC factor can rapidly increase. Further analysis of the integrand showed that this increase of the RC factor is due to the contribution of the second integral in (\[1l\]) when $w_0$ is very small. The second integral in (\[1l\]) describes the $p$-peak contribution of one photon, and the region of small $w_0$ corresponds to the $s$-peak of the second photon. Therefore, the large contribution comes from the two-photon emission process in which the two emitted photons are collinear to the initial and final electrons. The corresponding BH process (i.e., the one-photon emission process) is the process in which the emitted photon carries a 4-momentum equal to the sum of the momenta of the two collinear photons. This photon is not collinear, and therefore the respective BH cross section is not large. The RC factor, defined as the ratio of the observed cross section (with the large contribution of the two collinear photons) to the BH cross section (which is not large), can therefore become larger than 2, i.e., the RC to the BH cross section can be larger than the BH cross section itself. Roughly, the effect on the RC factor can be estimated as $1+\alpha L^2$ (each collinear photon produces one leading log $L$). If $L\sim 15$ then the RC factor equals 2.64.
RC and azimuthal structure of the cross section
-----------------------------------------------
The azimuthal structure of the unpolarized BH cross section and of the longitudinally and transversely polarized cross sections is represented by the Fourier coefficients defined in Section \[polar\]. There are eight non-zero Fourier coefficients: three for the unpolarized cross section, two for the longitudinally polarized one, and three for the transversely polarized one. Radiatively corrected Fourier coefficients are calculated using eqs. (\[cc0\],\[barc\]) with the observed cross section (\[hl\]) substituted for $\sigma_{BH}$. Figure \[RCFfour\] presents the results for these coefficients calculated using the BH and observed cross sections. The observed cross section was calculated with and without a kinematical cut on the maximal photon energy, $E_\gamma=0.3$ GeV. One can see from this plot that the eight coefficients are quite stable with respect to the RC. Similarly to the case of the cross section, the regions with a noticeable effect from the RC are the region of small $-t$ and the region of $t$ close to (and between) the $s$- and $p$-peaks. The results also show that using the cut on the missing energy suppresses the correction in the latter region.
In contrast to the BH cross section, the azimuthal structure of the observed cross section can be represented neither in terms of these eight coefficients nor in terms of any finite number of such coefficients. This is because of the complicated and nonlinear dependence of the observed cross section on $\phi$. Several coefficients representing the next terms in the Fourier series are presented in Figure \[RCFfour2\]. All of them are defined through $\cos (n\phi )$. The Fourier coefficients with $\sin (n\phi )$ were also investigated (all of them vanish at the level of the BH process); however, no significant contributions at the level of the observed cross section were found.
Discussion and Conclusion {#SectDiscussion}
=========================
In this paper we calculated the RC to the BH cross section in the leading approximation. Both unpolarized and polarized parts of the cross section were considered. All final formulae are presented in analytical form. Details of the calculation of the matrix element squared are given, with specific attention to the occurrence of mass terms that do not vanish in the approximation of small lepton mass. The phase space was parametrized using the notion of shifted kinematics, resulting in a compact and convenient parametrization of the two-photon phase space and in opportunities for analytic integration over the angles. The numerical analysis of the effects of the RC was focused on the RC to the cross section and to the Fourier coefficients representing the angular dependence of the BH and observed cross sections.
Analysis of the RC to the BH cross section revealed kinematical regions where the RC can exceed the BH cross section several times over. This is the region where the scattering azimuthal angle equals $\pi$. The situation when both photons are collinear (one collinear to the initial lepton and the other to the final lepton) is kinematically allowed. Since the photon in the respective BH process is not collinear (its momentum is the sum of the momenta of the two collinear photons), the BH cross section is not so large. As a result, the RC factor can be at the level of several dozen.
There are eight Fourier coefficients contributing to the BH cross section with an arbitrarily polarized target, while new coefficients appear in the Fourier expansion at the level of the observed cross sections. The contributions of these additional terms of the Fourier series can be significant. For example, as follows from the comparison of the results in Figures \[RCFfour\] and \[RCFfour2\], the effect of the “new” coefficient $c_{3,unp}$ can reach 10% of the effect of the main contribution represented by $c_{0,unp}$. This effect, however, can be suppressed by using the kinematical cut on the missing energy. Note that the experimental procedure of extracting the Fourier coefficients from data is based on fitting the observed cross section by the functions representing the angular structure of the BH cross section. The occurrence of large terms of the next orders in the Fourier expansion can result in systematic uncertainties in kinematic regions where the effects of these additional terms are noticeable.
One feature of the calculation is that the lepton mass cannot be completely eliminated from the expressions for the BH cross section (\[dGamma\]). First, the lepton mass has to be kept in the lepton propagators $w_0$ and $u_0$. Since the propagators are proportional to $E_L-p_L\cos\beta$ ($E_L$ and $p_L$ are the energy and momentum of the lepton, and $\beta$ is the angle between the momenta of the lepton and the photon), there are kinematical points where $w_0$ or $u_0$ vanish in the massless approximation, making the BH cross section infinite. These points can be excluded when the BH process is investigated experimentally. However, the RC calculation requires integration of the BH cross section over a broad kinematical region, and the singular points occur inside the integration region. Therefore the lepton mass has to be kept in the expressions for $w_0$ and $u_0$. This is the reason for the occurrence of $m^2$ in (\[lambdauw\]). Second, terms in the BH cross section containing $m^2$ in the numerator and $w_0$ or $u_0$ squared in the denominator are also infinite in the massless approximation for certain $\phi$ and result in finite (lepton-mass independent) terms after integration over $\phi$. Such terms were kept in the expression for the BH cross section (\[dGamma\]). Note that our experience in dealing with RC tells us that such terms can give an important contribution to the observed cross section (e.g., in DIS cross section measurements).
The motivation for our calculation was the lack of complete calculations of the RC performed with controlled accuracy. The one-loop correction and soft photon emission were calculated by Vanderhaeghen et al. [@Vanderhaeghen2000], where a detailed consideration of the one-loop correction was given. Box-type diagrams were evaluated in the style of ref. [@MaximonTjon2000PRC]. However, the radiative tail corresponding to photon emission processes was calculated in the approximation where the photon energy is very small compared to the lepton momenta.
Bytev, Kuraev, and Tomasi-Gustafsson [@ByKuTo2008PRC] applied the method of electron structure functions to calculate the RC due to two-photon emission in the process $e^-\mu^+$, which was chosen as a model process for DVCS. The main focus of that calculation was on the correction to the helicity-odd part of the cross section, i.e., the interference between the BH and DVCS amplitudes. The authors considered a different experimental design: they integrated over the energy fraction of the scattered electron.
The task of calculating the RC to BH is closely related to the task of calculating the RC to the radiative tail from the elastic peak, which is an important (and often dominant) contribution to the RC in DIS measurements. The radiative tail is simply the BH cross section integrated over the photonic variables, i.e., over $\phi$ and $t$. The integration over $\phi$ is performed analytically, and, because of the dependence of the cross section on form factors, the integration over $t$ is left for numerical evaluation. Programs for the RC calculation of the radiative tail, such as POLRAD 2.0 [@POLRAD20], include both the contribution of the radiative tail (corresponding to the BH cross section) and an approximate calculation of the RC to the radiative tail (corresponding to two-photon emission and loop effects) [@AISh1998; @AkKuSh2000PRDtail]. The approach to the exact calculation of the RC to the radiative tail was developed by Akhundov, Bardin, and Shumeiko [@AkBaSh1986YP]. They used the formalism of covariant extraction and cancellation of the infrared divergence and calculated the QED corrections to the elastic radiative tail for the unpolarized case. No analytical expressions representing the result of the exact calculation were published.
The formulae in this paper are presented in analytical form, providing a good starting point for more precise calculations. One further generalization can be done using the approach of [@AkBaSh1986YP] to exactly calculate the lowest order correction to the polarized BH cross section. Another direction for generalization is to apply the developed formalism to the RC for DVCS, i.e., the interference of the BH and DVCS amplitudes. The hadronic part of DVCS is known from refs. [@BKM2002; @BeMu2010PRD].
Note also that the developed formulae are obtained for a specific way of reconstructing the kinematic variables. Specifically, the leptonic and hadronic momenta are used to reconstruct the kinematics of the BH process; the kinematical variables of the photon were assumed to be unmeasured. If information about the photonic variables is involved in the reconstruction of the kinematics of the BH process, the calculation presented in this paper requires modification. A universal way to avoid multiple calculations covering all possible data analysis designs is the development of a Monte Carlo generator of the BH process together with the additional two-photon process. Any specific choice of the base set of kinematical variables can be used for this construction, including those considered in this paper.
[**Acknowledgments**]{}. The authors are grateful to Harut Avakian for interesting discussions and comments. This work was supported by DOE contract No. DE-AC05-06OR23177, under which Jefferson Science Associates, LLC operates Jefferson Lab.
|
---
abstract: 'A primary objective of the Lunar Laser Ranging (LLR) experiment is to provide precise observations of the lunar orbit that contribute to a wide range of science investigations. In particular, time series of the highly accurate measurements of the distance between the Earth and Moon provide unique information used to determine whether, in accordance with the Equivalence Principle (EP), both of these celestial bodies are falling towards the Sun at the same rate, despite their different masses, compositions, and gravitational self-energies. 35 years since their initiation, analyses of precision laser ranges to the Moon continue to provide increasingly stringent limits on any violation of the EP. Current LLR solutions give $(-1.0 \pm 1.4) \times 10^{-13}$ for any possible inequality in the ratios of the gravitational and inertial masses for the Earth and Moon, $\Delta(M_G/M_I)$. This result, in combination with laboratory experiments on the weak equivalence principle, yields a strong equivalence principle (SEP) test of $\Delta(M_G/M_I)_{\tt SEP} = (-2.0 \pm 2.0) \times 10^{-13}$. Such an accurate result allows other tests of gravitational theories. The result of the SEP test translates into a value for the corresponding SEP violation parameter $\eta$ of $(4.4 \pm 4.5)\times10^{-4}$, where $\eta = 4\beta -\gamma -3$ and both $\gamma$ and $\beta$ are parametrized post-Newtonian (PPN) parameters. Using the recent result for the parameter $\gamma$ derived from the radiometric tracking data from the Cassini mission, the PPN parameter $\beta$ (quantifying the non-linearity of gravitational superposition) is determined to be $\beta - 1 = (1.2 \pm 1.1) \times 10^{-4}$. We also present the history of the lunar laser ranging effort and describe the technique that is being used. Focusing on the tests of the EP, we discuss the existing data, and characterize the modeling and data analysis techniques. The robustness of the LLR solutions is demonstrated with several different approaches that are presented in the text. We emphasize that near-term improvements in the LLR ranging accuracy will further advance the research of relativistic gravity in the solar system, and, most notably, will continue to provide highly accurate tests of the Equivalence Principle.'
address: |
Jet Propulsion Laboratory, California Institute of Technology,\
4800 Oak Grove Drive, Pasadena, CA 91109, USA
author:
- 'JAMES G. WILLIAMS, SLAVA G. TURYSHEV, DALE H. BOGGS'
title: |
LUNAR LASER RANGING TESTS OF\
THE EQUIVALENCE PRINCIPLE WITH THE EARTH AND MOON
---
Introduction
============
The Equivalence Principle (EP) has been a focus of gravitational research for more than four hundred years. Since the time of Galileo (1564-1642) it has been known that objects of different mass and composition accelerate at identical rates in the same gravitational field. In 1602-04 through his study of inclined planes and pendulums, Galileo formulated a law of falling bodies that led to an early empirical version of the EP. However, these famous results would not be published for another 35 years. It took an additional fifty years before a theory of gravity that described these and other early gravitational experiments was published by Newton (1642-1727) in his Principia in 1687. Newton concluded on the basis of his second law that the gravitational force was proportional to the mass of the body on which it acted, and by the third law, that the gravitational force is proportional to the mass of its source.
Newton was aware that the *inertial mass* $M_I$ appearing in the second law ${\bf F} = M_I {\bf a}$, might not be the same as the *gravitational mass* $M_G$ relating force to gravitational field ${\bf F} = M_G {\bf g}$. Indeed, after rearranging the two equations above we find ${\bf a} = (M_G/M_I){\bf g}$ and thus in principle materials with different values of the ratio $(M_G/M_I)$ could accelerate at different rates in the same gravitational field. He went on to test this possibility with simple pendulums of the same length but different masses and compositions, and found no difference in their periods. On this basis Newton concluded that $(M_G/M_I)$ was constant for all matter, and by a suitable choice of units the ratio could always be set to one, i.e. $(M_G/M_I) = 1$. Bessel (1784-1846) tested this ratio more accurately, and then in a definitive 1889 experiment Eötvös was able to experimentally verify this equality of the inertial and gravitational masses to an accuracy of one part in $10^9$ (see Refs. ).
Today, almost three hundred and twenty years after Newton proposed a comprehensive approach to studying the relation between the two masses of a body, this relation still continues to be the subject of modern theoretical and experimental investigations. The question about the equality of inertial and passive gravitational masses arises in almost every theory of gravitation. Nearly one hundred years ago, in 1915, the EP became a part of the foundation of Einstein’s general theory of relativity; subsequently, many experimental efforts focused on testing the equivalence principle in the search for limits of general relativity. Thus, the early tests of the EP were further improved by Roll et al.[@Roll_etal_1964] to one part in $10^{11}$. Most recently, a University of Washington group[@Baessler_etal_1999; @Adelberger_2001] has improved upon Dicke’s verification of the EP by several orders of magnitude, reporting $M_G/M_I - 1 < 1.4 \times 10^{-13}$.
The nature of gravity is fundamental to our understanding of our solar system, the galaxy and the structure and evolution of the universe. This importance motivates various precision tests of gravity both in laboratories and in space. To date, the experimental evidence for gravitational physics is in agreement with the general theory of relativity; however, there are a number of reasons to question the validity of this theory. Despite the success of modern gauge field theories in describing the electromagnetic, weak, and strong interactions, it is still not understood how gravity should be described at the quantum level. In theories that attempt to include gravity, new long-range forces can arise in addition to the Newtonian inverse-square law. Even at the purely classical level, and assuming the validity of the equivalence principle, Einstein’s theory does not provide the most general way to establish the space-time metric. Regardless of whether the cosmological constant should be included, there are also important reasons to consider additional fields, especially scalar fields.
Although scalar fields naturally appear in the modern theories, their inclusion predicts a non-Einsteinian behavior of gravitating systems. These deviations from general relativity lead to a violation of the EP, modification of large-scale gravitational phenomena, and cast doubt upon the constancy of the “constants.” In particular, the recent work in scalar-tensor extensions of gravity that are consistent with present cosmological models[@Damour_Nordtvedt_1993a; @Damour_Nordtvedt_1993b; @Damour_etal_2002a; @Damour_etal_2002b; @Nordtvedt_2003; @Turyshev_etal_2007; @Turyshev_2008] predicts a violation of the EP at levels of $10^{-13}$ to $10^{-18}$. This prediction motivates new searches for very small deviations of relativistic gravity from general relativity and provides a robust theoretical paradigm and constructive guidance for further gravity experiments. As a result, this theoretical progress has given a new strong motivation for high precision tests of relativistic gravity and especially those searching for a possible violation of the equivalence principle. Moreover, because of the ever increasing practical significance of the general theory of relativity (i.e. its use in spacecraft navigation, time transfer, clock synchronization, standards of time, weight and length, etc) this fundamental theory must be tested to increasing accuracy.
Today Lunar Laser Ranging (LLR) is well positioned to address the challenges presented above. The installation of the cornercube retroreflectors on the lunar surface more than 35 years ago with the Apollo 11 lunar landing, initiated a unique program of lunar laser ranging tests of the EP. LLR provides a set of highly accurate distance measurements between an observatory on the Earth and a corner cube retroreflector on the Moon which is then used to determine whether, in accordance with the EP, these astronomical bodies are both falling towards the Sun at the same rate, despite their different masses and compositions. These tests of the EP with LLR were among the science goals of the Apollo project. Today this continuing legacy of the Apollo program[@Dickey_etal_1994] constitutes the longest running experiment from the Apollo era; it is also the longest on-going experiment in gravitational physics.
Analyses of laser ranges to the Moon have provided increasingly stringent limits on any violation of the EP; they also enabled accurate determinations of a number of relativistic gravity parameters. Ranges started in 1969 and have continued with a sequence of improvements for 35 years. Data of the last decade are fit with an rms residual of 2 cm. This accuracy permits an EP test for the difference in the ratio of the gravitational and inertial masses for the Earth and Moon with uncertainty of $1.4 \times 10^{-13}$ (see Refs. ). The precise LLR data contribute to many areas of fundamental and gravitational physics, lunar science, astronomy, and geophysics. With a new LLR station in progress and the possibility of new retro-reflectors on the Moon, lunar laser ranging remains on the front of gravitational physics research in the 21st century.
This paper focuses on the tests of the EP with LLR. To that extent, Section \[sec:history\] discusses the LLR history, experimental technique, and the current state of the effort. Section \[sec:ep\] is devoted to the discussion of the tests of the EP with the Moon. It also introduces various “flavors” of the EP and emphasizes the importance of the Earth and Moon as two test bodies to explore the Strong Equivalence Principle (SEP). Section \[sec:data\] describes the existing LLR data including the statistics for the stations and reflectors, observational selection effects, and distributions. Section \[sec:model\] introduces and characterizes the modeling and analysis techniques, focusing on the tests of the EP. In Section \[sec:data\_analysis\] we discuss the details of the scientific data analysis using the LLR data set for tests of the EP. We present solutions for the EP and also examine the residuals in a search for any systematic signatures. Section \[sec:derived\] focuses on the effects derived from the precision tests of the EP. Section \[sec:ememrging\_oops\] introduces the near term emerging opportunities and addresses their critical role for the future progress in the tests of the equivalence principle with lunar laser ranging. We conclude with a summary and outlook.
Lunar Laser Ranging: History and Techniques {#sec:history}
===========================================
LLR accurately measures the round-trip time of flight for a laser pulse fired from an observatory on the Earth, bounced off of a corner cube retroreflector on the Moon, and returned to the observatory. The currently available set of LLR measurements is more than 35 years long and it has become a major tool to conduct precision tests of the EP in the solar system. Notably, if the EP were to be violated this would result in an inequality of gravitational and inertial masses and thus, it would lead to the Earth and the Moon falling towards the Sun at slightly different rates, thereby distorting the lunar orbit. Thus, using the Earth and Moon as astronomical test bodies, the LLR experiment searches for an EP-violation-induced perturbation of the lunar orbit which could be detected with the available ranges.
In this Section we discuss the history and current state for this unique experimental technique used to investigate relativistic gravity in the solar system.
Lunar Laser Ranging History {#sec:early_history}
---------------------------
The idea of using the orbit of the Moon to test foundations of general relativity belongs to R. H. Dicke, who in early 1950s suggested using powerful, pulsed searchlights on the Earth to illuminate corner retroreflectors on the Moon or a spacecraft.[@Alley_1972; @Bender_etal_1973] The initial proposal was similar to what today is known as astrometric optical navigation which establishes an accurate trajectory of a spacecraft by photographing its position against the stellar background. The progress in quantum optics that resulted in the invention of the laser introduced the possibility of ranging in early 1960s. Lasers—with their spatial coherence, narrow spectral emission, small beam divergence, high power, and well-defined spatial modes—are highly useful for many space applications. Precision laser ranging is an excellent example of such a practical use. The technique of laser Q-switching enabled laser pulses of only a few nanoseconds in length, which allowed highly accurate optical laser ranging.
Initially the methods of laser ranging to the Moon were analogous to radar ranging, with laser pulses bounced off of the lunar surface. A number of these early lunar laser ranging experiments were performed in the early 1960’s, both at the Massachusetts Institute of Technology and in the former Soviet Union at the Crimean astrophysics observatory.[@Abalakin-Kokurin-1981; @Kokurin_2003] However, these lunar surface ranging experiments were significantly affected by the rough lunar topography illuminated by the laser beam. To overcome this difficulty, deployment of a compact corner retroreflector package on the lunar surface was proposed as a part of the unmanned, soft-landing Surveyor missions, a proposal that was never realized.[@Alley_1972] It was in the late 1960’s, with the beginning of the NASA Apollo missions, that the concept of laser ranging to a lunar corner-cube retroreflector array became a reality.
The scientific potential of lunar laser ranging led to the placement of retroreflector arrays on the lunar surface by the Apollo astronauts and the unmanned Soviet Luna missions to the Moon. The first deployment of such a package on the lunar surface took place during the Apollo 11 mission (Figure \[fig:1\]) in the summer of 1969 and LLR became a reality[@Bender_etal_1973]. Additional retroreflector packages were set up on the lunar surface by the Apollo 14 and 15 astronauts (Figure \[fig:2\]). Two French-built retroreflector arrays were on the Lunokhod 1 and 2 rovers placed on the Moon by the Soviet Luna 17 and Luna 21 missions, respectively (Figure \[fig:3\]a). Figure \[fig:3\]b shows the LLR reflector sites on the Moon.
The first successful lunar laser ranges to the Apollo 11 retroreflector were made with the 3.1 m telescope at Lick Observatory in northern California[^1].[@Faller_etal_1969] The ranging system at Lick was designed solely for quick acquisition and confirmation, rather than for an extended program. Ranges started at the McDonald Observatory in 1969 shortly after the Apollo 11 mission, while in the Soviet Union a sequence of laser ranges was made from the Crimean astrophysical observatory.[@Abalakin-Kokurin-1981; @Kokurin_2003] A lunar laser ranging program has been carried out in Australia at the Orroral Observatory[^2]. Other lunar laser range detections were reported by the Air Force Cambridge Research Laboratories Lunar Ranging Observatory in Arizona[@AFCRL_1969], the Pic du Midi Observatory in France[@Calame_etal_1970], and the Tokyo Astronomical Observatory[@Kozai_1972].
While some early efforts were brief and demonstrated capability, most of the scientific results came from long observing campaigns at several observatories. The LLR effort at McDonald Observatory in Texas has been carried out from 1969 to the present. The first sequence of observations was made from the 2.7 m telescope. In 1985 ranging operations were moved to the McDonald Laser Ranging System (MLRS) and in 1988 the MLRS was moved to its present site[^3]. The MLRS has the advantage of a shorter laser pulse and improved range accuracy over the earlier 2.7 m system, but the pulse energy and aperture are smaller. From 1978 to 1980 a set of observations was made from Orroral in Australia.[@Luck_etal_1973; @Morgan_King_1982] Accurate observations began at the Observatoire de la Côte d’Azur (OCA) in 1984[^4] and continue to the present, though first detections were demonstrated earlier. Ranges were made from the Haleakala Observatory on the island of Maui in the Hawaiian chain from 1984 to 1990[^5].
Two modern stations which have demonstrated lunar capability are the Wettzell Laser Ranging System in Germany[^6] and the Matera Laser Ranging Station in Italy[^7]. Neither is operational for LLR at present. The Apache Point Observatory Lunar Laser ranging Operation (APOLLO) was recently built in New Mexico.[@Murphy_etal_2000; @Williams_Turyshev_Murphy_2004; @Murphy_etal_2007; @Murphy_etal_2008; @Turyshev_2008]
The two stations that have produced LLR observations routinely for decades are the McDonald Laser Ranging System (MLRS)[@Shelus_etal_2003] in the United States and the OCA[@Veillet_etal_1993; @Samain_etal_1998] station in France.
LLR and Fundamental Physics Today {#sec:llr_funphys}
----------------------------------
The analyses of LLR measurements contribute to a wide range of scientific disciplines, and are solely responsible for production of the lunar ephemeris. For a general review of LLR see Ref. . An independent analysis for Ref. gives geodetic and astronomical results. The interior, tidal response, and physical librations (rotational variations) of the Moon are all probed by LLR,[@Williams_etal_2001b; @Williams_Dickey_2003] making it a valuable tool for lunar science.
The geometry of the Earth, Moon, and orbit is shown in Figure \[fig:4\]. The mean distance of the Moon is 385,000 km, but there is considerable variation owing to the orbital eccentricity and perturbations due to Sun, planets, and the Earth’s $J_2$ zonal harmonic. The solar perturbations are thousands of kilometers in size and the lunar orbit departs significantly from an ellipse. The sensitivity to the EP comes from the accurate knowledge of the lunar orbit. The equatorial radii of the Earth and Moon are 6378 km and 1738 km, respectively, so that the lengths and relative orientations of the Earth-Moon vector, the station vector, and the retroreflector vector influence the range. Thus, not only is there sensitivity of the range to anything which affects the orbit, there is also sensitivity to effects at the Earth and Moon. These various sensitivities allow the ranges to be analyzed to determine many scientific parameters.
Concerning fundamental physics, LLR currently provides the most viable solar system technique for testing the Strong Equivalence Principle (SEP)–the statement that all forms of mass and energy contribute equivalent quantities of inertial and gravitational mass (see discussion in the following Section). The SEP is more restrictive than the weak EP, which applies to non-gravitational mass-energy, effectively probing the compositional dependence of gravitational acceleration.
In addition to the SEP, LLR is capable of measuring the time variation of Newton’s gravitational constant, $G$, providing the strongest limit available for the variability of this “constant.” LLR can also precisely measure the de Sitter precession–effectively a spin-orbit coupling affecting the lunar orbit in the frame co-moving with the Earth-Moon system’s motion around the Sun. The LLR results are also consistent with the existence of gravitomagnetism within 0.1% of the predicted level[@Nordtvedt_1999; @Nordtvedt_2003]; the lunar orbit is a unique laboratory for gravitational physics where each term in the parametrized post-Newtonian (PPN) relativistic equations of motion is verified to a very high accuracy.
A comprehensive paper on tests of gravitational physics is Williams et al.[@Williams_etal_1996a] A recent test of the EP is in Ref. and other general relativity tests are in Ref. . An overview of the LLR gravitational physics tests is given by Nodtvedt.[@Nordtvedt_1999] Reviews of various tests of relativity, including the contribution by LLR, are given in papers by Will.[@Will_1990; @Will_2001] Our recent paper, Ref. , describes the model improvements needed to achieve the mm-level accuracy for LLR. The most recent LLR results for gravitational physics are given in our recent paper of Ref. .
Equivalence Principle and the Moon {#sec:ep}
===================================
Since Newton, the question about equality of inertial and passive gravitational masses arises in almost every theory of gravitation. Thus, almost one hundred years ago Einstein postulated that not only mechanical laws of motion, but also all non-gravitational laws should behave in freely falling frames as if gravity were absent. If local gravitational physics is also independent of the more extended gravitational environment, we have what is known as the strong equivalence principle. It is this principle that predicts identical accelerations of compositionally different objects in the same gravitational field, and also allows gravity to be viewed as a geometrical property of space-time–leading to the general relativistic interpretation of gravitation.
The Equivalence Principle tests can therefore be viewed in two contexts: tests of the foundations of the standard model of gravity (i.e. general theory of relativity), or as searches for new physics because, as emphasized by Damour and colleagues,[@Damour_1996; @Damour_2001] almost all extensions to the standard model of particle physics generically predict new forces that would show up as apparent violations of the EP. The SEP became a foundation of Einstein’s general theory of relativity proposed in 1915. Presently, LLR is the most viable solar system technique for accurate tests of the SEP, providing stringent limits on any possible violation of general relativity - the modern standard theory of gravity.
Below we shall discuss two different “flavors” of the Principle, the weak and the strong forms of the EP, that are currently tested in various experiments performed with laboratory test masses and with bodies of astronomical sizes.
The Weak Form of the Equivalence Principle {#sec:wep}
------------------------------------------
The weak form of the EP (the WEP) states that the gravitational properties of strong and electro-weak interactions obey the EP. In this case the relevant test-body differences are their fractional nuclear-binding differences, their neutron-to-proton ratios, their atomic charges, etc. Furthermore, the equality of gravitational and inertial masses implies that different neutral massive test bodies will have the same free fall acceleration in an external gravitational field, and therefore in freely falling inertial frames the external gravitational field appears only in the form of a tidal interaction[@Singe_1960]. Apart from these tidal corrections, freely falling bodies behave as if external gravity were absent.[@Anderson_etal_1996] General relativity and other metric theories of gravity assume that the WEP is exact. However, extensions of the standard model of particle physics that contain new macroscopic-range quantum fields predict quantum exchange forces that generically violate the WEP because they couple to generalized “charges” rather than to mass/energy as does gravity.[@Damour_Polyakov_1994a; @Damour_Polyakov_1994b]
In a laboratory, precise tests of the EP can be made by comparing the free fall accelerations, $a_1$ and $a_2$, of different test bodies. When the bodies are at the same distance from the source of the gravity, the expression for the equivalence principle takes an elegant form: $$\frac{\Delta a}{a} = \frac{2(a_1- a_2)}{ a_1 + a_2}
= \left(\frac{M_G}{M_I}\right)_1 -\left(\frac{M_G}{M_I}\right)_2
= \Delta\left(\frac{M_G}{M_I}\right),
\label{WEP_da}$$
where $M_G$ and $M_I$ represent gravitational and inertial masses of each body. The sensitivity of the EP test is determined by the precision of the differential acceleration measurement divided by the degree to which the test bodies differ (e.g. composition).
Since the early days of general relativity, Einstein’s version of the Equivalence Principle became a primary focus of many experimental efforts. Various experiments have been performed to measure the ratios of gravitational to inertial masses of bodies. Recent experiments on bodies of laboratory dimensions verify the WEP to a fractional precision $\Delta(M_G/M_I) \lesssim 10^{-11}$ by Roll et al.[@Roll_etal_1964], to $\lesssim 10^{-12}$ by Refs. and more recently to a precision of $\lesssim 1.4\times 10^{-13}$ in Ref. . The accuracy of these experiments is sufficiently high to confirm that the strong, weak, and electromagnetic interactions each contribute equally to the passive gravitational and inertial masses of the laboratory bodies.
This impressive evidence for laboratory bodies is incomplete for astronomical body scales. The experiments searching for WEP violations are conducted in laboratory environments that utilize test masses with negligible amounts of gravitational self-energy and therefore a large scale experiment is needed to test the postulated equality of gravitational self-energy contributions to the inertial and passive gravitational masses of the bodies[@Nordtvedt_1968a]. Once the self-gravity of the test bodies is non-negligible (currently with bodies of astronomical sizes only), the corresponding experiment will be testing the ultimate version of the EP - the strong equivalence principle, that is discussed below.
The Strong Form of the Equivalence Principle {#sec:sep}
--------------------------------------------
In its strong form the EP is extended to cover the gravitational properties resulting from gravitational energy itself. In other words, it is an assumption about the way that gravity begets gravity, i.e. about the non-linear property of gravitation. Although general relativity assumes that the SEP is exact, alternate metric theories of gravity such as those involving scalar fields, and other extensions of gravity theory, typically violate the SEP.[@Nordtvedt_1968a; @Nordtvedt_1968b; @Nordtvedt_1968c; @Nordtvedt_1991] For the SEP case, the relevant test body differences are the fractional contributions to their masses by gravitational self-energy. Because of the extreme weakness of gravity, SEP test bodies that differ significantly must have astronomical sizes. Currently, the Earth-Moon-Sun system provides the best solar system arena for testing the SEP.
A wide class of metric theories of gravity are described by the parametrized post-Newtonian formalism,[@Nordtvedt_1968b; @Will_1971; @Will_Nordtvedt_1972] which allows one to describe within a common framework the motion of celestial bodies in external gravitational fields. Over the last 35 years, the PPN formalism has become a useful framework for testing the SEP for extended bodies. To facilitate investigation of a possible violation of the SEP, in that formalism the ratio between gravitational and inertial masses, $M_G/M_I$, is expressed[@Nordtvedt_1968a; @Nordtvedt_1968b] as $$\left[\frac{M_G}{M_I}\right]_{\tt SEP} = 1 + \eta\left(\frac{U}{Mc^2}\right), \label{eq:MgMi}$$
where $M$ is the mass of a body, $U$ is the body’s gravitational self-energy $(U< 0)$, $Mc^2$ is its total mass-energy, and $\eta$ is a dimensionless constant for SEP violation.[@Nordtvedt_1968a; @Nordtvedt_1968b; @Nordtvedt_1968c]
Any SEP violation is quantified by the parameter $\eta$. In fully-conservative, Lorentz-invariant theories of gravity[@Will_1993; @Will_2001] the SEP parameter is related to the PPN parameters by $$\eta = 4\beta - \gamma -3.
\label{eq:eta}$$
In general relativity $\beta = 1$ and $\gamma = 1$, so that $\eta = 0$.
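Since $\eta$ involves both PPN parameters, an external determination of $\gamma$ is needed to isolate $\beta$. A minimal numerical sketch of this step, using the LLR value of $\eta$ quoted in the abstract and the published Cassini radiometric value $\gamma-1=(2.1\pm2.3)\times10^{-5}$, with simple Gaussian error propagation assumed:

```python
import math

# eta = 4*beta - gamma - 3   =>   beta - 1 = (eta + (gamma - 1)) / 4
eta, d_eta = 4.4e-4, 4.5e-4      # LLR SEP result quoted in the abstract
gm1, d_gm1 = 2.1e-5, 2.3e-5      # Cassini radiometric result for gamma - 1
beta_m1 = (eta + gm1) / 4.0
d_beta_m1 = math.sqrt(d_eta**2 + d_gm1**2) / 4.0
print(f"beta - 1 = {beta_m1:.1e} +/- {d_beta_m1:.1e}")   # ~1.2e-4 +/- 1.1e-4
```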
The self energy of a body B is given by $$\left(\frac{U}{Mc^2}\right)_B
= - \frac{G}{2 M_B c^2}\int_B d^3{\bf x} d^3{\bf y}
\frac{\rho_B({\bf x})\rho_B({\bf y})}{| {\bf x} - {\bf y}|}.
\label{eq:omega}$$
For a sphere with a radius $R$ and uniform density, $U/Mc^2 = -3GM/5Rc^2 = -3 v_E^2/10 c^2$, where $v_E$ is the escape velocity. Accurate evaluation for solar system bodies requires numerical integration of the expression of Eq. (\[eq:omega\]). Evaluating the standard solar model[@Ulrich_1982] results in $(U/Mc^2)_S \sim -3.52 \times 10^{-6}$. Because gravitational self-energy is proportional to $M^2$ (i.e. $U/Mc^2 \sim M$) and also because of the extreme weakness of gravity, the typical values for the ratio $(U/Mc^2)$ are $\sim 10^{-25}$ for bodies of laboratory sizes. Therefore, the experimental accuracy of a part in $10^{13}$ (see Ref. ) which is so useful for the WEP is not a useful test of how gravitational self-energy contributes to the inertial and gravitational masses of small bodies. To test the SEP one must utilize planetary-sized extended bodies where the ratio Eq. (\[eq:omega\]) is considerably higher.
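For orientation, the uniform-density estimate $U/Mc^2=-3GM/5Rc^2$ already reproduces the order of magnitude of the values quoted in this and the following subsection; a short numerical check (the constants are standard values, not quantities from the LLR fits):

```python
G, c = 6.674e-11, 2.998e8                      # SI units
bodies = {                                     # mass [kg], radius [m]
    "Sun":   (1.989e30, 6.957e8),
    "Earth": (5.972e24, 6.371e6),
    "Moon":  (7.342e22, 1.738e6),
}
for name, (M, R) in bodies.items():
    u = -3.0 * G * M / (5.0 * R * c**2)        # uniform-density approximation
    print(f"{name:5s}  U/Mc^2 ~ {u:.2e}")
# Sun:   ~ -1.3e-6  (vs. -3.52e-6 from the standard solar model)
# Earth: ~ -4.2e-10 (about 10% below the model value -4.64e-10)
# Moon:  ~ -1.9e-11 (within ~1% of the model value)
```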
Nordtvedt[@Nordtvedt_1968a; @Nordtvedt_1968c; @Nordtvedt_1970] suggested several solar system experiments for testing the SEP. One of these was the lunar test. Another, a search for the SEP effect in the motion of the Trojan asteroids, was carried out by Orellana and Vucetich.[@Orellana_Vucetich_1988; @Orellana_Vucetich_1993] Interplanetary spacecraft tests have been considered by Anderson et al.[@Anderson_etal_1996] and discussed by Anderson and Williams.[@Anderson_Williams_2001] An experiment employing existing binary pulsar data has been proposed by Damour and Schäfer.[@Damour_Schafer_1991] It was pointed out that binary pulsars may provide an excellent possibility for testing the SEP in the new regime of strong self-gravity[@Damour_Esposito-Farese_1996a; @Damour_Esposito-Farese_1996b], however the corresponding tests have yet to reach competitive accuracy[@Wex_2001; @Lorimer_Freire_2004]. To date, the Earth-Moon-Sun system has provided the most accurate test of the SEP with LLR being the available technique.
Equivalence Principle and the Earth-Moon system {#sec:EP_earth_moon}
-----------------------------------------------
The Earth and Moon are large enough to have significant gravitational self energies and a lunar test of the equivalence principle was proposed by Nordtvedt.[@Nordtvedt_1968c] Both bodies have differences in their compositions and self energies and the Sun provides the external gravitational acceleration. For the Earth[@Flasar_Birch_1973; @Williams_etal_1996a] a numerical evaluation of Eq. (\[eq:omega\]) yields: $$\left(\frac{U}{Mc^2}\right)_E = -4.64 \times 10^{-10}.
\label{eq:earth}$$
The two evaluations, with different Earth models, differ by only 0.1%. (A uniform Earth approximation is 10% smaller in magnitude.) A Moon model, with an iron core $\sim$20% of its radius, gives $$\left(\frac{U}{Mc^2}\right)_M = -1.90 \times 10^{-11}.
\label{eq:moon }$$
The subscripts $E$ and $M$ denote the Earth and Moon, respectively. The lunar value is only 1% different from the uniform density approximation which demonstrates its insensitivity to the model. The lunar value was truncated to two digits in Ref. . For the SEP effect on the Moon’s position with respect to the Earth it is the difference of the two accelerations and self-energy values which is of interest. $$\left(\frac{U}{Mc^2}\right)_E - \left(\frac{U}{Mc^2}\right)_M =
-4.45 \times 10^{-10}.
\label{eq:earth_moon}$$
The Jet Propulsion Laboratory’s (JPL) program which integrates the orbits of the Moon and planets considers accelerations due to Newtonian, geophysical and post-Newtonian effects. Considering just the modification of the point mass Newtonian terms, the equivalence principle enters the acceleration $a_j$ of body $j$ as $${\bf a}_j = G \left(\frac{U}{Mc^2}\right)_j
\sum_k M_k \frac{{\bf r}_{jk}}{r_{jk}^3},
\label{eq:accel}$$
where ${\bf r}_{jk} = {\bf r}_k - {\bf r}_j$ is the vector from accelerated body $j$ to attracting body $k$ and $r_{jk} = |{\bf r}_{jk}|$. For a more thorough discussion of the integration model see Ref. .
The dynamics of the three-body Sun-Earth-Moon system in the solar system barycentric inertial frame provides the main LLR sensitivity for a possible violation of the equivalence principle. In this frame, the quasi-Newtonian acceleration of the Moon with respect to the Earth, ${\bf a} = {\bf a}_M - {\bf a}_E$, is calculated to be: $${\bf a} = - \mu^* \frac{{\bf r}_{EM}}{r^3_{EM}}
- \left(\frac{M_G}{M_I}\right)_M \mu_S \frac{{\bf r}_{SM}}{r^3_{SM}} + \left(\frac{M_G}{M_I}\right)_E \mu_S \frac{{\bf r}_{SE}}{r^3_{SE}}, \label{eq:range1_m}$$
where $\mu^* = \mu_E (M_G/M_I)_M + \mu_M (M_G/M_I)_E$ and $\mu_k = G M_k$. The first term on the right-hand side of Eq. (\[eq:range1\_m\]) is the acceleration between the Earth and Moon, with the remaining pair being the tidal acceleration due to solar gravity. The above acceleration is useful for either the weak or strong form of the EP.
For the SEP case, $\eta$ enters when expression Eq. (\[eq:MgMi\]) is combined with Eq. (\[eq:range1\_m\]), $${\bf a} = - \mu^* \frac{{\bf r}_{EM}}{r^3_{EM}} + \mu_S \left[\frac{{\bf r}_{SE}}{r^3_{SE}} - \frac{{\bf r}_{SM}}{r^3_{SM}}\right] + \eta \mu_S \left[\left(\frac{U}{Mc^2}\right)_E \frac{{\bf r}_{SE}}{r^3_{SE}} - \left(\frac{U}{Mc^2}\right)_M \frac{{\bf r}_{SM}}{r^3_{SM}}\right].
\label{eq:range2_M}$$
The presence of $\eta$ in $\mu^*$ modifies Kepler’s third law to $n^2 a^3 = \mu^*$ for the relation between the semimajor axis $a$ and the mean motion $n$ in the elliptical orbit approximation. This term is notable, but in the LLR solutions $\mu_E + \mu_M$ is a solution parameter, or at least uncertain (see Sec. \[sec:derived\]), so this term does not provide a sensitive test of the equivalence principle, though its effect is implicit in the LLR solutions. The second term on the right-hand side, with the differential acceleration toward the Sun, is the Newtonian tidal acceleration. The third term, involving the self energies, gives the main sensitivity of the LLR test of the equivalence principle. Since the distance to the Sun is $\sim$390 times the distance between the Earth and Moon, the last term is approximately $\eta$ times the difference in the self energies of the two bodies times the Sun’s acceleration of the Earth-Moon center of mass.
Treating the EP related tidal term as a perturbation Nordtvedt[@Nordtvedt_1968c] found a polarization of the Moon’s orbit in the direction of the Sun with a radial perturbation
$$\Delta r = S \left[\left(\frac{M_G}{M_I}\right)_E - \left(\frac{M_G}{M_I}\right)_M \right] \cos D,
\label{eq:range_r}$$
where $S$ is a scaling factor of about $-2.9 \times 10^{13}$ mm (see Refs. ). For the SEP, combining Eqs. (\[eq:MgMi\]) and (\[eq:range\_r\]) yields $$\begin{aligned}
\Delta r &=& S \eta \left[\frac{U_E}{M_Ec^2}-\frac{U_M}{M_Mc^2} \right] \cos D,
\label{eq:S_cosD}\\
\Delta r &=& C_0 \eta \cos D.
\label{eq:cosD}\end{aligned}$$
Applying the difference in numerical values for self-energy for the Earth and Moon Eq. (\[eq:earth\_moon\]) gives a value of $C_0$ of about 13 m (see Refs. ). In general relativity $\eta = 0$. A unit value for $\eta$ would produce a displacement of the lunar orbit about the Earth, causing a 13 m monthly range modulation. See subsection \[sec:EP\_solution\] for a comparison of the theoretical values of $S$ and $C_0$ with numerical results. This effect can be generalized to all similar three body situations.
In essence, LLR tests of the EP compare the free-fall accelerations of the Earth and Moon toward the Sun. Lunar laser ranging measures the time-of-flight of a laser pulse fired from an observatory on the Earth, bounced off of a retroreflector on the Moon, and returned to the observatory (see Refs. ). If the Equivalence Principle is violated, the lunar orbit will be displaced along the Earth-Sun line, producing a range signature having a 29.53 day synodic period (different from the lunar orbit period of 27 days). The first LLR tests of the EP were published in 1976 (see ). Since then the precision of the test has increased,[@Dickey_Newhall_Williams_1989; @Dickey_etal_1994; @Chandler_etal_1994; @Williams_etal_1996a; @Williams_etal_1996b; @Mueller_Nordtvedt_1998; @Anderson_Williams_2001; @Williams_etal_2002; @Williams_Turyshev_Boggs_2004] and modern results improve on the first ones by two orders of magnitude.
Equivalence Principle and Acceleration by Dark Matter {#sec:dark_matter}
-----------------------------------------------------
At the scales of galaxies and larger there is evidence for unseen dark matter. Thus, observations of disk galaxies imply that the circular speeds are approximately independent of distance to the center of the galaxy at large distances. The standard explanation is that this is due to halos of unseen matter that makes up around 90% of the total mass of the galaxies.[@Tremaine_1992] The same pattern repeats itself on larger and larger scales, until we reach the cosmic scales where a baryonic density compatible with successful big bang nucleosynthesis is less than 10% of the density predicted by inflation, i.e. the critical density. Braginsky et al.[@Braginsky_etal_1992; @Braginsky_1994] have studied the effect of dark matter bound in the galaxy but unbound to the solar system. Such galactic dark matter would produce an anisotropy in the gravitational background of the solar system.
A possible influence of dark matter on the Earth-Moon system has been considered by Nordtvedt[@Nordtvedt_1994], who pointed out that LLR can also test ordinary matter interacting with galactic dark matter. He suggested that LLR data can be used to set experimental limits on the density of dark matter in the solar system by studying its effect upon the motion of the Earth-Moon system. The period of the range signature is the sidereal month, 27.32 days. An anomalous acceleration of $10^{-15}$ m/s$^2$ would cause a 2.5 cm range perturbation. At this period there are also signatures due to other solution parameters: one component of station location, obliquity, and orbital mean longitude. These parameters are separable because they contribute at other periods as well, but they are complications to the dark matter test.
In 1995, Nordtvedt, Müller, and Soffel published an upper limit of $3 \times 10^{-16}$ m/s$^2$ for a possible differential acceleration in the coupling of dark matter to the different compositions of the Earth and Moon. This constraint was a factor of 150 stronger than that achieved by the laboratory experiments searching for differential cosmic acceleration rates between beryllium and copper and between beryllium and aluminum.[@Smith_etal_1993; @Su_etal_1994; @Baessler_etal_1999; @Adelberger_2001]
Data {#sec:data}
====
The accuracy and span of the ranges limit the accuracy of fit parameters. This section describes the data set that is used to perform tests of the Equivalence Principle with LLR. The data taking is a day-to-day operation at the McDonald Laser Ranging System (MLRS) and the Observatoire de la Côte d’Azur (OCA) stations.
LLR has remained a viable experiment with fresh results over 35 years because the data accuracies have improved by an order of magnitude. See Section 4.1 below for a discussion and illustration (Figure \[fig:5\]) of that improvement. The International Laser Ranging Service (ILRS)[^8] provides lunar laser ranging data and their related products to support geodetic and geophysical research activities.
Station and Reflector Statistics {#sec:stations}
--------------------------------
LLR data have been acquired from 1969 to the present. Each measurement is the round-trip travel time of a laser pulse between a terrestrial observatory and one of four corner cube retroreflectors on the lunar surface. A normal point is the standard form of an LLR datum used in the analysis. It is the result of a statistical combination of the observed transit times of several individual photons detected by the observing instrument within a relatively short time, typically a few minutes to a few tens of minutes.
The currently operating LLR stations, McDonald Laser Ranging System in Texas[@Shelus_etal_2003] and Observatoire de la Côte d’Azur[@Samain_etal_1998], typically detect 0.01 return photons per pulse during normal operation. A typical “normal point” is constructed from 3-100 return photons, spanning 10-45 minutes of observation.[@Dickey_etal_1994]
The LLR data set for analysis has observations from McDonald Observatory, Haleakala Observatory, and OCA. Figure \[fig:5\] shows the weighted RMS residual for each year. Early accuracies using the McDonald Observatory’s 2.7 m telescope hovered around 25 cm. Equipment improvements decreased the ranging uncertainty to $\sim$15 cm later in the 1970s. In 1985 the 2.7 m ranging system was replaced with the MLRS. In the 1980s lunar ranges were also received from Haleakala Observatory on the island of Maui, Hawaii, and OCA in France. Haleakala ceased lunar operations in 1990. A sequence of technical improvements decreased the rms residual to the current $\sim$2 cm of the past decade. The 2.7 m telescope had a greater light gathering capability than the newer smaller aperture systems, but the newer systems fired more frequently and had much improved range accuracy. The newer systems cannot distinguish returning photons from the bright background near full Moon, something the 2.7 m telescope could do. There are some modern eclipse observations.
The first LLR test of the EP used 1523 normal points up to May 1975 with accuracies of 25 cm. By April 2004 the data set had grown to 15,554 normal points spanning 35 years, and the recent data are fit with $\sim$2 cm rms scatter. Over time the post-fit rms residual has decreased due to improvements at both the McDonald and the OCA sites. Averaged over the past four years there have been a total of several hundred normal points per year.
The full LLR data set is dominated by three stations: the McDonald Station in Texas, the OCA station at Grasse, France, and the Haleakala station on Maui, Hawaii. At present, routine ranges are being obtained only by the MLRS and OCA. Figure \[fig:3\]b shows the distribution of the lunar retroreflectors. Over the full data span 78% of the ranges come from Apollo 15, 10% from Apollo 11, 9% from Apollo 14, 3% from Lunokhod 2, and nothing from Lunokhod 1 (lost).
The notable improvement of the LLR data set with time implies comparable improvement in the determination of the solution parameters. Data from multiple ranging sites to multiple retroreflectors are needed for a robust analysis effort.
Observational Influences and Selection Effects {#sec:influences}
----------------------------------------------
To range the Moon the observatories on the Earth fire a short laser pulse toward the target retroreflector array. The outgoing laser beam is narrow and the illuminated spot on the Moon is a few kilometers across. The retroreflectors are made up of arrays of corner cubes: 100 for Apollos 11 and 14, 300 for Apollo 15, and 14 for the Lunokhods. At each corner cube (Figure \[fig:6\]) the laser beam enters the front face and bounces off of each of the three orthogonal faces at the rear of the corner cube. The triply reflected pulse exits the front face and returns in a direction opposite to its approach. The returning pulse illuminates an area around the observatory which is a few tens of kilometers in diameter. The observatory has a very sensitive detector which records single photon arrivals. Color and spatial filters are used to eliminate much of the background light. Photons from different laser pulses have similar residuals with respect to the expected round-trip time of flight and are thus separated from the widely scattered randomly arriving background photons. The resulting “range” normal point is the round trip light time for a particular firing time. (For more details on satellite and lunar laser ranging instrumentation, experimental set-up, and operations, consult papers by Degnan[@Degnan_1985; @Degnan_1993; @Degnan_2002] and Samain et al.[@Samain_etal_1998])
The signal returning from the Moon is so weak that single photons must be detected. Not all ranging attempts are successful and the likelihood of success depends on the conditions of observation. Observational effects may influence the strength of the signal, the background light which competes with the detection of the returning laser signal, the width of the outgoing or returning beam, and the telescope pointing. Some of these observational influences select randomly and some select systematically, e.g. with phase of Moon, time of day, or time of year. Selection with phase influences the equivalence principle test. This subsection briefly discusses these observational influences and selection effects.
The narrow laser beam must be accurately pointed at the target. Seeing, a measure of the chaotic blurring of a point source during the transmission of light through the atmosphere, affects both the outgoing laser beam and the returning signal. The beam’s angular spread, typically a few seconds of arc (“), depends on atmospheric seeing so the spot size on the Moon is a few kilometers across at the Moon’s distance (use 1.87 km/”). The amount of energy falling on the retroreflector array depends inversely on that spot area. At the telescope’s detector both a diaphragm restricting the field of view and a (few Angstrom) narrow-band color filter reduce background light. When the background light is high the diaphragm should be small to reduce the interference and increase the signal-to-noise ratio. When the seeing is poor the image size increases and this requires a larger diaphragm.
The phase of the Moon determines whether a target retroreflector array is illuminated by sunlight or is in the dark. These phase effects include the following influences.
- The target illumination determines the amount of sunlight scattered back toward the observatory from the lunar surface near the target. A sunlit surface increases the noise photons at the observatory’s detector and decreases the signal to noise ratio.
- The pointing technique depends on solar illumination around the target array. Visual pointing is used when the target is sunlit while more difficult offset pointing, alignment using a displaced illuminated feature, is used when the target is dark.
- Retroreflector illumination by sunlight determines solar heating of the array and thermal effects on the retroreflector corner cubes. A thermal gradient across a corner cube distorts the optical quality and spreads the return beam. The Lunokhod corner cubes are about twice the size of the Apollo corner cubes and are thus more sensitive to thermal effects. Also, the Lunokhod corner cubes have a reflecting coating on the three reflecting back sides while the Apollo corner cubes depend on total internal reflection. The coating improves the reflected strength for beams that enter the front surface at an angle to the normal, where the Apollo efficiency decreases, but it also heats when sunlit. Thus, the Lunokhod arrays have greater thermal sensitivity and are more difficult targets when heated by sunlight. A retroreflector in the dark is in a favorable thermal environment, but the telescope pointing is more difficult.
Whether the observatory is experiencing daylight or night determines whether sunlight is scattered toward the detector by the atmosphere. As the Moon’s phase approaches new, the fraction of time the Moon spends in the observatory’s daylight sky increases while the maximum elevation of the Moon in the night sky decreases, so atmosphere-scattered sunlight is correlated with lunar phase.
The beam returning from the Moon cannot be narrower than the diffraction pattern for a corner cube. The diffraction pattern of a corner cube has a six-fold shape that depends on the six combinations of ways that light can bounce off of the three orthogonal reflecting faces. An approximate computation for green laser light (0.53 $\mu$m) gives 7 arcsec for the angular diameter of an Airy diffraction disk. The larger Lunokhod corner cubes would give half that diffraction pattern size. Thermal distortions, imperfections, and contaminating dust can make the size of the returning beam larger than the diffraction pattern. So the returning spot size on the Earth is $\sim$30 km across for green laser light. The power received by the telescope depends directly on the telescope’s collecting area and inversely on the returning spot area. Velocity-caused aberration of the returning beam is roughly 1" and is not a limitation since it is much smaller than the diffraction pattern.
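The 7 arcsec figure follows from the standard Airy diffraction relation. The short check below assumes a 3.8 cm aperture for an Apollo corner cube, a commonly quoted value that is not stated in the text.

```python
import math

# Airy diffraction from a single corner cube: angular diameter ~ 2.44 * lambda / d.
lam = 0.53e-6            # m, green laser wavelength (from the text)
d = 0.038                # m, assumed Apollo corner-cube aperture
moon_distance = 3.85e8   # m, mean Earth-Moon distance

theta = 2.44 * lam / d                    # rad, angular diameter of the Airy disk
arcsec = math.degrees(theta) * 3600.0
spot_km = theta * moon_distance / 1e3     # diffraction-limited spot diameter back at Earth

print(f"Airy disk ~{arcsec:.0f} arcsec, ~{spot_km:.0f} km at the Earth")
# ~7 arcsec and ~13 km; thermal distortion, imperfections, and dust broaden the
# actual returning spot toward the ~30 km quoted above.
```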
There are geometrical selection effects. For the two operational northern ranging stations the Moon spends more time above the horizon when it is at northern declinations and less when south. Also, atmospheric effects such as seeing and absorption increase at low elevation. Consequently, there is selection by declination of the Moon. This, along with climate, causes seasonal selection effects. A station can only range the Moon when it is above the horizon which imposes selection at the 24 hr 50.47 min mean interval between meridian crossings.
The best conditions for ranging occur with the Moon located high in a dark sky on a night of good seeing. A daylight sky adds difficulty and full Moon is even more difficult. A retroreflector in the dark benefits from not being heated by the Sun, but aiming the laser beam is more difficult. New Moon occurs in the daylight sky near the Sun and ranging is not attempted since sensitive detectors are vulnerable to damage from bright light.
Data Distributions {#sec:data_distribution}
------------------
Observational selection effects shape the data distribution. Several selection effects depend on the phase of the Moon and there is a dramatic influence on the distribution of the number of observations with phase. The elongation of the Moon from the Sun is approximated with the angle $D$, the smooth polynomial representation of the difference in the mean longitudes for Sun and Moon. Zero is near new Moon, 90$^\circ$ is near first quarter, 180$^\circ$ is near full Moon, and 270$^\circ$ is near last quarter. Figure \[fig:7\]a illustrates the distribution of observations for the decade from 1995-2004 with respect to the angle $D$. The shape of the curve results from the various selection effects discussed above. There are no ranges near new Moon and few ranges near full Moon. The currently operating observatories only attempt full Moon ranges during eclipses. The original 2.7 m McDonald ranging system transmitted more energy in its longer pulse than currently operating systems, which gave it a higher single shot signal to noise ratio against a bright background. It could range during full Moon as the distribution of the full data set for 1970-2004 shows in Figure \[fig:8\]a.
Factors such as weather and the northern hemisphere location of the operating stations cause seasonal selection effects. The distribution of the number of observations vs the mean anomaly of the Earth-Moon system about the Sun is shown in Figure \[fig:9\]a. The annual mean anomaly is zero in the first week of January so that the mean anomaly is offset from calendar day of the year by only a few days. There is considerable variation in the frequency of observation; the distribution is at its highest in fall and winter and at its lowest in summer.
Other selection effects such as distance and declination also influence the data distribution and can be seen with appropriate histograms. Nonuniform data distributions are one contribution to correlations between solution parameters.
Modeling {#sec:model}
========
Lunar Laser Ranging measures the range from an observatory on the Earth to a retroreflector on the Moon. The center-to-center distance of the Moon from the Earth, with mean value 385,000 km, is variable due to such things as orbit eccentricity, the attraction of the Sun, planets, and the Earth’s bulge, and relativistic corrections. In addition to the lunar orbit, the range from an observatory on the Earth to a retroreflector on the Moon depends on the positions in space of the ranging observatory and the targeted lunar retroreflector. Thus, the orientation of the rotation axes and the rotation angles for both bodies are important. Tidal distortions, plate motion, and relativistic transformations also come into play. To extract the scientific information of interest, it is necessary to accurately model a variety of effects.
The sensitivity to the equivalence principle is through the orbital dynamics. The successful analysis of LLR data requires attention to geophysical and rotational effects for the Earth and the Moon in addition to the orbital effects. Modeling is central to the data analysis. The existing model formulation, and its computational realization in computer code, is the product of much effort. This section gives an overview of the elements included in the present model.
Range Model {#sec:range_model}
-----------
The time-of-flight (“range”) calculation consists of the round-trip “light time” from a ranging site on the Earth to a retroreflector on the Moon and back to the ranging site. This time of flight is about 2.5 sec. The vector equation for the one-way range vector $\boldsymbol{\rho}$ is
$$\boldsymbol{\rho} = {\bf r} - {\bf R}_{\tt stn} + {\bf R}_{\tt rfl},
\label{eq:distance}$$
where ${\bf r}$ is the vector from the center of the Earth to the center of the Moon, ${\bf R}_{\tt stn}$ is the vector from the center of the Earth to the ranging site, and ${\bf R}_{\tt rfl}$ is the vector from the center of the Moon to the retroreflector array (see Figure \[fig:4\] for more details). The total time of flight is the sum of the transmit and receive paths plus delays due to atmosphere and relativistic gravitational delay
$$t_3 - t_1 = (\rho_{12} + \rho_{23})/c + \Delta t_{\tt atm} + \Delta t_{\tt grav}.
\label{eq:dt}$$
The times at the Earth are transmit (1) and receive (3), while the bounce time (2) is at the Moon. Due to the motion of the bodies the light-time computation is iterated for both the transmit and receive legs. Since most effects effectively get doubled, it is convenient to think of 1 nsec in the round-trip time as being equivalent to 15 cm in the one-way distance.
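The iterated light-time computation just described can be sketched schematically. In the sketch below the functions `earth_station(t)` and `moon_reflector(t)` are hypothetical placeholders for the ephemeris, rotation, and station/reflector models; this is not the JPL implementation.

```python
import numpy as np

C = 299792458.0  # m/s

def one_way_light_time(t_emit, emitter, receiver, n_iter=3):
    """Iterate dt so that |receiver(t_emit + dt) - emitter(t_emit)| = c * dt."""
    r_emit = emitter(t_emit)                              # SSB position at emission, m
    dt = np.linalg.norm(receiver(t_emit) - r_emit) / C    # first guess: instantaneous range
    for _ in range(n_iter):                               # converges quickly for ~1.3 s legs
        dt = np.linalg.norm(receiver(t_emit + dt) - r_emit) / C
    return dt

# Round trip, schematically:
#   dt_up = one_way_light_time(t1, earth_station, moon_reflector)
#   dt_dn = one_way_light_time(t1 + dt_up, moon_reflector, earth_station)
# The atmospheric and gravitational delays of Eq. (dt) are then added to dt_up + dt_dn.
```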
The center of mass of the solar system is treated as unaccelerated. This solar system barycenter (SSB) is the coordinate frame for evaluating the above equations including relativistic computations. First, the transmit time at the station is transformed to the SSB coordinate time (called $T_{\tt eph}$ by Standish[@Standish_1998], approximated by TDB), the basic computations are made in that SSB frame, and the computed receive time is transformed back to the station’s time:
$$(t_3 - t_1 )_{\tt stn} = t_3 - t_1 + \Delta t_{\tt trans}.
\label{eq:dtstn}$$
The form of Eq. (\[eq:distance\]) separates the modeling problem into aspects related to the orbit, the Earth, and the Moon. Eq. (\[eq:dt\]) shows that time delays must be added and Eq. (\[eq:dtstn\]) demonstrates modification of the round-trip-time-delay due to choice of reference frame. For the discussion below we make a similar separation. The dynamics of the orbits and lunar rotation come from a numerical integration, and those are the first two topics. Earth and Moon related computations are discussed next. The last topic is time delays and transformations.
### Orbit Dynamics, ${\bf r}$
The lunar and planetary orbits and the lunar rotation result from a simultaneous numerical integration of the differential equations of motion. The numerical integration model is detailed by Standish and Williams[@Standish_Williams_2005]. Ephemerides of the Moon and planets plus lunar rotation are available at the Jet Propulsion Laboratory web site [http://ssd.jpl.nasa.gov/]{}.
The numerical integration of the motion of the Moon, planets, and Sun generates positions and velocities vs time. The existing model for accelerations accounts for:
- Newtonian and relativistic point mass gravitational interactions between the Sun, Moon, and nine planets. Input parameters include masses, orbit initial conditions, PPN parameters $\beta$ and $\gamma$, $\dot G$, and equivalence principle parameters $(M_G/M_I)$; a schematic sketch of how the latter enter the accelerations follows this list.
- Newtonian attraction of the largest asteroids.
- Newtonian attraction between point mass bodies and bodies with gravitational harmonics: Earth ($J_2, J_3, J_4$), Moon (second- through fourth-degree spherical harmonics), and Sun ($J_2$).
- Attraction from tides on both Earth and Moon includes both elastic and dissipative components. There is a terrestrial Love number $k_2$ and a time delay for each of three frequency bands: semidiurnal, diurnal, and long period. The Moon has a different Love number $k_2$ and time delay.
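To make the role of the $(M_G/M_I)$ parameters concrete, here is a minimal Newtonian point-mass sketch in which each body’s response to gravity is scaled by its own ratio; it omits the relativistic, harmonic, and tidal terms itemized above and is not the JPL formulation.

```python
import numpy as np

def point_mass_accelerations(pos, GM, mg_over_mi):
    """Newtonian point-mass accelerations with equivalence-principle factors.

    pos        : (N, 3) body positions
    GM         : (N,)   gravitational parameters G*M_G of the attracting bodies
    mg_over_mi : (N,)   ratios M_G/M_I; body i responds to gravity scaled by its own ratio
    """
    n = len(GM)
    acc = np.zeros((n, 3))
    for i in range(n):
        for j in range(n):
            if i != j:
                dr = pos[j] - pos[i]
                acc[i] += mg_over_mi[i] * GM[j] * dr / np.linalg.norm(dr) ** 3
    return acc

# If mg_over_mi differs between the Earth and Moon, their free-fall accelerations toward
# the Sun differ, which polarizes the lunar orbit and produces the cos D range signature.
```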
### Lunar Rotation Dynamics
The numerical integration of the rotation of the Moon generates three Euler angles and three angular velocities. The torque model accounts for:
- Torques from the point mass attraction of Earth, Sun, Venus, Mars and Jupiter. The lunar gravity field includes second- through fourth-degree terms.
- Figure-figure torques between Earth ($J_2$) and Moon ($J_2$ and $C_{22}$).
- Torques from tides raised on the Moon include elastic and dissipative components. The formulation uses a lunar Love number $k_2$ and time delay.
- The fluid core of the Moon is considered to rotate separately from the mantle. A dissipative torque at the lunar solid-mantle/fluid-core interface couples the two[@Williams_etal_2001b]. There is a coupling parameter and the rotations of both mantle and core are integrated.
- An oblate fluid-core/solid-mantle boundary generates a torque from the flow of the fluid along the boundary. This is a recent addition.
### Effects at Earth, ${\bf R}_{\tt stn}$
- The ranging station coordinates include rates for horizontal plate motion and vertical motion.
- The solid-body tides are raised by Moon and Sun and tidal displacements on the Earth are scaled by terrestrial Love numbers $h_2$ and $l_2$. There is also a core-flattening correction for a nearly diurnal term and a “pole tide” due to the time-varying part of the spin distortion.
- The orientation of the Earth’s rotation axis includes precession and nutation. The body polar ($z$) axis is displaced from the rotation axis by polar motion. The daily rotation includes UT1 variations. A rotation matrix between the space and body frames incorporates these effects.
- The motion of the Earth with respect to the solar system barycenter requires a Lorentz contraction for the position of the geocentric ranging station.
A compilation of Earth-related effects has been collected by McCarthy and Petit[@McCarthy_Petit_2003].
### Effects at the Moon, ${\bf R}_{\tt rfl}$
- The Moon-centered coordinates of the retroreflectors are adjusted for solid-body tidal displacements on the Moon. Tides raised by Earth and Sun are scaled by the lunar displacement Love numbers $h_2$ and $l_2$.
- The rotation matrix between the space and lunar body frames depends on the three Euler angles that come from the numerical integration of the Euler equations.
- The motion of the Moon with respect to the solar system barycenter requires a Lorentz contraction for the position of the Moon-centered reflector.
### Time Delays and Transformations
- Atmospheric time delay $\Delta t_{\tt atm}$ follows Ref. . It includes corrections for surface pressure, temperature and humidity which are measured at the ranging site.
- The relativistic time transformation has time-varying terms due to the motion of the Earth’s center with respect to the solar system barycenter. In addition, the displacement of the ranging station from the center of the Earth contributes to the time transformation. The transformation changes during the $\sim$2.5 sec round-trip time and must be computed for both transmit and receive times.
- The propagation of light in the gravity fields of the Sun and Earth causes a relativistic time delay $\Delta t_{\tt grav}$.
### Fit Parameters & Partial Derivatives
For each solution parameter in the least-squares fit there must be a partial derivative of the “range” with respect to that parameter. The partial derivatives may be separated into two types - geometrical and dynamical.
Geometrical partials of range are explicit in the model for the time of flight. Examples are partial derivatives of range with respect to geocentric ranging station coordinates, Moon-centered reflector coordinates, station rates due to plate motion, tidal displacement Love numbers $h_2$ and $l_2$ for Earth and Moon, selected nutation coefficients, diurnal and semidiurnal $UT1$ coefficients, angles and rates for the Earth’s orientation in space, and ranging biases.
Dynamical partials of lunar orbit and rotation are with respect to parameters that enter into the model for numerical integration of the orbits and lunar rotation. Examples are dynamical partial derivatives with respect to the masses and orbit initial conditions for the Moon and planets, the masses of several asteroids, the initial conditions for the rotation of both the lunar mantle and fluid core, Earth and Moon tidal gravity parameters ($k_2$ and time delay), lunar moment of inertia combinations $(B-A)/C$ and $(C-A)/B$, lunar third-degree gravity field coefficients, a lunar core-mantle coupling parameter, equivalence principle $M_G/M_I$, PPN parameters $\beta$ and $\gamma$, geodetic precession, solar $J_2$, and a rate of change for the gravitational constant $G$. Dynamical partial derivatives for the lunar and planetary orbits and the lunar rotation are created by numerical integration.
Considering Eqs. (\[eq:distance\]) and (\[eq:dt\]), the partial derivative of the scalar range $\rho$ with respect to some parameter $p$ takes the form
$$\frac{\partial \rho}{\partial {p}} = (\hat{\boldsymbol\rho} \cdot \frac{\partial {\boldsymbol\rho}}{\partial p}),
\label{eq:partial}$$
where $\hat{\boldsymbol\rho} = {\boldsymbol\rho}/\rho$ is the unit vector. From the three terms in Eq. (\[eq:distance\]), $\partial {\boldsymbol\rho}/\partial p$ depends on $\partial {\bf r}/\partial p$, $-\partial {\bf R}_{\tt stn}/\partial p$, and $\partial {\bf R}_{\tt rfl}/\partial p$.
$$\frac{\partial \rho}{\partial {p}}
= \left(\hat{\boldsymbol\rho} \cdot (\frac{\partial {\bf r}}{\partial p} - \frac{\partial {\bf R}_{\tt stn}}{\partial p} + \frac{\partial {\bf R}_{\tt rfl}}{\partial p} )\right).
\label{eq:partialr}$$
The dynamical partial derivatives of the orbit are represented by $ \partial {\bf r}/ \partial p$. Rotation matrices are used to transform ${\bf R}_{\tt stn}$ and ${\bf R}_{\tt rfl}$ between body and space-oriented coordinates, so partial derivatives with respect to fit parameters that enter the Earth and Moon Euler angles, such as the Earth rotation, precession, and nutation quantities and the numerous lunar parameters that are sensitive through the rotation, act through the partial derivatives of those rotation matrices. Only geometrical partials contribute to $\partial {\bf R}_{\tt stn}/\partial p$. Both dynamical and geometrical partials affect $\partial {\bf R}_{\tt rfl}/\partial p$.
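Equation (\[eq:partialr\]) is simply a projection of the three vector partials onto the line of sight. A minimal sketch, with the arrays as hypothetical inputs rather than the actual JPL code, is:

```python
import numpy as np

def range_partial(rho_vec, dr_dp, dRstn_dp, dRrfl_dp):
    """Partial of the scalar range with respect to a parameter p, as in Eq. (partialr).

    rho_vec  : one-way range vector  r - R_stn + R_rfl
    dr_dp    : dynamical partial of the orbit vector r (from the numerical integration)
    dRstn_dp : geometrical partial of the station vector
    dRrfl_dp : partial of the reflector vector (geometrical and/or dynamical)
    """
    rho_hat = rho_vec / np.linalg.norm(rho_vec)
    return rho_hat @ (dr_dp - dRstn_dp + dRrfl_dp)
```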
### Computation
The analytical model has its computational realization in a sequence of computer programs. Briefly these programs perform the following tasks.
- Numerically integrate the lunar and planetary orbits along with lunar rotation.
- Numerically integrate the dynamical partial derivatives.
- Compute the model range for each data point, form the pre-fit residual, and compute range partial derivatives. At the time of the range calculation a file of integrated partial derivatives for orbits and lunar rotation with respect to dynamical solution parameters is read and converted to partial derivatives for range with respect to those parameters following Eq. (\[eq:partialr\]). The partial derivative for PPN $\gamma$ has both dynamical and geometrical components.
- Solve the weighted least-squares equations to get new values for the fit parameters.
- Generate and plot post-fit residuals.
A variety of solutions can be made using different combinations of fit parameters. Linear constraints between solution parameters can also be imposed. The dynamical parameters from a solution can be used to start a new integration followed by new fits. The highest quality ephemerides are produced by iterating the integration and fit cycle.
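The weighted least-squares step in this sequence has the standard normal-equations form. The sketch below is generic, assuming the residuals, range partials, and weights have already been assembled (which is where most of the work described above lies); it is not the JPL code.

```python
import numpy as np

def weighted_lsq_update(A, resid, sigma):
    """One correction step for the fit parameters.

    A     : (n_obs, n_par) range partial derivatives d(range)/d(parameter)
    resid : (n_obs,) pre-fit residuals, observed minus computed range
    sigma : (n_obs,) normal-point uncertainties used for weighting
    """
    w = 1.0 / sigma**2
    N = A.T @ (w[:, None] * A)      # normal matrix
    b = A.T @ (w * resid)
    dp = np.linalg.solve(N, b)      # parameter corrections
    cov = np.linalg.inv(N)          # formal covariance: uncertainties and correlations
    return dp, cov

# The corrected dynamical parameters seed a new numerical integration, new residuals and
# partials are formed, and the fit is repeated until the solution converges.
```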
### Data Weighting
A range normal point is composed of 3 to 100 single photon detections. As the normal point comes from the station, the uncertainty depends on the calibration uncertainty and the time spread of the detected returned pulse. The latter depends on the length of the outgoing laser pulse, the spread at the lunar retroreflector due to tilt of the array, and detector uncertainty. Gathering more photons reduces these return pulse length contributions to the normal point. The analyst can also adjust the weightings according to experience with the residuals. The analysis program includes uncertainty associated with the input UT1 and polar motion variations.
### Solar Radiation Pressure {#sec:solar_rad_press}
Solar radiation pressure, like the acceleration from an equivalence principle violation, is aligned with the direction from the Sun and it produces a perturbation with the 29.53 d synodic period. Thus, this force on the Earth and Moon deserves special consideration for the most accurate tests of the equivalence principle. This acceleration is not currently modeled in the JPL software. Here we rely on the analysis of Vokrouhlicky[@Vokrouhlicky_1997] who considered incident and reflected radiation for both bodies plus thermal radiation from the Moon. He finds a solar radiation perturbation of $-3.65 \pm 0.08$ mm $\cos D$ in the radial coordinate.
### Thermal Expansion
The peak to peak variation of surface temperature at low latitudes on the Moon is nearly 300$^\circ$C. The lunar “day” is 29.53 days long. This is the same period as the largest equivalence principle term so a systematic effect from thermal expansion is indicated. The phase of the thermal cycle depends on the retroreflector longitude.
The Apollo retroreflector arrays and the Lunokhod 1 vehicle with the attached retroreflector array are shown in Figure \[fig:1\]-\[fig:3\]. The Apollo 11, 14, and 15 retroreflector arrays are close to the lunar surface and the center of each array front face is about 0.3, 0.2, and 0.3 m above the surface, respectively. The Apollo corner cubes are mounted in an aluminum plate. The thermal expansion coefficient for aluminum is about $2 \times 10^{-5}$/$^\circ$C. If the Apollo arrays share the same temperature variations as the surface, then the total variation of thermal expansion will be 1 to 2 mm. The Lunokhod 2 vehicle is 1.35 m high. From images the retroreflector array appears to be just below the top and it is located in front of the main body of the Lunokhod. We do not know the precise array position or the thermal expansion coefficient of the rover, but assuming the latter is in the range of $1 \times 10^{-5}$/$^\circ$C to $3 \times 10^{-5}$/$^\circ$C then the peak vertical thermal variation will be in the range of 3 to 10 mm. The horizontal displacement from the center of the Lunokhod is poorly known, but it appears to be $\sim$1 m and the horizontal thermal variation will be similar in size to the vertical variation. The thermal expansion cycle is not currently modeled. For future analyses, it appears to be possible to model the thermal expansion of the Apollo arrays without solution parameters, but a solution parameter for the Lunokhod 2 thermal cycle expansion seems to be indicated.
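The millimeter-level estimates above follow from the linear expansion relation $\Delta L = \alpha L \Delta T$. In the check below the Lunokhod 2 expansion coefficient and effective mounting height are assumptions, as noted in the text.

```python
# Linear thermal expansion dL = alpha * L * dT with the numbers quoted above.
dT = 300.0                       # deg C, peak-to-peak lunar surface temperature swing

alpha_al = 2e-5                  # 1/degC, aluminum mounting plates of the Apollo arrays
for height in (0.2, 0.3):        # m, array heights above the surface
    print(f"Apollo array at {height} m: {alpha_al * height * dT * 1e3:.1f} mm")   # ~1-2 mm

# Lunokhod 2: assumed expansion coefficients and an assumed ~1 m effective mounting height.
for alpha in (1e-5, 3e-5):
    for height in (1.0, 1.1):
        print(f"Lunokhod 2, alpha={alpha:g}: {alpha * height * dT * 1e3:.0f} mm")  # ~3-10 mm
```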
The soil is heated and subject to thermal expansion, but it is very insulating and the “daily” thermal variation decreases rapidly with depth. So less displacement is expected from the thermal expansion of the soil than from the retroreflector array.
Data Analysis {#sec:data_analysis}
=============
This section presents analysis of the lunar laser ranging data to test the equivalence principle. To check consistency, more than one solution is presented. Solutions are made with two different equivalence principle parameters and different ways of establishing the masses of Earth and Moon. Also, spectra of the residuals after fits, the post-fit residuals, are examined for systematics.
Solutions for Equivalence Principle {#sec:EP_solution}
-----------------------------------
The solutions presented here use 15,554 ranges from March 1970 to April 2004. The ranging stations include the McDonald Observatory, the Observatoire de la Côte d’Azur, and the Haleakala Observatory. The ranges of the last decade are fit with a 2 cm weighted rms residual. Planetary tracking data are used to adjust the orbits of the Earth and other planets in joint lunar and planetary fits. The planetary data analysis does not include a solution parameter for the equivalence principle.
Among the solution parameters are $GM_{\tt Earth+Moon}$, lunar orbit parameters including semimajor axis, Moon-centered retroreflector coordinates, geocentric ranging station coordinates, and lunar tidal displacement Love number $h_2$. For additional fit parameters see the modeling discussion in Section \[sec:model\]. An equivalence principle violation can be solved for in two ways. The first is a parameter for $M_G/M_I$ with a dynamical partial derivative generated from numerical integration. The second solves for a coefficient of $\cos D$ in the lunar range, a one-term representation. The latter approach was used in two papers[@Williams_etal_1976; @Ferrari_etal_1980], but the more sophisticated dynamical parameter is used in more recent JPL publications, namely Refs. . Both approaches are exercised here to investigate consistency.
Five equivalence principle solutions are presented in Table \[tab:1\] as EP 1 to EP 5. Each of these solutions includes a standard set of Newtonian parameters in addition to one or more equivalence principle parameters. In addition, the EP 0 solution is a comparison case which does not solve for an equivalence principle parameter. The solution EP 1 solves for the $({M_G}/{M_I})$ parameter using the integrated partial derivative. That parameter is converted to the coefficient of $\cos D$ in radial distance using the factor $S = -2.9 \times 10^{13}$ mm in Eq. (\[eq:range\_r\]) from subsection \[sec:EP\_earth\_moon\]. The EP 2 case solves for coefficients of $\cos D$ and $\sin D$ in distance using a geometrical partial derivative. Solution EP 3 solves for the $({M_G}/{M_I})$ parameter along with coefficients of $\cos D$ and $\sin D$ in distance. The EP 4 solution constrains the Sun/(Earth+Moon) and Earth/Moon mass ratios. The EP 5 solution uses the mass constraints and also constrains the lunar $h_2$.
---------- -------------------------- ---------------------------------------- ---------------- --------------- --------------------------
Solution   $({M_G}/{M_I})$            $({M_G}/{M_I})\rightarrow {\tt coef}$    $\cos D$         $\sin D$        Sun/(Earth+Moon)
           solution,                  conversion,                              solution,        solution,
           $\times\,10^{-13}$         mm                                       mm               mm
EP 0                                                                                                            $328900.5596 \pm 0.0011$
EP 1       $0.30 \pm 1.42$            $-0.9 \pm 4.1$                                                            $328900.5595 \pm 0.0012$
EP 2                                                                           $-0.5 \pm 4.2$   $0.9 \pm 2.1$   $328900.5596 \pm 0.0012$
EP 3       $0.79 \pm 6.09$            $-2.3 \pm 17.7$                          $1.7 \pm 17.8$   $0.9 \pm 2.1$   $328900.5595 \pm 0.0012$
EP 4       $0.21 \pm 1.30$            $-0.6 \pm 3.8$                                                            $328900.5597 \pm 0.0007$
EP 5       $-0.11 \pm 1.30$           $0.3 \pm 3.8$                                                             $328900.5597 \pm 0.0007$
---------- -------------------------- ---------------------------------------- ---------------- --------------- --------------------------
\[tab:1\]
The values in Table \[tab:2\] are corrected for the solar radiation pressure perturbation as computed by Vokrouhlicky.[@Vokrouhlicky_1997] See the modeling subsection \[sec:solar\_rad\_press\] on solar radiation pressure for a further discussion. For the EP 3 case, with two equivalence principle parameters, the sum of the two $\cos D$ coefficients in Table \[tab:1\] is $-0.6\pm 4.2$ mm and that sum corrects to $3.1 \pm 4.2$ mm, which may be compared with the four entries in Table \[tab:2\].
---------- ------------------------------------ --------------------------------------- ------------------- -------------------
Solution   $({M_G}/{M_I})$                      $({M_G}/{M_I})\rightarrow {\tt coef}$    coef $\cos D$       coef $\sin D$
           solution                             conversion, mm                           solution, mm        solution, mm
EP 1       $(-0.96 \pm 1.42) \times 10^{-13}$   $2.8 \pm 4.1$
EP 2                                                                                     $3.1 \pm 4.2$       $0.9 \pm 2.1$
EP 4       $(-1.05 \pm 1.30) \times 10^{-13}$   $3.0 \pm 3.8$
EP 5       $(-1.37 \pm 1.30) \times 10^{-13}$   $4.0 \pm 3.8$
---------- ------------------------------------ --------------------------------------- ------------------- -------------------
\[tab:2\]
The equivalence principle solution parameters in Tables \[tab:1\] and \[tab:2\] are within their uncertainties for all cases except EP 5 in Table \[tab:2\], and that value is just slightly larger. Also, the EP 2 coefficient of $\cos D$ agrees reasonably for value and uncertainty with the conversion of the $M_G/M_I$ parameter of the EP 1, EP 4 and EP 5 solutions to a distance coefficient. For the EP 3 solution, the sum of the converted $M_G/M_I$ coefficient and the $\cos D$ coefficient agrees with the other solutions in the two tables. There is no evidence for a violation of the equivalence principle and solutions with different equivalence principle parameters are compatible.
The difference in uncertainty between the $\sin D$ and $\cos D$ components of both the EP 2 and EP 3 solutions is due to the nonuniform distribution of observations with respect to $D$, as illustrated in Figures \[fig:7\]a and \[fig:8\]a. The $\sin D$ coefficient is well determined from observations near first and last quarter Moon, but the $\cos D$ coefficient is weakened by the decrease of data toward new and full Moon.
The EP 3 case, solving for $M_G/M_I$ along with $\cos D$ and $\sin D$ coefficients, is instructive. The correlation between the $M_G/M_I$ and $\cos D$ parameters is 0.972 so the two quantities are nearly equivalent, as expected. The uncertainty for the two equivalence principle parameters increases by a factor of four in the joint solution, but the solution is not singular, so there is some ability to distinguish between the two formulations. The integrated partial derivative implicitly includes terms at frequencies other than the $D$ argument (Nordtvedt, private communication, 1996) and it will also have some sensitivity to the equivalence principle influence on lunar orbital longitude. The equivalence principle perturbation on lunar orbital longitude is about twice the size of the radial component and it depends on $\sin D$. The ratio of Earth radius to lunar semimajor axis is $R_E/a \sim 1/60.3$, the parallax is about 1$^\circ$, so the longitude component projects into range at the few percent level.
The uncertainties in the EP 3 solution can be used to check the theoretical computation of the coefficients $S$, which multiplies $\Delta (M_G/M_I)$, and $C_0$, both associated with the $\cos D$ radial perturbation (subsection 3.3). Given the high correlation, a first approximation $S=-2.92 \times10^{13}$ mm follows from the ratio of the two uncertainties, together with the knowledge that $S$ must be negative. A more sophisticated estimate of $S=-2.99 \times 10^{13}$ mm comes from computing the slope of the axis of the uncertainty ellipse for the two parameters. Using expression Eq. (\[eq:earth\_moon\]) for the difference in self energies of the Earth and Moon, the two preceding values give $\Delta r=13.0$ m $\eta \cos D$ and $\Delta r=13.3$ m $\eta \cos D$, respectively. For comparison, the theoretical computations of Ref. give $S=-2.9\times 10^{13}$ mm and $\Delta r=12.8$ m $\eta \cos D$, Damour and Vokrouhlicky[@Damour_Vokrouhlicky_1996a] give $S=-2.9427 \times 10^{13}$ mm, corresponding to $\Delta r=13.1$ m $\eta \cos D$, and Nordtvedt and Vokrouhlicky[@Nordtvedt_Vokrouhlicky_1997] give $S=-2.943\times 10^{13}$ mm and $\Delta r=13.1$ m $\eta \cos D$. The numerical results here are consistent with the theoretical computations within a few percent.
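Both estimates of $S$ can be reproduced from the EP 3 numbers in Table \[tab:1\] (uncertainties of $6.09\times10^{-13}$ and 17.7 mm with correlation 0.972); the small differences from the quoted values come from rounding of these inputs. This is an illustrative check, not the computation actually used in the analysis.

```python
import numpy as np

# EP 3 solution: sigma(M_G/M_I) in units of 1e-13, sigma(cos D coefficient) in mm,
# and their correlation, as quoted in the text and Table 1.
sig_mgmi, sig_coef, corr = 6.09, 17.7, 0.972

# Simple estimate: ratio of the uncertainties, with the sign fixed to be negative.
S_ratio = -sig_coef / sig_mgmi * 1e13          # mm per unit Delta(M_G/M_I)

# Estimate from the slope of the major axis of the two-parameter uncertainty ellipse.
cov = np.array([[sig_mgmi**2, corr * sig_mgmi * sig_coef],
                [corr * sig_mgmi * sig_coef, sig_coef**2]])
vals, vecs = np.linalg.eigh(cov)
major = vecs[:, np.argmax(vals)]
S_slope = -abs(major[1] / major[0]) * 1e13     # mm per unit Delta(M_G/M_I)

print(f"S from ratio: {S_ratio:.3g} mm, from ellipse axis: {S_slope:.3g} mm")
# Both come out near -2.9e13 mm, consistent with the quoted values up to input rounding.
```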
The EP 1 solution serves as an example for correlations. The correlation of $M_G/M_I$ with both $GM_{\tt Earth+Moon}$ and osculating semimajor axis (at the 1969 epoch of the integration) is 0.46. $GM$ and mean semimajor axis are connected through Kepler’s third law given that the mean motion is very well determined. The product of mean semimajor axis and mean eccentricity is well determined and the correlation of $M_G/M_I$ with osculating eccentricity is 0.45. The correlation with the Earth-Moon mass ratio is 0.26.
The value of $GM_{\tt Earth+Moon}$ is important for the equivalence principle solutions. The Sun’s $GM$ is defined in units of AU$^3$/day$^2$ so $GM_{\tt Earth+Moon}$ in those same units may be expressed as the mass ratio Sun/(Earth+Moon) as is done in Table \[tab:1\]. The Sun/(Earth+Moon) mass ratio is a solution parameter in EP 0 through EP 3. The solutions marked EP 4 and EP 5 use a value derived from sources other than LLR. The Sun/(Earth+Moon) mass ratio is fixed at a value, with uncertainty, based on GM(Earth) from Ries et al.[@Ries_etal_1992] and an Earth/Moon mass ratio of $81.300570 \pm 0.000005$ from Konopliv et al.[@Konopliv_etal_2002]. The uncertainty for $M_G/M_I$ is improved somewhat for solution EP 4. With a fixed $GM$, the correlation with semimajor axis becomes small, as expected, but the correlation with the lunar $h_2$ is now $0.42$ and the $h_2$ solution value is $0.044 \pm 0.007$. For comparison, solution EP 1 had a correlation of $-0.01$ and a solution value of $0.043 \pm 0.009$. The solution EP 5 adds the lunar Love number $h_2$ to the constrained values using $h_2 = 0.0397$ from the model calculations of Williams et al.[@Williams_etal_2005] A realistic model $h_2$ uncertainty is about 15%, close to the EP 4 solution value, and the $M_G/M_I$ uncertainty is virtually the same as in the EP 4 solution. All solutions use a model Love number $l_2$ value constrained to $0.0106$. Considering the difficulty of precisely comparing uncertainties between analyses of different data sets, the gains for the last two constrained equivalence principle solutions are modest at best.
Five solutions presented in this subsection have tested the equivalence principle. They do not show evidence for a significant violation of the equivalence principle.
Spectra - Searching for Signatures in the Residuals {#sec:spectra}
---------------------------------------------------
Part of the LLR data analysis is the examination of post-fit residuals including the calculation of overall and annual rms, a search for signatures at certain fundamental periods, and spectra over a spread of frequencies. Direct examination of residuals can reveal some systematic effects but spectra of residuals, appropriately weighted for their uncertainties, can expose subtle effects.
First consider the baseline solution EP 0 without an equivalence principle parameter. The distribution of observations vs $D$ has been shown in the histograms of Figures \[fig:7\]a and \[fig:8\]a. The last decade of mean weighted residuals vs $D$ is presented in Figure \[fig:7\]b and all of the data is plotted in Figure \[fig:8\]b. If an equivalence principle violation were present it would look like a cosine. No such signature is obvious and a fit to the residuals gives a 1 mm amplitude, which is insignificant.
The LLR data are not evenly spaced or uniformly accurate so aliasing will be present in the spectra. Here, a periodogram is computed by sequentially solving for sine and cosine components at equally spaced frequencies corresponding to periods from 18 years (6585 d) to 6 d. Figure \[fig:10\]a shows the amplitude spectrum of the weighted post-fit residuals for the baseline solution. Nothing is evident above the background at the 29.53 day synodic period (frequency \#223), which is consistent with the results of Table \[tab:1\]. There are two notable features: a 3.6 mm peak at 1 yr and a broad increase at longer periods. There are several uncompensated effects which might be contributing at 1 yr including loading effects on the Earth’s surface height due to seasonal atmosphere and groundwater changes, and “geocenter motion,” the displacement of the solid body (and core) of the Earth with respect to the overall center of mass due to variable effects such as oceans, groundwater and atmosphere. Averaged over more than 1000 frequencies the spectrum’s background level is 1.2 mm. Broad increases in the background near 1 month, $1/2$ month, and $1/3$ month etc, are due to aliasing.
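The periodogram described here amounts to a weighted least-squares fit of a sine and cosine at each trial frequency to the unevenly spaced residuals; a generic sketch (not the JPL code) is:

```python
import numpy as np

def amplitude_spectrum(t, resid, sigma, periods_days):
    """Weighted LS amplitude at each trial period for unevenly spaced residuals."""
    w = 1.0 / sigma**2
    amps = []
    for P in periods_days:
        omega = 2.0 * np.pi / P
        A = np.column_stack([np.cos(omega * t), np.sin(omega * t)])
        N = A.T @ (w[:, None] * A)
        b = A.T @ (w * resid)
        c, s = np.linalg.solve(N, b)     # cosine and sine coefficients at this period
        amps.append(np.hypot(c, s))      # amplitude at this period
    return np.array(amps)

# Trial periods run from 6585 d (18 yr) down to 6 d; aliasing from the uneven sampling
# produces the broad features near 1 month, 1/2 month, and 1/3 month noted in the text.
```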
For comparison, an equivalence principle signature was deliberately forced into another least-squares solution. A finite $\Delta (M_G/M_I)$ value of $1.5\times 10^{-12}$, an order of magnitude larger than the uncertainty of the EP 1 solution of Tables \[tab:1\] and \[tab:2\], was constrained in a multiparameter least-squares solution. The standard solution parameters were free to minimize the imposed equivalence principle signature as best they could. Notably, $GM_{\tt Earth+Moon}$ and the Earth/Moon mass ratio were distorted from normal values by 5 and 3 times their realistic uncertainties, respectively, and the correlated orbit parameters also shifted by significant amounts. The overall (35 year) weighted rms residual increased from 2.9 to 3.1 cm. Figure \[fig:10\]b shows the spectrum of the residuals. The two strongest spectral lines, 13 mm and 10 mm, are at the $D$ and $3D$ frequencies, respectively. Detailed examination also shows weaker features, a $5D$ line and mixes of integer multiples of the $D$ frequency with the monthly and annual mean anomaly frequencies. The expected equivalence principle signature of 44 mm $\cos D$ has been partly compensated by the least-squares adjustment of parameters for $GM$ and other quantities. Note that the ratio of the 13 mm peak to the 1.2 mm background is compatible with the ratio of 44 mm (or $1.5 \times 10^{-12}$) to the equivalence principle uncertainty of 4.2 mm (or $1.4 \times 10^{-13}$) in Tables \[tab:1\] and \[tab:2\]. The spectral amplitudes are computed one frequency at a time, but if the amplitudes of $\cos D$ and $\cos 3D$ are simultaneously fit to the post-fit residuals (not to the original ranges) then one gets $34 \cos D + 18 \cos 3D$ in mm. This combination would be largest near new Moon, where there are no observations, and near full Moon, where there are very few accurate observations. The spectrum for the baseline solution in Figure \[fig:10\]a shows no such lines. In this figure the $\sim3$ mm peaks near 1 month and 1/3 month are at unassociated periods.
In summary, a post-fit residual spectrum of baseline solution EP 0 without an equivalence principle parameter shows no evidence of any equivalence principle violation. Manipulation shows that while a systematic equivalence principle signature can be diminished by adjusting other parameters during the least-squares solution, that compensation is only partly effective and a systematic effect cannot be eliminated. It is also seen that the parameter uncertainties and correlations from the least-squares solutions are in reasonable agreement with the experience based on the spectra.
Classical Lunar Orbit {#sec:lunar_orbit}
---------------------
The JPL analyses use numerical integrations for the orbit and dynamical partial derivatives. However, Keplerian elements and series expansions for the orbit give insight into the solution process.
The Keplerian elements and mean distance of the Moon are summarized in Table \[tab:3\]. Note that the inclination is to the ecliptic plane, not the Earth’s equator plane. The lunar orbit plane precesses along a plane which is close to the ecliptic because solar perturbations are much more important than the Earth’s $J_2$ perturbation. A time average is indicated by $\left<...\right>$.
------------------------------- ---------------------------- ---------------
Mean distance                   $\left<r\right>$             385,000.5 km
Semimajor axis                  $a = 1/\left<1/r\right>$     384,399.0 km
Eccentricity                    $e$                          0.0549
Inclination to ecliptic plane   $i$                          5.145$^\circ$
------------------------------- ---------------------------- ---------------
\[tab:3\]
Various lunar orbital angles and periods are summarized in Table \[tab:4\]. These are mean angles represented by smooth polynomials. The solar angles with annual periods are $l'$ for mean anomaly (the same as the mean anomaly of the Earth-Moon center of mass) and $L'$ for mean longitude (180$^\circ$ different from the mean longitude of the Earth-Moon center of mass).
Angle Symbol Period
---------------------------------- ---------- ----------
Mean Longitude $L$ 27.322 d
Mean Anomaly $l$ 27.555 d
Mean Argument of Latitude $F$ 27.212 d
Mean Elongation of Moon from Sun $D$ 29.531 d
Mean Node $\Omega$ 18.61 yr
Mean Longitude of Perigee $\varpi$ 8.85 yr
Mean Argument of Perigee $\omega$ 6.00 yr
\[tab:4\]
The lunar orbit is strongly perturbed by the Sun. Chapront-Touzé and Chapront have developed an accurate series using computer techniques. From that series (see ) a few large terms for the radial coordinate (in kilometers) are $$\begin{aligned}
r &=& 385001 - 20905 \cos l - 3699 \cos(2D-l) - 2956 \cos 2D \nonumber\\
&&{} - 570 \cos 2l + 246 \cos(2l-2D) + \ldots + 109 \cos D + \ldots
\label{eq:rexp}\end{aligned}$$
The constant first term on the right-hand side is the mean distance (somewhat larger than the semimajor axis), the $l$ and $2l$ terms are elliptical terms, and the remaining terms are from solar perturbations. The amplitudes of the solar perturbation terms depend on the masses of the Earth, Moon, and Sun, as well as the lunar orbit and the Earth-Moon orbit about the Sun. The periods of the periodic terms in the order given in Eq. (\[eq:rexp\]) are 27.555 d, 31.812 d, 14.765 d, 13.777 d, 205.9 d, and 29.531 d, so the different terms are well separated in frequency.
If the equivalence principle is violated, there is a dipole term in the expansion of the solar perturbation which gives the $\cos D$ term of subsection \[sec:EP\_earth\_moon\], see Refs. . When the equivalence principle is satisfied the dipole term has zero coefficient. There is a classical $\cos D$ term which arises from the octupole ($P_3$) term in the expansion and that gives the 109 km amplitude in the series expansion for orbital $r$, Eq. (\[eq:rexp\]).
The JPL Lunar Laser Ranging analyses use numerically integrated orbits, not series expansions (see for the polynomial expressions for lunar angles and an LLR data analysis with a higher reliance on analytical series). The uncertainty of the solar perturbation corresponding to the classical $\cos D$ term is very small and is included in the final $M_G/M_I$ and amplitude uncertainties of the EP 1, EP 2, and EP 3 solutions of Tables \[tab:1\] and \[tab:2\], since mass and orbit quantities are also solution parameters in those least-squares solutions.
Separation of the Equivalence Principle Signature {#sec:separation}
-------------------------------------------------
The equivalence principle solution parameter, whether $M_G/M_I$ or $\cos D$, is significantly correlated with $GM$ of the Earth-Moon system and lunar semimajor axis. The mean motion of the Moon is very well determined from the observations so Kepler’s third law strongly relates the $GM$ and mean semimajor axis. The correlation between $GM$ and $\cos D$ is related to the uneven distribution of observations for the angle $D$ (Figures \[fig:7\]a,\[fig:8\]a). The relation between the equivalence principle, $GM$ and the $D$ distribution has been extensively discussed by Nordtvedt[@Nordtvedt_1998]. Some additional effects are briefly described by Anderson and Williams[@Anderson_Williams_2001]. This subsection discusses the consequence of the $D$ distribution and other effects.
The range may be derived from the vector Eq. (\[eq:distance\]). The scalar range equation may be approximated as $$\rho \approx r - (\hat{\bf r}\cdot{\bf R}_{\tt stn}) + (\hat{\bf r}\cdot{\bf R}_{\tt rfl}) - ({\bf R}_{\tt stn} \cdot{\bf R}_{\tt rfl}) / r + ...
\label{eq:(distexp)}$$
The extended series expansion of the range equation is complicated, but with some consideration a few terms may be selected which are relevant to the equivalence principle solution. The dot product between the orbital radius and the station vector involves large nearly daily and monthly variations, and in solutions the station coordinates separate well from the other parameters. Because of the good separation, this dot product will not be considered further here. The series for orbital radius $r$ is given in Eq. (\[eq:rexp\]). The mean distance for an elliptical orbit is given by $a(1+e^2/2)$, where for the Moon $a=384,399$ km and $e=0.0549$ (see Table \[tab:3\]). The terms with mean anomaly $l$ in the arguments have coefficients that depend on eccentricity. The coefficient of the $2D$ term is scaled by the semimajor axis; while that coefficient is also sensitive to other parameters, the semimajor axis scaling is the primary concern here. The mean anomaly dependent terms have periods quite different from $D$ and will not be considered further. Considering the dot product between the reflector and the orbit radius, the term of interest is $X u_1$, where $X$ is the reflector component toward the mean Earth direction, expressed in the body-referenced frame; the expansion for the $x$ component of the unit vector from the Moon’s center to the Earth’s center in the same frame is given by Williams[@Williams_2005] as $$\begin{aligned}
u_1 &\approx& 0.99342 + 0.00337 \cos 2F + 0.00298 \cos 2l +\nonumber\\
&+& 0.00131 \cos 2D - 0.00124 \cos(2l-2D) + ...
\label{eq:u1}\end{aligned}$$
The angle $F$ is the polynomial for the mean argument of latitude, the lunar angle measured from the node of the orbit on the ecliptic plane. This angle is associated with the tilts of the orbit plane and the lunar equator plane to the ecliptic plane and it has a period of 27.212 days (Table \[tab:4\]).
With the above considerations the relevant combination of terms is
$$N \cos D + a ( 1.00157 - 0.00769 \cos 2D ) - X u_1 - ({\bf R}_{\tt stn}\cdot{\bf R}_{\tt rfl}) / r ,
\label{eq:Nordt}$$
where the first term represents an equivalence principle violation and $N, a,$ and $X$ are to be determined from the data. The linear combination $1.0016 a - 0.9934 X$ is better determined by two orders-of-magnitude than either $a$ or $X$. The separation of the different solution parameters is aided by the time variation of their multiplying functions in Eq. (\[eq:Nordt\]). The periodic $2D$ term provides one way to separate $X$ and $a$. If the angle $D$ were uniformly distributed, then the $D$ and $2D$ terms would be distinct. The nonuniform distribution of $D$ (Figures \[fig:7\]a and \[fig:8\]a) weakens the separation of the two periodicities and causes $N, a$ \[and $GM_{\tt Earth+Moon}$\], and $X$ to be correlated. The separation of $X$ is aided by the periodic terms in $u_1$, such as the two half month terms with arguments $2F$ and $2l$, as well as the dot product between the station and reflector vectors, where $R_{\tt stn} / a = 1/60.3$ sets the scale for daily and longer period terms.
A good equivalence principle test is aided by a) a good distribution of angle $D$, b) a good distribution of orbit angles $l$ and $F$, which is equivalent to a good distribution of orientations of the Moon’s $x$ axis with respect to the direction to the Earth (optical librations), and c) a wide distribution of hour angles and declinations of the Moon as seen from the Earth. Of these three, the first is the hardest for LLR to achieve for the reasons discussed in subsection \[sec:influences\] on selection effects.
Derived Effects {#sec:derived}
===============
The solution EP 1 matches the EP test published in Ref. . The data set of this paper has only one data point more than the data set of the published case. Several consequences can be derived from the equivalence principle test including a test of the strong equivalence principle and PPN parameter $\beta$.
Gravity Shielding - the Majorana Effect {#secLmajorana}
---------------------------------------
The possibility that matter can shield gravity is not predicted by modern theories of gravity, but it is a recurrent idea and it would cause a violation of the equivalence principle test. Consequently, a brief discussion is given in this subsection.
The idea of gravity shielding goes back at least as far as the original paper by Majorana.[@Majorana_1920] He proposed that the inverse square law of attraction should include an exponential factor $\exp(-h \int\rho(s) ds)$ which depends on the amount of mass between attracting mass elements and a universal constant $h$. If mass shields gravity, then large bodies such as the Moon and Earth will partly shield their own gravitational attraction. The observable ratio of gravitational mass to inertial mass would then not be independent of mass, which would violate the equivalence principle. Russell[@Russell_1921] realized that the large masses of the Earth, Moon, and planets made the observations of the orbits of these bodies and the asteroid Eros a good test of such a possibility. He made a rough estimate that the equivalence principle was satisfied to a few parts per million, a much smaller violation than a numerical prediction based on Majorana’s estimate for $h$.
Majorana gave a closed form expression for a sphere’s gravitational to inertial mass ratio. For weak shielding a simpler expression is given by the linear expansion of the exponential term
$$\frac{M_G}{M_I} \approx 1 - h f R \bar{\rho},
\label{eq:shield}$$
where $f$ is a numerical factor, $\bar{\rho}$ is the mean density, and $R$ is the sphere’s radius. For a homogeneous sphere Majorana and Russell give $f=3/4$. For a radial density distribution of the form $\rho(r)=\rho(0) (1-r^2/R^2)^n$ Russell[@Russell_1921] derives $f=(2n+3)^2/(12n+12)$.
Eckhardt[@Eckhardt_1990] used an LLR test of the equivalence principle to set a modern limit on gravity shielding. That result is updated as follows. The uniform density approximation is sufficient for the Moon and gives $f R \bar{\rho} = 4.4 \times 10^8$ gm/cm$^2$. For the Earth we use $n\approx 0.8$ with Russell’s expression to get $f R\bar{\rho} = 3.4 \times10^9$ gm/cm$^2$. The difference in the gravitational to inertial mass ratios of the Earth and Moon is then $-3.0\times10^9$ gm/cm$^2$ times $h$; equating this to the LLR EP 1 value in Table \[tab:2\] gives $h = (3 \pm 5) \times 10^{-23}$ cm$^2$/gm. The value is not significant compared to the uncertainty. To give a sense of scale to the uncertainty, for the gravitational attraction to be diminished by $1/2$ would require a column of matter with the density of water stretching at least half way from the solar system to the center of the galaxy. The LLR equivalence principle tests give no evidence that mass shields gravity and the limits are very strong.
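The limit on $h$ follows directly from Eq. (\[eq:shield\]) and the EP 1 entry in Table \[tab:2\]; a short check using the shielding factors given above is:

```python
# Majorana shielding check: (M_G/M_I) ~ 1 - h * f * R * rho_bar, Eq. (shield).
fRrho_earth = 3.4e9                  # gm/cm^2, Russell's expression with n ~ 0.8 (from the text)
fRrho_moon = 4.4e8                   # gm/cm^2, uniform-density Moon (from the text)

dMgMi, dMgMi_sigma = -0.96e-13, 1.42e-13   # LLR EP 1 difference of ratios, Table 2

scale = -(fRrho_earth - fRrho_moon)        # ~ -3.0e9 gm/cm^2 multiplies h
h = dMgMi / scale
h_sigma = dMgMi_sigma / abs(scale)
print(f"h = ({h * 1e23:.0f} +/- {h_sigma * 1e23:.0f}) x 1e-23 cm^2/gm")   # ~ (3 +/- 5) x 10^-23
```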
The Strong Equivalence Principle {#sec:sep_solution}
--------------------------------
The total equivalence principle results for the Earth-Moon system have been given in Table \[tab:2\]. This test is a strong result in its own right. The total equivalence principle is the sum of contributions from the WEP, which depends on composition, and the SEP, which depends on gravitational self energy. This subsection extracts a result for the SEP by using WEP results from laboratory experiments at the University of Washington.
Experiments by several groups have tested the WEP. Several of these experiments with different test body compositions were compared in order to limit the WEP effect on the Earth-Moon pair to $10^{-12}$, see Refs. . Recent laboratory investigations have synthesized the composition of the Earth and Moon[@Baessler_etal_1999; @Adelberger_2001] by using test bodies which simulate the composition of core and mantle materials. These WEP results are an order-of-magnitude more accurate.
The most abundant element in the Earth is oxygen, followed by iron (30 weight %), silicon and magnesium.[@Larimer_1986] For the Moon, iron is in fourth place with about $1/3$ of the Earth’s abundance. The composition of the mantles of the Earth and Moon are similar, though there are differences (e.g. the Moon lacks the lower temperature volatiles such as water). Iron and nickel are the heaviest elements which are abundant in both bodies. Hence the difference in iron abundance, and associated siderophile elements, between the Earth and Moon is the compositional difference of most interest for the WEP.
The Earth has a massive core ($\sim$1/3 by mass) with iron its major constituent and nickel and sulfur lesser components. Several lines of evidence indicate that the Moon has a small core which is $<2$% of its mass: moment of inertia[@Konopliv_etal_1998], induced magnetic dipole moment[@Hood_etal_1999], and rotational dynamics[@Williams_etal_2001b]. The lunar core is presumed to be dominated by iron, probably alloyed with nickel and possibly sulfur, but the amount of information on the core is modest and evidence for composition is indirect. In any case, most of the Fe in the Moon is in minerals in the thick mantle while for the Earth most of the Fe is in the metallic core. For an example of lunar models see Ref. .
For consideration of the WEP the iron content is the important difference in composition between the Earth and Moon. Among the elements present at $>1$ weight %, iron (and nickel for the Earth) have the largest atomic weights and numbers. The two University of Washington test bodies reproduce the mean atomic weights and mean number of neutrons for the core material of the Earth (and probably the Moon) and both bodies’ mantles.
The Baessler et al.[@Baessler_etal_1999] and Adelberger[@Adelberger_2001] analyses use 38.2 % for the fraction of mass of Fe/Ni core material in the whole Earth and 10.1 % for the fraction in the Moon. The difference in the experimental accelerations of the two test bodies is converted to the equivalent (WEP) difference in the acceleration of the Earth and Moon by multiplying by the difference of these core mass fractions (0.281). Since the iron contents of the Earth and Moon are uncertain by a few percent, the effect of composition uncertainties is an order-of-magnitude less than the derived acceleration difference. The Adelberger[@Adelberger_2001] result for the relative acceleration is given as $(1.0 \pm 1.4 \pm 0.2) \times 10^{-13}$, where the first uncertainty is for random errors and the second is for systematic errors. We combine the systematic and random uncertainties and use
$$\left[\left(\frac{M_G}{M_I}\right)_E - \left(\frac{M_G}{M_I}\right)_M\right]_{\tt WEP} = (1.0 \pm 1.4) \times 10^{-13}.
\label{eq:(WEP)}$$
The strong equivalence principle test comes from combining solution EP 1 of Table \[tab:2\] with the above WEP result.
$$\left[\left(\frac{M_G}{M_I}\right)_E - \left(\frac{M_G}{M_I}\right)_M\right]_{\tt SEP} = (-2.0 \pm 2.0) \times 10^{-13}.
\label{eq:(SEPLLR)}$$
This combination of the LLR determination of the equivalence principle and the laboratory test of the weak equivalence principle provides the tightest constraint on the strong equivalence principle.
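The arithmetic behind Eq. (\[eq:(SEPLLR)\]) is a subtraction of the laboratory WEP value from the total LLR EP result, with the uncertainties combined in quadrature; a minimal Python sketch, using the EP 1 value quoted in the Summary, is given below.

```python
# Minimal sketch of Eq. (SEPLLR): subtract the laboratory WEP value of
# Eq. (WEP) from the total LLR EP result (solution EP 1), combining the
# uncertainties in quadrature.
from math import sqrt

total_ep, s_total = -1.0e-13, 1.4e-13   # LLR: WEP + SEP combined
wep,      s_wep   =  1.0e-13, 1.4e-13   # University of Washington WEP result
sep, s_sep = total_ep - wep, sqrt(s_total**2 + s_wep**2)
print(f"SEP difference = ({sep:.1e} +/- {s_sep:.1e})")   # (-2.0 +/- 2.0)e-13
```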
PPN Beta {#sec:beta}
--------
The test for a possible violation of the strong equivalence principle, the equivalence principle due to self-energy, is sensitive to a linear combination of PPN parameters. For conservative theories this linear relation is $\eta = 4 \beta - \gamma - 3$, given by Eq. (\[eq:eta\]). Using a good experimental determination of PPN $\gamma$, the SEP result can be converted into a result for PPN $\beta$.
Considering only PPN $\beta$ and $\gamma$, dividing the SEP determination of Eq. (\[eq:(SEPLLR)\]) by the numerical value from Eq. (\[eq:earth\_moon\]) gives $$\eta = 4\beta - \gamma - 3 = (4.4\pm4.5)\times 10^{-4}. \label{eq:etaLLR}$$
This expression would be null for general relativity, hence the small value is consistent with Einstein’s theory.
The SEP relates to the non-linearity of gravity (how gravity affects itself), with the PPN parameter $\beta$ representing the degree of non-linearity. LLR provides great sensitivity to $\beta$, as suggested by the strong dependence of $\eta$ on $\beta$ in Eqs. (\[eq:eta\]) and (\[eq:etaLLR\]).
An accurate result for $\gamma$ has been determined by the Cassini spacecraft experiment.[@Bertotti_etal_2003] Using high-accuracy Doppler measurements, the gravitational time delay allowed $\gamma$ to be determined to the very high accuracy of $\gamma - 1 = (2.1 \pm 2.3) \times 10^{-5}$. This value of $\gamma$, in combination with $\eta$, leads to a significant improvement in the parameter $\beta$: $$\beta - 1 = (1.2 \pm 1.1)\times 10^{-4}.
\label{eq:(betaLLR)}$$
We do not consider this result to be a significant deviation of $\beta$ from unity.
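A minimal sketch of this combination, assuming the $\eta$ and Cassini $\gamma$ uncertainties are uncorrelated and add in quadrature (consistent with the quoted numbers):

```python
# Sketch of the PPN-beta extraction: eta = 4*beta - gamma - 3 implies
# beta - 1 = (eta + (gamma - 1)) / 4; uncertainties added in quadrature.
from math import sqrt

eta,      s_eta   = 4.4e-4, 4.5e-4    # Eq. (etaLLR), LLR + laboratory WEP
gamma_m1, s_gamma = 2.1e-5, 2.3e-5    # Cassini result for gamma - 1
beta_m1 = (eta + gamma_m1) / 4.0
s_beta  = sqrt(s_eta**2 + s_gamma**2) / 4.0
print(f"beta - 1 = ({beta_m1:.1e} +/- {s_beta:.1e})")   # (1.2 +/- 1.1)e-4
```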
The PPN parameter $\beta$ has been determined by combining the LLR test of the equivalence principle, the laboratory results on the WEP, and the Cassini spacecraft determination of $\gamma$. The uncertainty in $\beta$ is a dramatic improvement over earlier results. The data set for the solutions in this chapter differs by only one point from that used in Ref. . Consequently, the equivalence principle solution EP 1, and the derived result above for the strong equivalence principle, $\eta$ and $\beta$ are virtually the same as for the publication.
Emerging Opportunities {#sec:ememrging_oops}
======================
It is essential that the acquisition of new LLR data continue in the future. Centimeter level accuracies are now achieved, and a further improvement is expected. Analyzing improved data would allow a correspondingly more precise determination of gravitational physics and other parameters of interest. In addition to the existing LLR capabilities, there are two near-term possibilities: the construction of new LLR stations, and the development and deployment on the Moon of new sets of passive laser corner-cube retroreflectors, of active laser transponders pointed at the Earth, or of both types of instrument.
In this Section we will discuss both of these emerging opportunities - the new LLR station in New Mexico and new LLR instruments on the Moon - for near term advancements in gravitational research in the solar system.
New LLR Data and the APOLLO facility {#sec:future_LLR}
------------------------------------
LLR has remained a viable experiment with fresh results over 35 years because the data accuracies have improved by an order of magnitude (see Figure \[fig:5\]). A new LLR station should provide another order of magnitude improvement. The Apache Point Observatory Lunar Laser-ranging Operation (APOLLO) is a new LLR effort designed to achieve millimeter range precision and corresponding order-of-magnitude gains in measurements of fundamental physics parameters. Using a 3.5 m telescope the APOLLO facility will push LLR into the regime of stronger photon returns with each pulse, enabling millimeter range precision to be achieved.[@Murphy_etal_2000; @Williams_Turyshev_Murphy_2004]
An advantage that APOLLO has over current LLR operations is a 3.5 m astronomical quality telescope at a good site. The site in southern New Mexico offers high altitude (2780 m) and very good atmospheric “seeing” and image quality, with a median image resolution of 1.1 arcseconds. The image sharpness and the large aperture combine to deliver more photons onto the lunar retroreflectors and to collect more of the photons returning from them. Compared to current operations that receive, on average, fewer than 0.01 photons per pulse, APOLLO should be well into the multi-photon regime, with perhaps 1–10 return photons per pulse, depending on seeing. With this signal rate, APOLLO will be efficient at finding and tracking the lunar signal, yielding hundreds of times more photons in an observation than current operations deliver. In addition to the significant reduction in random error (1/$\sqrt{N}$ reduction), the high signal rate will allow assessment and elimination of systematic errors in a way not currently possible. This station is designed to deliver lunar range data accurate to one millimeter. The APOLLO instrument started producing useful ranges in 2006, thereby initiating the regular delivery of LLR data with much improved accuracy.[@Murphy_etal_2000; @Williams_Turyshev_Murphy_2004; @Murphy_etal_2007; @Murphy_etal_2008]
The high accuracy LLR station installed at Apache Point should provide major opportunities (see Refs. for details). The APOLLO project will push LLR into the regime of millimetric range precision which translates into an order-of-magnitude improvement in the determination of fundamental physics parameters. An Apache Point 1 mm range accuracy corresponds to $3 \times 10^{-12}$ of the Earth-Moon distance. The resulting LLR tests of gravitational physics would improve by an order of magnitude: the Equivalence Principle would give uncertainties approaching $10^{-14}$, tests of general relativity effects would be $<$0.1%, and estimates of the relative change in the gravitational constant would be 0.1% of the inverse age of the universe. This last number is impressive considering that the expansion rate of the universe is approximately one part in $10^{10}$ per year. Therefore, the gain in our ability to conduct even more precise tests of fundamental physics is enormous, thus this new instrument stimulates development of better and more accurate models for the LLR data analysis at a mm-level.
New retroreflectors and laser transponders on the Moon {#sec:lunar-efforts}
------------------------------------------------------
There are two critical factors that control the progress in the LLR-enabled science – the distribution of retroreflectors on the lunar surface and their passive nature. At present, the four existing arrays[@Dickey_etal_1994] are distributed from the equator to mid-northern latitudes of the Moon and are placed with modest mutual separations relative to the lunar diameter. Such a distribution is not optimal; it limits the sensitivity of the ongoing LLR science investigations. The passive nature of reflectors causes signal attenuation proportional to the inverse 4th power of the distance traveled by a laser pulse. The weak return signals drive the difficulty of the observational task; thus, only a handful of terrestrial SLR stations are capable of also carrying out the lunar measurements, currently possible at cm-level.
The intent to return to the Moon was announced in January 2004. NASA is planning to return to the Moon in 2009 with Lunar Reconnaissance Orbiter, and later with robotic landers, and then with astronauts in the next decade. The return to the Moon provides an excellent opportunity for LLR, particularly if additional LLR instruments will be placed on the lunar surface at more widely separated locations. Due to their potential for new science investigations, these instruments are well justified.
### New retroreflector arrays
Future ranging devices on the Moon might take two forms, namely passive retroreflectors and active transponders. The advantages of passive retroreflector arrays are their long life and simplicity. The disadvantages are the weak returned signal and the spread of the reflected pulse arising from lunar librations, which can change the retroreflector orientation up to 10 degrees with respect to the direction to the Earth.
Range accuracy, data span, and distributions of Earth stations and retroreflectors are important considerations for future LLR data analysis. Improved range accuracy helps all solution parameters. Data span is more important for some parameters, e.g. change in $G$, precession and station motion, than others. New retroreflectors optimized for pulse spread, signal strength, and thermal effects will be valuable at any location on the Moon.
Overall, the separation of lunar 3-dimensional rotation, the rotation angle and orientation of the rotation axis (also called physical librations), and tidal displacements depends on a good geographical spread of retroreflector array positions. The current three Apollo sites plus the infrequently observed Lunokhod 2 are close to the minimum configuration for separation of rotation and tides, so that unexpected effects might go unrecognized. A wider spread of retroreflectors could improve the sensitivity to rotation/orientation angles and the dependent lunar science parameters by factors of up to 2.6 for longitude and up to 4 for pole orientation. The present configuration of retroreflector array locations is quite poor for measuring lunar tidal displacements. Tidal measurements would be very much improved by a retroreflector array near the center of the disk, longitude 0 and latitude 0, plus arrays further from the center than the Apollo sites.
Lunar retroreflectors are the most basic instruments, for which no power is needed. Deployment of new retroreflector arrays is very simple: deliver, unfold, point toward the Earth and walk away. Retroreflectors should be placed far enough away from astronaut/moonbase activity that they will not get contaminated by dust. One can think about the contribution of smaller retroreflector arrays for use on automated spacecraft and larger ones for manned missions. One could also benefit from co-locating passive arrays and active transponders and use a few LLR capable stations ranging retroreflectors to calibrate the delay vs. temperature response of the transponders (with their more widely observable strong signal).
### Opportunity for laser transponders
LLR is one of the most modern and exotic observational disciplines within astrometry, being used routinely for a host of fundamental astronomical and astrophysical studies. However, even after more than 30 years of routine observational operation, LLR remains a non-trivial, sophisticated, highly technical, and remarkably challenging task. Signal loss, proportional to the inverse 4th power of the Earth-Moon distance, but also the result of optical and electronic inefficiencies in equipment, array orientation, and heating, still requires that one observe mostly single photoelectron events. Raw timing precision is some tens of picoseconds with the out-and-back range accuracy being approximately an order of magnitude larger. Presently, we are down to sub-cm lunar ranging accuracies. In this day of routine SLR operations, it is a sobering fact to realize that ranging to the Moon is many orders of magnitude harder than to an Earth-orbiting spacecraft. Laser transponders may help to solve this problem. Simple time-of-flight laser transponders offer a unique opportunity to overcome the problems above. Although there are great opportunities for scientific advances provided by these instruments, there are also design challenges as transponders require power, precise pointing, and thermal stability.
Active laser transponders on the lunar surface are attractive because of the strong return and insensitivity to lunar orientation effects. A strong return would allow artificial satellite ranging stations to range the Moon. However, transponders require development: optical transponders detect a laser pulse and fire a return pulse back toward the Earth.[@Degnan_1993] They give a much brighter return signal accessible to more stations on Earth. Active transponders would require power and would have more limited lifetimes than passive reflectors. Transponders might have internal electronic delays that would need to be calibrated or estimated, since if these delays were temperature sensitive that would correlate with the SEP test. Transponders can also be used to good effect in asynchronous mode,[@Degnan_2002; @Degnan_2006] wherein the received pulse train is not related to the transmitted pulse train, but the transponder unit records the temporal offsets between the two signals. The LLR experience can help determine the optimal location on the Moon for these devices.
In addition to their strong return signals and insensitivity to lunar orientation effects, laser transponders are also attractive due to their potential to become an increasingly important part of space exploration efforts. Laser transponders on the Moon can be a prototype demonstration for later laser ranging to Mars and other celestial bodies to give strong science returns in areas similar to those investigated with LLR. A lunar installation would provide valuable operational experience.
Summary {#sec:sum}
=======
In this paper we considered the LLR tests of the equivalence principle (EP) performed with the Earth and Moon. If the ratio of gravitational mass to inertial mass is not constant, then there would be profound consequences for gravitation. Such a violation of the EP would affect how bodies move under the influence of gravity. The EP is not violated for Einstein’s general theory of relativity, but violations are expected for many alternative theories of gravitation. Consequently, tests of the EP are important to the search for a new theory of gravity.
We considered the EP in its two forms (Sec. \[sec:ep\]); the weak equivalence principle (WEP) is sensitive to composition while the strong equivalence principle (SEP) considers possible sensitivity to the gravitational energy of a body. The main sensitivity of the lunar orbit to the equivalence principle comes from the acceleration of the Earth and Moon by the Sun. Any difference in those accelerations due to a failure of the equivalence principle causes an anomalous term in the lunar range with the 29.53 d synodic period. The amplitude would be proportional to the difference in the gravitational to inertial mass ratios for Earth and Moon. Thus, lunar laser ranging is sensitive to a failure of the equivalence principle due to either the WEP or the SEP. In the case of the SEP, any violation of the equivalence principle can be related to a linear combination of the parametrized post-Newtonian parameters $\beta$ and $\gamma$.
We also discussed the data and observational influences on its distribution (Sec. \[sec:data\]). The evolution of the data from decimeter to centimeter quality fits is illustrated. The LLR data set shows a variety of selection effects which influence the data distribution. Important influences include phase of the Moon, season, distance, time of day, elevation in the sky, and declination. For the LLR-enabled EP tests, selection with phase of the Moon is an important factor.
An accurate model and analysis effort is needed to exploit the lunar laser range data to its full capability. The model is the basis for the computer code that processes the range data (Sec. \[sec:model\]). Further modeling efforts will be necessary to process range data of millimeter quality. Two small effects for future modeling, thermal expansion and solar radiation pressure, are briefly discussed.
Solutions for any EP violation are given in Section \[sec:data\_analysis\]. Several approaches to the solutions are used as checks. The EP solution parameter can be either a ratio of gravitational to inertial masses or a coefficient of a synodic term in the range equation. The results are compatible in value and uncertainty. Because $GM_{\tt Earth+Moon}$ correlates with the EP due to lunar phase selection effects, solutions are also made with this quantity fixed to a value based on non-LLR determinations of $GM_{\tt Earth}$ and the Earth/Moon mass ratio. In all, five EP solutions are presented in Table \[tab:1\] and four are carried forward into Table \[tab:2\]. As a final check, spectra of the post-fit residuals from a solution without any EP solution parameter are examined for evidence of any violation of the EP. No such signature is evident. The analysis of the LLR data does not show significant evidence for a violation of the EP compared to its uncertainty. The final result for $[(M_G/M_I)_E -(M_G/M_I)_M]_{EP}$ is $(-1.0 \pm 1.4) \times 10^{-13}$.
To gain insight into the lunar orbit and the solution for the EP, short trigonometric series expansions are given for the lunar orbit and orientation which are appropriate for a range expansion. This is used to show how the data selection with lunar phase correlates the EP solution parameter with $GM_{\tt Earth+Moon}$. To separate these and other relevant parameters, one needs a good distribution of observations with lunar phase, orbital mean anomaly and argument of latitude, and, as seen from Earth, hour angle and declination.
The result for the SEP is derived (subsection \[sec:sep\_solution\]) from the total value determined by LLR by subtracting the laboratory result for the WEP determined at the University of Washington. The Moon has a small core while the Earth has a large iron rich core. Both have silicate mantles. The WEP sensitivity of the Moon depends most strongly on the difference in iron content between the two bodies. The SEP result is $[(M_G/M_I)_E -(M_G/M_I)_M]_{\tt SEP} = (-2.0 \pm 2.0) \times 10^{-13}$, which we do not consider to be a significant difference from the zero of general relativity.
The SEP test can be related to the parametrized post-Newtonian (PPN) parameters $\beta$ and $\gamma$ (subsection \[sec:beta\]). For conservative theories of relativity, one gets $4\beta - \gamma - 3 = (4.4\pm4.5)\times 10^{-4}$. The Cassini spacecraft result for $\gamma$ allows a value for $\beta$ to be extracted. That result is $\beta - 1 = (1.2 \pm 1.1) \times 10^{-4}$, which is the most accurate determination to date. Again, we do not consider this $\beta$ value to be a significant deviation from the unity of general relativity.
Finally, we discussed the efforts that are underway to extend the accuracies to millimeter levels (Sec. \[sec:ememrging\_oops\]). The improvement in the accuracy of LLR tests of gravitational physics expected from an extended data set from the existing stations, and from the new APOLLO instrument, will bring significant new insights into our understanding of the fundamental physical laws that govern the evolution of our universe. The scientific results are significant enough to justify the nearly 40 years of LLR research and technology development.
The lunar laser ranging results in this paper for the equivalence principle, strong equivalence principle, and PPN $\beta$ are consistent with the expectations of Einstein’s general theory of relativity. It is remarkable that general relativity has survived a century of testing and that the equivalence principle is intact after four centuries of scrutiny. Each significant new improvement in accuracy probes unknown territory, and that is the reason for continued tests of the equivalence principle.
Acknowledgments {#acknowledgments .unnumbered}
===============
We acknowledge and thank the staffs of the Observatoire de la Côte d’Azur, Haleakala, and University of Texas McDonald ranging stations. The analysis of the planetary data was performed by E. Myles Standish. The research described in this paper was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration.
[145]{}
[AFCRL\_1969]{} Air Force Cambridge Research Laboratories, Bull. Géodésique 94, 443-444 (1969).
[Abalakin-Kokurin-1981]{} Abalakin, V. K., Kokurin, Yu. L., “Optical detection and ranging of the moon.” Usp. Fiz. Nauk 134, 526-535 (1981).
[Adelberger\_etal\_1990a]{} Adelberger, E. G., Heckel, B. R., Smith, G., Su, Y., and Swanson, H. E.,“Eötvös experiments, lunar ranging, and the strong equivalence principle,” Nature 347, 261-263 (1990).
[Adelberger\_etal\_1990b]{} Adelberger, E. G., Stubbs, C. W., Heckel, B. R., Smith, G., Su, Y., Swanson, H. E., Smith, G., Gundlach, J. H., and Rogers, W. F., “Testing the equivalence principle in the field of the Earth: particle physics at masses below 1 µeV?” Phys. Rev. D, 42, 3267-3292 (1990).
[Adelberger\_2001]{} Adelberger, E. G., “New Tests of Einstein’s Equivalence Principle and Newton’s inverse-square law,” Class. Quantum Grav. 18, 2397-2405 (2001).
[Alley\_1972]{} Alley, C. O., “Story of the development of the Apollo 11 laser ranging retro-reflector experiment,” Adventures in Experimental Physics, ed. by B. Maglich, 132-149 (1972).
[Anderson\_etal\_1996]{} Anderson, J. D., Gross, M., Nordtvedt, K. L., and Turyshev, S. G., “The Solar Test of the Equivalence Principle,” Astrophys. Jour. 459, 365-370 (1996).
[Anderson\_Williams\_2001]{} Anderson, J. D., and Williams, J. G., “Long-Range Tests of the Equivalence Principle,” Class. Quantum Grav. 18, 2447-2456 (2001).
[Baessler\_etal\_1999]{} Bae[ß]{}ler, S., Heckel, B., Adelberger, E. G., Gundlach, J., Schmidt, U., and Swanson, E., “Improved Test of the Equivalence Principle for Gravitational Self-Energy,” Phys. Rev. Lett. 83, 3585-3588 (1999).
[Bertotti\_etal\_2003]{} Bertotti, B., Iess, L., and Tortora, P., “A test of general relativity using radio links with the Cassini spacecraft,” Nature 425, 374-376 (2003).
[Bender\_etal\_1973]{} Bender, P. L., Currie, D. C., Dicke, R. H., Eckhardt, D. H., Faller, J. E., Kaula, W. M., Mulholand, J. D., Plotkin, H. H., Poultney, S. K., Silverberg, E. C., Wilkinson, D. T., Williams, J. G., and Alley, C. O., “The Lunar Laser Ranging Experiment,” Science 182, 229-237 (1973).
[Bod\_etal\_1991]{} Bod, L., Fischbach, E. Marx, G. and Náray-Ziegler, M., “One Hundred Years of the Eötvös Experiment,” Acta Physica Hungarica 69, 335-355 (1991).
[Braginsky\_Panov\_1972]{} Braginsky, V. B., and Panov, V. I., “Verification of Equivalence Principle of Inertial and Gravitational Mass,” Zh. Eksp. Teor. Fiz. 61, 873-876 (1971), \[Sov. Phys. JETP 34, 463-466 (1972)\].
[Braginsky\_etal\_1992]{} Braginsky, V. B., Gurevich, A. V., and Zybin, K. P., “The influence of dark matter on the motion of planets and satellites in the solar system,” Phys. Lett. A 171, 275-277 (1992).
[Braginsky\_1994]{} Braginsky, V. B., “Experimental gravitation (what is possible and what is interesting to measure).” Class. Quantum Grav. 11, A1-A7 (1994).
[Calame\_etal\_1970]{} Calame, O., Fillol, M.-J., Guérault, G., Muller, R., Orszag, A., Pourny, J.-C., Rösch, J., and de Valence, Y., “Premiers échos lumineux sur la lune obtenus par le télémètre du Pic du Midi,” Comptes Rendus Acad. Sci. Paris, Ser. B 270, 1637-1640 (1970).
[Chandler\_etal\_1994]{} Chandler, J. F., Reasenberg, R. D., and Shapiro, I. I., “New results on the Principle of Equivalence,” Bull. Am. Astron. Soc. 26, 1019 (1994).
[Chapront-Touze\_Chapront\_1988]{} Chapront-Touzé, M., and Chapront, J., “ELP 2000-85: a semi-analytical lunar ephemeris adequate for historical times,” Astron. Astrophys. 190, 342-352 (1988).
[Chapront-Touze\_Chapront\_1991]{} Chapront-Touzé, M., and Chapront, J., Lunar Tables and Programs from 4000 B. C. to A. D. 8000 (Willmann-Bell, Richmond, 1991).
[Chapront\_etal\_2002]{} Chapront, J., Chapront-Touzé, M., and Francou, G., “A new determination of lunar orbital parameters, precession constant and tidal acceleration from LLR measurements,” Astron. Astrophys. 387, 700-709 (2002).
[Damour\_1996]{} Damour, T., “Testing the Equivalence Principle: why and how?” Class. Quantum Grav. 13, A33-A42 (1996).
[Damour\_2001]{} Damour, T., “Questioning the Equivalence Principle,” (2001) \[arXiv:gr-qc/0109063\].
[Damour\_Esposito-Farese\_1996a]{} Damour, T., and Esposito-Farèse, G., “Testing gravity to second post-Newtonian order: a field-theory approach,” Phys. Rev. D 53, 5541-5578 (1996a).
[Damour\_Esposito-Farese\_1996b]{} Damour, T., and Esposito-Farèse, G., “Tensor-scalar gravity and binary-pulsar experiments,” Phys. Rev. D, 54, 1474-1491 (1996b).
[Damour\_Nordtvedt\_1993a]{} Damour, T., and Nordtvedt, K., Jr., “General Relativity as a Cosmological Attractor of Tensor Scalar Theories,” Phys. Rev. Lett. 70, 2217-2219 (1993a).
[Damour\_Nordtvedt\_1993b]{} Damour, T., and Nordtvedt, K., Jr., “Tensor-scalar cosmological models and their relaxation toward general relativity,” Phys. Rev. D, 48, 3436-3450 (1993b).
[Damour\_Polyakov\_1994a]{} Damour, T., and Polyakov, A. M., “String Theory and Gravity,” General Relativity Gravit. 26, 1171-1176 (1994a).
[Damour\_Polyakov\_1994b]{} Damour, T., and Polyakov, A. M., “The string dilaton and a least coupling principle,” Nucl. Phys. B423, 532-558 (1994b).
[Damour\_etal\_2002a]{} Damour, T., Piazza, F., and Veneziano, G., “Runaway dilaton and equivalence principle violations,” Phys. Rev. Lett. 89, 081601 (2002a) \[arXiv:gr-qc/0204094\].
[Damour\_etal\_2002b]{} Damour, T., Piazza, F., and Veneziano, G., “Violations of the equivalence principle in a dilaton-runaway scenario,” Phys. Rev. D 66, 046007 (2002b) \[arXiv:hep-th/0205111\].
[Damour\_Schafer\_1991]{} Damour, T., and Schäfer, G., “New tests of the strong equivalence principle using Binary-Pulsar data,” Phys. Rev. Lett. 66, 2549-2552 (1991).
[Damour\_Vokrouhlicky\_1996a]{} Damour, T., and Vokrouhlicky, D., “Equivalence Principle and the Moon,” Phys. Rev. D 53, 4177-4201 (1996a).
[Damour\_Vokrouhlicky\_1996b]{} Damour, T., and Vokrouhlicky, D., “Testing for gravitationally preferred directions using the lunar orbit,” Phys. Rev. D 53, 6740-6740 (1996b).
[Degnan\_1985]{} Degnan, J. J., “Satellite Laser Ranging: Status and Future Prospects,” IEEE Trans. Geosci. and Rem. Sens., Vol. GE-23, 398-413 (1985).
[Degnan\_1993]{} Degnan, J. J., “Millimeter accuracy satellite laser ranging: a review,” Contributions of Space Geodesy to Geodynamics: Technology, Geodynamics Series, D.E. Smith and D.L. Turcotte (Eds.), AGU Geodynamics Series 25, 133-162 (1993).
[Degnan\_2002]{} Degnan, J. J., “Asynchronous Laser Transponders for Precise Interplanetary Ranging and Time Transfer,” Journal of Geodynamics (Special Issue on Laser Altimetry), 551-594, (2002).
[Degnan\_2006]{} J.J. Degnan, “Laser Transponders for High Accuracy Interplanetary Laser Ranging and Time Transfer”. In [*Lasers, Clocks, and Drag-Free: Exploration of Relativistic Gravity in Space*]{}, eds. H. Dittus, C. Lammerzahl, and S.G. Turyshev, pp. 231-242, (Springer, New York, 2006).
[Dickey\_Newhall\_Williams\_1989]{} Dickey, J. O., Newhall, X X, and Williams, J. G., “Investigating Relativity Using Lunar Laser Ranging: Geodetic Precession and the Nordtvedt Effect,” Adv. Space Res. 9(9), 75-78 (1989).
[Dickey\_etal\_1994]{} Dickey, J. O., Bender, P. L., Faller, J. E., Newhall, X X, Ricklefs, R. K., Shelus, P. J., Veillet, C., Whipple, A. L., Wiant, J. R., Williams, J. G., and Yoder, C. F., “Lunar Laser Ranging: A Continuing Legacy of the Apollo Program,” Science 265, 482-490 (1994).
[Eckhardt\_1990]{} Eckhardt, D. H., “Gravitational shielding,” Phys. Rev. D 42, 2144-2145 (1990).
[Eotvos\_1890]{} Eötvös, R. v., Mathematische und Naturwissenschaftliche Berichte aus Ungarn 8, 65 (1890).
[Eotvos\_etal\_1922]{} Eötvös, R. v., Pekár, D., Fekete, E., Annalen der Physik (Leipzig) 68, 11, 1922. English translation for the U. S. Department of Energy by J. Achzenter, M. Bickeböller, K. Bräuer, P. Buck, E. Fischbach, G. Lubeck, C. Talmadge, University of Washington preprint 40048-13-N6. - More complete English text reprinted earlier in Annales Universitatis Scientiarium Budapestiensis de Rolando Eötvös Nominate, Sectio Geologica 7, 111 (1963).
[Faller\_etal\_1969]{} Faller, J. E., Winer, I., Carrion, W., Johnson, T. S., Spadin, P., Robinson, L., Wampler, E. J., and Wieber, D., “Laser beam directed at the lunar retro-reflector array: observations of the first returns,” Science 166, 99-102 (1969).
[Ferrari\_etal\_1980]{} Ferrari, A. J., Sinclair, W. S., Sjogren, W. L., Williams, J. G. and Yoder, C. F., “Geophysical Parameters of the Earth-Moon System,” J. Geophys. Res. 85, 3939-3951 (1980).
[Flasar\_Birch\_1973]{} Flasar, F. M., and Birch, F., “Energetics of core formation: a correction,” J. Geophys. Res. 78, 6101-6103 (1973).
[Hood\_etal\_1999]{} Hood, L. L., Mitchell, D. L., Lin, R. P., Acuna, M. H., and Binder, A. B., “Initial measurements of the lunar induced magnetic dipole moment using Lunar Prospector magnetometer data,” Geophys. Res. Lett., 26, 2327-2330 (1999).
[Kokurin\_2003]{} Kokurin, Yu. I, “Lunar laser ranging: 40 years of research,” Quantum Electronics 33(1), 45-47 (2003).
[Konopliv\_etal\_1998]{} Konopliv, A. S., Binder, A. B., Hood, L. L., Kucinskas, A. B., Sjogren, W. L., and Williams, J. G., “Improved gravity field of the Moon from Lunar Prospector,” Science 281, 1476-1480 (1998).
[Konopliv\_etal\_2002]{} Konopliv, A. S., Miller, J. K., Owen, W. M., Yeomans, D. K., and Giorgini, J. D., “A Global Solution for the Gravity Field, Rotation, Landmarks, and Ephemeris of Eros,” Icarus 160, 289–299 (2002).
[Kozai\_1972]{} Kozai, Y., “Lunar laser ranging experiments in Japan,” Space Research XII, 211-217, (1972).
[Kuskov\_Kronrod\_1998a]{} Kuskov, O. L., and Kronrod, V. A., “A model of the chemical differentiation of the Moon,” Petrology 6, 564-582 (1998a).
[Kuskov\_Kronrod\_1998b]{} Kuskov, O. L., and Kronrod, V. A., “Constitution of the Moon, 5, Constraints on composition, density, temperature, and radius of a core,” Phys. Earth Planet. Inter., 107, 285-306 (1998b).
[Larimer\_1986]{} Larimer, J. W., “Nebular chemistry and theories of lunar origin, in Origin of the Moon,” edited by W. K. Hartmann, R. J. Phillips, and G. J. Taylor, 145-171 (Lunar and Planet. Inst., Houston, Tex., 1986).
[Lorimer\_Freire\_2004]{} Lorimer, D. R., and Freire, P. C. C., “New limits on the strong equivalence principle from two long-period circular-orbit binary pulsars,” (2004) \[arXiv:astro-ph/0404270\].
[Luck\_etal\_1973]{} Luck, J., Miller, M. J., and Morgan, P. J., “The National Mapping Lunar Laser Program,” in proceedings of The Earth’s Gravitational Field and Secular Variations in Position, a conference held 26-30 November, 1973 at New South Wales, Sydney, Australia. Australian Academy of Science and the International Association of Geodesy, 413 (1973).
[Majorana\_1920]{} Majorana, Q., “On gravitation. Theoretical and experimental researches,” Philos. Mag. 39, 488-504 (1920).
[Marini\_Murray\_1973]{} Marini, J. W., Murray, C. W., Jr., “Correction of Laser Range Tracking Data for Atmospheric Refraction at Elevation Angles Above 10 Degrees,” NASA Technical Report, X-591-73-351 (1973).
[McCarthy\_Petit\_2003]{} McCarthy, D. D., and Petit, G. eds. “IERS Conventions (2003)” (2003). IERS Technical Note \#32. Frankfurt am Main: Verlag des Bundesamts für Kartographie und Geodäsie, 2004. 127 pp. Electronic version available at [http://www.iers.org/iers/products/conv/]{}
[Morgan\_King\_1982]{} Morgan, P., King, R. W., “Determination of coordinates for the Orroral Lunar Ranging Station, in High-precision earth rotation and earth-moon dynamics: Lunar distances and related observations” Proceedings of the Sixty-third Colloquium, Grasse, Alpes-Maritimes, France, May 22-27, 1981. (A82-47176 24-89) Dordrecht, D. Reidel Publishing Co., 305-311 (1982).
[Mueller\_etal\_1996b]{} Müller, J., Schneider, M., Soffel, M., and Ruder, H., “Determination of Relativistic Quantities by Analyzing Lunar Laser Ranging Data,” In proceedings of “the Seventh Marcel Grossmann Meeting,” World Scientific Publ., eds. R. T. Jantzen, G. M. Keiser, and R. Ruffini, 1517 (1996).
[Mueller\_Nordtvedt\_1998]{} Müller, J., and Nordtvedt, K., Jr., “Lunar laser ranging and the equivalence principle signal,” Phys. Rev. D 58, 62001/1-13 (1998).
[Murphy\_etal\_2000]{} Murphy, T. M., Jr., Strasburg, J. D., Stubbs, C. W., Adelberger, E. G., Angle, J., Nordtvedt, K., Williams, J. G., Dickey, J. O., and Gillespie, B., “The Apache Point Observatory Lunar Laser-Ranging Operation (APOLLO),” Proceedings of 12th International Workshop on Laser, Ranging, Matera, Italy (November 2000)\
http://www.astro.washington.edu/tmurphy/apollo/matera.pdf
[Murphy\_etal\_2007]{} T. W. Murphy, Jr., E. L. Michelson, A. E. Orin, E. G. Adelberger, C. D. Hoyle, H. E. Swanson, C. W. Stubbs, J. E. Battat, “APOLLO: Next-Generation Lunar Laser Ranging”, Int. J. Mod. Phys. D 16(12a), 2127 (2007).
[Murphy\_etal\_2008]{} T. W. Murphy, Jr., E. G. Adelberger, J.B.R. Battat, L.N. Carey, C.D. Hoyle, P. LeBlanc, E.L. Michelsen, K. Nordtvedt, A.E. Orin, J.D. Strasburg, C.W. Stubbs, H.E. Swanson, E. Williams, “APOLLO: the Apache Point Observatory Lunar Laser-ranging Operation: Instrument Description and First Detections”, Publ. Astron. Soc. Pac. 120(863), 20-37 (2008).
[Nordtvedt\_1968a]{} Nordtvedt, K., Jr., “Equivalence Principle for Massive Bodies. I. Phenomenology,” Phys. Rev. 169, 1014-1016 (1968a).
[Nordtvedt\_1968b]{} Nordtvedt, K., Jr., “Equivalence Principle for Massive Bodies. II. Theory,” Phys. Rev. 169, 1017-1025 (1968b).
[Nordtvedt\_1968c]{} Nordtvedt, K., Jr., “Testing Relativity with Laser Ranging to the Moon,” Phys. Rev. 170, 1186-1187 (1968c).
[Nordtvedt\_1970]{} Nordtvedt, K., Jr., “Solar system Eötvös experiments,” Icarus 12, 91-100, (1970).
[Nordtvedt\_1991]{} Nordtvedt, K., Jr., “Lunar Laser Ranging Re-examined: The Non-Null Relativistic Contribution,” Phys. Rev. D 43, 3131-3135 (1991).
[Nordtvedt\_1994]{} Nordtvedt, K., Jr., “Cosmic Acceleration of the Earth and Moon by Dark-Matter,” Astroph. J. 437, 529-531 (1994).
[Nordtvedt\_1995]{} Nordtvedt, K., Jr., “The relativistic orbit observables in lunar laser ranging,” Icarus 114, 51-62 (1995).
[Nordtvedt\_1998]{} Nordtvedt, K., Jr., “Optimizing the observation schedule for tests of gravity in lunar laser ranging and similar experiments,” Class. Quantum Grav. 15, 3363-3381 (1998).
[Nordtvedt\_1999]{} Nordtvedt, K., Jr., “30 years of lunar laser ranging and the gravitational interaction,” Class. Quantum Grav. 16, A101-A112 (1999).
[Nordtvedt\_2003]{} Nordtvedt, K., Jr., “Lunar Laser Ranging - A Comprehensive Probe of Post-Newtonian Gravity,” (2003) \[arXiv:gr-qc/0301024\].
[Nordtvedt\_etal\_1995]{} Nordtvedt, K. L., Müller, J., and Soffel, M., “Cosmic Acceleration of the Earth and Moon by Dark-Matter,” Astron. Astrophysics 293, L73-L74 (1995).
[Nordtvedt\_Vokrouhlicky\_1997]{} Nordtvedt K., Jr., and Vokrouhlicky, D., “Recent Progress in Analytical Modeling of the Relativistic Effects in the Lunar Motion,” in ‘Dynamics and Astronomy of the Natural and Artificial Celestial Bodies’, eds: I.M. Wytrzysczcak, J. H. Lieske and R. A. Feldman (Kluwer Academic Publishers, Dordrecht), 205 (1997).
[Orellana\_Vucetich\_1988]{} Orellana, R. B., and Vucetich, H., “The principle of equivalence and the Trojan asteroids,” Astron. Astrophys. 200, 248-254 (1988).
[Orellana\_Vucetich\_1993]{} Orellana, R. B., and Vucetich, H., “The Nordtvedt Effect in the Trojan Asteroids,” Astron. Astrophys 273, 313-317 (1993).
[Roll\_etal\_1964]{} Roll, P. G., Krotkov, R., and Dicke, R. H., “The Equivalence Principle of Inertial and Gravitational Mass,” Ann. Phys. (N.Y.) 26, 442-517 (1964).
[Ries\_etal\_1992]{} Ries, J. C., Eanes, R. J., Shum, C. K., and Watkins, M. M., “Progress in the determination of the gravitational coefficient of the Earth,” Geophys. Res. Lett. 19, 529-531 (1992).
[Russell\_1921]{} Russell, H. N., “On Majorana’s theory of gravitation,” Astrophys. J. 54, 334-346 (1921).
[Samain\_etal\_1998]{} Samain, E., Mangin, J. F., Veillet, C., Torre, J. M., Fridelance, P., Chabaudie, J. E., Feraudy, D., Glentzlin, M., Pham Van, J., Furia, M., Journet, A., and Vigouroux, G., “Millimetric Lunar Laser Ranging at OCA (Observatoire de la Côte d’Azur),” Astron. Astrophys. Suppl. Ser. 130, 235-244 (1998).
[Singe\_1960]{} Synge, J. L., Relativity: the General Theory (Amsterdam: North-Holland, 1960).
[Smith\_etal\_1993]{} Smith, G., Adelberger, E. G., Heckel, B. R., Su, Y., “Test of the equivalence principle for ordinary matter falling toward dark matter,” Phys. Rev. Lett. 70, 123-126 (1993).
[Shapiro\_etal\_1976]{} Shapiro, I. I., Counselman, C. C., III, and King, R. W., “Verification of the Principle of Equivalence for Massive Bodies,” Phys. Rev. Lett. 36, 555-558 (1976).
[Shelus\_etal\_2003]{} Shelus, P., Ries, J. G., Wiant, J. R., Ricklefs, R. L., “McDonald Ranging: 30 Years and Still Going,” in Proc. of 13th International Workshop on Laser Ranging, October 7-11, 2002, Washington, D. C. (2003), http://cddisa.gsfc.nasa.gov/lw13/lw$\underline{ }$proceedings.html
[Standish\_Williams\_2005]{} Standish, E. M., and Williams, J. G., “Orbital Ephemerides of the Sun, Moon, and Planets,” Chapter 8 of the Explanatory Supplement to the American Ephemeris and Nautical Almanac, in press (2005).
[Standish\_1998]{} Standish, E. M., “Time scales in the JPL and CfA ephemerides,” Astron. Astrophys. 336, 381-384 (1998).
[Su\_etal\_1994]{} Su, Y., Heckel, B. R., Adelberger, E. G., Gundlach, J. H., Harris, M., Smith, G. L., and Swanson, H. E., “New tests of the universality of free fall,” Phys. Rev. D 50, 3614-3636 (1994).
[Tremaine\_1992]{} Tremaine, S., “The Dynamical Evidence for Dark Matter,” Physics Today 45, 28-36 (1992).
[Turyshev\_etal\_2004]{} Turyshev, S. G., Williams, J. G., Nordtvedt, K., Jr., Shao, M., Murphy, T. W., Jr., “35 Years of Testing Relativistic Gravity: Where do we go from here?”, in Proc. “302.WE-Heraeus-Seminar: Astrophysics, Clocks and Fundamental Constants, 16-18 June 2003. The Physikzentrum, Bad Honnef, Germany.” Springer Verlag, Lect. Notes Phys. 648, 301-320, (2004) \[arXiv:gr-qc/0311039\].
S. G. Turyshev, U. E. Israelsson, M. Shao, N. Yu, A. Kusenko, E. L. Wright, C.W.F. Everitt, M. Kasevich, J. A. Lipa, J. C. Mester, R. D. Reasenberg, R. L. Walsworth, N. Ashby, H. Gould, H. J. Paik, “Space-based research in fundamental physics and quantum technologies,” [*Inter. J. Modern Phys. D **16***]{}(12a), 1879-1925 (2007), arXiv:0711.0150 \[gr-qc\]
S. G. Turyshev and J. G. Williams, “Space-based tests of gravity with laser ranging,” [*Int. J. Mod. Phys. D **16***]{}(12a), 2165-2179 (2007) \[arXiv:gr-qc/0611095\]
Turyshev, S. G., “Experimental Tests of General Relativity,” [*Annu. Rev. Nucl. Part. Sci. **58***]{}, 207-248 (2008), arXiv:0806.1731 \[gr-qc\].
[Ulrich\_1982]{} Ulrich, R. K., “The Influence of Partial Ionization and Scattering States on the Solar Interior Structure,” Astrophys. J. 258, 404-413 (1982).
[Veillet\_etal\_1993]{} Veillet, C., J. F. Mangin, J. E. Chabaudie, C. Dumoulin, D. Feraudy, and J. M. Torre, “Lunar laser ranging at CERGA for the ruby period (1981-1986),” in Contributions of Space Geodesy to Geodynamics: Technology, AGU Geodynamics Series, 25, edited by D. E. Smith and D. L. Turcotte, 133-162 (1993).
[Vokrouhlicky\_1997]{} Vokrouhlicky, D., “A note on the solar radiation perturbations of lunar motion,” Icarus 126, 293-300 (1997).
[Wex\_2001]{} Wex, N., “Pulsar timing - strong gravity clock experiments,” in Gyros, Clocks, and Interferometers: Testing Relativistic Gravity in Space. C. Lämmerzahl et al., eds., Lecture Notes in Physics 562, 381-399 (Springer, Berlin 2001).
[Will\_1971]{} Will, C. M., “Theoretical Frameworks for Testing Relativistic Gravity. II. Parametrized Post-Newtonian Hydrodynamics, and the Nordtvedt Effect,” Astrophys. J., 163, 611-628 (1971).
[Will\_Nordtvedt\_1972]{} Will, C. M. and Nordtvedt, K., Jr., “Conservation Laws and Preferred Frames in Relativistic Gravity 1: Preferred-Frame Theories and an Extended PPN Formalism,” Astrophys. J. 177, 757-774 (1972).
[Will\_1990]{} Will, C. M., “General Relativity at 75: How Right was Einstein?”, Science 250, 770-771 (1990).
[Will\_1993]{} Will, C. M., Theory and Experiment in Gravitational Physics (Cambridge, 1993).
[Will\_2001]{} Will, C. M., “The Confrontation between General Relativity and Experiment,” Living Rev. Rel. 4, 4 (2001) \[arXiv:gr-qc/0103036\].
[Williams\_etal\_1976]{} Williams, J. G., Dicke, R. H., Bender, P. L., Alley, C. O., Carter, W. E., Currie, D. G., Eckhardt, D. H., Faller, J. E., Kaula, W. M., Mulholland, J. D., Plotkin, H. H., Poultney, S. K., Shelus, P. J., Silverberg, E. C., Sinclair, W., S., Slade, M. A., and Wilkinson, D. T., “New Test of the Equivalence Principle from Lunar Laser Ranging,” Phys. Rev. Lett. 36, 551-554 (1976).
[Williams\_etal\_1996a]{} Williams, J. G., Newhall, X X, and Dickey, J. O., “Relativity Parameters Determined from Lunar Laser Ranging,” Phys. Rev. D 53, 6730-6739 (1996).
[Williams\_etal\_1996b]{} Williams, J. G., Newhall, X X, and Dickey, J. O., “Relativity parameters determined from lunar laser ranging,” In the Proc. of “The Seventh Marcel Grossmann meeting on recent developments in theoretical and experimental general relativity, gravitation, and relativistic field theories,” Stanford University, 24-30 July 1994, ed. R. T. Jantzen and G. M. Keiser, World Scientific, Singapore, 1529-1530 (1996).
[Williams\_etal\_2001b]{} Williams, J. G., Boggs, D. H., Yoder, C. F., Ratcliff, J. T., and Dickey, J. O., “Lunar rotational dissipation in solid body and molten core,” J. Geophys. Res. Planets 106, 27933-27968 (2001).
[Williams\_etal\_2002]{} Williams, J. G., Boggs, D. H., Dickey, J. O., and Folkner, W. M., “Lunar Laser Tests of Gravitational Physics,” in proceedings of The Ninth Marcel Grossmann Meeting, World Scientific Publ., eds. V. G. Gurzadyan, R. T. Jantzen, and R. Ruffini, 1797-1798 (2002).
[Williams\_Dickey\_2003]{} Williams, J. G. and Dickey, J. O., “Lunar Geophysics, Geodesy, and Dynamics,” in proc. of 13th International Workshop on Laser Ranging, October 7-11, 2002, Washington, D. C. (2003), [http://cddisa.gsfc.nasa.gov/lw13/lw$\underline{ }$proceedings.html]{}
[Williams\_Turyshev\_Murphy\_2004]{} Williams, J. G., Turyshev, S. G., Murphy, T. W., Jr., “Improving LLR Tests of Gravitational Theory,” International Journal of Modern Physics D 13, 567-582 (2004) \[arXiv:gr-qc/0311021\].
[Williams\_Turyshev\_Boggs\_2004]{} Williams, J. G., Turyshev, S. G., Boggs, D. H., “Progress in Lunar Laser Ranging Tests of Relativistic Gravity,” Phys. Review Letters 93, 261101 (2004) \[arXiv:gr-qc/0411113\].
[Williams\_etal\_2005]{} Williams, J. G., Turyshev, S. G., Boggs, D. H., and Ratcliff, J.T., “Lunar Laser Ranging Science: Gravitational Physics and Lunar Interior and Geodesy,” [Adv. Space Res. **37**]{}(1), 67-71 (2006), arXiv:gr-qc/0412049.
[Williams\_2005]{} Williams, J. G., “Solar System Tides - Formulation and Application to the Moon,” in preparation (2009).
[^1]: The Lick Observatory website: [http://www.ucolick.org/]{}
[^2]: The Orroral Observatory website: [http://www.ga.gov.au/nmd/geodesy/slr/index.htm]{}
[^3]: The McDonald Observatory website: [http://www.csr.utexas.edu/mlrs/]{}
[^4]: The Observatoire de la Côte d’Azur website: [http://www.obs-nice.fr/]{}
[^5]: The Haleakala Observatory website: [http://koa.ifa.hawaii.edu/Lure/]{}
[^6]: The Wettzell Observatory website: [http://www.wettzell.ifag.de/]{}
[^7]: The Matera Observatory: [http://www.asi.it/html/eng/asicgs/geodynamics/mlro.html]{}
[^8]: International Laser Ranging Service (ILRS) website at [http://ilrs.gsfc.nasa.gov/index.html]{}
---
abstract: 'Ion mobility and ionic conductance in nanodevices are known to deviate from bulk behavior, a phenomenon often attributed to surface effects. We demonstrate that dielectric mismatch between the electrolyte and the surface can qualitatively alter ionic transport in a counterintuitive manner. Instead of following the polarization-induced modulation of the concentration profile, mobility is enhanced or reduced by changes in the ionic atmosphere near the interface and affected by a polarization force parallel to the surface. In addition to revealing this mechanism, we explore the effect of salt concentration and electrostatic coupling.'
author:
- 'Hanne S. Antila'
- Erik Luijten
date: 'November 29, 2017'
title: Dielectric Modulation of Ion Transport near Interfaces
---
Understanding ion mobility and ionic conductance is of fundamental importance in fields ranging from biology to energy conversion, describing phenomena as diverse as ion channels [@gouaux05] and fuel cells [@kreuer14]. The foundation for this understanding was laid more than a century ago by Kohlrausch [@kohlrausch1879; @kohlrausch07; @atkins2006], who observed that the molar conductivity $\Lambda_{m}$ of electrolytes decreases with increasing salt concentration $c$, $\Lambda_{m}=\Lambda_{0}-A\sqrt{c}$. Debye and H[ü]{}ckel [@debye23b; @debye27], and Onsager [@onsager27; @onsager31] connected this concentration dependence to the counterion atmosphere surrounding moving ions. This atmosphere, which has a size related to the concentration via the Debye length $\lambda_{\rm D} \propto 1/\sqrt{c}$, exerts two types of forces on the central ion, the electrophoretic force and the relaxation force. The electrophoretic force arises from the modification of the viscous drag on the central ion by solvent molecules that are pulled in the opposite direction by the counterions. The relaxation force is a consequence of the asymmetry of the ionic atmosphere under a driving field. The atmosphere around a moving ion is continuously being rebuilt—a process that takes finite time and causes the center of mass of the atmosphere to lag behind the central ion. Due to this asymmetry, the ion cloud exerts a Coulombic force on the central ion, slowing down its motion.
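As a point of reference for these scales (an illustrative sketch using standard textbook constants, not data from this Letter), the short Python snippet below evaluates the Debye length of a symmetric 1:1 aqueous electrolyte at room temperature and shows the $\lambda_{\rm D} \propto 1/\sqrt{c}$ dependence invoked above.

```python
# Illustrative sketch: the Debye screening length lambda_D that sets the size
# of the ionic atmosphere, showing the ~ 1/sqrt(c) concentration dependence.
from math import sqrt

eps0, kB, e, NA = 8.854e-12, 1.381e-23, 1.602e-19, 6.022e23

def debye_length(c_molar, eps_r=78.5, T=298.0, z=1):
    """Debye length (m) of a symmetric z:z electrolyte at molarity c_molar."""
    n = c_molar * 1000.0 * NA                      # ions of each species per m^3
    return sqrt(eps_r * eps0 * kB * T / (2.0 * n * (z * e) ** 2))

for c in (0.01, 0.1, 1.0):
    print(f"c = {c:5.2f} M  ->  lambda_D = {debye_length(c) * 1e9:.2f} nm")
```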
The original derivation of Debye, H[ü]{}ckel, and Onsager relies on various simplifications. Subsequent conductance theories [@fuoss57; @fuoss78] more accurately take into account non-idealities, such as ion association, as well as the coupling between relaxation and electrophoretic effects, notably the modification of the latter by the asymmetry of the ionic atmosphere. These corrections result in higher-order terms in the concentration and extend the validity of the theory to a wider concentration range. Nevertheless, the effect of concentration on molar conductivity remains qualitatively unchanged, namely that ion mobility decreases as concentration increases.
Under nanoscale confinement, ion mobilities [@duan10] and conductances [@stein04] are known to deviate from bulk-like behavior. Moreover, such devices also exhibit other special transport properties, e.g., ion selectivity [@nishizawa95; @cervera06] and rectification [@cervera06]. These deviations from bulk behavior are often attributed to surface effects and to the high surface-to-volume ratio characteristic of nanodevices. For example, net positive surface charge will attract excess negative ions into a nanopore. At low concentrations, this will enhance the conductance compared to the bulk [@stein04]. Conversely, surface charge has been predicted to increase water viscosity and thereby decrease ion mobility near the surface [@qiao05b], where the specific ion mobility depends on the sign of the surface charge [@qiao05a] and on ionic characteristics.
Yet another effect concerns the permittivity. Materials used in synthetic nanoscale devices range from dielectric to metallic, so that the surface polarization induced at the fluid–solid interface may influence ion transport. In addition to dielectric exclusion [@buyukdagli11] of ions from nanopores, the dielectric properties of a pore have been predicted to enhance ion selectivity [@boda06]. Intriguingly, recent calculations [@zhang11] have raised the possibility that the permittivity of the pore surface can be used to tune ionic rectification in conical nanopores. Thus, along with surface charge and pore size, permittivity potentially provides an additional parameter for achieving a high degree of control over ion movement. Nevertheless, to the best of our knowledge, hitherto all studies of the effect of surface polarization on ionic conductivity have concentrated on the distribution, number, or type of ions in the pore [@mamonov03; @zhang11; @tagliazucchi13; @balme15], whereas their mobility, an essential factor in the overall ionic conductivity, has been assumed to be independent of the dielectric properties of the nanodevice surface.
Here, we address this knowledge gap and demonstrate that the mobility of ions near a surface indeed can be controlled by tuning the dielectric mismatch between the wall and the solvent. We relate the origins of this effect to modifications the surface polarization induces in the counterion atmosphere, and in the related relaxation force. Molecular dynamics (MD) simulations permit a microscopic view of ion mobility and counterion clouds as a function of ion distance to the interface. To include fluctuation and correlation effects all ions are treated explicitly, whereas both the solvent and the surface are modeled as dielectric continua. The use of a coarse-grained model allows us to incorporate dielectric effects into the simulations, and to track the movement of ions for long enough times ($10^{9}$ simulation steps, corresponding to more than 5 ms) to allow reliable extraction of the mobility and corresponding forces.
![Schematic of the simulation system and the forces exerted on an ion accompanied by the mobilities \[unit $\sigma^{2}e/(k_{\rm B}T\tau)$\] and ion concentrations \[unit $\sigma^{-3}$\] without dielectric mismatch, i.e., $\varepsilon_{1}=\varepsilon_{2}$. a) Simulation set-up. Ions are confined between two plates and an electric field is applied parallel to the surface. Relaxation force, collision force, and friction force oppose the ion motion. b) Mobility $\mu$ of ions in the absence of dielectric mismatch ($\Delta = 0$) as a function of distance $z$ to the lower surface. Colors denote different ion concentrations $c$. c) Ion concentration profiles for the same systems as in panel b. d) Forces on an ion residing near an interface without dielectric mismatch. The surface distorts the counterion atmosphere, thereby reducing the forces that slow down the ion.[]{data-label="fig:schematic"}](schematic_mobility)
We adopt the restricted primitive model [@luijten02a], modeling ions as monovalent ($q=\pm e$), purely repulsive shifted-truncated Lennard-Jones spheres of mass $m$ and diameter $\sigma$, which we choose as our unit of length. For hydrated ions, $\sigma$ is approximately 0.7 nm. We employ a parallel-plate geometry of width and length $L_x = L_y = 15\sigma$, periodically replicated in both dimensions. The top and bottom surfaces are separated by $L_z = 15\sigma$. The upper surface has the same dielectric constant as the solvent, $\varepsilon_1$, whereas the lower surface has dielectric permittivity $\varepsilon_2$. This geometry makes it possible to account for the effects of the complex surface polarization patterns via image charges [@neumann1883]. To accommodate the image charges, the height of the actual simulation cell is doubled, and all electrostatic interactions are computed via 3D PPPM with accuracy $10^{-5}$, and a slab correction accompanied by $60\sigma$-thick vacuum layer. We use the dielectric mismatch $\Delta = (\varepsilon_{1}-\varepsilon_{2})/(\varepsilon_{1}+\varepsilon_{2})$ to describe the magnitude and sign of the image charge: $\Delta = 1$ for a low-permittivity surface that results in repulsive surface polarization, $\Delta = 0$ for an interface with no dielectric mismatch, and $\Delta = -1$ for a high-permittivity surface with attractive surface polarization. We use a timestep of 0.01$\tau$, where $\tau=\sqrt{m\sigma^{2}/\varepsilon_{LJ}}$, $\varepsilon_{\rm LJ}=k_{\rm B}T/1.2$ is the Lennard-Jones coupling constant, $T$ denotes the absolute temperature, and $k_{\rm B}$ is Boltzmann’s constant. Following the convention in polyelectrolyte simulations [@hsiao06], we employ an enhanced Bjerrum length $l_{\rm B}=3\sigma$. Whereas this enhances the electrostatic effects, we will demonstrate that our findings hold at lower coupling strength as well. Unless stated otherwise, the ion concentration is $c = 0.02\sigma^{-3}$ (corresponding to $0.1$M).
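The image-charge construction that represents the surface polarization can be sketched as follows; the snippet is illustrative only (a single charge above a single planar interface), and the function name is ours, not part of the actual simulation code.

```python
# Minimal sketch (not the actual simulation code) of the image-charge
# construction for a single planar interface at z = 0: a charge q at height
# z > 0 in the solvent (eps1) induces surface polarization that, on the
# solvent side, is equivalent to an image charge q' = Delta*q at -z, with
# Delta = (eps1 - eps2) / (eps1 + eps2).

def image_charge(q, z, eps1, eps2):
    """Return (image charge, image position) for a charge q at height z > 0."""
    delta = (eps1 - eps2) / (eps1 + eps2)
    return delta * q, -z

# Delta -> +1 for a low-permittivity wall (repulsive image),
# Delta ->  0 for a matched interface, Delta -> -1 for a metallic-like wall.
for eps2 in (2.0, 80.0, 1.0e6):
    q_img, z_img = image_charge(+1.0, 1.5, eps1=80.0, eps2=eps2)
    print(f"eps2 = {eps2:9.1f}:  q' = {q_img:+.2f} at z = {z_img:+.2f}")
```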
The simulation setup and the forces affecting the movement of ions are depicted in Fig. \[fig:schematic\]a. Ions are driven by an external field $E=0.4k_{\rm B}T/(e\sigma)$ in the $x$-direction. This field strength lies within the linear response regime, and is counteracted by the relaxation force, frictional forces, and the collision force. The friction force (viscous drag) exerted by the solvent on individual ions is captured by a Langevin thermostat, applied in the system with damping constant $\gamma=m\tau^{-1}$ [@note-units]. The short-range drag arising from interacting hydration shells of ions that pass each other is represented by Lennard-Jones collisions between ions. Since our simulations do not incorporate hydrodynamics, the ions do not experience a long-range electrophoretic force. However, as this force has the same functional dependence on salt concentration as the explicitly included relaxation force [@onsager31], this does not qualitatively affect Kohlrausch’s law. Moreover, as we will discuss below, our findings regarding the role of surface permittivity are equally unaffected. The ion mobility \[$\mu=\langle v\rangle/(Eq)$\] is determined by the balance of these force components, and obtained by averaging the instantaneous velocity $\langle v \rangle$ of ions.
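For orientation, the toy Langevin sketch below (our illustration, not the interacting simulation described above) drags a single free ion through the implicit solvent; in the absence of a counterion atmosphere the measured mobility approaches the free-draining value set by the friction alone, so reductions relative to that value can be attributed to the relaxation and collision forces.

```python
# Toy single-ion Langevin sketch (illustrative only): estimate the mobility
# mu = <v_x>/(E q) of a free ion dragged by a field E against friction gamma
# (friction force -gamma*v, with gamma in units of m/tau). Without a
# counterion atmosphere mu should approach the free-draining value 1/gamma.
import random, math

m, gamma, kT = 1.0, 1.0, 1.0              # reduced units
q, E, dt, steps = 1.0, 0.4, 0.01, 200_000

v, vsum = 0.0, 0.0
noise_amp = math.sqrt(2.0 * gamma * kT / dt)   # fluctuation-dissipation theorem
for _ in range(steps):
    force = q * E - gamma * v + noise_amp * random.gauss(0.0, 1.0)
    v += force / m * dt
    vsum += v
mu = (vsum / steps) / (E * q)
print(f"mobility ~ {mu:.2f}   (free-ion limit 1/gamma = {1.0 / gamma:.2f})")
```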
To establish a reference system, we first explore ion mobilities (Fig. \[fig:schematic\]b) and the underlying ion concentration profiles (Fig. \[fig:schematic\]c) in a channel without dielectric mismatch. As expected, the ion mobility decreases as concentration increases, in qualitative agreement with Kohlrausch’s law. However, the mobility profiles are not uniform, displaying an increase in the ion mobility near the surfaces for all concentrations (we examined $c \leq 0.1\sigma^{-3}$). Indeed, this mobility increase reflects the important role of the counterion atmosphere in ion conductivity. The presence of the wall perturbs the ion cloud and leads to a decrease both in the electrostatic relaxation force and in ion–ion collisions (Fig. \[fig:schematic\]d), which in turn increases the mobility near the interface. Even though this effect can readily be observed in a simple MD simulation, we are unaware of previous reports of it.
![Effect of dielectric mismatch $\Delta$ on ion distributions and mobilities, at a bulk concentration $c = 0.020\sigma^{-3}$. (a) Ion concentrations as a function of distance to the bottom wall for different values of $\Delta$. (b) Corresponding mobility of ions (colors as in panel a). (c–e) 2D charge densities around a negative ion within a cutoff of 2$\sigma$ from the surface: c) $\Delta=0$, d) $\Delta=-1$, e) $\Delta=1$. The small black circle marks the central ion position, the green arrow shows the direction of movement, and the red star labels the center of the charge distribution. Contours at 25% and 75% of the maximum charge density are shown to illustrate the shape of the ion atmosphere. The charge of the central ion is not included in the visualization; the depletion of charge around the central ion is caused by the $z$-cutoff. Insets are schematics clarifying the image charge effect on ion–ion interactions. Units as in Fig. \[fig:schematic\].[]{data-label="fig:ionatmosphere"}](mobility_density_maps)
The situation becomes more complex when surface polarization is taken into account. Figure \[fig:ionatmosphere\]a shows the expected build-up of ions near an attractive, high-permittivity surface ($\Delta = -1$) and depletion near a low-permittivity material ($\Delta = 1$, Fig. \[fig:ionatmosphere\]a). Based on Kohlrausch’s law, and our observations in Fig. \[fig:schematic\]b,c, the mobilities should consequently decrease near a surface with $\Delta = -1$ and increase near a surface with $\Delta = 1$. Surprisingly, we observe the opposite. Figure \[fig:ionatmosphere\]b shows that near a high-permittivity surface the interfacial mobility is enhanced compared to a system without dielectric mismatch ($\Delta = 0$), whereas a surface with low dielectric constant decreases the mobility.
We hypothesize that this remarkable behavior results from changes in the ionic atmosphere. Indeed, in bulk electrolytes such changes are known to affect ion mobilities. For example, in the Wien effect [@wien28; @onsager57; @luijten13] electrolyte mobility increases in high fields because the fast movement of the ions prevents the formation of the counterion cloud. Similarly, the Debye–Falkenhagen effect [@debyefalk28; @falkenhagen1934] describes how in high-frequency AC fields the fast, continuous switching of the direction of the ion movement suppresses the asymmetry of the ionic atmosphere, so that the relaxation force vanishes.
Accordingly, we examine the effect of surface polarization on counterion atmospheres surrounding ions in the interfacial region. Figure \[fig:ionatmosphere\]c depicts the shape and net charge density of the ionic cloud in the absence of surface polarization. It confirms the distortion of the cloud in the direction of motion, with its center of mass located *behind* the central ion. Attractive polarization ($\Delta = -1$, Fig. \[fig:ionatmosphere\]d) weakens the overall counterion cloud and simultaneously suppresses its asymmetry. This in turn diminishes the relaxation force, resulting in the speed-up observed in Fig. \[fig:ionatmosphere\]b. The inset illustrates the underlying mechanism, which is phrased most concisely in terms of the image charges that represent the induced surface polarization patterns. Counterions in the cloud are repelled by the image of the central ion. This weakens the ion–ion interactions and thereby not only diminishes the net charge of the ionic cloud, but also makes it more symmetric, since the range of the ionic atmosphere is connected to the relaxation time needed to rebuild it.
Conversely, repulsive surface polarization ($\Delta = 1$, Fig. \[fig:ionatmosphere\]e) enhances both the intensity and asymmetry of the ionic atmosphere. This leads to an increase in the relaxation force, and to a slow-down of ions close to the interface, supporting the mobility profile observed in Fig. \[fig:ionatmosphere\]b. The interaction between ions and their own images is now repulsive, whereas the secondary interaction between an ion and the image of its countercharge is attractive. This leads to enhanced ion–ion attraction and to the elevated net charge density around an ion residing near a low-dielectric surface.
The modulation of ion–ion interactions by polarizable surfaces [@nadler03] and the consequent changes in ionic atmosphere [@buyukdagli11; @zwanikken13] near interfaces have been predicted before. Experimental support for the weakening of ion–ion interactions near a high-permittivity material is provided by the observation of enhanced dissociation of a weak electrolyte, leading to more free charge carriers and an increase in conductivity [@korobeynikov05]. However, to the best of our knowledge, the modulation of ion mobilities by polarizable interfaces through changes in the ionic atmosphere has not been reported before.
An important advantage offered by particle-based modeling is that it permits examination of the individual contributions to the forces exerted on ions near the interface. Figure \[fig:forces\]a presents the total (i.e., arising from ionic as well as induced charges) Coulombic force on ions as a function of distance to the channel wall. As predicted, for attractive polarization the magnitude of the relaxation force decreases near the wall, whereas for repulsive polarization the magnitude of this force increases compared to the case without dielectric mismatch.
![Forces (in the direction of motion; unit $k_{\rm B}T/\sigma$) exerted on the ions for the systems of Fig. \[fig:ionatmosphere\]. (a) Total relaxation force as a function of distance to the channel wall. (b) Surface polarization contribution to the relaxation force (SPF, see main text). (c) Collision force. (d,e) Schematic depiction of the effect of image charges on the relaxation force, and the resulting SPF component parallel to the surface for dielectric mismatch $\Delta=-1$ (d) and $\Delta=1$ (e).[]{data-label="fig:forces"}](forces)
Any asymmetry in the ionic atmosphere will be reflected in the surface polarization charge. Thus, an interesting secondary effect arises, as this surface polarization will also contribute to the relaxation force. This contribution, which we denote the *surface polarization force* (SPF), acts on ions near the wall and can be isolated in the simulations. Due to the asymmetry of the ion cloud, the SPF has a nonzero component parallel to the surface. Figure \[fig:forces\]b shows that for $\Delta = -1$ the SPF diminishes the total relaxation force, whereas for $\Delta = 1$ it provides an enhancement. The reason for this is clarified by the schematics in Fig. \[fig:forces\]d,e. For $\Delta = -1$ (Fig. \[fig:forces\]d) the image cloud carries a charge opposite to that of the ionic atmosphere, thus causing an SPF in the direction of ion movement. For $\Delta = 1$ (Fig. \[fig:forces\]e) the ion cloud and its image carry the same charge, so that the SPF opposes the ionic motion. We observe that the SPF contribution to the total relaxation force is considerably smaller for attractive surface polarization than for the repulsive case, reflecting the weaker and less asymmetric cloud in the first system. Thus, the effect of surface polarization on the relaxation force, and consequently on the ion mobility, is twofold. First, it modifies the ion atmosphere; second, it exerts a surface polarization force. Both of these effects diminish the relaxation force when $\Delta = -1$ and enhance it when $\Delta = 1$.
Lastly, the distance dependence of the collision force opposing the ion movement (Fig. \[fig:forces\]c) reflects the concentration profile, increasing as more particles reside near the wall. However, as this force has a weaker dependence on dielectric mismatch, the response of the relaxation force dominates, giving rise to the counterintuitive behavior of the mobility in Fig. \[fig:ionatmosphere\]a,b.
![Average ion mobilities as a function of Bjerrum length and ion concentration in the solution. The average is taken over all ions in the channel. Simulated values are marked with crosses; the color scheme results from 2D interpolation. a) Mobility of ions in the absence of a dielectric mismatch, $\Delta = 0$. (b,c) Absolute deviation in mobility compared to the $\Delta = 0$ situation for attractive polarization, $\Delta = -1$ (b) and repulsive polarization, $\Delta = 1$ (c). Units as in Fig. \[fig:schematic\].[]{data-label="fig:2Dspeeds"}](mobility_heatmap)
The observations presented here depend on the global electrolyte concentration and on the strength of the electrostatic coupling (expressed in terms of the Bjerrum length $l_{\rm B}\propto(T\varepsilon_{1})^{-1}$), as those parameters affect both bulk ion mobility and the screening of the surface polarization. In Fig. \[fig:2Dspeeds\] we explore these dependencies. As a baseline we employ the system without dielectric mismatch (Fig. \[fig:2Dspeeds\]a), which confirms that the mobility decreases with increasing concentration and increases with decreasing $l_{\rm B}$, as expected [@onsager31; @fuoss57; @fuoss78]. Figures \[fig:2Dspeeds\]b,c show the absolute deviations compared to this reference system for attractive and repulsive surface polarization. We note that the effects of positive and negative dielectric mismatch on ion mobility differ in magnitude. To emulate an experimental set-up, the mobility in Fig. \[fig:2Dspeeds\] is determined as an average across the entire channel. Thus, the suppressed electrolyte concentration near low-permittivity surfaces (Fig. \[fig:2Dspeeds\]c) diminishes the influence of reduced ion mobility on the observed average mobility.
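For reference, the Bjerrum length entering this discussion is $l_{\rm B}=e^{2}/(4\pi\varepsilon_{0}\varepsilon_{1}k_{\rm B}T)$. The short check below (ours, using standard constants) shows that ordinary water at room temperature corresponds to $l_{\rm B}\approx0.7$ nm, i.e., about $1\sigma$ for the ion size assumed here, so the simulated value $l_{\rm B}=3\sigma$ indeed represents an enhanced coupling.

```python
import math

E_CHARGE = 1.602176634e-19   # elementary charge [C]
EPS0 = 8.8541878128e-12      # vacuum permittivity [F/m]
KB = 1.380649e-23            # Boltzmann constant [J/K]
SIGMA_M = 0.7e-9             # hydrated-ion diameter assumed in the text [m]

def bjerrum_length(eps_r, temperature):
    """Distance at which two unit charges in a medium of relative permittivity eps_r
    interact with energy k_B T."""
    return E_CHARGE**2 / (4.0 * math.pi * EPS0 * eps_r * KB * temperature)

lB_water = bjerrum_length(78.5, 298.0)
print(lB_water * 1e9)        # ~0.71 nm for bulk water at room temperature
print(lB_water / SIGMA_M)    # ~1 sigma, versus the enhanced l_B = 3 sigma used in the simulations
```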
As the Bjerrum length is lowered, significant mobility changes become confined to lower concentrations. The lowest concentration studied here is $0.02\sigma^{-3}$, corresponding to $0.1$M, i.e., comparable to physiological salt concentrations. If concentrations are reduced further, the effect of surface polarization is enhanced.
Our simulations lack a description of hydrodynamics beyond the Langevin thermostat, and the long-range electrophoretic force is therefore absent in our simulations [@jardat99]. However, this force is affected by changes in the ionic atmosphere in the same manner as the relaxation force, since the magnitude of both forces is directly related to the amount and distribution of charge within the ion cloud [@onsager31]. Thus, inclusion of this force should only enhance the phenomena reported here. The use of an implicit solvent prevents us from observing effects related to the molecular nature of the solvent. The hydration characteristics of ions can affect their mobility by modulating the ion distribution near an interface [@qiao05a]. We also do not capture the effects of confinement on the solvent structure, such as the formation of oriented hydration layers at the channel edges and consequent slow-down of ions [@qiao05a] due to hindered water motion in these layers. Moreover, such a layer would modify the dielectric jump at the interface [@bonthuis11]. Yet, the presence of a hydration layer should not qualitatively affect the observed differences between attractive and repulsive surface polarization.
Ion mobility and conductance in nanodevices are a delicate balance of several contributions [@balme15], which along with the magnitude of the effect and the nanometer scale of the devices may complicate experimental verification of the dielectric modulation of ion mobilities. This, however, does not mean that this effect is of limited practical importance: it is amplified at low concentration, permittivity, and temperature, and by high surface-to-volume ratio.
In conclusion, we have demonstrated that the mobility of ions near interfaces can be regulated via the dielectric mismatch between the solution and the wall material. Surface polarization affects the mobility through two mechanisms, both working in the same direction, that increase the mobility near a high-permittivity surface and decrease it near a surface with low dielectric constant. First, surface polarization affects ion–ion interactions and consequently the shape and intensity of the ionic atmosphere responsible for the relaxation force. Secondly, due to the asymmetry of the counterion atmosphere, a surface polarization force parallel to the interface emerges. We anticipate that these findings can be exploited to understand and control ionic flux on the nanoscale.
Acknowledgements
================
We thank Jiaxing Yuan for the PPPM implementation of image charges. This work was supported by the National Institutes of Health through Grant No. 1R01 EB018358-01A1. We acknowledge computational resources from the Quest high-performance computing facility at Northwestern University.
[37]{} References (identifiers only): doi:10.1126/science.1113666; doi:10.1021/cm402742u; doi:10.1002/andp.18782420102; doi:10.1002/bbpc.19070132502; doi:10.1039/TF9272300334; doi:10.1021/j150341a001; doi:10.1021/j100511a017; doi:10.1126/science.268.5211.700; doi:10.1063/1.2179797; doi:10.1021/la0511900; doi:10.1016/S0006-3495(03)75095-4; doi:10.1021/nn403686s; doi:10.1002/andp.19283900704; doi:10.1021/j150548a015; https://books.google.com/books?id=FZozAAAAIAAJ; doi:10.1103/PhysRevE.68.021905; http://stacks.iop.org/0022-3727/38/i=6/a=021; doi:10.1063/1.478703; doi:10.1103/PhysRevLett.107.166102.
|
---
author:
- Matthew Delacorte
date: 'August 30, 2007'
title: 'Graph Isomorphism is PSPACE-complete'
---
Combining the results of A.R. Meyer and L.J. Stockmeyer, “The Equivalence Problem for Regular Expressions with Squaring Requires Exponential Space”, and K.S. Booth, “Isomorphism testing for graphs, semigroups, and finite automata are polynomially equivalent problems”, shows that graph isomorphism is PSPACE-complete.
Proof
=====
The equivalence problem for regular expressions was shown to be PSPACE-complete by Meyer and Stockmeyer \[2\]. Booth \[1\] has shown that isomorphism testing for finite automata is polynomially equivalent to graph isomorphism. Taking these two results together with the equivalence of regular expressions, right-linear grammars, and finite automata (see \[3\], for example) shows that graph isomorphism is PSPACE-complete.
[99]{}
Booth, K.S., Isomorphism testing for graphs, semigroups, and finite automata are polynomially equivalent problems, SIAM J. Comput. 7, no. 3 (1978), 273-279.
Hopcroft, J.E., and Ullman, J.D. (1979), Introduction to Automata Theory, Languages and Computation, Addison-Wesley, Reading, MA.
Meyer, A.R. and Stockmeyer, L.J., The Equivalence Problem for Regular Expressions with Squaring Requires Exponential Space, 13th Annual IEEE Symp. on Switching and Automata Theory, Oct. 1972, 125-129.
|
---
abstract: 'We explore the consequences of the existence of a very large number of light scalar degrees of freedom in the early universe. We distinguish between [*participator*]{} and [*spectator*]{} fields. The former have a small mass, and can contribute to the inflationary dynamics; the latter are either strictly massless or have a negligible VEV. In N-flation and generic assisted inflation scenarios, inflation is a co-operative phenomenon driven by $N$ participator fields, none of which could drive inflation on its own. We review upper bounds on $N$, as a function of the inflationary Hubble scale $H$. We then consider stochastic and eternal inflation in models with $N$ participator fields showing that individual fields may evolve stochastically while the whole ensemble behaves deterministically, and that a wide range of eternal inflationary scenarios are possible in this regime. We then compute one-loop quantum corrections to the inflationary power spectrum. These are largest with $N$ spectator fields and a single participator field, and the resulting bound on $N$ is always [*weaker*]{} than those obtained in other ways. We find that loop corrections to the N-flation power spectrum do not scale with $N$, and thus place no upper bound on the number of participator fields. This result also implies that, at least to leading order, the theory behaves like a composite single scalar field. In order to perform this calculation, we address a number of issues associated with loop calculations in the Schwinger-Keldysh “in-in” formalism.'
author:
- Peter Adshead
- Richard Easther
- 'Eugene A. Lim'
bibliography:
- 'nfield.bib'
title: 'Cosmology With Many Light Scalar Fields: Stochastic Inflation and Loop Corrections'
---
Introduction
============
Many candidate theories of fundamental physics predict the existence of large numbers of scalar degrees of freedom. Classically, if these modes are not excited they play no role in the cosmological dynamics. Quantum mechanically, however, light scalar modes fluctuate with an amplitude set by the Hubble scale $H$ and, since everything couples to the graviton, they contribute to loop corrections. Thus, while adding $N$ light scalar modes need not change the classical dynamics of the early universe, we expect quantum contributions that scale with $N$. In addition, if these fields are in thermal equilibrium with the rest of the universe their contribution to the effective number of degrees of freedom modifies the relationship between density and temperature. For a given $H$ we can therefore find an upper limit on $N$, above which these modes dominate the cosmological evolution.
Consider a scenario with $N$ scalar fields, $$S = \int d^4 x \sqrt{g}\left[\frac{M_p^2}{2}R+ \sum_I \left(-\frac{1}{2}(\partial{\phi_I})^2 + V_I(\phi_I)\right) \right] \, ,\label{eqn:Nflationaction}$$ where the total potential $V = \sum_I V_I(\phi_I)$ is a sum of $N$ uncoupled potential terms.[^1] A field $\phi_I$ is “light” if $d^2V_I/d\phi_I^2 \equiv m_I^2 \ll H^2$. We make a distinction between [*participator*]{} and [*spectator*]{} fields. The latter are either massless because $V_I$ is strictly zero, or their vacuum expectation value (VEV) is very small. Conversely, participator fields have small but non-zero mass and sufficient VEV for them to help drive inflation via their contribution to the overall density.
Perhaps the simplest route to a meaningful bound on $N$ is to note that all these fields undergo quantum mechanical fluctuations of order $\delta \phi_i \sim H/2\pi$. During inflation these fluctuations freeze out as classical perturbations at scales larger than the Hubble length, $1/H$. Each field has gradient energy $(\nabla \phi)^2/2$, which counts towards the overall energy density. The gradient energy thus scales like $N(\delta \phi /\delta x)^2 /2 \sim N H^4/8 \pi^2$. Given $H$, the energy density is provided by the $0-0$ Einstein equation, $H^2 = \rho/3{M_p^2}$. For self-consistency, the gradient contribution must be much smaller than other contributions to $\rho$, so $$N \ll \frac{M_p^2}{H^2}. \label{eqn:gradientbound}$$ If inflation occurs at the GUT scale, then $M_p/H\sim 10^5$ and $N\ll 10^{10}$. This bound can be derived by many routes (e.g. [@Huang:2007zt; @Dvali:2008sy]), and appears to be robust.[^2]
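A quick numerical rendering of this argument, with the GUT-scale ratio $M_p/H\sim10^5$ quoted above (the $24\pi^2$ prefactor is retained here for completeness, although equation (\[eqn:gradientbound\]) drops it):

```python
import math

MP_OVER_H = 1.0e5   # GUT-scale inflation, as quoted in the text

def gradient_fraction(n_fields, mp_over_h=MP_OVER_H):
    """Fractional contribution of N light fields' gradient energy to the total density:
    (N H^4 / 8 pi^2) / (3 H^2 M_p^2) = N / (24 pi^2 (M_p/H)^2)."""
    return n_fields / (24.0 * math.pi**2 * mp_over_h**2)

print(gradient_fraction(1.0e10))   # ~4e-3: with the 24 pi^2 prefactor, still subdominant at N = (M_p/H)^2
print(gradient_fraction(1.0e4))    # utterly negligible for field numbers typical of N-flation
```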
One can also consider bounds on $N$ from loop corrections to the gravitational constant. Since gravity (and thus the graviton) couples to all fields, matter loop corrections can renormalize its value [@Zee:1981mk; @Adler:1980bx]. Veneziano [@Veneziano:2001ah] argued that in order for the effective value of Newton’s constant to be greater than zero we need $$N \ll \frac{M_p^2}{\Lambda^2}, \label{eqn:speciesbound}$$ where $\Lambda$ is the scale of the invariant UV cut-off, for example the mass scale of the $N$ stable fields. Likewise, Veneziano [@Veneziano:2001ah] points out that this bound prevents potential violations of the holographic bound on the entropy density that can be encountered when the total number of species grows without limit [@Unruh:1982ic; @Sorkin:1981wd], the so-called “Species Problem”. More recently, Dvali [@Dvali:2007hz] noted that in the presence of a large number of species, the Veneziano mechanism weakens the gravitational coupling by a factor of $1/N$. Setting $N\sim 10^{32}$ and working at the TeV scale to satisfy the bound of equation (\[eqn:speciesbound\]) solves the hierarchy problem, provided some ultraviolet completion of the standard model can produce the requisite value of $N$. Dvali also provides an alternate non-perturbative derivation of equation (\[eqn:speciesbound\]) based on the consistency of black hole physics.
In Section \[sect:eternal\] we begin by considering stochastic inflation [@Vilenkin:1983xq; @Linde:1986fd; @Guth:2000ka] with multiple degrees of freedom. Many simple inflationary potentials possess a range of field values for which the potential is safely sub-Planckian, while the stochastic motion of the field dominates the semi-classical rolling. With a single field, a stochastic phase is necessarily eternal, since the inflationary domains perpetually reproduce themselves. However, once $N>1$, more complicated scenarios become possible. Specifically, with $N$ participator fields we can implement [*assisted*]{} eternal inflation without any single field acquiring a super-Planckian VEV. This scenario is a variety of “assisted inflation” where the inflaton is a composite of many individual fields [@Liddle:1998jc]. Secondly, we find solutions where [*individual*]{} fields move stochastically, but a well-defined composite field rolls smoothly towards its minimum. Finally, we find models where a field (or fields) evolves semi-classically, along with other fields that move stochastically. If these fields have symmetry breaking potentials they yield an apparent “multiverse” of disconnected bubbles. However, the stochastic phase ends globally, as the semi-classically evolving field gives a natural cut-off to what would otherwise be an eternally inflating universe, providing a potential toy model for studying the well-known measure problem in eternal inflation [@Linde:1993xx; @Guth:2000ka; @Garriga:2005av; @Easther:2005wi; @Bousso:2006ev; @Aguirre:2006ak; @Easther:2007sz].
All fields couple to the inflaton gravitationally, and thus contribute loop corrections to the inflation potential, via a coupling of order $(H/M_p)^2$. This is necessarily a small number during the last 60 e-folds of inflation, given that $H$ fixes the scale of tensor fluctuations and is bounded by the absence of an observed B-mode in the microwave background. Consequently, any single loop makes a tiny correction to the inflaton propagator. However, this contribution is amplified by $N$, the number of species that can flow round the loops, and in Section \[sect:loop\] we compute the relevant one-loop corrections to the inflaton propagator. We consider two limits – $N$ spectator fields with a single inflaton, and the N-flation case, with $N$ participator fields [@Dimopoulos:2005ac; @Easther:2005zr]. With $N$ spectator fields we find an upper bound on $N$ similar to those of [@Veneziano:2001ah; @Dvali:2007wp]. We describe these results in Section \[sect:loop\], relegating many details to Appendix \[app:1loop\]. We work in the “in-in" formalism [@Schwinger:1961; @Keldysh:1964ud; @Jordan:1986ug; @Calzetta:1986ey], which has been turned into an extremely powerful tool for studying higher order corrections to cosmological correlations by Maldacena [@Maldacena:2002vr] and Weinberg [@Weinberg:2005vy]. This calculation requires us to develop the fourth order interaction Hamiltonian for a theory of inflation with $N$ uncoupled scalar fields, which we present in Appendix \[App:4ptderivation\], and which will have applications beyond the current calculation. Moreover, it turns out that there are some subtleties with the computation of loop corrections in the in-in formalism which we also clarify in the Appendices. Finally we conclude in Section \[sect:conclusions\].
Stochastic $N$-flation {#sect:eternal}
======================
In simple versions of $N$-flation, one has $N$ identical participator fields, and inflation emerges as a cooperative phenomenon. We assume that the overall potential is the sum of the individual fields’ potential terms, and that cross terms are absent. Each field only feels the “slope” of its own potential, but the corresponding friction term in the field’s equation of motion is still proportional to $H$, and thus grows with $N$. Interestingly, a similar scaling also applies to [*stochastic*]{} inflation, which occurs when the inflaton evolution is dominated by quantum fluctuations, rather than semi-classical rolling [@Vilenkin:1983xq; @Linde:1986fd; @Guth:2000ka]. The individual fields have fluctuations of order $H$, so this amplitude will grow relative to the semi-classical evolution as $N$ is increased.
Let us begin by looking at $N$ fields with identical potential terms and initial VEVs. Assuming slow roll, $$H^2 = \frac{1}{3M_p^2} \sum_I V_I(\phi_I) = \frac{1}{3M_p^2} N {{\tilde V}}, \label{eq:Hgeneric}$$ where ${{\tilde V}}= V_I(\phi_I)$ is the potential for any one of the fields. The amplitude of the quantum fluctuations of the $I-$th field is $$\delta \phi_{I,q} = \delta \phi_q = \frac{H}{2 \pi},$$ where the first equality reflects our assumption of identical fields. From equation (\[eq:Hgeneric\]) $$\delta \phi_q = \sqrt{\frac{1 }{12\pi^2} \frac{N{{\tilde V}}}{M_p^2}} \, .$$ Conversely, the distance travelled by $\phi_I$ in a single Hubble time is $$\delta \phi_c = |\dot{\phi_I}| \frac{1}{H} = \frac{V'_I}{3H} \frac{1}{H} = \frac{M_p^2 V'_I}{ N {{\tilde V}}},$$ where $V'_I = \partial V/\partial \phi_I = \partial V_I/\partial \phi_I$. The defining condition for stochastic inflation is that $\delta \phi_c < \delta \phi_q$ [@Vilenkin:1983xq; @Linde:1986fd]. Forming the ratio of these terms gives $$\frac{{\delta \phi_q}}{{\delta \phi_c }} = \sqrt{\frac{1}{ 12 \pi^2}}\left(\frac{ N{{\tilde V}}}{M_p^2}\right)^{3/2} \frac{1}{V'_I},$$ and stochastic inflation therefore occurs whenever the above expression is larger than unity. This ratio increases with $N$ if everything else is held fixed. This is to be expected, since boosting $N$ lifts the overall energy density, and thus the amplitude of the quantum fluctuations. Conversely, the semi-classical rolling slows with $N$, as we are increasing $H$ while holding $V_I'$ constant.
The above discussion holds for a generic potential, but now consider $N$ quadratic potentials with identical masses, $V_I = m^2 \phi_I^2 /2$. Assisted inflation can be described in terms of an effective single field, $\varphi$ which, for quadratic potentials is $\phi_I = \varphi/\sqrt{N}$ and [@Dimopoulos:2005ac; @Easther:2005zr] $$\frac{{\delta \phi_q}}{{\delta \phi_c }} = \sqrt{ \frac{ N}{96\pi^2}} \frac{N \phi_I^2}{M_p^2}\frac{m}{M_p}=
\sqrt{ \frac{ N}{96\pi^2}} \frac{\varphi^2}{M_p^2}\frac{m}{M_p}.\label{eqn:stochasticcond}$$ Interestingly, this expression [*retains*]{} an explicit dependence on $N$, whereas N-flation ends at a fixed value of $\varphi$, independently of $N$. Before proceeding, let us consider some specific numbers. The principal virtue of N-flation is that we can build a GUT-scale inflation scenario without invoking trans-Planckian VEVs. Consequently, if $\phi_I \sim M_p$ the critical value of $N$ at which the fields can move stochastically is $$N = (96 \pi^2)^{1/3} \left(\frac{M_p}{m} \right)^{2/3} \approx 10 \left(\frac{M_p}{m} \right)^{2/3} \, .$$ The ratio $m/M_p$ is fixed via the amplitude of the perturbation spectrum, and is around $2 \times 10^{-4}$ [@Dimopoulos:2005ac; @Easther:2005zr]. Consequently, we might see the onset of stochastic motion in the fields if $N \sim {\cal{O}}(10^4)$. This is an order of magnitude larger than the number of fields one expects in N-flation, on the basis of the likely number of two-cycles (from which the N axions are derived) one can find in a realistic Calabi-Yau [@Dimopoulos:2005ac; @Easther:2005zr], but is much less than the absolute upper limit on $N$ given by equation (\[eqn:gradientbound\]). To satisfy this requirement, we need $H^2 \approx N m^2/ 6$ (assuming $\phi_I = M_p$), in which case $$\label{eqn:Nmax}
N\lsim \sqrt{6}\frac{M_p}{m} \sim 10^5 .$$ The theoretical upper limit on $N$ is roughly an order of magnitude larger than the value required for the individual fields to move stochastically.[^3] Conversely, slow roll ends when $\varphi \approx \sqrt{2}M_p $ or $\phi \approx \sqrt{2/N}M_p$. For self-consistency, we expect the $N$ fields to be moving semi-classically at this point, so we can derive a very weak upper bound on $N$ by setting $\varphi \approx \sqrt{2}M_p$ in (\[eqn:stochasticcond\]), namely $N \lsim 24\pi^2 M_p^2/m^2 \sim 10^{10}$, far above even the weak bound of equation (\[eqn:gradientbound\]). Since the last sixty e-folds occur at values of $\varphi$ a few times larger than $\sqrt{2}M_p$, we need not worry that the fields are moving stochastically over astrophysically interesting scales.
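The following short Python sketch simply transcribes equations (\[eqn:stochasticcond\]) and (\[eqn:Nmax\]) so that the critical field number and the ceiling on $N$ can be evaluated directly; the only input, $m/M_p = 2\times10^{-4}$, is the value fixed by the perturbation amplitude as stated above.

```python
import math

M_OVER_MP = 2.0e-4   # m/M_p fixed by the observed perturbation amplitude (see text)

def stochastic_ratio(n_fields, phi_over_mp, m_over_mp=M_OVER_MP):
    """delta_phi_q / delta_phi_c for N identical quadratic potentials, eq. (stochasticcond),
    with phi_over_mp the common VEV phi_I in units of M_p."""
    return math.sqrt(n_fields / (96.0 * math.pi**2)) * n_fields * phi_over_mp**2 * m_over_mp

# Critical N for the onset of stochastic motion at phi_I ~ M_p, and the gradient-energy ceiling:
n_crit = (96.0 * math.pi**2)**(1.0 / 3.0) * (1.0 / M_OVER_MP)**(2.0 / 3.0)
n_max = math.sqrt(6.0) / M_OVER_MP          # eq. (Nmax)
print(n_crit, stochastic_ratio(n_crit, 1.0), n_max)   # the middle number is 1 by construction
```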
With a single field, the onset of stochastic inflation is synonymous with the amplitude of the density fluctuations exceeding unity [@Creminelli:2008es]. For $N$-flation the density fluctuations have an amplitude [@Dimopoulos:2005ac; @Easther:2005zr] $$P_{\cal R}^{1/2} = \sqrt{ \frac{ 1}{96\pi^2}} \frac{\varphi^2}{M_p^2}\frac{m}{M_p},$$ which is $\sqrt{N}$ [*smaller*]{} than the critical ratio for the onset of stochastic motion by the individual fields, so at the threshold for stochastic motion, $P_{\cal R}^{1/2} \sim 1/\sqrt{N} \ll 1$. In this case [*stochastic*]{} inflation is not synonymous with [*eternal*]{} inflation. As described above, in one Hubble time each field makes a random jump of $\sim \delta \phi_q$ while undergoing a semiclassical evolution $\delta \phi_c$. For a given field, the sign of $\delta \phi_q$ is random, while the semiclassical motion always points in the downhill direction. The [*average*]{} field thus has a stochastic fluctuation $ \delta \phi_q/\sqrt{N}$, since this is the mean of $N$ signed, random variables. Consequently, the collective motion of the ensemble of fields remains deterministic, and the density of the universe will decrease monotonically with time unless $\delta \phi_q \gsim \sqrt{N} \delta \phi_c$. In this case the density fluctuation is boosted to order unity, and the average density can increase inside a given Hubble patch, so inflation is not just stochastic but eternal. Since $H \propto \rho^{1/2}$, in the intermediate range where inflation is stochastic but not eternal $\delta \phi_q /\delta \phi_c $ diminishes with time. In this case the stochastic motion of the [*individual*]{} fields will eventually become subdominant and for any reasonable value of $N$ the inflationary dynamics will be well-described by the semi-classical motion alone.
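A toy Monte Carlo (ours, not a full inflationary simulation) makes the $\sqrt{N}$ suppression of the ensemble-averaged kicks explicit: each of $N$ fields takes a downhill step $-\delta\phi_c$ plus a random kick $\pm\delta\phi_q$ per Hubble time, and although every individual field random-walks, the mean field drifts deterministically.

```python
import numpy as np

rng = np.random.default_rng(1)
n_fields, n_steps = 1000, 50
dphi_c, dphi_q = 1.0, 10.0     # stochastic regime for individual fields: dphi_q >> dphi_c

kicks = dphi_q * rng.choice([-1.0, 1.0], size=(n_steps, n_fields))
phi = np.cumsum(-dphi_c + kicks, axis=0)        # displacement of every field from its start

print(np.std(phi[-1]))                           # ~ dphi_q * sqrt(n_steps): individual fields random-walk
print(np.mean(phi[-1]))                          # ~ -n_steps * dphi_c: the ensemble drifts downhill
print(dphi_q / (np.sqrt(n_fields) * dphi_c))     # < 1 here, so the collective motion is deterministic
```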
From a practical perspective, a period of stochastic inflation in a simple multi-field model need not modify the observable properties of the universe. Certainly in the case described above, the stochastic motion would cease well before $P_{\cal R}^{1/2} \sim 10^{-5}$ unless $N$ is very large. However, since the stochastic motion necessarily increases the variance in the individual field values, this phase may have an impact on the likely initial spread in the values of the $\phi_I$, which can affect the inflationary observables in $N$-flation. Unfortunately, there is no clear expectation for the likely initial values of the $\phi_I$ and without this the impact of any stochastic evolution cannot be evaluated.
On the other hand, if the individual potential terms do not all have a single well-defined minimum, any phase where one or more of the individual fields moves stochastically could have a substantial impact on the inflationary phenomenology. For instance, in the case of $N$-flation, the $N$ fields are actually axions and thus have periodic potentials. The individual $m^2 \phi_I^2$ terms arise from assuming that each field is close to its minimum, when measured relative to the scale on which the underlying potential is periodic, and then Taylor expanding. However, if we imagine an initial state in which the $N$ fields (or even some subset of them) are close to the maxima of their cosine potentials, two effects will occur. The first is that these fields will have a small $\delta\phi_c$, since the corresponding $V_I'$ will be very small in the vicinity of an extremum, making it more likely that these fields move stochastically even if other fields are dominated by the semiclassical rolling. Secondly, with fields moving stochastically near the maxima of these potentials, in individual Hubble domains where these fields do evolve away from the peaks, they will then roll towards different minima. In [*eternal*]{} inflation each patch in which the stochastic motion ceases would be identified as a “pocket universe” and this scenario thus produces pockets with many different vacua, depending on the symmetry breaking pattern of the individual axions. However, if enough fields are rolling semi-classically, $H$ is strictly decreasing, and inflation is not future-eternal. In this case we can end inflation globally, and count the number and type of distinct domains created during the stochastic phase. Consequently we now have a toy model that initially resembles an eternally inflating universe, but in which there is a natural late-time cutoff which removes the infinities which otherwise can prevent one from calculating the relative creation rates for different types of pocket. This system may thus prove useful for testing different “measures” for eternal inflation [@Linde:1993xx; @Guth:2000ka; @Garriga:2005av; @Easther:2005wi; @Bousso:2006ev; @Aguirre:2006ak; @Easther:2007sz], and we will examine this issue in a separate publication. Conversely, once inflation has ended, these different domains will eventually merge and the late-time universe (in the absence of a cosmological constant term) will be composed of domains with different vacua, separated by domain walls.
One-loop Quantum Corrections {#sect:loop}
============================
In the previous section, we considered stochastic field evolution in cosmological scenarios with many degrees of freedom. We now assume the field evolution is well-described by the semi-classical equations of motion, and compute loop corrections to the perturbation spectrum. Scalar loop corrections to the inflationary spectrum in single field models have been widely discussed [@Weinberg:2005vy; @Seery:2005gb; @Weinberg:2006ac; @Sloth:2006nu; @Sloth:2006az; @Prokopec:2008gw]. Likewise, the correction from graviton loops was computed in [@Dimastrogiovanni:2008af].[^4] Weinberg [@Weinberg:2005vy] looks at loop corrections to a single inflaton in the presence of many massless spectator fields, and we now generalize this result to models with $N$ participator fields. For clarity, we present our results in this section, and discuss technical aspects of the calculation in Appendix \[app:1loop\].
We previously made the distinction between single field inflation with $N$ spectator fields, and assisted scenarios with $N$ participator fields, of which $N$-flation is the most interesting example. The field dynamics differs between these cases, since only the participator fields have non-zero VEV and contribute to the vacuum energy that drives inflation. However, both classes of field can contribute loop corrections to the 2-point function or perturbation spectrum – and we will see that the [*forms*]{} of these contributions are different. The immediate concern is that loop corrections result in a new “species problem”, as pointed out in [@Veneziano:2001ah; @Dimopoulos:2005ac; @ArkaniHamed:2005yv]. Generically, if we insist that quantum corrections to the Planck scale are small, we need $\sim N(\Lambda_{UV}/M_p)^2$ to be small, where $\Lambda_{UV}$ is some UV cut-off scale. Having computed the loop corrections to the 2-point *correlation* function, we can then determine whether these also provide a constraint on $N$, as a function of $\Lambda_{UV}/M_p$. Surprisingly, with $N$ participator fields, there is no bound on $N$. On the other hand, with $N$ spectator fields, we obtain slightly weaker bounds on $N$ as compared to the standard arguments. In what follows we look at the two limiting cases, first $N$ spectator fields with a single inflaton, and then $N$ participator fields. One could easily generalize our result to the case where one had a mix of both spectator and participator fields.
One-loop corrections of $N$ Spectator fields
--------------------------------------------
Consider inflation driven by a single field $\phi$, with $N$ spectator fields $\sigma_I$, where $I$ runs from $1$ to $N$, $$S = \int d^4 x \sqrt{g}\left[\frac{M_p^2}{2}R - \frac{1}{2}\left(\partial \phi\right)^2 + V(\phi) - \sum_I \frac{1}{2}\left(\partial \sigma_I\right)^2\right]. \label{eqn:weinbergcaseaction}$$ In the discussion below, repeated indices are not summed over unless explicitly specified. Unlike the $N$-flation case, the spectator fields remain invariant under the shift $\sigma_I \rightarrow \sigma_I + \delta_I$. Any initial kinetic energy possessed by these fields decays away rapidly, since $\rho_{i} \propto (\dot{\sigma}_i)^2 \propto a^{-6}$. Thus the only contribution to the background energy density is the inflaton potential, $V(\phi)$. This situation has been discussed by Weinberg [@Weinberg:2005vy], who found that the one-loop two-vertex quantum corrections modify the power spectrum equation (\[eqn:zeroPS\]) by a term of order $(H/M_p)^4 \ln k$ per field. Thus, for $N$ $\sigma_I$ fields[^5] the first order correction to the power spectrum is $$P_k^{(1)} = \frac{1}{4(2\pi)^3}N\frac{\pi}{6} \frac{H^4}{M_p^4} \epsilon\ln k. \label{eqn:weinberg1loop}$$ The one-loop corrected power spectrum is then $$P_k\rightarrow \frac{1}{4(2\pi)^3}\frac{1}{\epsilon}\frac{H^2}{M_p^2}\left(1+(c_1+N \frac{\pi}{6}\epsilon)\frac{H^2}{M_p^2} \ln k \right),$$ where $c_1 = -2\pi/3$ is the one-vertex self-correction term which we compute in the appendices. What about the one-loop one-vertex loops of the $\sigma$ fields around the inflaton? As we explain in detail in Appendix \[app:1loop\] and in Section \[sect:participator\], there is no non-scale-free $\sigma_I$ one-vertex correction to the $\phi$ propagator, and hence Weinberg’s computation is complete modulo the inflaton self-correction.
Requiring that the one-loop corrections do not dominate the “tree-level” two-point correlation, we obtain the bound $$N<\frac{M_p^2}{H^2}\frac{1}{\epsilon}.$$ For successful slow roll inflation $\epsilon \ll 1$, so this bound is necessarily weaker than that obtained from gradient energy considerations. Interestingly, the “tree-level” power spectrum for the tensor modes for single scalar field inflation is $$P_{\rm gw} = \frac{H^2}{M_p^2}.$$ Hence any measurement of $P_{gw}$ would effectively put an *observational* upper bound on $N$, $$N < 1/P_{\rm gw}.$$
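To give a sense of scale, with illustrative numbers that are not taken from any fit: for $H/M_p\sim10^{-5}$, as in the GUT-scale example of the introduction, and a nominal $\epsilon = 0.01$, the bounds above evaluate as follows.

```python
H_OVER_MP = 1.0e-5   # GUT-scale example from the introduction
EPS = 0.01           # illustrative slow-roll parameter (assumed, not derived here)

print((1.0 / H_OVER_MP)**2 / EPS)   # ~1e12: the loop bound N < (M_p/H)^2 / eps
print((1.0 / H_OVER_MP)**2)         # ~1e10: the gradient-energy bound, which is therefore stronger
print(1.0 / H_OVER_MP**2)           # ~1e10: the observational bound N < 1/P_gw, with P_gw = (H/M_p)^2
```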
One-Loop corrections with $N$ Participator Fields {#sect:participator}
-------------------------------------------------
[fig1]{} $$\begin{aligned}
\parbox{40mm}{
\begin{fmfgraph*}(40,25)
\fmfleft{in}
\fmfright{out}
\fmf{plain}{in,v1}
\fmf{plain}{v2,out}
\fmf{dots,label=$J$,left=0.5,tension=0.3}{v2,v1}
\fmf{dots,label=$J$,left=0.5,tension=0.3}{v1,v2}
\fmflabel{$I$}{in}
\fmflabel{$I$}{out}
\end{fmfgraph*}}
~~~~~+~~~~
\parbox{40mm}{
\begin{fmfgraph*}(40,25)
\fmfleft{in}
\fmfright{out}
\fmf{plain}{in,v1}
\fmf{plain}{v2,out}
\fmf{dots,label=$J$,left=0.5,tension=0.3}{v2,v1}
\fmf{plain,label=$I$,left=0.5,tension=0.3}{v1,v2}
\fmflabel{$I$}{in}
\fmflabel{$I$}{out}
\end{fmfgraph*}}
~~~~~+~~~~
\parbox{40mm}{
\begin{fmfgraph*}(40,25)
\fmfleft{in}
\fmfright{out}
\fmf{plain}{in,v1,out}
\fmf{dots, label=$J$, tension=1}{v1,v1}
\fmflabel{$I$}{in}
\fmflabel{$I$}{out}
\end{fmfgraph*}}
\nonumber\end{aligned}$$
We now turn to the case with $N$ participator fields, N-flation, which is based on the action of equation (\[eqn:Nflationaction\]), and we perturb each individual field, $$\phi_I \rightarrow \bar{\phi_I} + Q_I,$$ where $\bar{\phi_I}$ is the homogeneous background solution and $Q_I$ is the perturbation. In the discussion below, upper case Roman letters $I,J,...$ label the fields, and we have a flat target space $X^I = X_I$ with the summation convention $$X^I Y_I = \sum_I X_I Y_I \, .$$ The power spectrum produced by $N$ uncoupled inflating fields is [@Sasaki:1995aw] $$P_k = \frac{1}{N^2} \sum_I \left(\frac{H}{\dot{\phi}_I}\right)^2\langle Q_I Q_I \rangle, \label{eqn:nfieldPS}$$ The $1/N^2$ factor in front reminds us that the total power spectrum is not simply a sum of the individual power spectra of the fields; the power spectrum is itself the expectation value of a set of random variables. In general, $$\langle Q_I Q_I \rangle = \langle Q_I Q_I \rangle_0 + \langle Q_I Q_I \rangle_1 + \langle Q_I Q_I \rangle_2...,$$ where the numerical subscript denotes the order of the expansion, or the number of vertices. The uncorrected power spectrum is simply $$\langle Q_I Q_I \rangle_0 = \frac{1}{2 (2 \pi)^3}H^2, \label{eqn:noloopifield}$$ leading to the primordial power spectrum, $$P_k = \frac{1}{4(2\pi)^3} \frac{H^2}{M_p^2 N \epsilon_I}. \label{eqn:zeroPS}$$ Our goal is to compute the one-loop corrections to the power spectrum for each field $\langle Q_I Q_I \rangle_{\rm 1-loop} $ generated by all $N$ fields. To compute the one-loop correction, we use (\[eqn:ininformula\]), with the appropriate interaction Hamiltonian $H_{\rm int}$ for each field.[^6] The three-point interaction Hamiltonian is, to leading order in slow roll [@Seery:2005gb], $$H_{\rm int}^{(3)}(t) = \int d^3 x \left[\frac{a^3}{4H} \sum_{I,J}\dot{\phi}_I Q_I \dot{Q}_J \dot{Q}_J + \frac{a^3}{2H} \sum_{I,J}\dot{\phi}_I \partial^{-2} \dot{Q}_I \dot{Q}_J \partial^2 Q_J\right]. \label{eqn:NloopHI}$$
Application of equation (\[eqn:2pt2\]) leads to three sets of diagrams generated by the two interaction terms and their cross-interaction, shown in the first two diagrams of Fig. \[fig:loopdiagrams\], where each vertex corresponds to an interaction via one of the terms above. We also need the four-point interaction Hamiltonian, equation (\[eqn:big4pt\]) derived in Appendix \[App:4ptderivation\], $$\begin{aligned}
H_{\rm int}^{(4)}(t) & =& \int d^3 x a^{3}\left[ \frac{1}{4Ha^{2}}\sum_{I,J} \partial_i Q_J\partial_iQ_J\partial^{-2} (\partial_j \dot{Q}_I\partial_jQ_I + \dot{Q}_I\partial^2 Q_I) \right. \nonumber \\
&& + \frac{1}{4H} \sum_{J,I} \dot{Q}_J \dot{Q}_J \partial^{-2}(\partial_j \dot{Q}_I \partial_j Q_I + \dot{Q}_I \partial^2Q_I) \nonumber \\
&& +\frac{3}{4}\sum_{I,J}\partial^{-2} (\partial_j \dot{Q}_J \partial_j Q_J + \dot{Q}_J\partial^2 Q_J)\partial^{-2} (\partial_j \dot{Q}_I \partial_j Q_I + \dot{Q}_I \partial^2 Q_I) \nonumber \\
&&\left. +\frac{1}{4}\beta_{2,j}\partial^2 \beta_{2,j}+\sum_{I}\dot{Q}_I\partial_i Q_I\beta_{2,i}\right],\end{aligned}$$ where $\beta_{2}^j$ is given in equation (\[eqn:beta2j\]) and repeated lower roman indices $i,j,\cdots$ are summed using the Euclidean metric; $a_{i}b_{i} = \sum_{i,j}\delta_{ij}a_{i}b_{j}$. At first order this interaction generates the third diagram of Fig. \[fig:loopdiagrams\] via Eq. (\[eqn:1pt\]). For simplicity, we ignore the self-interaction term of the two-vertex term, as we expect its correction to be of the same order as the other $N-1$ corrections [@Seery:2007wf; @Seery:2007we].
Note that in deriving the above expression we have made use of the substitution $\mathcal{H}_{int} = -{\cal L}_{int}$, where ${\cal L}_{int}$ is the Lagrangian density computed to the appropriate order in the perturbation. This substitution is trivial with non-derivative interactions, but it is more subtle in the presence of derivative interactions. However, in Appendix \[app:canquant\] we show that we are effectively ignoring a correction that is at most of order ${\cal O}(\epsilon)$, $$\mathcal{H}_{\rm int} = -{\cal L}_{\rm int} + {\cal O}(\epsilon),$$ which can be discarded to leading order in slow-roll.
Each of the fields $Q^I$ can be expanded in Fourier modes with its respective creation/annihilation operators, $$Q^I({{\bf{x}}},t) =\int d^3 k\,e^{i{{\bf{k}}}\cdot{{\bf{x}}}} \left[a_I({{\bf{k}}}){U_{k}^{I}}(t)+a_I^{\dagger}({{\bf{-k}}}){U_{k}^{I}{}^{*}}(t)\right], \label{eqn:Qexpansion}$$ where the functions $U^{I}_{k}(t)$ are solutions of the equations of motion obtained from varying the second order action, Eq. (\[eqn:quadraticaction\]), with respect to the field $Q^{I}$. The $a^{I}({\bf k})$ satisfy the usual commutation relations, $$[a_I({{\bf{k}}}),a_J^{\dagger}({{\bf{k}}}')]=\delta({{\bf{k}}}-{{\bf{k}}}')\delta^I_J.\label{eqn:Qcommutator}$$ With these ingredients, we make the final simplifying assumption that the fields are identical, which means that we simply need to compute the one-loop correction from one of the $J$ terms in equation (\[eqn:NloopHI\]) and then multiply it by $N-1$ to get the total correction per field $I$. We leave the details of this computation to Appendix \[app:1loop\].
It turns out that only the two-vertex diagrams and the one-vertex *self-interaction* diagram contribute physical logs, while the *non-self-interaction* one-vertex diagrams contribute polynomial ultra-violet divergences, which we assume are absorbed by renormalization. This is a consequence of the symmetry of the original action where the potentials are uncoupled and the kinetic terms are $SO(N)$ symmetric. This can be understood as follows. The diagrams in Fig. \[fig:momentumstructure\] have metric “graviton” propagators and loops hidden inside them – the higher order interactions that appear perturbatively are mediated by gravity. However, we have chosen a gauge where the metric perturbation vanishes. If we choose a different gauge (Newtonian gauge [@Mukhanov:1990me] for example) these interactions are manifest, and each one-vertex diagram receives contributions from terms like $$\begin{aligned}
{}\nonumber \\
{}\nonumber \\
{}\nonumber \\
\parbox{40mm}{
\unitlength =1mm
\begin{fmffile}{fig2a}
\begin{fmfgraph*}(40,15)
\fmfleft{in,in1}
\fmfright{out,out1}
\fmf{phantom}{in1,v2,out1}
\fmf{plain}{in,v1}
\fmf{plain}{v1,out}
\fmf{wiggly,pull=10,left=0.1,tension=0.3}{v1,v2}
\fmf{plain,label=$J$}{v2,v2}
\fmflabel{$I$}{in}
\fmflabel{$I$}{out}
\end{fmfgraph*}
\end{fmffile}}
~+~~~~~
\parbox{40mm}{
\unitlength =0.7mm
\begin{fmffile}{fig2b}
\begin{fmfgraph*}(40,15)
\fmfleft{in}
\fmfright{out}
\fmf{plain}{in,v1}
\fmf{plain}{v2,out}
\fmf{wiggly,left=0.5,tension=0.3}{v1,v2}
\fmf{plain,left=0.5,label=$J$,tension=0.3}{v2,v1}
\fmflabel{$I$}{in}
\fmflabel{$I$}{out}
\end{fmfgraph*}
\end{fmffile}}
~+~\cdots\\
{}\nonumber\end{aligned}$$ where the wiggly lines represent the graviton lines, the solid lines are the field perturbation lines and the ellipses denote terms that are higher order in metric perturbation or the slow roll parameters. The first “balloon” diagram diverges polynomially; the loop that runs around itself does not possess a scale since it is not directly connected to an external leg. Both vertices of the second term must obey the symmetry of the original action, which forces $I=J$. It is clear from this perspective why only the self interaction one-vertex loops can contribute to physical obervables.[^7]
After regularizing the results and assuming that we can always find suitable counterterms to cancel the gravitational backreaction and UV divergences from loops independent of external momenta, the correction of $J$ loops to the $I$ propagator is given by equations (\[eqn:term1result\]) and (\[eqn:4ptselfcorrection\]). The total correction is thus $$\langle Q_I Q_I \rangle_{\rm 1-loop} = \frac{1}{2(2\pi)^3}\frac{H^4}{M_p^4}\left[c_1 + N c_2 \frac{\dot{\phi}_I^2}{M_p^2 H^2}\right] \ln k, \label{eqn:1loopifield}$$ where $c_1 = -2\pi/3$ and $c_2=(2017/240)\pi$, which arise from the one-vertex self-interaction and the two-vertex loops respectively. We have set $N-1 \rightarrow N$, since $N\gg1$, so the two-vertex self-interaction is not important. During $N$-flation, each participator field is corrected by its $N-1$ counterparts in the two-vertex loop, and by itself in the one-vertex loop. However, in equation (\[eqn:1loopifield\]), the two-vertex corrections are suppressed by the [*individual*]{} “slow-roll” parameter, $$\epsilon_I \equiv \frac{1}{2}\frac{\dot{\phi}_I^2}{(H M_p)^2}.$$ Since $\dot{\phi}_I^2$ involves only the velocity of a single field, while $H^2$ is related to the overall density, this quantity is reduced relative to its value in the single field case by a factor of $N$. We can think of $\epsilon_I$ as the square of the coupling strength of each three-point vertex in the interaction Hamiltonian equation (\[eqn:NloopHI\]). The total correction to the power spectrum is obtained by inserting equations (\[eqn:noloopifield\]) and (\[eqn:1loopifield\]) into (\[eqn:nfieldPS\]), $$P_k = \frac{1}{4N^2(2\pi)^3}\sum_{I} \frac{1}{\epsilon_I}\frac{H^2}{M_p^2}\left[1+(c_1+ 2c_2N \epsilon_I)\frac{H^2}{M_p^2}\ln k\right]. \label{eqn:finalNPS}$$ Now for slow roll, $$\epsilon \equiv -\frac{\dot{H}}{H^2} = N \frac{1}{2}\frac{\dot{\phi}_I^2}{H^2 M_p^2} =N\epsilon_I \ll 1, \label{eqn:coherentepsilon}$$ and hence the power spectrum can be rewritten as $$P_k = \frac{1}{4(2\pi)^3}\frac{H^2}{M_p^2}\frac{1}{\epsilon}\left[1+(c_1+ 2c_2 \epsilon) \frac{H^2}{M_p^2} \ln k \right],$$ which is equivalent to the power spectrum of single field inflation with a scalar $\varphi$, including its one-loop self-correction [@Seery:2007we].
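To illustrate just how small and $N$-independent this correction is, one can plug in representative numbers; the choices $H/M_p = 10^{-5}$ and $\epsilon = 0.01$ below are illustrative assumptions, and $\ln k$ is taken to be of order one.

```python
import math

C1 = -2.0 * math.pi / 3.0          # one-vertex self-interaction coefficient quoted above
C2 = 2017.0 * math.pi / 240.0      # two-vertex coefficient quoted above

def relative_one_loop(h_over_mp, eps, ln_k=1.0):
    """Fractional one-loop shift of P_k: (c_1 + 2 c_2 eps) (H/M_p)^2 ln k."""
    return (C1 + 2.0 * C2 * eps) * h_over_mp**2 * ln_k

print(relative_one_loop(1.0e-5, 0.01))   # ~ -1.6e-10, with no dependence on N
```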
The lack of any $N$-dependence in this correction is somewhat surprising. In fact, we will now show that this non-appearance is true to $\emph{all orders}$ in loops, provided slow roll can be assumed. The pair-wise $SO(N)$ symmetry $X^I X_I$ in the kinetic term of the action is preserved when we expand the action to higher orders.[^8] Consequently, the index structure of terms in $H_{\rm int}$ is always fully pair-wise, e.g. $\dot{\phi}^I Q_I Q^J Q_J$, $Q^I Q_I Q^J Q_J$ or $\dot{\phi}^I \dot{\phi}^J Q_I Q_J Q^K Q_K$, since the interactions must preserve this same symmetry. When computing the one-loop diagram, we have to contract *through* the loop to obtain the physical log contribution, in the sense that we have to contract the external lines with *at least one* of the interacting fields in the loop. We have already argued above and in the Appendix \[app:1loop\] that there can be no cross-coupling between $I$ and $J$ fields in the four-point interaction, at least at lowest order in slow-roll. Interaction terms like $\dot{\phi}_J \dot{\phi}_K \partial^{-2}(Q^I Q^I) Q^J Q^K$ do exist and can contribute a log divergence; however, they are higher order in slow roll thanks to the extra $\dot{\phi}$ factors, and so their contribution to the one-loop correction will be $\epsilon/N$ suppressed.
Let us now turn to the diagrams generated by the three-point interaction and the even vertex loops. For our coherent field argument to be true, the *leading* $2n$-vertex correction must be of the order $(N\epsilon_I)^n \equiv \epsilon ^n$. We can now count the number of diagrams to find the factors of $N$, remembering that each time a $\dot{\phi}^I \propto \sqrt{\epsilon_I}$ coupling appears, we get a factor of $\sqrt{1/N}$ from the coherent field relation equation (\[eqn:coherentepsilon\]). Consider the simplest diagram, which we have calculated in the Appendix: a two-vertex loop with identical interaction term $\dot{\phi}^I Q_I Q^J Q_J$ at both vertices. We now want to count the factors of $N$ and $\epsilon_I$ for the correction to the $Q^I$ propagator. Since we have to contract *through* the loop, we can only contract the $Q^J$ of one interaction to the $Q_J$ of the other interaction (otherwise we would form a disconnected diagram by contracting the $J$ fields at the same spacetime point). Hence, by counting sums, we get a factor of $N$ from the $J$ contraction and a factor of $\epsilon_I$ from the couplings, for a total correction of $N\epsilon_I = \epsilon$, equivalent to the correction for a single coherent field as we have shown above.
Next, consider a more complicated interaction term $\dot{\phi}^I \dot{\phi}^J \dot{\phi}^K Q_I Q_J Q_K$. Since the fields and couplings have to appear pair-wise, there is no ${\cal{O}}(\dot{\phi}^2)$ interaction. There are two ways to contract through the loop, i.e. via $Q^J$ and $Q^K$, so we get two factors of $N$. However, the three factors of $\dot{\phi}^2$ give us the coupling term $\epsilon_I \epsilon_J \epsilon_K$, and equation (\[eqn:coherentepsilon\]) yields the correction term $N^2\epsilon_I^3 = \epsilon^3/N$, which is $1/N$ suppressed compared to the previous case, and hence contributes even less. Adding loops will add more factors of $\epsilon_I$ into the interaction and thus provide further $1/N$ suppressions – it is easy to see that the extra factors of $N$ coming from the extra loops will never outweigh the suppression that comes from $\epsilon_I = \epsilon/N$. The point here is clear: since the field labels have to appear pair-wise, we cannot have three-point interactions like $\dot{\phi}_I Q^I Q^J Q^K$ that could have given us an $\sim N \epsilon$ correction that has no coherent field analog. This argument does not depend on the number of loops, since it depends solely on the interaction terms.
In fact, we can write the $N$ fields in terms of a single scalar field model, where the inflaton is the radial field. Assume that each individual field’s potential is $(1/2)m_I^2 \phi_I^2$ and that they have identical masses $m_I = m$. We can then rewrite the fields in polar coordinates, $\psi^2 = \sum_I \phi_I^2$, in which case the Lagrangian becomes [@Dimopoulos:2005ac] $$\mathcal{L} =\frac12 (\partial \psi)^2 - \frac12 m^2 \psi^2 +\frac12 \psi^2 (\partial \Omega)^2, \label{eqn:coherentaction}$$ where $\langle \psi^2 \rangle \sim (N\epsilon_I)^{-1}(H/M_p)^2$. The angular terms thus have large values and damp out quickly, dropping out of the inflationary dynamics. Consequently, the set of $N$ coherent fields can be recast as theory with a single scalar field, and we are thus unable to derive a bound on $N$.
One might worry that the direct coupling term $\psi^2(\partial \Omega)^2$, with $(\partial \Omega)^2 = \sum_I (\partial\Omega_I)^2$, will generate $N-1$ loop corrections that are not scale-free. However, we will sketch below that this interaction can, at most, generate scale-free loops and hence is harmless. Consider the perturbed fields $$\begin{aligned}
\psi& \rightarrow& \bar{\psi} + Q, \\
\Omega_I &\rightarrow & \bar{\Omega}_I + \omega_I.\end{aligned}$$ Using the symmetry of the interaction, one can show that the leading three-vertex interactions are $ \bar{\Omega}_I QQ \omega^I$ and $\bar{\psi} Q \omega_I \omega^I$ while the leading four-vertex interaction is $QQ \omega_I \omega^I$, each with various permutations of derivatives. The first three-vertex term can at the most generate a single term, since the attractor solution drives the background angle fields to a fixed trajectory and hence we can always pick $\Omega_I = (1,0,0,...,0)$. For the latter three-vertex term, we have to contract through 4 instances of $\omega_I$, but it can be shown that the propagator for $\omega_i \propto a^{-3}$, and hence the loops are quickly redshifted away. Meanwhile it is clear from our discussion above that the four-point interaction can at most generate scale-free terms. For example, the lowest order diagram is the one-loop one-vertex diagram; since the external legs are $Q$’s and the $\omega_I$ can only contract with itself this is scale free.[^9]
Discussion {#sect:conclusions}
==========
In this paper we explore modifications to the dynamics of inflationary models in the presence of $N$ light scalar degrees of freedom. We make a distinction between spectator fields, which do not contribute to the background energy density, and participator fields, whose potential terms contribute to the inflationary background. As summarized in the introduction, a number of very general arguments place finite upper bounds on $N$, which typically take the form $N\ll M_p^2/H^2$. We first consider the dynamics of stochastic inflation in the presence of a large number of light fields, and show that when $N$ is large there is a distinction between stochastic and eternal inflation which does not apply in the single field case. In particular, with a large number of participator fields we show that there is a regime where the individual field motion is dominated by quantum fluctuations and thus stochastic, while the overall evolution of the universe is deterministic. Moreover, if the stochastic fields have symmetry breaking potentials, then one can create a large number of apparent “pocket” universes, while retaining the ability to end inflation globally, and controlling the divergences characteristic of scenarios in which inflation is genuinely eternal.
Secondly, we explore loop corrections to the 2-point correlation function that provides the inflationary perturbation spectrum. Since any light field can run round a loop, these will typically scale with $N$. We analyze two subcases – a single inflaton with $N$ spectator fields, and $N$ participator fields. In the former case, we find an explicit bound, but one which is weaker (by one over a factor that must be small during slow roll) than the simple form described above, namely $$N\lsim \frac{M_p^2}{H^2}\frac{1}{\epsilon}.$$ On the other hand, with $N$ participator fields ($N$-flation type scenarios), the loop correction is small and independent of $N$. We can understand this result by recasting the action in terms of a composite single scalar field. Finally, in the course of this work, we have had need to look closely at the computation of loop corrections in the in-in formalism. These calculations raise a number of subtle issues, and we give details of our approach in the Appendices.
One might ask whether bounds on $N$ actually rule out otherwise realistic fundamental theories. Within string theory, Vafa [@Vafa:2005ui] argued that $T$ and $S$ dualities ensure that the volume of scalar moduli space is finite, and that there is a strict upper bound on the number of matter fields. We are not aware of explicit stringy constructions which would saturate the large $N$ bounds described above in any reasonable compactification of string theory and with a finite non-zero Newton constant.
As we have noted, many independent arguments put limits on the allowed value of $N$. For example, reference [@Watanabe:2007tf] notes that if $N$ is too large, the inflaton can decay into a large number of species during reheating, which may cause phenomenological problems in the later universe, while [@Leblond:2008gg] suggests that the validity of the perturbative expansion itself provides a constraint on $N$. We have chosen to work in the simplest models of multi-field inflation, where the fields have uncoupled potentials. One can construct multifield hybrid inflation [@Linde:1993cn] with a large number of coupled fields, and it would be interesting to ask whether such models, while not ruled out by radiative corrections to their potentials, are instead ruled out by the loop corrections to their power spectrum. Also, while higher correlation functions themselves do not seem to impose any bound on $N$ [@Battefeld:2006sz], one could check whether this was also true of their quantum corrections.
Acknowledgments
===============
We thank Nicola Bartolo, Emanuela Dimastrogiovanni, Bei Lok Hu, Daniel Kabat, Eiichiro Komatsu, Louis Leblond, Liam McAllister, Alberto Nicolis, David Seery, Sarah Shandera and Steven Weinberg for a number of useful conversations. We are particularly grateful to Walter Goldberger for a number of extremely helpful suggestions. RE is supported in part by the United States Department of Energy, grant DE-FG02-92ER-40704 and by an NSF Career Award PHY-0747868. This research was supported by grant RFP1-06-17 from The Foundational Questions Institute (fqxi.org)
The “in-in” formalism and one loop corrections to two-point correlation functions for multifield models {#app:1loop}
=======================================================================================================
In this Appendix, we review the canonical quantization approach to the Schwinger-Keldysh “in-in” formalism [@Schwinger:1961; @Keldysh:1964ud] and use it to compute the one-loop correction[^10] to the $I$-th field two-point correlation function. There are two types of loops: a two vertex loop of the type considered by Weinberg [@Weinberg:2005vy], and a one vertex loop of the type considered by Seery [@Seery:2007we] which we calculate below.
The Schwinger-Keldysh “in-in” formalism [@Schwinger:1961; @Keldysh:1964ud], was first applied to cosmology by Jordan [@Jordan:1986ug] and Calzetta and Hu [@Calzetta:1986ey]. This formalism was reintroduced into the computation of cosmological correlations by Maldacena [@Maldacena:2002vr] and extended beyond tree-level by Weinberg [@Weinberg:2005vy]. We follow Weinberg’s notation and methods, with some comments on its relationship with the functional method of [@Jordan:1986ug; @Calzetta:1986ey]. The correlation that we want to compute is $$\label{eqn:ininformula}
\langle W(t)\rangle =\left\langle \left(T e^{-i\int_{-\infty}^{t}H_{\rm int}(t) dt}\right)^{\dagger} ~ W(t)~ \left(Te^{-i\int_{-\infty}^{t} H_{\rm int}(t) dt}\right)\right\rangle,$$ where $W(t)$ is some product of fields, $H_{\rm int}$ is the interaction Hamiltonian, both of which are constructed out of Heisenberg (free) fields, and $T$ is the time-ordering symbol. The expectation is taken over the true “in” vacuum.
One can use the Dyson expansion for the time evolution operator, together with the Baker-Campbell-Hausdorf formula to express equation (\[eqn:ininformula\]) in what may appear to be a more convenient form [@Weinberg:2005vy]; $$\begin{aligned}
\label{eqn:Wein2}
\langle W(t) \rangle = \sum_{N = 0}^{\infty}i^{N}\int_{-\infty}^{t}dt_{N}\int_{-\infty}^{t_{N}}dt_{N-1}...\int_{-\infty}^{t_{2}}dt_{1}\left\langle \left[H_{\rm int}(t_{1}), \left[H_{\rm int}(t_{2}), ... \left[H_{\rm int}(t_{N}), W(t)\right]...\right]\right]\right\rangle.\end{aligned}$$ While technically equivalent, for these computations this form proves problematic. To perform calculations one assumes that at very early times the vacuum state is the bare vacuum, that is, that the interactions disappear. Operationally, one implements this by deforming the time contour in equation (\[eqn:ininformula\]) off the real axis into the lower half plane to include a small amount of evolution in imaginary time, killing off the interactions in the far past ($-\infty\rightarrow-\infty(1+i\epsilon)$). Unfortunately, in equation (\[eqn:Wein2\]) this scheme can not be easily implemented. Contours entering from the right and the left side of equation (\[eqn:ininformula\]) are treated identically when deriving equation (\[eqn:Wein2\]). However, once the vacuum prescription has been specified these contours are in fact complex conjugates of each other, and can no longer be freely interchanged.[^11] In the rest of this work, we work directly with equation (\[eqn:ininformula\]), expanding it to the desired order.
At first order we have, using hermiticity, $$\label{eqn:1pt}
\langle Q^{I}(t)Q^{I}(t) \rangle_{1} = -2\Im \int_{-\infty_{-}}^{t} dt_1 \langle H^{(4)}_{\rm int}(t_1) Q^I(t)Q^I(t)\rangle,$$ where we have introduced the shorthand $\infty_{\pm}\equiv \infty(1\pm i\epsilon)$. Notice that equation (\[eqn:1pt\]) is manifestly real. This reality is not surprising: we are computing correlation functions and not transition amplitudes as noted in [@Calzetta:1986ey; @Jordan:1986ug]. At second order we have, again using hermiticity, $$\begin{aligned}
\nonumber\label{eqn:2pt2}
\langle Q^I(t) Q^J(t)\rangle_{2} & = & - 2\Re \int^{t}_{-\infty_{-}} dt_2 \int^{t_2}_{-\infty_{-}} dt_1 \langle H^{(3)}_{\rm int}(t_1)H^{(3)}_{\rm int}(t_2)Q^I(t) Q^J(t)\rangle \\ && +\int_{-\infty_{-}}^{t}dt_{1}\int_{-\infty_{+}}^{t}dt_{2}\langle H^{(3)}_{\rm int}(t_1)Q^I(t)Q^J(t)H^{(3)}_{\rm int}(t_2)\rangle,\end{aligned}$$ which is also real. Note the time integral contour of the second term.
At tree-level, the two-point correlation function, $\langle Q^{I}_{k} Q^{I}_{k} \rangle$, is simply the power spectrum $Q^I_kQ^I_k{}^*$. This suggests that instead of Wick contracting equation (\[eqn:2pt2\]) into Feynman propagators, we should contract it into Wightman functions. Let us see how this works by first defining the contraction $$\overline{{Q_{{{\bf{k}}}}^{I}}{Q_{{{\bf{p}}}}^{J}}}\equiv {Q_{{{\bf{k}}}}^{I}}{Q_{{{\bf{p}}}}^{J}}-:{Q_{{{\bf{k}}}}^{I}}{Q_{{{\bf{p}}}}^{J}}: ,\label{eqn:wickcontraction}$$ where $:{Q_{{{\bf{k}}}}^{I}}{Q_{{{\bf{p}}}}^{J}}:$ is the usual normal ordered product and $$\int d^3k~{Q_{{{\bf{k}}}}^{I}} = Q^I({{\bf{x}}},t) = \int d^3 k\,e^{i{{\bf{k}}}\cdot{{\bf{x}}}} \left[a_I({{\bf{k}}}){U_{k}^{I}}(t)+a_I^{*}(-{{\bf{k}}}){U_{k}^{I}{}^{*}}(t)\right] = \int d^3k\left({Q_{{{\bf{k}}}}^{I}{}^{+}}(t)+{Q_{{{\bf{-k}}}}^{I}{}^{-}}(t)\right). \label{eqn:Qexpansion2}$$ The propagator is now [@Seery:2008qj] $$\langle{Q_{{{\bf{k}}}}^{I}}{Q_{{{\bf{p}}}}^{J}}\rangle = \langle[{Q_{{{\bf{k}}}}^{I}{}^{+}},{Q_{{{\bf{p}}}}^{J}{}^{-}}]\rangle = {U_{k}^{I}}{U_{p}^{J}{}^{*}}\delta^I_J\delta^3({{\bf{k}}}+{{\bf{p}}}) \label{eqn:wightman}.$$ With the contraction, equation (\[eqn:wickcontraction\]), it is straightforward to prove that the correlations of equations (\[eqn:2pt2\]) and (\[eqn:1pt\]) are a sum of all possible contractions into both connected and disconnected pieces as per the usual Wick’s Theorem in standard quantum field theory. We ignore the disconnected pieces, i.e. the “vacuum fluctuation” pieces, in which vertices are connected only to other vertices and no external lines. Since we are computing correlation functions[^12], these pieces automatically cancel [@Weinberg:2005vy].
As an aside, we note here that in the original “in-in” formalism of Schwinger-Keldysh (and see also [@Jordan:1986ug; @Calzetta:1986ey; @Collins:2005nu]), the original fields are split into $+$ forward time fields and $-$ backward time fields, each with its own generating functional. One can then compute all four possible Green’s functions for the fields $(+,+),(-,-),(+,-),(-,+)$, and then all the possible contractions of the correlation will be one of the four above. In other words, the doubling of the fields into $+$ and $-$ sets provides a convenient book-keeping method for keeping track of the contours. In our formalism, we have a single contraction equation (\[eqn:wickcontraction\]), but we pay for this simplicity by having to keep track explicitly of the contours of the integrals we must perform.
Our strategy is as follows:
- Expand equation (\[eqn:ininformula\]) to the desired order keeping careful track of the vacuum prescription contours.
- [Insert the interaction Hamiltonian $H_{\rm int}$ and expand all fields using equation (\[eqn:Qexpansion\]).]{}
- [Expand using Wick’s theorem and the contraction defined in equation (\[eqn:wickcontraction\]) discarding disconnected diagrams.]{}
- [Perform the integral over time(s) leaving only integrals over the internal momenta.]{}
- [Regularize the remaining integrals to obtain the final answer.]{}
One should note that these loops contain both UV and IR divergences, unlike the case of [@Weinberg:2005vy]. The presence of the IR divergences means that our use of dimensional regularization will yield *incorrect* finite terms. However, we are only interested in the $\log q$ dependence, which is correctly computed by dimensional regularization. In this paper, we are concerned with large-$N$ effects, but a clear understanding of the mechanism which regulates these logs is of critical importance, and we plan to pursue this topic in a future publication.[^13]
Two-vertex loop
---------------
The two-vertex loop is generated by a three-point interaction term [@Seery:2005gb], $$H_{\rm int}^{(3)}(t) = \int d^3 x \sum_{I,J}\left[\frac{a^3}{4}\sqrt{2\epsilon_I} Q_I \dot{Q}_J \dot{Q}_J + \frac{a^3}{2} \sqrt{2\epsilon_I}\partial^{-2} \dot{Q}_I \dot{Q}_J \partial^2 Q_J\right],\label{eqn:ijHI}$$ with the coupling term, $$\epsilon_I \equiv \frac{1}{2}\frac{\dot{\phi}_I^2}{H^2},$$ where here, and in all subsequent calculations in this appendix, we have dropped all factors of $M_p$ to simplify notation. We have also dropped the sums over $J$ and $I$ since all the fields are identical – most of the diagrams have identical amplitudes and we will sum them later.
For the two-vertex terms generated by equation (\[eqn:ijHI\]), there are two types of vertices, and we have to compute the contributions from all the possible combinations of two vertices and internal loops (see Fig. (\[fig:loopdiagrams\])), giving us a total of six diagrams. In the following, we sketch the derivation for the first diagram of Fig. (\[fig:loopdiagrams\]) with $Q_I\dot{Q}_J\dot{Q}_J$ vertices at both ends – the other diagrams are computed similarly. Using the first term of equation (\[eqn:ijHI\]) in equation (\[eqn:2pt2\]), and Wick expanding everything with the contraction of equation (\[eqn:wickcontraction\]), we get $$\begin{aligned}
\label{eqn:term1int}
&&\int d^3x~e^{i{{\bf{q}}}\cdot({{\bf{x}}}-{{\bf{x}}}')}\langle{\rm vac, in}| Q^{I}({{\bf{x,\tau}}})Q^{I}({{\bf{x,\tau}}}) |{\rm vac, in}\rangle_2 \nonumber \\
&=&-8(2\pi)^9\sum_{J}\Re \left[\int_{-\infty_-}^{\tau}d\tau_2 \int_{-\infty_-}^{\tau_2}d\tau_1 \frac{a^2(\tau_1)\sqrt{2\epsilon_I(\tau_1)}}{4}\frac{a^2(\tau_2)\sqrt{2\epsilon_I(\tau_2)}}{4}{U_{q}^{I}}(\tau_1){U_{q}^{I}{}^{*}}(\tau){U_{q}^{I}}(\tau_2){U_{q}^{I}{}^{*}}(\tau)\right. \nonumber \\
&&\times\int d^3k \int d^3k' {\dot{U}_{k}^{J}}(\tau_1){\dot{U}_{k}^{J}{}^{*}}(\tau_2){\dot{U}_{k'}^{J}}(\tau_1){\dot{U}_{k'}^{J}{}^{*}}(\tau_2)\delta^3({{\bf{q}}}+{{\bf{k}}}+{{\bf{k}}}') \nonumber \\
&& -\int_{-\infty_+}^{\tau}d\tau_2 \int_{-\infty_-}^{\tau} d\tau_1 \frac{a^2(\tau_1)\sqrt{2\epsilon_I(\tau_1)}}{4}\frac{a^2(\tau_2)\sqrt{2\epsilon_I(\tau_2)}}{4}{U_{q}^{I}}(\tau_1){U_{q}^{I}{}^{*}}(\tau){U_{q}^{I}{}^{*}}(\tau_2){U_{q}^{I}}(\tau) \nonumber \\
&& \left.\times \int d^3k \int d^3k' {\dot{U}_{k}^{J}}(\tau_1){\dot{U}_{k}^{J}{}^{*}}(\tau_2){\dot{U}_{k'}^{J}}(\tau_1){\dot{U}_{k'}^{J}{}^{*}}(\tau_2)\delta^3({{\bf{q}}}+{{\bf{k}}}+{{\bf{k}}}') \right],\end{aligned}$$ where we have dropped all disconnected terms. An overdot in this expression denotes a derivative with respect to conformal time, $\tau$.
We now integrate over the times first, since the interactions of equation (\[eqn:ijHI\]) satisfy the late time convergence conditions[^14]. In near de Sitter space $H\approx \mathrm{const}$, and the mode functions ${U_{k}^{I}}$ can be approximated by $${U_{k}^{I}}(\tau) = \sqrt{\frac{H^2}{2(2\pi)^3k^3}}(1+ik\tau)e^{-ik\tau}. \label{eqn:modefunction}$$ Although in general the couplings $\epsilon_I(\tau)$ are not constant, they change only slowly, so we make the further simplifying assumption that they are roughly constant at late times. Plugging equation (\[eqn:modefunction\]) into equation (\[eqn:term1int\]), and then integrating over the times, we obtain $$\begin{aligned}
\label{eqn:term1int2}
&&\int d^3x~e^{i{{\bf{q}}}\cdot({{\bf{x}}}-{{\bf{x}}}')}\langle {\rm vac, in}| Q^{I}({{\bf{x,\tau}}})Q^{I}({{\bf{x,\tau}}}) |{\rm vac, in}\rangle_2 \nonumber \\
&=&\frac{H^4}{16(2\pi)^3}\sum_{J} \int d^3k~d^3k' \delta^3({{\bf{q}}}+{{\bf{k}}}+{{\bf{k}}}') ~\left\{\epsilon_I \left[ \frac{5}{4}\frac{kk'}{q^7K}+\frac{3}{4}\frac{kk'}{q^6K^2}+\frac{kk'(K+q)^2}{q^6K^4}\right] \right. \nonumber \\
&+& \left. {\cal{O}}(q\tau) + ~...\right\},\end{aligned}$$ where we have assumed identical fields hence $\epsilon_I = \epsilon_J$ and used $K\equiv k+k'+q$.
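As a side check of the mode functions in equation (\[eqn:modefunction\]), the following sympy sketch (our own cross-check; we assume the standard massless de Sitter mode equation $U''-(2/\tau)U'+k^{2}U=0$ in conformal time, which is the $\epsilon\to0$ limit of the quadratic action of Appendix \[App:4ptderivation\]) verifies that they solve this equation and reproduce the scale-invariant power $H^{2}/\left(2(2\pi)^{3}k^{3}\right)$ outside the horizon.

```python
import sympy as sp

tau = sp.symbols('tau', negative=True)   # conformal time, tau < 0 during inflation
k, H = sp.symbols('k H', positive=True)

U = sp.sqrt(H**2/(2*(2*sp.pi)**3*k**3))*(1 + sp.I*k*tau)*sp.exp(-sp.I*k*tau)

# massless mode equation in de Sitter, conformal time (assumed here)
mode_eq = sp.diff(U, tau, 2) - (2/tau)*sp.diff(U, tau) + k**2*U
print(sp.simplify(mode_eq))              # 0

# tree-level power |U_k|^2 and its super-horizon (k*tau -> 0) limit
power = sp.simplify(U*sp.conjugate(U))
print(power)                             # expect H**2*(k**2*tau**2 + 1)/(16*pi**3*k**3)
print(power.subs(tau, 0))                # H**2/(16*pi**3*k**3) = H^2/(2 (2 pi)^3 k^3)
```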
Considering only scales that are larger than the Hubble horizon $q\tau \ll 1$ allows us to also drop the second to last term in equation (\[eqn:term1int2\]). The rest of the 2-vertex diagrams can be computed identically, and the final answer we obtain is $$\begin{aligned}
\label{eqn:totalterm2vertex}\nonumber
& & \int d^{3}x\, {\rm e}^{i{\bf q}\cdot({\bf x}-{\bf x'})}\langle {\rm vac, in}|Q^{I}({\bf x},\tau)Q^{I}({\bf x'}, \tau) | {\rm vac, in} \rangle_{2}\\\nonumber & = & \frac{H^{4}}{2(2\pi)^{3}}N\int d^{3}k\int d^{3}k'\delta({\bf k} + {\bf k'} + {\bf q})\frac{\epsilon_I}{8}\Bigg[\frac{1}{K}\bigg[ \frac{39}{4}\frac{kk'}{ q^{7}}+ \frac{5}{8}\frac{k'}{q^{5}k}- \frac{k'}{q^{4}k^{2}}+ \frac{31}{4}\frac{k'}{q^{3}k^{3}} \bigg] \\\nonumber &&+ \frac{1}{K^{2}}\bigg[\frac{51}{4}\frac{kk'}{q^{6}}+ \frac{31}{2}\frac{ k'}{k^{3}q^{2}} +\frac{5}{8}\frac{k'}{q^{4}k}+ \frac{3}{4}\frac{k'}{q^{3}k^{2}}\bigg] + \frac{1}{K^{3}}\left[ 3\frac{kk'}{q^{5}}+\frac{k'}{q^{2}k^{2}} - \frac{2k'}{qk^{3}} \right] \\ && + \frac{1}{K^{4}}\bigg[\frac{3}{2}\frac{k'}{kq^2} -\frac{1}{2}\frac{kk'}{q^{4}} -2\frac{k'}{k^{2}q} \bigg] \Bigg]. \end{aligned}$$
Our task now is to compute the physical logs from equation (\[eqn:totalterm2vertex\]). From simple dimensional analysis, they take the form $$\int d^3k d^3k'~\delta^3({{\bf{q}}}+{{\bf{k}}}+{{\bf{k}}}') \frac{k^m k'{}^l}{(q+k+k')^n} \rightarrow q^{3+l+m-n+\delta}F_{(l,m,n)}\ln q + \mathrm{polynomial~divergences}. \label{eqn:dimreg}$$ To obtain the coefficient $F_{(l,m,n)}$, we use the identity $$q\int d^3k d^3k'~\delta^3({{\bf{q}}}+{{\bf{k}}}+{{\bf{k}}}') f(q,k,k') = 2\pi\int_0^{\infty}k dk \int^{k+q}_{|k-q|}k' dk'~f(q,k,k'). \label{eqn:neatID}$$ We can extract the coefficients of the physical logs from the above expression using l’Hôpital’s rule: we take the limit of both momenta going to infinity, differentiating the right-hand side of the above identity as many times as needed to isolate the $\log$ divergences, and compare the result with the same operations applied to the right-hand side of equation (\[eqn:dimreg\]).
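As an illustration of this procedure, the following sympy sketch (our own cross-check, not part of the original computation) extracts the coefficient for the single term $(l,m,n)=(1,1,2)$, i.e. $kk'/K^{2}$, by peeling off the polynomially divergent pieces of the radial integrand with successive limits and reading off the $1/k$ piece that generates the ultraviolet logarithm; we assume the dimensional-regularization convention that the scale-dependent logarithm enters as $\ln(\Lambda/q)$, so the coefficient of $\ln q$ is minus that of the UV log.

```python
import sympy as sp

q, k, kp = sp.symbols('q k kp', positive=True)

# term with (l, m, n) = (1, 1, 2):  k k' / K**2,  K = k + k' + q
f = k*kp/(k + kp + q)**2

# radial form of the delta-constrained integral, equation (neatID);
# in the UV region k >> q the lower limit |k - q| is k - q
inner = sp.integrate(k*kp*f, (kp, k - q, k + q))

# peel off the polynomially divergent pieces (l'Hopital-style limits)
c2 = sp.limit(inner/k**2, k, sp.oo)
c1 = sp.limit((inner - c2*k**2)/k, k, sp.oo)
c0 = sp.limit(inner - c2*k**2 - c1*k, k, sp.oo)
cm1 = sp.limit((inner - c2*k**2 - c1*k - c0)*k, k, sp.oo)   # coefficient of 1/k

coeff_uv_log = 2*sp.pi/q*cm1          # coefficient of ln(Lambda) in the k integral
F = sp.simplify(-coeff_uv_log/q**3)   # ln(Lambda/q): the ln(q) coefficient flips sign
print(F)                              # pi/3, matching F_{(1,1,2)} in the table below
```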
Dropping all other polynomially divergent terms and the $\log$ IR divergences, each term in equation (\[eqn:totalterm2vertex\]) contributes the following: $$\begin{aligned}
\nonumber
\begin{array}{llllllll}
F_{(1,1,1)} & = \frac{-\pi}{15}, & F_{(1,1,2)} &=\frac{\pi}{3}, & F_{(1,-3,2)} & =\pi, & F_{(1,-3,3)} & = 2\pi,\\
F_{(1,-1,4)}& =0, & F_{(1,-2,2)}&=\pi, & F_{(1,1,4)}&=\frac{\pi}{2}, & F_{(1,1,3)}&=\frac{2\pi}{3},\\
F_{(1,-2,3)}&=0, & F_{(1,-1,1)}&=\frac{-2\pi}{3}, & F_{(1,-2,1)}&=\pi, & F_{(1,-3,1)}&=0,\\F_{(1,-1,2)}&=\pi.
\end{array}\end{aligned}$$ Given identical fields, such that $\epsilon_I = \epsilon_J$, putting everything together gives us the total contribution from the two-vertex one-loop diagram: $$\label{eqn:term1result}
\int d^3x~e^{i{{\bf{q}}}\cdot({{\bf{x}}}-{{\bf{x}}}')}\langle {\rm vac, in}| Q^{I}({{\bf{x,\tau}}})Q^{I}({{\bf{x,\tau}}}) |{\rm vac, in}\rangle_2 = \frac{H^4 N \epsilon_I}{2(2\pi)^3q^3}\left[\left(\frac{2017}{120}\right)\pi \ln q\right] +...,$$ where ‘$...$’ denotes scale-free polynomial divergences.
One-vertex loop
---------------
On the other hand, the one-vertex loop is generated by a four-point interaction term derived in Appendix \[App:4ptderivation\], $$\begin{aligned}
H_{\rm int}^{(4)}(t) & =& \int d^3x a^{3}\sum_{I,J}\left[ \frac{1}{4Ha^{2}}\partial_i Q_J\partial_iQ_{J}\partial^{-2}(\partial_j \dot{Q}_I\partial_jQ_I + \dot{Q}_I\partial^2 Q_I) \right. \nonumber \\
&& +\frac{1}{4H}\dot{Q}_J \dot{Q}_J \partial^{-2}(\partial_i \dot{Q}_I \partial_i Q_I + \dot{Q}_I \partial^2Q_I) \nonumber \\\nonumber
&& + \frac{3}{4H}\partial^{-2}(\partial_j \dot{Q}_J \partial_j Q_{J} + \dot{Q}_J\partial^2 Q_{J})\partial^{-2}(\partial_j \dot{Q}_I \partial_j Q_I + \dot{Q}_I \partial^2 Q_I) \\
&&\left. +\frac{1}{4}\beta_{2,j}\partial^2 \beta_{2,j}+\dot{Q}_I\beta_{2,i}\partial_i Q_I\right],
\label{eqn:ijHI2}
\end{aligned}$$ where, $$\begin{aligned}
\label{eqn:beta2j1}
\frac{1}{2}\beta_{2,j}\simeq \partial^{-4} \left( \partial_{j}\partial_{k}\dot{Q}^{I}\partial_{k}Q_{I} + \partial_{j}\dot{Q}^{I}\partial^{2}Q_{I}- \partial^{2}\dot{Q}^{I}\partial_{j}Q_{I}-\partial_{m}\dot{Q}^{I}\partial_{j}\partial_{m}Q_{I}\right). \end{aligned}$$ The four-point interaction (\[eqn:ijHI2\]) is explicitly *independent* of the background potential[^15] $V$. This means that the one-vertex one-loop correction to a field $I$ from all other fields $J\neq I$ is the same whether the $J$ fields are spectators or participators. Each four-point term has the form $Q^I Q^I Q^J Q^J$, meaning that depending on the external lines, we can contract it with either the $J$ or the $I$ fields; thus each term will effectively generate two different diagrams.
Fortunately, only the self-interaction term, i.e. $I=J$ contracted with $I$ external lines, contributes a physical log. In the UV, all other terms diverge polynomially and we assume that they can be absorbed by renormalization. Heuristically, this is because the interactions are secretly mediated by gravitons and this, combined with the symmetry of the action, prevents any non-self-interaction loops from contributing, as described in Section \[sect:loop\]. From a diagrammatic perspective, to yield a log divergence, the final momentum integrand must possess a scale. That is, it must have the form $\sim k^{\alpha}/({{\bf{k}}}\pm{{\bf{q}}})^{\beta}$ for some $\alpha \in \mathbb{Q}$ and $\beta \in \mathbb{Z}^{+}$, where ${{\bf{q}}}$ and ${{\bf{k}}}$ are the external and internal momenta respectively. However, note that the fields acted on by $\partial^{-2}$ are always identically paired, i.e., they appear as $\partial^{-2}(Q^{I} Q^{I})$ and never $\partial^{-2}(Q^{I}Q^{J})$. Hence if $I\neq J$, the integrand can only possess momentum factors like $1/({\bf{k}}^2)$ or $1/({\bf{q}}+{\bf{q}}')^2$, which only contribute polynomial divergences. For example, a four-point term with an interaction (dropping time derivatives as they do not affect the final result) $Q_J \partial^{-2} (Q_I Q_I) Q_J$ for a $\langle Q^I Q^I \rangle$ correlator will yield the diagrams shown in Fig. \[fig:momentumstructure\] if $I\neq J$, which is simply a vacuum fluctuation diagram multiplied by a propagator.
[fig3]{} $$\begin{aligned}
&&\langle Q_I(x) Q_I(x') \partial^{-2}(Q_I(z) Q_I(z)) Q_J(z) Q_J(z)\rangle \propto \\
{}\nonumber \\
{}\nonumber \\
{}\nonumber \\
&&\parbox{40mm}{
\begin{fmfgraph*}(40,25)
\fmfleft{in}
\fmfright{out}
\fmf{plain}{in,v1,out}
\fmflabel{$I,x$}{in}
\fmflabel{$I,x'$}{out}
\end{fmfgraph*}}
~~~~~~~~~~~~\times
\parbox{40mm}{
\begin{fmfgraph*}(40,25)
\fmfleft{in2}
\fmfright{out2}
\fmf{phantom}{in2,v2,out2}
\fmf{plain,label=$I$,tension=1}{v2,v2}
\fmf{dots,label=$J$, left=90,tension=1}{v2,v2}
\fmflabel{$z$}{v2}
\end{fmfgraph*}}
+~~~...
\nonumber\end{aligned}$$
We now turn our attention to the self-interaction term, where $I=J$ interaction terms are contracted with $I$ external lines. This calculation is operationally the same as that done by Seery [@Seery:2007we] for the single scalar field case using different techniques. Fourier transforming equation (\[eqn:ijHI2\]) and switching to conformal time, the interaction Hamiltonian $H_{int}^{(4)}$ becomes $$\begin{aligned}
\nonumber
H_{\rm int}^{(4)} &= & (2\pi)^{3}\int_{-\infty}^{\tau}d\tau'\sum_{I,J}\Bigg[
\frac{a}{4H}\frac{\sigma({\bf k}, {\bf p})}{({\bf k}+{\bf p})^{2}}\dot{Q}_{\bf k}^{I}Q_{\bf p}^{I}\dot{Q}_{\bf a}^{J}\dot{Q}_{\bf b}^{J}\delta({\bf k}+{\bf p}+{\bf a}+{\bf b})
-\frac{a}{4H}{\bf a}\cdot{\bf b}\frac{\sigma({\bf k}, {\bf p})}{({\bf k}+{\bf p})^{2}}\dot{Q}_{\bf k}^{I}Q_{\bf p}^{I}Q_{\bf a}^{J}Q_{\bf b}^{J}\delta({\bf k}+{\bf p}+{\bf a}+{\bf b})
\\&&
+a^{2}\left(\frac{3}{4}\frac{\sigma({\bf k},{\bf p})}{({\bf p} + {\bf k})^{2}}\frac{\sigma({\bf a},{\bf b})}{({\bf a} + {\bf b})^{2}}+\frac{{\bf z}({\bf k}, {\bf p})\cdot{\bf z}({\bf a}, {\bf b})}{({\bf k}+{\bf p})^{4}({\bf a}+{\bf b})^{2}}+2\frac{{\bf z}({\bf k}, \bf{p})\cdot{b}}{({\bf k}+\bf{p})^{4}}\right) \dot{Q}_{\bf k}^{I}Q_{\bf p}^{I}\dot{Q}_{\bf a}^{J}Q_{\bf b}^{J}\delta({\bf k}+{\bf p}+{\bf a}+{\bf b})\Bigg],\end{aligned}$$ where overdot again denotes derivative with respect to conformal time. We use the notation of [@Seery:2007we], $$\begin{aligned}
{\bf z}({\bf k}, {\bf p})& =& \sigma({\bf k}, {\bf p})\bf{k} - \sigma({\bf p}, {\bf k}){\bf p}, \\
\sigma({\bf k}, {\bf p}) & = & {\bf k}\cdot{\bf p} + {\bf p}\cdot{\bf p}.\end{aligned}$$
After some tedious but straightforward calculation reminiscent of the previous section, and considering modes well outside the horizon, $q\tau\rightarrow 0$, we find that the one-vertex self-correction is $$\label{eqn:4ptbigmomint}
\langle Q^{I}(\tau)Q^{I}(\tau) \rangle_{1}= (2\pi)^3 \int d^3{{\bf{k}}}\left[A_s + B_s + C_s \right],$$ with the following contributions $$\begin{aligned}
A_s & = & -\frac{H^{4}}{32(2\pi)^{6}q^{7}} {\bf q}\cdot{\bf k} \Bigg[\left(\frac{6q^2}{k^3}-\frac{5 }{k}\right)\frac{k^{2}+{\bf q}\cdot{\bf k}}{({\bf k}+{\bf q})^{2}}+
\frac{10}{k}\frac{q^{2}+{\bf k}\cdot {\bf q}}{({\bf k}+{\bf q})^{2}}\Bigg],\end{aligned}$$ $$\begin{aligned}
B_s & = & -\frac{H^{4}}{32(2\pi)^{6}q^{5}}\Bigg[\frac{1}{k}\frac{k^{2}+{\bf q}\cdot {\bf k}}{({\bf k}+{\bf q})^{2}}+\frac{ 5k}{q^2}\frac{q^{2}+{\bf k}\cdot {\bf q}}{({\bf k}+{\bf q})^{2}}\Bigg],\end{aligned}$$ and $$\begin{aligned}
C_{s} & = &- \frac{H^{4}}{8(2\pi)^{6}q^{5}}\Bigg[
\,\frac{5k}{q^{2}}f({\bf k}, {\bf q}, -{\bf k}, -{\bf q}) +\frac{\left(2q^{2}- k^2\right)}{k^{3}}f({\bf q}, {\bf k}, -{\bf q}, -{\bf k})+\frac{6}{k}f({\bf k}, {\bf q}, -{\bf q}, -{\bf k})\Bigg],\end{aligned}$$ where $$\begin{aligned}
f({\bf k},{\bf p}, {\bf a}, {\bf b}) & = & \left(\frac{3}{4}\frac{\sigma({\bf k},{\bf p})}{({\bf p} + {\bf k})^{2}}\frac{\sigma({\bf a},{\bf b})}{({\bf a} + {\bf b})^{2}}+\frac{{\bf z}({\bf k}, {\bf p})\cdot{\bf z}({\bf a}, {\bf b})}{({\bf k}+{\bf p})^{4}({\bf a}+{\bf b})^{2}}+\frac{{\bf z}({\bf k}, \bf{p})\cdot{b}}{({\bf k}+\bf{p})^{4}} +\frac{{\bf z}({\bf a}, \bf{b})\cdot{p}}{({\bf k}+\bf{p})^{4}}\right)\delta({\bf k}+{\bf p}+{\bf a}+{\bf b}).\end{aligned}$$
The integral in equation (\[eqn:4ptbigmomint\]) is divergent and so it needs to be regularized. After regularization we obtain $$A_s = -\frac{H^4}{2(2\pi)^3q^3}\frac{5\pi}{16} \ln q~,~B_s = -\frac{H^4}{2(2\pi)^3q^3}\frac{\pi}{48} \ln q~,~C_s=-\frac{H^4}{2(2\pi)^3q^3}\frac{\pi}{3} \ln q,$$ yielding the following $\log q$ contribution $$\langle Q^{I}(\tau)Q^{I}(\tau) \rangle = -\frac{H^4}{2(2\pi)^3q^3}\frac{2\pi}{3}\ln q + ... ,\label{eqn:4ptselfcorrection}$$ which is our final answer.
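As a quick arithmetic cross-check of the quoted sum (our own addition), the three regularized coefficients indeed combine to $2\pi/3$:

```python
from fractions import Fraction

# coefficients of -(H^4/(2 (2 pi)^3 q^3)) * pi * ln(q) from A_s, B_s and C_s
print(Fraction(5, 16) + Fraction(1, 48) + Fraction(1, 3))   # 2/3
```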
Finally, we sketch the technique we used to extract the $\log q$ terms from equation (\[eqn:4ptbigmomint\]). The key idea is to write the integrals like $$\int d^3 {{\bf{k}}} \frac{\{1,k^i,k^ik^j\}}{k^{2\alpha}({\bf k}\pm{\bf q})^{2\beta}},$$ (where $\alpha$ is a half integer and $\beta \in \{1,2,3\}$) as a sum of terms of the form $$f(q^i)\int d^3 {{\bf{k}}} \frac{1}{k^{2\alpha-n}({\bf k}\pm{\bf q})^{2\beta-m}}.\label{eqn:peskinform}$$ We can then use standard techniques (e.g. [@Peskin:1995ev]) to evaluate the integrals, in combination with the following trick. Define $$\begin{aligned}
\langle\{1,k^i,k^ik^j\}\rangle_{\alpha, \beta} & \equiv & \int d^d k \frac{\{1,k^i,k^ik^j\}}{(k^2)^{\alpha}({\bf k}\pm{\bf q})^{2\beta}},\\
\langle 1 \rangle_{\alpha, \beta} &= & I_{\alpha, \beta}. \label{eqn:trickint}\end{aligned}$$ Now since $k^i$ is integrated out, the only vector quantity left is $q^i$, so the following must be true $$\langle k^i \rangle_{\alpha, \beta} = B_{\alpha, \beta\,}q^i~,~\langle k^i k^j \rangle_{\alpha, \beta} = C_{\alpha, \beta}\,q^i q^j + D_{\alpha, \beta}\, \delta^{ij}q^2, \label{eqn:trickint2}$$ where $B_{\alpha, \beta}$, $C_{\alpha, \beta}$ and $D_{\alpha, \beta}$ are coefficients that may contain ultra-violet divergent components. We can then dot both sides of equation (\[eqn:trickint2\]) with $q$’s and complete the square to eliminate the numerator $({{{\bf{q}}}\cdot{{\bf{k}}}})$ terms. For example (for the $B$ term), $$B_{\alpha, \beta}\,q^2 = \pm\frac{1}{2}\langle ({{\bf{q}}}\pm {{\bf{k}}})^2 \rangle_{\alpha, \beta} \mp \frac{1}{2} \langle k^2 + q^2 \rangle_{\alpha,\beta},$$ and the first term cancels one of the powers of $({\bf k}\pm{\bf q})$ while the second term cancels out one power of $k^{2}$. This leaves us with $$B_{\alpha,\beta} =\pm \frac{1}{2q^{2}} I_{\alpha, \beta-1} \mp\frac{1}{2}I_{\alpha,\beta} \mp \frac{1}{2q^{2}}I_{\alpha-1,\beta}.$$ We can iterate this trick until we remove all terms with powers of $({\bf q}\cdot{\bf k})$ in the numerator or cancel all the $({\bf k}\pm{\bf q})$ terms in the denominator, resulting in integrals that are polynomially divergent and hence can be discarded. The remaining integrals are in the form of equation (\[eqn:peskinform\]) and can be easily regularized. The $C_{\alpha, \beta}$ and $D_{\alpha, \beta}$ terms can be similarly computed by dotting twice with $q$’s.
Multifield 3-point and 4-point Action {#App:4ptderivation}
=====================================
In this paper, we make use of the Arnowitt-Deser-Misner (ADM) formalism [@Arnowitt:1962hi] to expand the action equation (\[eqn:Nflationaction\]) to 4th order in perturbations. The derivation is straightforward if rather tedious (see for example refs. [@Seery:2005gb; @Seery:2007we] for a detailed application of this formalism), so we simply collect the results. The background $N$-field action is $$S = \int d^4 x \sqrt{g}\left[\frac{R}{2} + \sum_I \left( -\frac12 (\partial \phi_I)^2 + V_I(\phi_I)\right)\right],$$ the ADM metric is $$ds^2 = -N^2dt^2 + h_{ij}(dx^i + N^idt)(dx^j + N^j dt),$$ and we choose to work in the spatially flat gauge so $h_{ij} = a^2(t)\delta_{ij}$. In other words, our metric perturbation has been set to zero by a gauge choice. In addition, we focus on the scalar perturbations, and ignore vector and tensor pieces. This means that the diagrams we computed do not have graviton propagators or loops. In this gauge the fields have non-zero perturbation $$\phi_I \rightarrow \phi_I + Q_I.$$ Field indices are summed over when contracted, $X^I Y_I = \sum_I X_I Y_I$ and $V = \sum_I V_I(\phi_I)$, with no cross-coupling terms between the fields.
The quadratic action is (exactly) [@Seery:2005gb] $$\label{eqn:quadraticaction}
S_2 =\frac{1}{2}\int dt d^3x\, a^3\left[ \dot{Q}_I^2 - a^{-2}(\partial Q_I)^2 - \left(V_{,IJ} - \frac{1}{a^3}\frac{d}{dt}\left(\frac{a^3}{H}\dot{\phi}_I\dot{\phi}_J\right)\right)Q^I Q^J\right].$$ The third order action is, to leading order in slow roll [@Seery:2005gb], $$\begin{aligned}
\nonumber
S_3 &=& \int dt d^3 x\, a^3 \left[-\frac{1}{4H}\dot{\phi}^JQ_J\dot{Q}^I\dot{Q}_I - \frac{1}{2H}\dot{\phi}^J\partial^{-2}\dot{Q}_J\dot{Q}^I\partial^2Q_I \right. \\
&&+ \left. \left. \frac{1}{a^3}\frac{\delta L}{\delta Q^J}\right|_1 \left(\frac{\dot{\phi}^J}{4H}\partial^{-2}(Q^I\partial^2 Q_I) - \frac{\dot{\phi}^J}{8H}Q^IQ_I\right)\right],\end{aligned}$$ where the $\delta L/\delta Q^J $ is the first order equation of motion.
Finally, the fourth order action is, to leading order in slow roll, $$\begin{aligned}
\nonumber S_{4} & = & \int dtd^{3}x\,a^{3}\frac{}{}\left[\frac{3}{4}\left(\partial^{-2}\left( \partial_{j}\dot{Q}^{I}\partial_{j}Q_{I} + \dot{Q}^{I}\partial^{2}Q_{I}\right)\right)^{2} \right. \\ && \left.\frac{}{} -\frac{1}{4}\beta_{2,j}\partial^{2}\beta_{2,j}+\chi_{2}(\partial_{i}\dot{Q}^{I}\partial_{i}Q_{I} +\dot{Q}^{I}\partial^{2}Q_{I}) -\dot{Q}^{I}\beta_{i,2}\partial_{i}Q_{I}\right], \label{eqn:big4pt}\end{aligned}$$ where the auxiliary fields $\chi_2$ and $\beta_{2,j}$ are, to leading order, $$\begin{aligned}
\label{eqn:beta2j}
\frac{1}{2}\beta_{2,j}\simeq \partial^{-4} \left( \partial_{j}\partial_{k}\dot{Q}^{I}\partial_{k}Q_{I} + \partial_{j}\dot{Q}^{I}\partial^{2}Q_{I}- \partial^{2}\dot{Q}^{I}\partial_{j}Q_{I}-\partial_{m}\dot{Q}^{I}\partial_{j}\partial_{m}Q_{I}\right), \end{aligned}$$ $$\partial^{2}\chi_{2} = -\frac{1}{4a^{2}H}\partial_{i}Q^{I}\partial_{i}Q_{I } - \frac{3}{2}\partial^{-2}\left( \partial_{j}\dot{Q}^{I}\partial_{j}Q_{I} + \dot{Q}^{I}\partial^{2}Q_{I}\right)-\frac{1}{4H}\dot{Q}^{I}\dot{Q}_{I}.$$ Note that the 4-point action equation (\[eqn:big4pt\]) will generate $N^2$ non-trivial diagrams, since there are two sums over the field indices. Note also here that this expression reduces to those of [@Seery:2007we] in the single field limit.
Quantization of Theories with Derivative Interactions {#app:canquant}
=====================================================
In this appendix, we describe the procedure we use to canonically quantize the classical theory, as we encounter Lagrangians with interactions containing time-derivatives of the fields. A path integral formalism is given in [@Weinberg:1995mt]. For theories with up to 2nd order in time derivatives, a treatment is given in [@Gerstein:1971fm]. In the problem we are considering, we encounter interactions up to 3rd order in time derivatives, so we extend [@Gerstein:1971fm] at least to the next to leading order in slow-roll. Extension to all orders is straightforward, and we leave it for future work. A path-integral approach is pursued by Seery [@Seery:2007we].
We follow the usual procedure in canonically quantizing a classical theory specified by a Lagrangian density, $\mathcal{L}(Q, \dot{Q})$. That is, we define the momenta conjugate to the field $Q$ by $$\begin{aligned}
\pi = \frac{\partial \mathcal{L}}{\partial \dot{Q}},
\end{aligned}$$ and construct the Hamiltonian density, $\mathcal{H}$, as the Legendre transform of the Lagrangian density $$\begin{aligned}
\mathcal{H} & = & \pi\dot{Q} - \mathcal{L},
\end{aligned}$$ where $\dot{Q}$ is expressed in terms of $\pi$. We then move to an interaction picture by separating the Hamiltonian into its quadratic part $\mathcal{H}_{0}$ and higher order part $\mathcal{H}_{\rm int}$ and replace $\pi$ in $\mathcal{H}_{\rm int}$ with the interaction picture $\pi_{I}$ given by $$\begin{aligned}
\dot{Q} = \pi_{I} & = & \left.\frac{\partial \mathcal{H}_{0}}{\partial \pi}\right|_{\pi = \pi_{I}}.
\end{aligned}$$
The question we want to address here is, what is $\mathcal{H}_{\rm int}$? Naively, one might guess that $\mathcal{H}_{\rm int} = -\mathcal{L}_{\rm int}$, where $\mathcal{L}_{\rm int} = \mathcal{L}-\mathcal{L}_{0}$ and $\mathcal{L}_{0}$ is the quadratic part of $\mathcal{L}$. If the only time derivatives of the field are in a canonical kinetic term, this is certainly the case. However, when time derivatives are present in the interaction terms, these can modify the relation between $\pi$ and $\dot{Q}$, and the construction of the Hamiltonian then generates extra interactions. Fortunately, at the order we are working the additional terms generated are either subleading in slow roll, or higher order in the fluctuations. To see this, note that the Lagrangians we consider have the schematic form $$\begin{aligned}
\mathcal{L} & = & \frac{1}{2}\dot{Q}^{2}-V(Q) + \left(\sqrt{\epsilon} f_{2}+f_{3}\right) \dot{Q} + \frac{1}{2}\left(\sqrt{\epsilon} g_{1}+g_{2}\right)\dot{Q}^{2}+\frac{1}{3}h_{1}\dot{Q}^{3} + \mathcal{O}(\epsilon Q^{3})+ \mathcal{O}(\epsilon Q^{4}) + \mathcal{O}(Q^{5})
\end{aligned}$$ where $\epsilon$ is the usual slow roll epsilon and the subscripts of $f_{m}$ and $g_m$ denote a term of order $m$ in fluctuations, $Q$. The terms containing no time derivatives are gathered into $V(Q)$. To proceed, we assume that $|Q|\sim|\dot{Q}|\sim|\pi|$. A straightforward calculation then shows that $$\begin{aligned}
\mathcal{H} & = & \frac{\dot{Q}^{2}}{2}+V(Q) - \left(\sqrt{\epsilon} f_{2} + f_{3}\right)\dot{Q} - \frac{1}{2}\left(\sqrt{\epsilon} g_{1}+g_{2}\right)\dot{Q}^{2} - \frac{1}{3}h_{1}\dot{Q}^{3}+\epsilon\left(f_{2}g_{1}\dot{Q} + \frac{1}{2} g_{1}^{2}\dot{Q}^{2}\right)\\\nonumber
&& +\mathcal{O}(\epsilon\, Q^{3})+ \mathcal{O}(\epsilon\, Q^{4}) + \mathcal{O}(Q^{5}).\\
& = & \mathcal{H}_{0}-\mathcal{L}_{I}+\mathcal{O}(\epsilon\, Q^{3})+ \mathcal{O}(\epsilon\, Q^{4}) + \mathcal{O}(Q^{5}).
\end{aligned}$$ So, to leading order in slow roll and to fourth order in fluctuations, it is safe to take $\mathcal{H}_{\rm int} = -\mathcal{L}_{\rm int}$, the correction being at most of ${\cal O}(\epsilon)$ and thus subleading during inflation.
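The statement above can be checked explicitly on a truncated version of the schematic Lagrangian. The following sympy sketch (our own illustration) keeps the generic derivative couplings $F\dot Q$ and $\tfrac12 G\dot Q^{2}$, drops the cubic $\dot Q^{3}$ term, and uses a bookkeeping parameter $\lambda$ to count powers of the couplings; here $F$ and $G$ are hypothetical stand-ins for $\sqrt{\epsilon}f_{2}+f_{3}$ and $\sqrt{\epsilon}g_{1}+g_{2}$. The difference between the exact Hamiltonian and $\mathcal{H}_{0}-\mathcal{L}_{\rm int}$ starts at second order in the couplings, as claimed.

```python
import sympy as sp

Qd, p, lam = sp.symbols('Qdot pi lambda')
V, F, G = sp.symbols('V F G')

# truncated Lagrangian with derivative couplings F*Qdot and (G/2)*Qdot**2
L = sp.Rational(1, 2)*Qd**2 - V + lam*F*Qd + lam*sp.Rational(1, 2)*G*Qd**2

momentum = sp.diff(L, Qd)                       # conjugate momentum
Qd_of_p = sp.solve(sp.Eq(momentum, p), Qd)[0]   # exact inversion for this truncation
H = sp.simplify((p*Qd - L).subs(Qd, Qd_of_p))   # Legendre transform

H0 = sp.Rational(1, 2)*p**2 + V                          # free Hamiltonian
minus_Lint = -(lam*F*p + lam*sp.Rational(1, 2)*G*p**2)   # naive -L_int with Qdot -> pi

diff = sp.expand(sp.series(H - (H0 + minus_Lint), lam, 0, 3).removeO())
print(diff)   # lambda**2*(F**2/2 + F*G*pi + G**2*pi**2/2): quadratic in the couplings
```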
[^1]: We use the reduced Planck mass $M_p^2 = 1/8\pi G$ throughout.
[^2]: In [@Ahmad:2008vy; @Ahmad:2008eu] this limit on $N$ is derived in the context of N-flation but in reality it applies to [*any*]{} scenario with $N$ light fields.
[^3]: Huang et al. [@Huang:2007zt; @Huang:2007st] have argued that chaotic eternal inflation with large $N$ fields is ruled out by the so-called “weak gravity conjecture” introduced in [@ArkaniHamed:2006dz].
[^4]: In this work we are primarily interested in the scaling behavior of the loops as one increases the number of fields. Since there are only 2 graviton modes, loops involving gravitons cannot scale with the number of fields $N$ (even though individually they are of the same order), and hence we neglect them.
[^5]: The numerical factor in [@Weinberg:2005vy] differs from that of equation (\[eqn:weinberg1loop\]). As we explain in Appendix \[app:1loop\], there is an extra contribution, related to the contour in the time integral needed to pick up the “in” vacuum. We thank Steven Weinberg for a useful discussion of this point.
[^6]: All the fields below are taken to be Heisenberg fields unless otherwise noted.
[^7]: We thank Dan Kabat for a very useful discussion on this point.
[^8]: The flat target space, i.e. $G^{IJ}\nabla_{\mu} \phi_I \nabla^{\mu} \phi_J$ with $G^{IJ}$ diagonal, is crucial here. The pair-wise symmetry will not hold if $G^{IJ}$ is not diagonal, and we cannot write down the theory as a single equivalent coherent scalar field. Another way of seeing this is that one can always redefine the fields so that the target space is flat at the cost of generating couplings, both direct and gravitational, in the potential.
[^9]: The analog in the $\phi_I$ field picture will be the “balloon” diagram.
[^10]: See also [@Riotto:2008mv] for a discussion of beyond one-loop effects.
[^11]: Note that this issue is not manifest at tree level, and the conclusions of [@Weinberg:2005vy; @Weinberg:2006ac] are robust, other than with respect to the coefficient of the $N$ spectator loop correction, as discussed in Section \[sect:loop\]. Musso [@Musso:2006pt] has developed a diagrammatic formalism for correlation functions in the “in-in” formalism. However, since this is based primarily on equation (\[eqn:Wein2\]), one would need to check it carefully before employing it for an explicit calculation.
[^12]: Whereas when we compute transition amplitudes, they contribute an overall phase [@Peskin:1995ev].
[^13]: One can in principle impose a horizon cut-off, as suggested by Lyth in Ref. [@Lyth:2007jh] and applied in [@Bartolo:2007ti]. Boyanovsky et al. [@Boyanovsky:2004gq; @Boyanovsky:2004ph; @Boyanovsky:2005sh; @Boyanovsky:2005px] have suggested that the IR divergences are regulated by the slow roll limit. However, their approach requires one to analytically continue a combination of the slow roll parameters, which are physical, and in principle measurable, quantities.
[^14]: See also [@vanderMeulen:2007ah]. If we switch the order of integration, we end up swapping an ultraviolet divergence in the integrals over internal momenta for a divergence in $\tau$.
[^15]: The dynamics of the field depends on the potentials and their mode functions, i.e. their Green’s functions will differ, but the coupling terms have the same structure.
|
---
abstract: 'Using the coherent state functional integral expression of the partition function, we show that the sine-Gordon model on an analogue curved spacetime arises as the effective quantum field theory for phase fluctuations of a weakly imperfect Bose gas on an incompressible background superfluid flow when these fluctuations are restricted to a subspace of the single-particle Hilbert space. We consider bipartitions of the single-particle Hilbert space relevant to experiments on ultracold bosonic atomic or molecular gases, including, e.g., restriction to high- or low-energy sectors of the dynamics and spatial bipartition corresponding to tunnel-coupled planar Bose gases. By assuming full unitary quantum control in the low-energy subspace of a trapped gas, we show that (1) appropriately tuning the particle number statistics of the lowest-energy mode partially decouples the low- and high-energy sectors, allowing any low-energy single-particle wave function to define a background for sine-Gordon dynamics on curved spacetime and (2) macroscopic occupation of a quantum superposition of two states of the lowest two modes produces an analogue curved spacetime depending on two background flows, with respective weights continuously dependent on the corresponding weights of the superposed quantum states.'
author:
- 'T.J. Volkoff'
- 'Uwe R. Fischer'
bibliography:
- 'ACS12.bib'
title: 'Quantum sine-Gordon dynamics on analogue curved spacetime in a weakly imperfect scalar Bose gas'
---
Introduction
============
The weakly imperfect Bose gas (WIBG) represents a paradigmatic quantum system supporting excitations that propagate in an analogue curved spacetime (ACS) [@barcelorev]. Recent progress in quantum control and measurement of optically trapped ultracold alkali gases suggests that several aspects of quantum field dynamics on analogue curved spacetimes are accessible to experimental studies in dilute ultracold Bose gases. With the advances in experimental precision, such effects as, e.g., Hawking radiation [@steinhauer1] in a black hole laser [@BHlaser], Sakharov oscillations [@Chin; @Sakharov], as well as the analogue of cosmological particle production (dynamical Casimir effect) [@Jaskula; @CPP] have been detected.
To observe the interplay between control of the quantum state of the WIBG and the quantum dynamics on ACS of a relevant effective field, an ideal experiment should be able to address both the mode occupation statistics of the gas and the dynamics of the effective field. An example protocol utilizing the WIBG as a quantum simulator of quantum field theory on curved spacetime could entail: (1) preparation of sufficiently long-lived nonclassical states of a subset of single-particle modes of the WIBG, (2) manipulation of the effective quantum field propagating in ACS, e.g., quenching the effective field, and (3) inference of properties of the effective field through measurements of the Bose gas. However, when a subset of WIBG modes has been prepared in a given quantum state, it is not clear what the effective ACS dynamics of quantum fluctuations of the remaining modes will be. The dynamics depends on the coupling between the mode sectors and the effective potential arising in the fluctuating sector. In particular, the resulting dynamics may not be that of a free particle on curved spacetime, i.e., may not give rise simply to the wave equation $\square_{g}\theta=0$, where $\square_{g}$ is the Laplace-Beltrami operator.
Among continuum quantum models exhibiting a nonlinear interaction in the field operators, the quantum sine-Gordon model is notable for its exact solubility and mapping to a fermionic model in one time dimension and one space dimension (i.e., (1+1)-D) [@izergin; @colemanthirr] and its wide applicability in condensed matter systems exhibiting global U(1) symmetry, e.g., long [@paterno] and annular [@ustinov] Josephson junctions in superconducting circuits. In the context of bosons interacting via $s$-wave scattering, it is known that two tunnel-coupled (1+1)-D WIBG systems in one space and one time dimension exhibit sine-Gordon dynamics of the relative phase between the systems in the limit of Luttinger liquid dynamics [@demlersine]. Furthermore, the (1+1)-D sine-Gordon model in an expanding spacetime described by a Friedmann-Robertson-Walker metric has been studied by including a time-dependent mass term arising from time-dependent tunneling between two WIBGs in the Luttinger hydrodynamic limit [@marquardtcosmo]. In the case of (2+1)-D and (3+1)-D, a candidate system for simulating quantum sine-Gordon dynamics on ACS is, however, lacking. Below, we provide examples of such systems which can, in principle, be experimentally realized with ultracold bosonic quantum gases.
In this paper, we show that a general procedure consisting of (1) partitioning the single-particle modes of the WIBG into two subsets $J_{L}$ and $J_{H}$ and (2) pinning the dynamics of one subset, e.g., $J_{L}$, to its action-extremizing equation of motion (solutions of which self-consistently define the single-particle states of the $J_{L}$ sector and, therefore, the modes comprising the vacuum for the $J_{H}$ quantum fluctuations), allows the phase fluctuations of the $J_{H}$ field to be described as bosons propagating on a curved spacetime in a sine-Gordon potential. Sections \[sec:jhoneloop\] and \[sec:jhhighloop\] contain the general derivation. We discuss the sine-Gordon equation on ACS in Sec. \[sec:sgeqonacs\]. Proceeding to example systems, we first consider in Sec. \[sec:engineer\] the sine-Gordon dynamics on ACS after preparation of the lowest mode in a coherent state (equivalent to the zero mode $c$-number substitution of Bogoliubov) and in a superposition of coherent states of opposite phase. In Sec. \[sec:interplane\], we consider a spatial bipartition of the single-particle modes in tunnel-coupled (2+1)-D planes of two WIBGs. Section \[sec:superposacs\] contains a derivation of the sine-Gordon dynamics on ACS when a WIBG system is projected to a subspace of bosonic Fock space in which the lowest two modes are prepared in a macroscopic superposition state. This extreme case highlights some of the unusual properties of ACS supported on a nonclassical vacuum, departing significantly from the ACS arising from a single semiclassical background field.
sine-Gordon dynamics on ACS\[sec:sgdyn\]
========================================
We consider a single-particle Hilbert space spanned by an orthonormal basis $\lbrace \ket{\varphi_{j}}\rbrace_{j\in J}$ and a nonrelativistic quantum field $\hat{\psi}(x)=\sum_{j\in J}\varphi_{j}(x)\hat{a}_{j}$, where $J$ is an index set, $\varphi_{j}(x) \in L^{2}(\Omega \subset \mathbb{R}^{3})$, and $\hat{a}_{j}$ is the bosonic annihilation operator. The finite volume of the trap containing the WIBG is labeled $\vert \Omega \vert$. The normal ordered weakly imperfect Bose gas Hamiltonian in the presence of a U(1) gauge field $v(x)$ describing, e.g., a rotation, Galilei boost, or other background velocity field, is given by (suppressing the spatial dependence of the field operators): $$\begin{aligned}
\hat{H} &=& \int_{\Omega}d^{3}x \, {\hbar^{2}\over 2m}\overline{D} \hat{\psi}^{\dagger}D\hat{\psi}
+(mV_{\text{ext}}(x) - \mu)\hat{\psi}^{\dagger}\hat{\psi} \nonumber \\ &+&{V_{0}\over 2}\hat{\psi}^{\dagger \, 2}\hat{\psi}^{2}
\label{eqn:wibgham}\end{aligned}$$ where $D := \nabla -i{m\over \hbar}v(x)$ is the U(1) covariant derivative, $V_{\text{ext}}(x)$ is an external one-body potential, $V_{0}=(4\pi \hbar^2/m)a_s$ is the contact interaction coupling, with $a_s$ the $s$-wave scattering length, and $m$ is taken as the bare mass of the atomic or molecular constituent of the gas. The temperature-dependent chemical potential $\mu$ is defined such that $N = -{1\over \beta}\del_{\mu}\log \tr e^{-\beta \hat{H}}$, with $N$ the average number of gas atoms and $\beta$ the inverse temperature.
In this section, we aim to show that when $J$ is partitioned into two subsets $J_{H}$ and $J_{L}$ and the dynamics of one subset is pinned to a self-consistent equation of motion, the complementary subset exhibits sine-Gordon dynamics on ACS. The sine-Gordon mass will be proportional to $V_{0}n_{L,0}n_{H,0}/ \sqrt{-g}$ where $V_{0}$ is the interaction strength, $n_{L,0}n_{H,0}$ is the product of local number densities in the $J_{L}$ and $J_{H}$ sector, and $\sqrt{-g}:= \sqrt{-\det g_{\mu \nu}}$, where $g_{\mu\nu}$ is the ACS metric in Eq. (\[eqn:covar\]). To demonstrate these features, we construct the coherent state path integral [@negele] for the partition function instead of approximating the operator equations of motion for the weakly imperfect Bose gas. This choice allows us to derive the effective action for the phase fluctuations of the quantum field in the $J_{H}$ sector without having to explicitly quantize the phase fluctuation field operator on the ACS. The present approach also allows us to more easily consider the effect of nonzero temperature on the contribution of quantum fluctuations to the resulting effective action.
The derivation of sine-Gordon dynamics on ACS in the $J_{H}$ sector proceeds as follows: we first show that the one-loop effective dynamics of phase fluctuations in the $J_{H}$ sector is that of a massive Klein-Gordon field on ACS (Sec. \[sec:modepartitionsubsec\] presents the partitioning of the single-particle modes and Sec. \[sec:onelooppartitioned\] presents the massive Klein-Gordon dynamics on ACS). In Sec. \[sec:jhhighloop\], we sum the higher-loop contributions to the dynamics in the $J_{H}$ sector to derive the sine-Gordon dynamics on ACS and in Sec. \[sec:sgeqonacs\], we further analyze the sine-Gordon equation arising from the effective dynamics.
ACS in the $J_{H}$ sector at one-loop order\[sec:jhoneloop\]
------------------------------------------------------------
### Partition of single-particle modes \[sec:modepartitionsubsec\]
When the field operator is decomposed as $\hat{\psi} = \hat{\psi}_{L} + \hat{\psi}_{H}$ where $\hat{\psi}_{L(H)}:= \sum_{j\in J_{L(H)}}\varphi_{j}(x)\hat{a}_{j}$ and $J=J_{L}\sqcup J_{H}$ is a bipartition of the set of modes, one can verify that $\hat{\psi}_{L(H)}(x)\ket{\lbrace \phi \rbrace } = \psi_{L(H)}[\phi]\ket{\lbrace \phi \rbrace }$ where $\ket{\lbrace \phi \rbrace } :=
\exp\left[-{1 \over 2}\int_{\Omega}d^{3}x\, \vert \phi (x) \vert^{2}\right]
\exp\left[\int_{\Omega}d^{3}x\, \phi(x)\hat\psi^{\dagger}(x)\right] \ket{\text{VAC}}$ is the normalized field coherent state and where $\psi_{L(H)}[\phi]:= \left(\sum_{j\in J_{L(H)}} \int_{\Omega}d^{3}x'\, \phi(x')\overline{\varphi_{j}(x')}\varphi_{j}(x) \right)$ is the projection of the function $\phi(x)$ onto the space spanned by the single-particle wave functions in $J_{L}$ or $J_{H}$. It follows that $\hat{\psi}\ket{\lbrace \phi \rbrace }= \left( \psi_{L}[\phi] + \psi_{H}[\phi]\right)\ket{\lbrace \phi \rbrace}$. Therefore, the action $S$ appearing in the imaginary time coherent state path integral for the partition function $Z(\beta)=\text{tr}\left[e^{-\beta\hat{H}}\right] = \int \prod_{j=L,H} \mathcal{D}[\psi_{j},\overline{\psi_{j}}]e^{-S}$ can be written as follows: $$\begin{aligned}
S&=&\int_{0}^{\beta \hbar}{d\tau \over \hbar}\int_{\Omega}d^{3}x\, \left[\vphantom{\sum} (\overline{\psi_{L}}+\overline{\psi_{H}})\hbar\del_{\tau}(\psi_{L} + \psi_{H}) \right. \nonumber \\ &+& \left. H[\overline{\psi_{L}}+\overline{\psi_{H}},\psi_{L}+\psi_{H}] \vphantom{\sum}\right]
\label{eqn:actionfull}\end{aligned}$$ where $H[\overline{\psi_{L}}+\overline{\psi_{H}},\psi_{L}+\psi_{H}]$ is given by the formal substitution $\hat{\psi} \rightarrow \psi_{L}+\psi_{H}$, $\hat{\psi}^{\dagger}\rightarrow \overline{\psi_{L}}+\overline{\psi_{H}}$ in Eq.(\[eqn:wibgham\]), and we have shortened the symbols for the field eigenvalues to $\psi_{L}$ and $\psi_{H}$, respectively. To show how the bipartitioning of modes leads to sine-Gordon dynamics on ACS, we now arbitrarily choose $J_{H}$ as the support modes for the phase fluctuations. We require that the field $\psi_{L}$ satisfy the generalized imaginary time Gross-Pitaevskii equation [@zarembabook] $$\begin{aligned}
&&\hbar\del_{\tau}\psi_{L} -{\hbar^{2}\over 2m}D^{2}\psi_{L} + (mV_{\text{ext}}(x)-\mu + V_{0}\vert \psi_{H} \vert^{2})\psi_{L} \nonumber \\ &&+ V_{0}\vert \psi_{L} \vert^{2}\psi_{L} = 0
\label{eqn:statphase}\end{aligned}$$ and that $\overline{\psi_{L}}$ satisfy the associated adjoint field equation
$$\begin{aligned}
&&-\hbar\del_{\tau}\overline{\psi_{L}} -{\hbar^{2}\over 2m}\overline{D}^{2}\overline{\psi_{L}}+ (mV_{\text{ext}}(x)-\mu + V_{0}\vert \psi_{H} \vert^{2})\overline{\psi_{L}} \nonumber \\ &&+ V_{0}\vert \psi_{L} \vert^{2}\overline{\psi_{L}} = 0 .
\label{eqn:statphaseadj}\end{aligned}$$
These equations are defined on a subspace of $L^{2}(\Omega)$ spanned by the wave functions $\lbrace \varphi_{j}(x) \rbrace_{j\in J_{L}}$. To obtain solutions to the equations above, $\vert \psi_{H}\vert^{2}$ must be calculated at each order in perturbation theory (e.g., at tree order from Eq.(\[eqn:euler\]) below, giving $\vert \psi_{H}\vert^{2} =n_{H,0}$), substituted into the self-consistent equations Eq.(\[eqn:statphase\]) and Eq.(\[eqn:statphaseadj\]), and subsequently solved. Note that by demanding that the field with support in $J_{L}$ satisfy the generalized Gross-Pitaevskii equation, we are neglecting quantum fluctuations in this sector (i.e., there is no longer a path integral over the field $\psi_{L}$ in $Z(\beta)$, only a sum over solutions $\psi_{L,0}$ to Eq.(\[eqn:statphase\])). Equivalently, we must restrict to a state of the weakly imperfect Bose gas such that $\hat{\psi} \approx \hat{\psi}_{H} + \langle \hat{\psi}_{L} \rangle = \hat{\psi}_{H} + \psi_{L,0}$ is a valid approximation for the field operator. For the definition of ACS, it will also be important that $\langle \hat{\psi}_{H} \rangle \neq 0$ in this state. Such a state can occur in a nonuniform Bose gas with large occupation number in both the $J_{L}$ sector and $J_{H}$ sector. If the set $J$ is ordered by, e.g., energy values, and $J_{L}$ corresponds to the low-energy modes, this section can be considered as a derivation of the effective field theory of the high-energy phase fluctuations when the low-energy sector is pinned to tree-level. In Secs. \[sec:engineer\] and \[sec:superposacs\] we show that the complication arising from requiring a self-consistent solution of the above equation can be removed by preparing the $J_{L}$ modes in an appropriate nonclassical state.
To proceed with deriving the action in the $J_{H}$ sector, we substitute solutions $\psi_{L,0}$ and $\overline{\psi_{L,0}}$ of Eq.(\[eqn:statphase\]) and Eq.(\[eqn:statphaseadj\]) for $\psi_{L}$ and $\overline{\psi_{L}}$, respectively, into Eq.(\[eqn:actionfull\]). Multiplying Eq.(\[eqn:statphase\]) and Eq.(\[eqn:statphaseadj\]) by $\overline{\psi_{H}}$ and $\psi_{H}$, respectively, one finds that all monomials in the fields and their derivatives involving both $\psi_{L,0}$ and $\psi_{H}$ in Eq.(\[eqn:actionfull\]) vanish, except for $\left( \overline{\psi_{H}}^{2}\psi_{L,0}^{2} + c.c.\right)$ and $\vert \psi_{L,0} \vert^{2}\vert \psi_{H}\vert^{2}$. Therefore, the action in Eq.(\[eqn:actionfull\]) with the $J_{L}$ fields pinned to their stationary phase configurations simplifies to $S_{L,0} + S_{H}$, where $$\begin{aligned}
S_{H} &=&\int_{0}^{\beta \hbar}{d\tau \over \hbar}\int_{\Omega}d^{3}x \, \left[\vphantom{{A\over B}} \overline{\psi_{H}}\hbar \del_{\tau}\psi_{H} +{\hbar^{2}\over 2m}\overline{D}\,\overline{\psi_{H}}D\psi_{H} \right. \nonumber \\ &+& \left. \left( mV_{\text{ext}}-\mu \right)\vert \psi_{H}\vert^{2} +{V_{0}\over 2}\left( \overline{\psi_{H}}^{2}\psi_{L,0}^{2} + c.c.\right) \right. \nonumber \\ &+& \left. 2V_{0}\vert \psi_{L,0} \vert^{2}\vert \psi_{H}\vert^{2} + {V_{0}\over 2}\vert \psi_{H} \vert^{4}\vphantom{{A\over B}}\right]\end{aligned}$$ and where $S_{L,0}$ is the energy functional $$\begin{aligned}
S_{L,0}&=&\int_{0}^{\beta \hbar}{d\tau \over \hbar}\int_{\Omega}d^{3}x \, \left[\vphantom{{A\over B}} \overline{\psi_{L,0}}\hbar \del_{\tau}\psi_{L,0} +{\hbar^{2}\over 2m}\overline{D}\,\overline{\psi_{L,0}}D\psi_{L,0} \right. \nonumber \\ &+& \left. \left( mV_{\text{ext}}-\mu \right)\vert \psi_{L,0}\vert^{2} +{V_{0}\over 2}\vert \psi_{L,0} \vert^{4} \vphantom{{A\over B}} \right]\label{eqn:lowlandau} \end{aligned}$$ which depends only on the solutions $\psi_{L,0}$, $\overline{\psi_{L,0}}$ to Eq.(\[eqn:statphase\]) and Eq.(\[eqn:statphaseadj\]).
The dynamics of phase fluctuations in the $J_{H}$ sector can be derived by first writing the stationary phase solution of the $J_{L}$ sector in the polar form $\psi_{L,0} = \sqrt{n_{L,0}}e^{i\theta_{L,0} / \hbar}$ and, similarly, performing the change of field variables $\psi_{H}=\sqrt{n_{H}}e^{i\theta_{H} / \hbar}$ in $J_{H}$. The resulting approximate partition function, containing a functional integration over fields $n_{H}$ and $\theta_{H}$ and a sum over all solutions $\psi_{L,0}$ of the generalized Gross-Pitaevskii equation with appropriate boundary conditions imposed on the domain $\Omega \times [0,\beta\hbar]$, is $$\begin{aligned}
Z(\beta)&\approx &\sum_{\psi_{L,0}}\int \mathcal{D}[n_{H}]\mathcal{D}[\theta_{H}] e^{-\left( S_{L,0}+S_{H} \right) }
\label{eqn:partsum}\end{aligned}$$ where the high-energy part $S_{H}$ of the action is given by $$\begin{aligned}
S_{H}&=&\int_{0}^{\beta \hbar}{d\tau \over \hbar}\int_{\Omega} \left[\vphantom{{V_{0}\over 2}} in_{H}\del_{\tau}\theta_{H} + (mV_{\text{ext}} - \mu + 2V_{0}n_{L,0})n_{H} \right. \nonumber \\ &+& \left. {\hbar^{2} \over 8m}n_{H}^{-1}\nabla n_{H} \cdot \nabla n_{H} + {1 \over 2m}n_{H}\nabla \theta_{H} \cdot \nabla \theta_{H} \right. \nonumber \\ &-& \left. n_{H} v\cdot \nabla \theta_{H} + {m\over 2}n_{H}v\cdot v \right. \nonumber \\ &+& \left. V_{0}n_{H}n_{L,0}\cos((2\theta_{H}-2\theta_{L,0})/\hbar) + {V_{0}\over 2}n_{H}^{2} \vphantom{\sum}\right] .
\label{eqn:polarform} \end{aligned}$$ Here, we note that the inverse temperature $\beta$ not only enters the solution pair $(n_{L,0}(\tau ,x) , \theta_{L,0}(\tau,x))$, which is periodic on $\tau \in [0,\beta\hbar]$, but also defines the equilibrium state in the $J_{H}$ sector. Further discussion of the effect of nonzero temperature on the effective theory of phase fluctuations on ACS is provided in the Appendix.
### Massive Klein-Gordon dynamics at one-loop order and phase-matching condition\[sec:onelooppartitioned\]
We proceed by assuming that the “quantum potential” term ${\hbar^{2} \over 8m}n_{H}^{-1}\nabla n_{H} \cdot \nabla n_{H}$ in Eq.(\[eqn:polarform\]) is negligible [^1], which is an extensively studied (long wavelength) approximation [@BLVBEC]. The action $S_{H}$ is now expanded to one-loop order about a solution pair $( n_{H,0},\theta_{H,0} )$ of the stationary phase equations $\delta S_{H} / \delta n_{H} = 0$ and $\delta S_{H} / \delta \theta_{H} = 0$ in the same way as in the general derivation in Appendix A. In imaginary time, the stationary phase equations are
$$\begin{aligned}
{\delta S_{H} \over \delta n_{H}} = 0 &\Leftrightarrow& i\del_{\tau}\theta_{H} +{1\over 2m}\left( \nabla \theta_{H} - mv\right)\cdot \left(\nabla \theta_{H} - mv \right) \nonumber \\ &+& mV_{\text{ext}}(x) - \mu + V_{0}n_{H}\nonumber \\
&=& -2V_{0}n_{L,0}- V_{0}n_{L,0}\cos ((2\theta_{H}-2\theta_{L,0})/\hbar) , \nn
{\delta S_{H} \over \delta \theta_{H}} = 0 &\Leftrightarrow& -i\del_{\tau}n_{H} - {1\over m}\nabla \cdot \left( n_{H} \left( \nabla \theta_{H} - mv \right) \right) \nonumber \\ & =& {2V_{0}n_{H}n_{L,0}\over \hbar}\sin((2\theta_{H}-2\theta_{L,0})/\hbar).
\label{eqn:euler}\end{aligned}$$
These equations are the internal Josephson equation and the mass continuity equation, respectively, in the $J_{H}$ sector and have solution pairs labeled $\theta_{H,0}$, $n_{H,0}$. It is intriguing to note that while the backreaction on the mean field phase in the $J_{H}$ sector due to the mean field phase of the $J_{L}$ sector never vanishes, the backreaction on the mean field $n_{H,0}$ due to the difference in the mean field phases can vanish for certain solutions of the stationary phase equations. From Eq.(\[eqn:polarform\]) and Eq.(\[eqn:euler\]), it is clear that particle number conservation in the $J_{H}$ sector is satisfied at both the highest energy configurations ${2\theta_{H,0} -2\theta_{L,0} \over \hbar} = 2k\pi$ and the lowest energy configurations ${2\theta_{H,0} -2\theta_{L,0} \over \hbar} = (2k+1)\pi$, $k \in \mathbb{Z}$, of the background phases.
The functional Hessian of $S_{H}$ evaluated at $\theta_{H,0}$, $n_{H,0}$ is given by
$$\begin{aligned}
{\delta^{2} S_{H} \over \delta n_{H}(x,\tau)\delta n_{H}(x',\tau')} &=& V_{0} \delta(x-x')\delta(\tau-\tau ') , \nonumber \\
{\delta^{2} S_{H} \over \delta n_{H}(x,\tau)\delta \theta_{H}(x',\tau')} &=& -i \del_{\tau}\delta(x-x')\delta(\tau-\tau ') -{1\over m}\nabla \theta_{H,0}\cdot \nabla \delta(x-x')\delta(\tau-\tau ') - {1\over m}\delta(x-x')\delta(\tau-\tau ')\nabla^{2}\theta_{H,0} \nonumber \\ &{}& + v\cdot \nabla \delta(x-x')\delta(\tau-\tau ') -2{V_{0}\over \hbar}n_{L,0}\sin\left({2\theta_{H,0}-2\theta_{L,0} \over \hbar}\right) \delta(x-x')\delta(\tau-\tau ') , \nonumber \\ {\delta^{2} S_{H} \over \delta \theta_{H}(x,\tau)\delta n_{H}(x',\tau')}&=&i \del_{\tau}\delta(x-x')\delta(\tau-\tau ') + {1\over m} \nabla \theta_{H,0} \cdot \nabla \delta(x-x')\delta(\tau-\tau ') - v\cdot \nabla \delta(x-x')\delta(\tau-\tau ') \nonumber \\ &{}& -2{V_{0}\over \hbar}n_{L,0}\sin\left({2\theta_{H,0}-2\theta_{L,0} \over \hbar}\right)\delta(x-x')\delta(\tau-\tau '), \nonumber \\ {\delta^{2} S_{H} \over \delta \theta_{H}(x,\tau)\delta \theta_{H}(x',\tau')} &=& -{1\over m}\nabla n_{H,0} \cdot \nabla \delta(x-x')\delta(\tau-\tau ') - {1\over m}n_{H,0}\nabla^{2}\delta(x-x')\delta(\tau-\tau ') \nonumber \\ &{}& -4{V_{0} \over \hbar^{2}}n_{L,0}n_{H,0}\cos\left({2\theta_{H,0}-2\theta_{L,0} \over \hbar}\right)\delta(x-x')\delta(\tau-\tau ').
\label{eqn:highoneloop}\end{aligned}$$
At this point, it is useful to note that at one-loop order, the action is given schematically by $$\begin{aligned}
S&=&S_{L,0} + S_{H}[n_{H,0},\theta_{H,0};n_{L,0},\theta_{L,0}] \nonumber \\ &+&{1\over 2!}\int (n_{H,d},\theta_{H,d})S^{(2)}_{H}(n_{H,d},\theta_{H,d})^{T}
\label{eqn:schematic}\end{aligned}$$ where $S^{(2)}_{H}$ is defined by the Hessian kernel in Eq.(\[eqn:highoneloop\]), the symbol $\int$ indicates integration over $\tau, \tau ' ,x ,x'$, and $\theta_{H,d}$ and $n_{H,d}$ are the quantum fluctuation fields. The partition function at this order contains a sum over all solutions $n_{H,0}$, $\theta_{H,0}$, $n_{L,0}$, $\theta_{L,0}$ of Eqs.(\[eqn:euler\]), (\[eqn:statphase\]), (\[eqn:statphaseadj\]). In what remains of the derivation, we restrict to those phase configurations that satisfy the lowest-energy condition ${2\theta_{H,0} -2\theta_{L,0} \over \hbar} = (2k+1)\pi$. This restriction is valid throughout $\Omega$ for temperatures well below the smallest energy scale associated with the background phase difference between the $J_{L}$ and $J_{H}$ sectors, i.e., for $k_{B}T\ll V_{0}\min_{x\in \Omega}n_{H,0}n_{L,0}$. With the low-energy restriction now assumed, the ACS arising at one-loop order for the phase fluctuation field $\theta_{H,d}$ can be derived following the recipe in Appendix A. The only difference occurs in the last term of the expression for ${\delta^{2} S_{H} \over \delta \theta_{H}(x,\tau)\delta \theta_{H}(x',\tau')}$ in Eq.(\[eqn:highoneloop\]), which gives rise to a mass term for the field $\theta_{H,d}$ propagating on the ACS. The action becomes that of a Klein-Gordon boson with spacetime-dependent mass propagating on ACS: $$\begin{aligned}
S&=&S_{L,0} + S_{H}[n_{H,0},\theta_{H,0};n_{L,0},\theta_{L,0}] \nonumber \\ &+&{1\over 2}
\int \sqrt{-g} \left[ \vphantom{{A\over B}}
g^{\mu \nu}\del_{\mu}\theta_{H,d}\del_{\nu}\theta_{H,d} \right.
\nonumber \\
&+&\left. {4V_{0}n_{H,0}n_{L,0} \over \hbar^{2}\sqrt{-g}} \theta_{H,d}^{2}\right] .
\label{eqn:kgoneloop}\end{aligned}$$
In the following subsection, we go beyond one-loop order to derive the full effective theory on ACS for the field $\theta_{H,d}$ when the low energy phase matching condition ${2\theta_{H,0} -2\theta_{L,0} \over \hbar} = (2k+1)\pi$ is satisfied throughout the domain.
Sine-Gordon interaction in the $J_{H}$ sector \[sec:jhhighloop\]
----------------------------------------------------------------
From Eq.(\[eqn:polarform\]), it is clear that the tree-level energy $S_{H}[n_{H,0},\theta_{H,0};n_{L,0},\theta_{L,0}]$ is minimized by the phase matching condition $\theta_{H,0} - \theta_{L,0} = (2k+1)\pi \hbar / 2$, $k\in \mathbb{Z}$. The sine-Gordon term in the effective action for $\theta_{H,d}$ is derived by summation of all higher-loop contributions $\delta^{n}S_{H} / \delta \theta_{H}^{n}$. Specifically, when the phase matching condition is satisfied for all $x\in \Omega$, the higher-loop contribution is given by $$\begin{gathered}
\sum_{n=1}^{\infty}\int{1\over (2n)!}{\delta^{2n}S_{H} \over \delta \theta_{H}^{2n}}\Big\vert_{\theta_{H,0}}\theta_{H,d}^{2n} \\
= \int V_{0}n_{H,0}n_{L,0}\left(1-\cos {2\theta_{H,d}\over \hbar}\right).
\label{eqn:higherloop}\end{gathered}$$ In Eq.(\[eqn:higherloop\]), the integral on the left hand side stands for the $2n$ integrations over the imaginary time variables and over the space $\Omega$, while the integral on the right hand side stands for a single integration over imaginary time and the space $\Omega$. Furthermore, the term in $\delta^{2}S_{H} / \delta \theta_{H}^{2}$ that contributes only to the ACS has been omitted.
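As a consistency check of Eq.(\[eqn:higherloop\]), the resummation can be reproduced symbolically by treating the functional derivative of the interaction density as an ordinary derivative at a point. The following sympy sketch scales out the overall factor $V_{0}n_{H,0}n_{L,0}$ and pins the background to $2(\theta_{H,0}-\theta_{L,0})/\hbar = \pi$ (both illustrative choices); it confirms, order by order, that the partial sums reproduce the Taylor expansion of $1-\cos(2\theta_{H,d}/\hbar)$.

```python
# Order-by-order check of the resummation in Eq. (higherloop).
# The interaction density is cos(2(theta_H - theta_L0)/hbar); its even derivatives,
# evaluated at the phase-matched background, resum into 1 - cos(2*theta_d/hbar).
import sympy as sp

hbar, th, thd = sp.symbols('hbar theta theta_d', positive=True)
thL = sp.Symbol('theta_L0')

f = sp.cos(2*(th - thL)/hbar)          # interaction density per unit V0*n_H0*n_L0
th0 = thL + sp.pi*hbar/2               # lowest-energy background: 2(th0 - thL)/hbar = pi

N = 6                                  # number of terms kept in the partial sum
partial_sum = sum(sp.diff(f, th, 2*n).subs(th, th0)*thd**(2*n)/sp.factorial(2*n)
                  for n in range(1, N + 1))
target = sp.series(1 - sp.cos(2*thd/hbar), thd, 0, 2*N + 1).removeO()

print(sp.simplify(sp.expand(partial_sum - target)))   # -> 0
```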
In addition to the standard approximations presented in Appendix A and the low-energy phase matching condition derived in Sec. \[sec:jhoneloop\], there is one more approximation that should be made which guarantees that the dynamics of the phase fluctuation $\theta_{H,d}$ is given by the sine-Gordon model on ACS. From Eq.(\[eqn:highoneloop\]), one can see that there are additional contributions which can be summed exactly coming from the mixed functional derivatives:
$$\begin{aligned}
\sum_{n=1}^{\infty}\int{1\over (2n+1)!}\left({\delta^{2n+1}S_{H} \over \delta n_{H}\delta \theta_{H}^{2n}} + {\delta^{2n+1}S_{H} \over \delta \theta_{H}^{2n}\delta n_{H}}\right)\Big\vert_{n_{H,0}}n_{H,d}\theta_{H,d}^{2n} &=& 2V_{0}\int_{[0,\beta \hbar]}\int_{\Omega}{1\over 3! \hbar^{2}}n_{L,0}n_{H,d}\theta^{2}_{H,d} -{1\over 5! \hbar^{4}}n_{L,0}n_{H,d}\theta^{4}_{H,d}+\ldots \nonumber \\ &=& \int_{[0,\beta \hbar]}\int_{\Omega}2V_{0}n_{L,0}n_{H,d}\left(1-{\hbar \sin \left( 2\theta_{H,d}/\hbar \right) \over 2\theta_{H,d}}\right)
\label{eqn:sineterm}\end{aligned}$$
when $\theta_{H,0} - \theta_{L,0} = (2k+1)\pi \hbar / 2$. In the following, we omit this term from the analysis due to the fact that after Gaussian integration over the amplitude fluctuation field $n_{H,d}$, the function $1-\hbar\sin (2\theta_{H,d}/\hbar)/2\theta_{H,d}$ appears in the following two types of expressions: 1) in a term with characteristic energy scaling as $\mathcal{O}(V_{0}^{2})$ which can be neglected due to the weakness of the interaction, and 2) in terms of the form $V_{0}n_{L,0}\left( \del_{j} \theta_{H,d} \right) \left(1-{\hbar \sin \left( 2\theta_{H,d}/\hbar \right) / 2\theta_{H,d}}\right)$, $j=0,1,2,3$, which can be assumed to approximately vanish if $\theta_{H,d}(\omega_{-n},-k)\approx \theta_{H,d}(\omega_{n},k)$ in Matsubara and momentum space. We also note that Eq.(\[eqn:sineterm\]) vanishes as $\theta_{H,d} \rightarrow 0$, the same limit for which the sine-Gordon dynamics on ACS is well approximated by Klein-Gordon dynamics of a massive boson on ACS.
Taking this additional approximation into account, implementing the phase matching condition, and following the derivation of Appendix A for the ACS arising at one-loop order gives the action for the phase field fluctuations $\theta_{H,d}$, differing from the free action of a massless particle on ACS by the addition of a nonperturbative sine-Gordon interaction arising from the summation of higher-loop contributions shown in Eq.(\[eqn:higherloop\]): $$\begin{aligned}
S_{\text{sG}}&:=&{1\over 2}
\int \sqrt{-g} \left[ \vphantom{{A\over B}}
g^{\mu \nu}\del_{\mu}\theta_{H,d}\del_{\nu}\theta_{H,d} \right.
\nonumber \\
&+&\left. {2V_{0}n_{H,0}n_{L,0} \over \sqrt{-g}}\cos(2\theta_{H,d}/\hbar) \vphantom{{A\over B}}\right] .
\label{eqn:sgacs}\end{aligned}$$ In Eq.(\[eqn:sgacs\]), the integral is over the imaginary time interval $[0,\beta \hbar]$ and over the space $\Omega$. We emphasize that the above action is exact when the well-defined approximations of the present section and those of Appendix A hold. As is the case for the Klein-Gordon mass arising at one-loop order in Eq.(\[eqn:kgoneloop\]), the sine-Gordon mass exhibits a spacetime dependence. Using $\sqrt{-g}=n_{H,0}^{2}/m^{2}c_{s}$, with $c_{s}:=\left( V_{0}n_{H,0} / m \right)^{1/2}$ the local speed of sound in the $J_{H}$ sector (see Appendix A), the sine-Gordon mass is seen to be $2(V_{0}m)^{3/2}n_{L,0}/n_{H,0}^{1/2}$.
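The mass quoted above follows from straightforward algebra, which can be checked mechanically. The following sympy sketch treats $V_{0}$, $m$, $n_{H,0}$, and $n_{L,0}$ as positive constants at a point and verifies $2V_{0}n_{H,0}n_{L,0}/\sqrt{-g} = 2(V_{0}m)^{3/2}n_{L,0}/n_{H,0}^{1/2}$ for the stated $\sqrt{-g}$ and $c_{s}$.

```python
# Check of the sine-Gordon mass coefficient quoted below Eq. (sgacs).
import sympy as sp

V0, m, nH, nL = sp.symbols('V0 m n_H0 n_L0', positive=True)

cs = sp.sqrt(V0*nH/m)            # local speed of sound in the J_H sector
sqrt_minus_g = nH**2/(m**2*cs)   # sqrt(-g) for the metric of Appendix A

mass_coeff = 2*V0*nH*nL/sqrt_minus_g
claimed = 2*(V0*m)**sp.Rational(3, 2)*nL/sp.sqrt(nH)

print(sp.simplify(mass_coeff - claimed))   # -> 0
```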
The contravariant metric $g^{\mu\nu}$ \[see Eq.(\[eqn:contravar\])\] depends on the gauge field $v$ and a solution pair $n_{H,0}$, $\theta_{H,0}$ of Eq.(\[eqn:euler\]). It follows from the phase matching condition $\theta_{H,0} - \theta_{L,0} = (2k+1)\pi \hbar / 2$, $k\in \mathbb{Z}$, for the background phase fields that $\nabla \theta_{H,0} = \nabla \theta_{L,0}$. This low-energy configuration also implies that if one exchanges $L$ and $H$ throughout the above calculation, the phase fluctuations $\theta_{L,d}$ in the $J_{L}$ sector propagate on a space $g^{\mu\nu}$ with the same form as derived in this section. Deviation from the phase-matching condition has two consequences: 1) the $\theta_{H,d} \rightarrow -\theta_{H,d}$ symmetry of the action is broken, thereby resulting in nonconservation of the particle number in the $J_{L}$ and $J_{H}$ sectors, and, 2) the fluctuations in the $J_{L}$ and $J_{H}$ sectors propagate on different ACS geometries when their complementary sectors are, respectively, pinned to tree-level. Note that when the $\theta_{H,d}$ dynamics are described by Eq.(\[eqn:sgacs\]), the effective sine-Gordon theory is locally destroyed when $n_{L,0}=0$ (in such regions, the $\theta_{H,d}$ field becomes a free massless particle on $g^{\mu \nu}$ as in Appendix A), or when the high-energy configuration $2(\theta_{H,0} - \theta_{L,0}) = 2k \pi \hbar $, $k\in \mathbb{Z}$, is generated.
Sine-Gordon equation on ACS\[sec:sgeqonacs\]
--------------------------------------------
The equation of motion for $\theta_{H,d}$, obtained by setting the functional derivative $\delta S_{\text{sG}} / \delta \theta_{H,d} = 0$ in Eq.(\[eqn:sgacs\]), has the form of a nonlinear wave equation on the ACS: $$\del_{\mu}\left( \sqrt{-g}g^{\mu\nu}\del_{\nu}\theta_{H,d} \right) + {2V_{0}n_{H,0}n_{L,0} \over \hbar}\sin\left({2\theta_{H,d} \over \hbar} \right) = 0
\label{eqn:sgeoncurvedspacetime}$$ in coordinates $(x_{0},x_{1},x_{2},x_{3}) = (-i\tau , x_{1},x_{2},x_{3} )$. In terms of the quantum theory of phase fluctuations in the $J_{H}$ sector, Eq.(\[eqn:sgeoncurvedspacetime\]) is the sine-Gordon equation on curved spacetime that is satisfied by $\langle \hat{\theta}_{H,d} \rangle$ at tree order. Written in real time, this equation is $$\begin{aligned}
{}&{}&\del_{tt}\theta_{H,d} - \del_{t}\left( (v-{1\over m}\nabla \theta_{H,0}) \cdot \nabla \theta_{H,d} \right) \nonumber \\
&{}& - \nabla \cdot \left((v-{1\over m}\nabla \theta_{H,0}) \del_{t} \theta_{H,d} \right)\nonumber \\
&{}& -\nabla \cdot \left( \left({V_{0}n_{H,0}\over m}\mathbb{I}_{3\times 3} \nonumber \right. \right. \\ &{}& \left. \left. - (v-{1\over m}\nabla \theta_{H,0})^{T}(v-{1\over m}\nabla \theta_{H,0}) \right)\nabla\theta_{H,d} \right) \nonumber \\ &{}& + {2V_{0}^{2}n_{H,0}n_{L,0} \over \hbar} \sin\left( {2\theta_{H,d}\over \hbar} \right) = 0.
\label{eqn:onedsg}\end{aligned}$$ When restricted to one spatial dimension, the above equation does not immediately reduce to the usual sine-Gordon equation $\left(\del_{t}^{2}-\del_{x}^{2}\right)\Phi + {M^{2}\over \beta}\sin (\beta \Phi) = 0$ for a scalar field $\Phi(x,t)$ and constants $M$, $\beta >0$. Rather, the assumptions $\del_{x}v=0$, $\del_{xx}\theta_{H,0}=0$ that were used in our derivation of $S_{\text{sG}}$ imply that $$\begin{aligned}
{}&{}&\del_{tt}\theta_{H,d}- 2\left(v-{1\over m}\del_{x}\theta_{H,0} \right) \del_{xt}\theta_{H,d} \nonumber \\ &{}&
-\left({V_{0}n_{H,0}\over m} - (v-{1\over m}\del_{x} \theta_{H,0})^{2} \right)\del_{xx} \theta_{H,d} \nonumber \\ &{}& +\del_{x}\left( {1\over m}\del_{t}\theta_{H,0} - {V_{0}n_{H,0}\over m} \right) \del_{x}\theta_{H,d} \nonumber \\ &{}&+ {2V_{0}^{2}n_{H,0}n_{L,0} \over \hbar} \sin\left( {2\theta_{H,d}\over \hbar} \right) = 0.
\label{eqn:odsg}\end{aligned}$$ The assumption $\del_{xx}\theta_{H,0}=0$ means that $\del_{x}\theta_{H,0}$ is a function of time only. It follows from the stationary phase equation $\delta S_{H}/\delta n_{H}=0$ in Eq.(\[eqn:euler\]) that if $\del_{x}\theta_{H,0}/m = v$ and if $n_{H,0}$ is well approximated by the Thomas-Fermi limit $n_{H,0}={1\over 2V_{0}}\left( \mu - V_{\text{ext}}(x) - V_{0}n_{L,0} \right)$, then $$\del_{t}\theta_{H,0}= V_{0}n_{H,0}(x,t) \label{eqn:cond1}$$ and, therefore, the term of Eq.(\[eqn:odsg\]) linear in $\del_{x}\theta_{H,d}$ vanishes. In this case, Eq.(\[eqn:odsg\]) becomes $$\left( \del_{t}^{2}-{V_{0}n_{H,0}\over m}\del_{x}^{2} \right)\theta_{H,d} + {2V_{0}^{2}n_{H,0}n_{L,0} \over \hbar}\sin \left( {2\theta_{H,d}\over \hbar} \right) = 0
\label{eqn:nonautosge}$$ which, for nonconstant $n_{H,0}$ or $n_{L,0}$, is a nonautonomous partial differential equation. Equation (\[eqn:nonautosge\]) can be put into the usual sine-Gordon form with $\beta = \hbar/2$ and $M^{2} = V_{0}^{2}n_{H,0}n_{L,0}$ by taking $n_{H,0}$, $n_{L,0}$ to be constant and changing from the laboratory coordinates $(\tau ,x)$ to the canonical coordinates [@evanspde] for the second-order PDE. In (1+1)-D, the gauge field $v$ is constant and can be completely removed from the equation of motion for $\theta_{H,d}$ by using Eq.(\[eqn:euler\]). Specifically, when the background phases are in their lowest energy configuration, one has $ \left( v-{1\over m}\del_{x}\theta_{H,0}\right) = \del_{t}n_{H,0} / \del_{x}n_{H,0}$.
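For spatially constant $n_{H,0}$ and $n_{L,0}$, Eq.(\[eqn:nonautosge\]) reduces to an autonomous sine-Gordon equation whose static kink is $\theta_{H,d}(x)=2\hbar\arctan e^{\mu x/c_{s}}$ with $\mu = 2V_{0}\sqrt{n_{H,0}n_{L,0}}/\hbar$ and $c_{s}=(V_{0}n_{H,0}/m)^{1/2}$. The following numerical sketch is only an illustration of this limit: it evolves the kink with a leapfrog scheme and checks that it stays essentially static. All parameter values ($\hbar=m=V_{0}=n_{H,0}=n_{L,0}=1$, the grid, and the time step) are illustrative, not physical.

```python
# Minimal numerical illustration of Eq. (nonautosge) with constant densities:
# leapfrog evolution of the static sine-Gordon kink, which should stay put.
import numpy as np

hbar, V0, m, nH, nL = 1.0, 1.0, 1.0, 1.0, 1.0
cs2 = V0*nH/m                       # squared sound speed multiplying d^2/dx^2
mu = 2.0*V0*np.sqrt(nH*nL)/hbar     # inverse width of the kink

L, nx = 40.0, 801
x = np.linspace(-L/2, L/2, nx)
dx = x[1] - x[0]
dt = 0.4*dx/np.sqrt(cs2)            # CFL-stable time step
nsteps = 2000

# Exact static kink of (d_t^2 - cs^2 d_x^2)theta + (2 V0^2 nH nL/hbar) sin(2 theta/hbar) = 0
theta_kink = 2.0*hbar*np.arctan(np.exp(mu*x/np.sqrt(cs2)))

def accel(th):
    lap = np.zeros_like(th)
    lap[1:-1] = (th[2:] - 2.0*th[1:-1] + th[:-2])/dx**2
    return cs2*lap - (2.0*V0**2*nH*nL/hbar)*np.sin(2.0*th/hbar)

prev = theta_kink.copy()
curr = theta_kink + 0.5*dt**2*accel(theta_kink)   # first step with zero initial velocity
for _ in range(nsteps):
    nxt = 2.0*curr - prev + dt**2*accel(curr)
    nxt[0], nxt[-1] = theta_kink[0], theta_kink[-1]   # pin the kink asymptotes
    prev, curr = curr, nxt

# stays small compared with the kink height pi*hbar
print("max drift from the static kink:", np.max(np.abs(curr - theta_kink)))
```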
Examples
========
Engineered condensate state\[sec:engineer\]
-------------------------------------------
Without loss of generality, the set $J$ in the previous section can be taken as countable, partially ordered and $J_{L}$ ($J_{H}$) considered as low-energy (high-energy) modes. In this subsection, we consider the effective action for a phase fluctuation field having support only on modes $J_{H}=\lbrace j >0 \rbrace$ when the $J_{L}=\lbrace 0 \rbrace$ mode is prepared in an engineered state. For example, one can take $\alpha \in \mathbb{C}$ and construct the subspace $\mathcal{B}_{\alpha}$ of the bosonic Fock space $\mathcal{F}_{B}$ defined as the completion of the complex linear span of pure states of the form $$\ket{\psi_{\vec{n}}}:=e^{-{\vert \alpha \vert^{2}\over 2}}\sum_{j_{0}=0}^{\infty}{\alpha^{j_{0}}\over \sqrt{j_{0}!}}\ket{j_{0},n_{1},n_{2},\ldots }$$ where $\vec{n}:=(n_{1},n_{2},\ldots )$ with $n_{k}\ge 0$. Clearly, $\langle \psi_{\vec{n}'} \vert \psi_{\vec{n}} \rangle = \delta_{\vec{n},\vec{n}'}$ and $a_{0}\ket{\psi_{\vec{n}}}=\alpha \ket{\psi_{\vec{n}}}$ for all $\vec{n}$. We now compress the Hamiltonian Eq.(\[eqn:wibgham\]) to the subspace $\mathcal{B}_{\alpha}$ by defining $\hat{H}_{\alpha}:= P_{\mathcal{B}_{\alpha}}\hat{H}P_{\mathcal{B}_{\alpha}}$. Explicitly, $P_{\mathcal{B}_{\alpha}}\hat{H}P_{\mathcal{B}_{\alpha}}$ is given by taking $\hat{\psi} \rightarrow \alpha \varphi_{0}(x) + \hat{\psi}_{H}$ in Eq.(\[eqn:wibgham\]), where $\hat{\psi}_{H}$ has support only on $J_{H}$. Utilizing the compressed Hamiltonian $\hat{H}_{\alpha}$ to predict the thermodynamic properties of the WIBG is traditionally known as the Bogoliubov approximation [@bogo; @zagrebnov; @liebcnumber]. Within the Bogoliubov approximation, the partition function is written $$\begin{aligned}
Z(\beta) &\approx& \text{tr}'\left[e^{-\beta P_{\mathcal{B}_{\alpha}}\hat{H}P_{\mathcal{B}_{\alpha}}}\right]\end{aligned}$$ where $\text{tr}'$ is the trace over $\mathcal{B}_{\alpha}$ only and $\beta$ represents the inverse temperature as measured in subspace $\mathcal{B}_{\alpha}$. The field operator $\hat{\psi}$ has nonzero expectation value for any state of $\mathcal{B}_{\alpha}$, so we are working in the Bose-Einstein condensed phase. We can also take $\alpha \in \mathbb{R}$ by making the change of variable $\hat{\psi}_{H} \mapsto e^{i\text{Arg}{\alpha}}\hat{\psi}_{H}$.
The approximate partition function can be written as a coherent state functional integral over functions orthogonal (in $L^{2}(\Omega)$) to $\varphi_{0}(x)$. When terms of order $\mathcal{O}(\psi_{H}^{3})$ and $\mathcal{O}(\psi_{H}^{4})$ are neglected in the action of this partition function, the thermodynamics of the $j>0$ sector is determined by a noninteracting gas of bosons with Bogoliubov spectrum [@abrikosov]. We now show that keeping these terms allows for derivation of sine-Gordon dynamics on ACS in the $j>0$ sector as in Sec. \[sec:sgdyn\]. Similar to the procedure in Sec. \[sec:sgdyn\], we demand that the single-particle wave function $\varphi_{0}$ satisfy the time-independent generalized Gross-Pitaevskii equation given by an equation similar to Eq.(\[eqn:statphase\]): $$\begin{aligned}
-{\hbar^{2}\over 2m}(\nabla -i{m\over \hbar}v(x))^{2}\varphi_{0} &+& (mV_{\text{ext}}(x)+ V_{0}\vert \psi_{H} \vert^{2})\varphi_{0} \nonumber \\ &+& \vert \alpha\vert^{2}V_{0}\vert \varphi_{0} \vert^{2}\varphi_{0} = 0.
\label{eqn:gpusual}\end{aligned}$$ Substituting a solution of the form $\varphi_{0} = \vert \varphi_{0} \vert e^{i\theta_{0} /\hbar}$ back into the action gives Eq.(\[eqn:polarform\]) with $n_{L,0} \mapsto \vert \alpha \vert^{2}\vert \varphi_{0} \vert^{2}$ and $\theta_{L,0} \mapsto \theta_{0}$.
Because the resulting action takes the same form as Eq.(\[eqn:polarform\]), the phase fluctuation field $\theta_{H,d}$ with support only on single-particle modes $j>0$ exhibits sine-Gordon dynamics on ACS under the same assumptions as in Sec. \[sec:sgdyn\]. Although we do not touch on the subject of spectral analysis of the effective field theory, we note that when the sine-Gordon interaction is neglected, the values of $\mu$ and $\alpha$ should be chosen so that ${\mu \over \vert \alpha \vert^{2} }= V_{0}$ to enforce gapless excitations [@griffinshi].
One inconvenient feature of the derivation of the sine-Gordon dynamics on ACS in the $J_{H}$ sector presented in Sec. \[sec:sgdyn\] is that the field with support on $J\setminus J_{H}$ must satisfy the self-consistent mean field equations Eq.(\[eqn:statphase\]) and Eq.(\[eqn:statphaseadj\]). This requirement can be removed if the full dynamics is restricted to a subspace of $\mathcal{F}_{B}$ such that terms of the form $\hat{\psi}_{L}^{\dagger}\hat{\psi}_{H} +h.c.$, $\hat{\psi}_{L}^{\dagger}\hat{\psi}_{L}\hat{\psi}_{H} + h.c.$, and $\hat{\psi}_{H}^{\dagger}\hat{\psi}_{H}\hat{\psi}_{L} + h.c.$ vanish. As an example, consider again $J_{L}=\lbrace 0\rbrace$ and, instead of making the Bogoliubov approximation by projecting the Hamiltonian to the $j=0$ mode coherent state subspace $\mathcal{B}_{\alpha}$, prepare the $j=0$ mode so that the system dynamics occurs in the subspace $\mathcal{B}_{\alpha,+}\subset \mathcal{F}_{B}$, where $\mathcal{B}_{\alpha,+} $ is defined as the completion of the complex linear span of pure states of the form: $$\ket{\psi_{\vec{n}}^{+}}:={e^{-{\vert \alpha \vert^{2}\over 2}}\over \sqrt{2+ 2e^{-2\vert \alpha \vert^{2}}}}\sum_{j_{0}=0}^{\infty}{\alpha^{j_{0}}(1+(-1)^{j_{0}})\over \sqrt{j_{0}!}}\ket{j_{0},n_{1},n_{2},\ldots }.
\label{eqn:evencat}$$
One finds that $\langle \psi_{\vec{n}}^{+} \vert a_{0}\vert{\psi_{\vec{n}'}^{+}} \rangle=0$ while $ a_{0}^{2}\vert{\psi_{\vec{n}}^{+}} \rangle=\alpha^{2}\vert{\psi_{\vec{n}}^{+}} \rangle$. The parameter $\alpha$ controls the expected number of atoms in the $j=0$ mode via $\langle \psi_{\vec{n}}^{+} \vert a_{0}^{\dagger}a_{0}\vert{\psi_{\vec{n}}^{+}} \rangle = \vert \alpha \vert^{2}\tanh\vert \alpha \vert^{2}$ and may be taken as real because of the global U(1) symmetry. Taking the partial trace of $\ket{\psi_{\vec{n}}^{+}}\bra{\psi_{\vec{n}}^{+}}$ over the bosonic Fock space generated from $\lbrace \varphi_{j}\rbrace_{j\in J\setminus \lbrace 0 \rbrace}$ gives the single-mode even coherent state (a photonic cat state), commonly studied in continuous variable quantum information theory [@volkoff; @dodonov; @milburncatcode].
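These properties are easily verified numerically in a truncated Fock basis. The sketch below, in which the truncation dimension and the value of $\alpha$ are arbitrary illustrative choices, checks that the single-mode even coherent state has vanishing $\langle a_{0}\rangle$, is an eigenstate of $a_{0}^{2}$ with eigenvalue $\alpha^{2}$, and has mean occupation $\vert \alpha\vert^{2}\tanh\vert\alpha\vert^{2}$.

```python
# Numerical check of the even-coherent-state properties used in this subsection.
# alpha and the Fock-space truncation are illustrative choices.
from math import factorial
import numpy as np

alpha, dim = 1.3, 60

n = np.arange(dim)
coh = np.array([alpha**k/np.sqrt(float(factorial(k))) for k in n])  # coherent amplitudes
even = coh*(1 + (-1)**n)              # keep even Fock components only
even = even/np.linalg.norm(even)      # normalized single-mode even coherent state

a = np.diag(np.sqrt(np.arange(1, dim)), k=1)   # annihilation operator: a|n> = sqrt(n)|n-1>

print("<a>       :", even @ a @ even)                           # ~ 0
print("a^2 ratio :", (a @ a @ even)[0]/even[0], "vs", alpha**2)  # eigenvalue alpha^2
print("<a^dag a> :", even @ np.diag(n.astype(float)) @ even,
      "vs", alpha**2*np.tanh(alpha**2))
```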
Compressed to the subspace $\mathcal{B}_{\alpha,+}$, the normally ordered Hamiltonian becomes $$\begin{aligned}
P_{\mathcal{B}_{\alpha,+}}:\hat{H}:P_{\mathcal{B}_{\alpha,+}} &=& \left[\vphantom{\sum_{k}^{\infty}} H[\overline{\alpha}\overline{\varphi_{0}},\alpha \varphi_{0}] \nonumber \right. \\ & +& \left.\int_{\Omega} \left[\vphantom{{V_{0}\over 2}} {\hbar^{2}\over 2m}\overline{D} \hat{\psi}_{H}^{\dagger}D\hat{\psi}_{H} \nonumber \right. \right. \\ & +& \left. \left.
(mV_{\text{ext}}(x) - \mu)\hat{\psi}_{H}^{\dagger}\hat{\psi}_{H}
\nonumber \right. \right. \\ &+ & \left. \left.
2V_{0}\vert \varphi_{0} \vert^{2}\vert \alpha \vert^{2}\tanh\vert \alpha \vert^{2}\hat{\psi}_{H}^{\dagger}\hat{\psi}_{H} \nonumber \right. \right. \\ & +& \left. \left.
\left({V_{0}\over 2}\alpha^{2}\varphi_{0}^{2}\hat{\psi}_{H}^{\dagger \,2} + h.c. \right) \nonumber \right. \right. \\ & +& \left. \left.
{V_{0}\over 2}\hat{\psi}_{H}^{\dagger \, 2}\hat{\psi}_{H}^{2} \vphantom{{V_{0}\over 2}}\right] \vphantom{\sum_{k}^{\infty}} \right]P_{\mathcal{B}_{\alpha,+}}
\label{eqn:catham}\end{aligned}$$ where $H[\overline{\alpha}\overline{\varphi_{0}},\alpha \varphi_{0}]$ is a scalar obtained by the formal substitution $\hat{\psi} \rightarrow \alpha \varphi_{0}$, $\hat{\psi}^{\dagger}\rightarrow \overline{\alpha}\overline{\varphi_{0}}$ in Eq.(\[eqn:wibgham\]) and $h.c.$ symbolizes the adjoint of the preceding term in the parentheses. Compared to Eq.(\[eqn:actionfull\]), Eq.(\[eqn:catham\]) does not contain the terms $ :a_{0}^{\dagger}a_{0}a_{0}^{\dagger}\hat{\psi}_{H}: +h.c.$ or $:a_{0}\hat{\psi}_{H}^{\dagger}\hat{\psi}_{H}\hat{\psi}_{H}^{\dagger}: +h.c.$ because single-particle tunneling events involving the $j=0$ mode are forbidden in $\mathcal{B}_{\alpha,+}$. As a consequence of this partial decoupling, $\varphi_{0}$ is no longer required to satisfy a generalized, self-consistent Gross-Pitaevskii equation. The phase-fluctuation dynamics in the $J_{H}$ sector are given by a sine-Gordon potential on ACS because the $J_{H}$ sector of the coherent state path integral is equivalent to Eq.(\[eqn:polarform\]). In both the zero-mode coherent state case and the zero-mode even coherent state case, the sine-Gordon potential is given by $2V_{0}\vert \alpha \vert^{2}\vert \varphi_{0} \vert^{2}\left( 1- \cos\left( {2\theta_{H,d}\over \hbar} \right) \right)$ when the background phases are in the lowest energy configuration.
Any single-particle wave function orthogonal to the $J_{H}$ sector and prepared in an even coherent state can support sine-Gordon dynamics on analogue curved spacetime for the phase fluctuations in $J_{H}$. The statistical thermodynamics of the $J_{H}$ “universe” is determined by $P_{\mathcal{B}_{\alpha,+}}\left(:\hat{H}:-H[\overline{\alpha}\overline{\varphi_{0}},\alpha \varphi_{0}]\right) P_{\mathcal{B}_{\alpha,+}}$ and the single-particle wave function chosen for $\varphi_{0}$ appears only in setting the spacetime geometry, mass, and vacuum energy for the phase fluctuations.
Interplane tunneling between WIBGs\[sec:interplane\]
----------------------------------------------------
Another experimentally relevant class of sine-Gordon dynamics on ACS arises when both $J_{L}$ and $J_{H}$ sectors exhibit phase fluctuations, but the sectors are not coupled by $s$-wave WIBG scattering. An example of this situation is encountered in a system consisting of two planar (i.e., (2+1)-D) reservoirs of WIBG exhibiting single-particle tunneling with amplitude $t_{\perp}$ independent of time $\tau$ and the planar coordinate $x \in \Omega$: $$\begin{aligned}
S_{T}&=&{t_{\perp}\over 2}\int_{[0,\beta\hbar]}\int_{\Omega} \overline{\psi}_{L}\psi_{R} + c.c.\end{aligned}$$ In this model, the phase fluctuations in each plane are coupled only by the interplane tunneling. Appealing to the general derivation in Appendix A, one finds that for $t_{\perp}\rightarrow 0$ the WIBG part of the action describes two independent massless boson fields $\theta_{L,d}$, $\theta_{R,d}$ propagating on ACS defined by the velocity fields $v_{L}+{1\over m}\nabla \theta_{L,0}$, $v_{R}+{1\over m}\nabla \theta_{R,0}$, respectively. For $\vert t_{\perp} \vert \neq 0$, however, the higher-order functional derivatives are $$\begin{aligned}
{1\over (2n)!}{\delta^{2n}S_{LR}\over \delta \theta_{L}^{m}\delta \theta_{R}^{k} }&=& {(-1)^{n+k}t_{\perp}\sqrt{n_{L}n_{R}}\over (2n)!\hbar^{2n}}\cos \left( {\theta_{L} - \theta_{R} \over \hbar} \right), \nonumber \\
{1\over (2n+1)!}{\delta^{2n+1}S_{LR}\over \delta \theta_{L}^{m}\delta \theta_{R}^{k} }&=& {(-1)^{n+k+1}t_{\perp}\sqrt{n_{L}n_{R}}\over (2n+1)!\hbar^{2n+1}}\sin \left( {\theta_{L} - \theta_{R} \over \hbar} \right)\nn \end{aligned}$$ where $m+k = 2n$ ($m+k = 2n+1$) in the first (second) line; in the $(n,m,k)=(1,2,0)$ and $(n,m,k)=(1,0,2)$ terms of the first line we have neglected the free Bose gas contribution to the second-order functional derivative. These higher loop contributions can be summed to produce the following action: $$\begin{aligned}
S_{LR}&=& \int_{[0,\beta \hbar]}\int_{\Omega} \sum_{j\in \lbrace L , R \rbrace}\sqrt{-g_j}\left[\vphantom{\sum_{j\in \lbrace L , R \rbrace}}g_j^{\mu\nu}\del_{\mu}\theta_{j,d}\del_{\nu}\theta_{j,d} \right. \nonumber \\ &+& \left. t_{\perp}{\sqrt{n_{L,0}n_{R,0}}\over \sqrt{-g_j}} \left[ \cos \gamma_{LR,0}\left(1-\cos \gamma_{LR,d} \right) \nonumber \right. \right. \\ &+& \left. \left. \sin \gamma_{LR,0} \left(\gamma_{LR,d}-\sin \gamma_{LR,d} \right) \right] \vphantom{\sum_{j\in \lbrace L , R \rbrace}} \right]
\label{eqn:leftrightplane}\end{aligned}$$ where $\gamma_{LR}:= {\theta_{L}-\theta_{R} \over \hbar}$, the 0 subscript on the fields indicates that they are solutions to the coupled stationary phase equations for $\theta_{L(R)}$, $n_{L(R)}$, and the $d$ subscript indicates fluctuation fields.
Equation (\[eqn:leftrightplane\]) becomes the action of a Josephson tunnel junction between the $L$ and $R$ “universes” in the low-energy configuration in which $\gamma_{LR,0}$ is an odd multiple of $\pi$; a discussion of oscillations around $\gamma_{LR,0} = \pi$ in bosonic Josephson junctions can be found in [@Raghavan]. Otherwise, the junction has mixed sinusoidal phase dynamics. As in the previous sections, partitioning the single-particle Hilbert spaces of the $L$ and $R$ systems would lead to internal Josephson oscillations in each system coupled to the external Josephson oscillation between the systems.
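It is perhaps useful to note explicitly what the bracketed potential in Eq.(\[eqn:leftrightplane\]) resums to: apart from the constant and the term linear in $\gamma_{LR,d}$, which belong to the lower orders of the expansion, it is the full tunneling cosine re-expanded about the background phase difference. The sympy sketch below checks the corresponding trigonometric identity; the sign convention assumed here for the fluctuation (entering as $\gamma_{LR,0}-\gamma_{LR,d}$) only affects the sign of the linear term.

```python
# Trigonometric identity behind the bracketed potential of Eq. (leftrightplane):
# cos(g0)(1 - cos(gd)) + sin(g0)(gd - sin(gd)) = cos(g0) + gd*sin(g0) - cos(g0 - gd).
import sympy as sp

g0, gd = sp.symbols('gamma_0 gamma_d', real=True)

potential = sp.cos(g0)*(1 - sp.cos(gd)) + sp.sin(g0)*(gd - sp.sin(gd))
resummed = sp.cos(g0) + gd*sp.sin(g0) - sp.cos(g0 - gd)

print(sp.expand_trig(potential - resummed).expand())   # -> 0
```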
Quantum ACS from two-mode superposition state\[sec:superposacs\]
================================================================
The analysis in Sec. \[sec:engineer\] suggests the possibility of projecting more than one mode to an engineered state of interest. In this section, we briefly analyze the dynamics defined by $P_{\mathcal{B}^{(2)}_{w,\alpha}}\hat{H}P_{\mathcal{B}^{(2)}_{w,\alpha}}$, where $\mathcal{B}^{(2)}_{w,\alpha}\subset \mathcal{F}_{B}$ is defined as the completion of the complex linear span of pure states of the form $$\begin{aligned}
\ket{\psi_{\vec{n}}}&:=& {e^{-\vert \alpha \vert^{2}/2}\over \sqrt{\mathcal{N}(w,\alpha)}}\left( \sum_{j_{0}=0}^{\infty}{\alpha^{j_{0}}(1+(-1)^{j_{0}})\over \sqrt{j_{0}!}}\ket{j_{0},0,n_{2},n_{3},\ldots } \nonumber \right. \\ &+&\left. w \sum_{j_{1}=0}^{\infty}{\alpha^{j_{1}}(1+(-1)^{j_{1}})\over \sqrt{j_{1}!}}\ket{0,j_{1},n_{2},n_{3},\ldots } \right)
\label{eqn:hierarchmany}\end{aligned}$$ where $\mathcal{N}(w,\alpha):=4e^{-\vert \alpha \vert^{2} }\left( (1+\vert w \vert^{2})\cosh \vert \alpha \vert^{2} + 2\text{Re}w \right)$, $w\in \mathbb{C}$, and $\vec{n} = (n_{2},n_{3},\ldots)$. Compression of the original Hamiltonian to this subspace allows for (1) taking $\varphi_{0}(x)$, $\varphi_{1}(x)$ to be arbitrary orthogonal single-particle wave functions in the $J_{L}$ sector and (2) dependence of the metric $g^{\mu\nu}$ on both background superfluid flows ${\hbar\over m}\nabla\text{Arg}\varphi_{0}$ and ${\hbar\over m}\nabla\text{Arg}\varphi_{1}$. Taking the partial trace of the state in Eq.(\[eqn:hierarchmany\]) over the bosonic Fock space generated by single-particle modes $j>1$ gives a symmetrized two-mode state closely related to the hierarchical Schrödinger cat states introduced in Ref.[@volkoffcat].
Similarly to Eq.(\[eqn:catham\]), compressing the Hamiltonian to the subspace $\mathcal{B}^{(2)}_{w,\alpha}$ produces a Hamiltonian of the form (again using the global U(1) invariance to take $\alpha \in \mathbb{R}$) $$\begin{aligned}
P_{\mathcal{B}^{(2)}_{w,\alpha}}\hat{H}P_{\mathcal{B}^{(2)}_{w,\alpha}} &=& \left[ \vphantom{\sum_{k}^{\infty}} \sum_{j\in \lbrace 0,1\rbrace}H[\overline{\varphi_{j}},\varphi_{j}] \nonumber \right. \\ &+& \left. {2V_{0}\vert \alpha \vert^{4}e^{-\vert \alpha \vert^{2}} \over \mathcal{N}(w, \alpha)}\left( w\varphi_{1}^{2}\overline{\varphi_{0}}^{2} + c.c. \right) \nonumber \right. \\ &+& \left. \int_{\Omega} \left[ \vphantom{{A\over B}} {\hbar^{2}\over 2m}\overline{D} \hat{\psi}_{H}^{\dagger}D\hat{\psi}_{H} \nonumber \right. \right. \\ &+& \left. \left. (mV_{\text{ext}}(x) - \mu +G[\varphi_{0},\varphi_{1};\alpha ,w])\hat{\psi}^{\dagger}_{H}\hat{\psi}_{H} \nonumber \right. \right. \\ &+& \left. \left. {V_{0}\over 2}{\alpha^{2}\over \mathcal{N}(w ,\alpha)}\left((\overline{\varphi_{0}}^{2}+w^{2}\overline{\varphi_{1}}^{2})\hat{\psi}_{H}^{2}+h.c. \right) \nonumber \right. \right. \\ &+& \left. \left. {V_{0}\over 2}\hat{\psi}_{H}^{\dagger \, 2}\hat{\psi}_{H}^{2} \vphantom{{A\over B}} \right] \vphantom{\sum_{k}^{\infty}} \right]P_{\mathcal{B}^{(2)}_{w,\alpha}}
\label{eqn:twomodefixed}\end{aligned}$$ where the function $G[\varphi_{0},\varphi_{1};\alpha ,w] := 2V_{0}\vert \alpha\vert^{2}\tanh\vert \alpha\vert^{2}\left( {\vert \varphi_{0} \vert^{2} + \vert w \varphi_{1}\vert^{2} \over \mathcal{N}(w,\alpha)} \right)$, and $H[\overline{\varphi_{j}},\varphi_{j}]$ is a scalar functional which contributes only to the vacuum energy of the phase fluctuation theory. As $\alpha \rightarrow \infty$, the intermode interaction energy in the $J_{L}$ sector appearing in the second line of Eq.(\[eqn:twomodefixed\]) vanishes. In the coherent state path integral expression for the partition function corresponding to $P_{\mathcal{B}^{(2)}_{w,\alpha}}\hat{H}P_{\mathcal{B}^{(2)}_{w,\alpha}}$, the pair exchange term appearing in the fifth line of Eq.(\[eqn:twomodefixed\]) is given by $\vert \varphi_{0}^{2} + \vert w\vert ^{2}\varphi_{1}^{2}\vert {V_{0} \alpha^{2} \over \mathcal{N}(w,\alpha)}\cos({2\theta_{H} \over \hbar} - \xi )$ where $\xi := \text{Arg}\left( \varphi_{0}^{2} + \vert w\vert^{2} \varphi_{1}^{2} \right)$.
![a) Velocity field $\nabla \theta_{H,0}$ corresponding to $\theta_{H,0}$ in Eq.(\[eqn:vortnovort\]) for $\Vert x \Vert \ge 1$ and $w=1$ in Eq.(\[eqn:hierarchmany\]). The gauge field $v$ is taken to be zero, and lengths are scaled by the superfluid coherence length $\xi_0$. b) Magnitude of velocity field when $w=1$ in Eq.(\[eqn:vortnovort\]), with unit of velocity $2c_{s}$. The yellow region (representing velocity magnitude $\ge 1/2$ because the color scale has been cut off at 1/2) lies inside the ergosurface. The vortex core is given by the disk of radius 1; fluid velocity has been set to zero in the core. The white region is a neighborhood of a phase singularity surface. c) Same as b) except with $w=1/2$. []{data-label="fig:ergovel"}](velfield.pdf)
Constructing the partition function via a trace of $\exp \left[ - \beta P_{\mathcal{B}^{(2)}_{w,\alpha}}\hat{H}P_{\mathcal{B}^{(2)}_{w,\alpha}} \right]$ over $\mathcal{B}^{(2)}_{w,\alpha}$ and proceeding in the same way as in previous sections, one can derive the sine-Gordon theory on ACS for the phase fluctuation field $\theta_{H,d}$. The intriguing novelty in the present case is that the low-energy tree-level configuration for $\theta_{H,0}$, which appears in $g^{\mu\nu}$ for the phase fluctuation field, is given by $\theta_{H,0} = \hbar \text{Arg}\left( \varphi_{0}^{2} + \vert w\vert^{2} \varphi_{1}^{2} \right) /2 + (2k+1)\hbar\pi /2$. Therefore, the metric contains contributions from both background velocity fields $\hbar\nabla\text{Arg} \varphi_{0}/m$ and $\hbar\nabla\text{Arg} \varphi_{1}/m$. Furthermore, the parameter $w$ that weights the $J_{L}$ sector superposition state toward being an even coherent state in the $j=0$ mode ($w\rightarrow 0$) or in the $j=1$ mode ($w\rightarrow \pm\infty$) appears in the metric tensor, causing $\xi \rightarrow 2\text{Arg}\varphi_{0}$ ($w\rightarrow 0$) or $\xi \rightarrow 2\text{Arg}\varphi_{1}$ ($w\rightarrow \pm \infty$). In essence, the preparation of the state Eq.(\[eqn:hierarchmany\]) is equivalent to the preparation of a false vacuum [@coleman] for the phase fluctuation field $\theta_{H,d}$. If a low energy atom is detected, the superposition collapses to either the $\varphi_{0}$ even coherent state or the $\varphi_{1}$ even coherent state conditioned on the low energy atom being found in $\varphi_{0}$ or $\varphi_{1}$, respectively. Because the locally-defined coupling constant of the sine-Gordon theory for $\theta_{H,d}$ is proportional to $\cos(2\theta_{H,0}/\hbar - \xi)$, the collapse event changes the energy density of the system.
Finally, we mention that in the present case, the properties of the ACS geometry can arise from classically disallowed background flows. Consider a (2+1)-D example when $\varphi_{0}$ is a U(1) quantum vortex centered on the origin with circulation $2\pi \hbar n / m$. The following unnormalized approximate wave function is valid for $\Vert x\Vert > \xi_{0}$ [@pitaevskii] $$\varphi_{0}(x)
=A e^{in\tan^{-1}\left({x_{2}\over x_{1}}\right)}\left(1-{\xi_{0}^{2}n^{2}\over \Vert x \Vert^{2}}\right)^{1/2}
\label{eqn:vortex}$$ with $A$ a positive constant and $\xi_{0}$ a microscopic length scale characterizing the radius of the vortex core (the superfluid coherence length). The superposed single-particle mode $\varphi_{1}$ can be taken as a U(1) quantum vortex of the same form as above and also centered at the origin, but with different circulation $2\pi \hbar n'/m$. The quantum nature of such a superposition can be seen by noting that there exists an annular region in the gas which is in a superposition of the gas state comprising the vortex core and the superfluid state.
From $g_{00}$ in Eq.(\[eqn:covar\]), one can determine the ergosurface condition in terms of $\Vert v-{1\over m}\nabla \theta_{H,0}\Vert^{2}$. Taking $\theta_{H,0}$ to be unitless for now and taking $v=0$ gives the condition ${\hbar^{2}\over m^{2}}\Vert \nabla \theta_{H,0}\Vert^{2} = c_{s}^{2}$. Working in units where $\xi_{0} = 1$, the ergosurface is given by $(x,y)$ such that $\Vert \nabla \theta_{H,0}(x,y)\Vert^{2} = 1/2$. In the absence of a gauge field, the connection coefficients, Riemann curvature tensor, Ricci curvature tensor, scalar curvature, and Einstein tensor for a U(1) vortex velocity field have been computed in Refs.[@fischervisser1; @fischervisser2] in terms of the deformation rate $D_{ij}={1\over 2}(\del_{i}\del_{j} \theta_{H,0}+\del_{j}\del_{i} \theta_{H,0})$ when $n_{H,0}$ and the speed of sound are taken to be spatially constant. Therefore, the Riemannian geometry of the spacetime is completely determined by the background velocity field. In Fig.\[fig:ergovel\], we show the velocity field and ergosurface for two values of $w$ when $\varphi_{0}$ is a vortex of circulation $n=1$ given by Eq.(\[eqn:vortex\]), and $\varphi_{1}$ is taken as spatially constant. The superfluid velocity potential is $$\begin{gathered}
\theta_{H,0}(x_{1},x_{2}) \\
= {1 \over 2}\tan^{-1}\left( {2w^{2}(x^{2}_{1}+x_{2}^{2}-1)x_{1}x_{2} \over (1-w^{2})x_{1}^{2}+(1+w^{2})x_{2}^{2} + w^{2}(x_{1}^{4}-x_{2}^{4})}\right).
\label{eqn:vortnovort}\end{gathered}$$ From Fig.\[fig:ergovel\]a), b), it is clear that the superposition of a spatially homogeneous state and an $n=1$ vortex in the lowest two modes drastically alters the velocity field from the azimuthal field of a U(1) quantum vortex. Although it appears that the U(1) symmetry has been broken, this is an artifact of having chosen $w,\alpha \in \mathbb{R}$. In reality, the phase singularities can occur along any direction, but engineering of the state of the $J_{L}$ sector can break the symmetry. When $w$ is decreased toward zero in Fig.\[fig:ergovel\]c), the velocity field exhibits a larger “quiet area” separating the ergosurface from the vortex core. The regions of highest velocity are associated with phase singularities. For the phase field in Eq.(\[eqn:vortnovort\]), the local coupling constant of the effective sine-Gordon theory depends only on $\vert \varphi_{0}^{2} + w^{2}\varphi_{1}^{2}\vert$ because the phase field $\theta_{H,0}$ is pinned to $\hbar \xi /2$ in Eq.(\[eqn:twomodefixed\]). Therefore, the velocity fields shown in Fig.\[fig:ergovel\] affect the dynamics of the phase fluctuation field $\theta_{H,d}$ only through their appearance in the analogue metric $g^{\mu\nu}$.
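The qualitative content of Fig.\[fig:ergovel\] can be reconstructed from Eq.(\[eqn:vortnovort\]) alone. The numpy sketch below evaluates the phase through the single-valued combination $2\theta_{H,0}=\text{Arg}(D+iN)$, with $N$ and $D$ the numerator and denominator of the $\tan^{-1}$ argument, differentiates it numerically, and flags the region $\Vert\nabla\theta_{H,0}\Vert^{2}\ge 1/2$ outside the unit vortex core; the grid, the value of $w$, and the guard against the isolated zeros of $D+iN$ are illustrative choices. Plotting the resulting speed map (e.g., with matplotlib) reproduces the qualitative structure of panel b).

```python
# Velocity field of Eq. (vortnovort) and the ergoregion |grad theta_H0|^2 >= 1/2
# (velocity in units of 2 c_s, xi_0 = 1, gauge field v = 0).
import numpy as np

w = 1.0
x1, x2 = np.meshgrid(np.linspace(-4, 4, 801), np.linspace(-4, 4, 801), indexing='ij')
r2 = x1**2 + x2**2

# Numerator and denominator of the tan^{-1} argument in Eq. (vortnovort);
# packing them into D + iN gives the single-valued phase 2*theta_H0 = Arg(D + iN).
num = 2*w**2*(r2 - 1)*x1*x2
den = (1 - w**2)*x1**2 + (1 + w**2)*x2**2 + w**2*(x1**4 - x2**4)
Z = den + 1j*num

dx = x1[1, 0] - x1[0, 0]
grad1 = np.gradient(Z.real, dx, axis=0) + 1j*np.gradient(Z.imag, dx, axis=0)
grad2 = np.gradient(Z.real, dx, axis=1) + 1j*np.gradient(Z.imag, dx, axis=1)
Zsafe = np.where(np.abs(Z) > 1e-12, Z, 1.0)   # guard the isolated phase singularities
v1 = 0.5*np.imag(grad1/Zsafe)                 # d(theta_H0)/dx1
v2 = 0.5*np.imag(grad2/Zsafe)                 # d(theta_H0)/dx2

speed2 = v1**2 + v2**2
speed2[r2 < 1.0] = 0.0                        # velocity set to zero inside the vortex core

ergoregion = (speed2 >= 0.5) & (r2 >= 1.0)
print("grid fraction inside the ergosurface:", ergoregion.mean())
```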
Conclusion
==========
By expanding the action functional appearing in the coherent state path integral for the partition function of the WIBG, we have shown that the sine-Gordon model on ACS arises as the effective theory for phase fluctuations in the WIBG when the fluctuations are restricted to a subspace of the single-particle Hilbert space. From our analysis of the ACS existing on top of a low-energy mode prepared in a coherent state or even coherent state, one can see that the effective spacetime arising in the $J_{H}$ sector depends on how the sectors $J_{L}$ and $J_{H}$ comprising the bipartition are coupled. We considered the case of coupled phase fluctuations in both mode sectors by analyzing a system consisting of tunnel-coupled (2+1)-D planes of WIBG. To demonstrate the dramatic effects of quantum state engineering on the effective spacetime for the high-energy phase fluctuations, we calculated the ACS that arises when two low-energy single-particle modes are prepared in a macroscopic superposition state.
In this paper, we have not delved into methods for generating the states of the quantum vacuum (i.e., the states of $J_{L}$) on which the sine-Gordon model lives. One could envision a combination of optical pumping and stirring to tune the low energy particle occupation statistics and superfluid velocity profile, respectively. In any case, the combination of control of atomic transitions and hydrodynamic regimes makes ultracold alkali gases an ideal experimental setting for realization of the present dynamics. Our method for generating quantum sine-Gordon dynamics within well-defined physical constraints is expected to provide a platform for simulation of interacting (2+1)- and (3+1)-D quantum field theory on ACS.
This research was supported by the NRF Korea, Grant No. 2014R1A2A2A01006535.
ACS IN FINITE-TEMPERATURE WIBG WITHOUT MODE PARTITION\[sec:app1\]
=================================================================
If the phase fluctuation field is allowed to have components on all single-particle wave functions, i.e., if it is not restricted to any sector, the phase fluctuation is a free, massless boson on ACS. A standard derivation of this fact is predicated on three conditions [@barcelorev; @BLVBEC]: 1) making the Bogoliubov approximation for the field operator $\hat{\psi}$, 2) making a self-consistent mean field approximation to the Heisenberg equation of motion generated by $\hat{H}$ in Eq.(\[eqn:wibgham\]) and neglecting particle number nonconserving products of field operators, and 3) neglecting the contribution of phase field fluctuations on a length scale smaller than the healing length of the WIBG. Here, we provide a derivation showing that propagation of a massless bosonic field on ACS arises simply as the one-loop contribution to the action of a locally gauge invariant WIBG. The derivation is predicated on the following four minimal physical assumptions, where we recall that $v(x)$ is the U(1) gauge field in Eq.(\[eqn:wibgham\]) and $(n_{0}(x),\theta_{0}(x))$ is a solution pair to the coupled stationary phase equations for the amplitude and argument of the field $\psi$:
- Assumption 1: $\nabla \cdot v(x) = 0$. This assumption can be disposed of if there is no gauge field or external flow.
- Assumption 2: $\nabla \cdot \nabla \theta_{0} = 0$. This assumption requires that the solution of the stationary phase equation be harmonic on $\Omega$.
- Assumption 3: ${\hbar^{2} \over 8m}n^{-1}\nabla n \cdot \nabla n = 0$. This assumption is the same as the third condition above. It is assumed to hold as an identity for the positive semidefinite operator $\hat{\psi}^{\dagger}\hat{\psi}$, not just at the level of equations of motion.
- Assumption 3’: The two loop contribution $\mathcal{O}((\nabla\theta_{d})^2)$ vanishes, where $\theta_{d}$ is the phase fluctuation field introduced below. As with assumption 3, it is also a condition on a quantum field.
Assumption 1 is equivalent to assumption 2 precisely when the gauge field is exact, i.e., can be written as the gradient of some scalar. In that case, the gauge field can be canceled by an appropriate local U(1) gauge transformation on the fields $\psi$, $\overline{\psi}$ and assumption 2 becomes sufficient. Assumption 3’ is related to assumption 3 in that both assumptions are satisfied if the theory is allowed to hold only on a length scale greater than the healing length $\xi_{0}$. In terms of our general treatment based on the single-particle Hilbert space spanned by orthonormal wave functions $\varphi_{j}$, we should assume that these do not vary greatly on length scales $\lesssim \xi_{0}$. In the coherent state path integral for the partition function associated with Eq.(\[eqn:wibgham\]), one makes the change of field variables $\psi \mapsto \sqrt{n}e^{i\theta / \hbar}$. The stationary phase equations are given by
$$\begin{aligned}
{\delta S \over \delta n} = 0 &\Rightarrow& i \del_{\tau}\theta + mV_{\text{ext}}(x,\tau)-\mu + {m\over 2}v\cdot v +{1\over 2m}\nabla \theta \cdot \nabla \theta - v\cdot \nabla \theta + V_{0}n = 0 , \\
{\delta S \over \delta \theta} = 0 &\Rightarrow& -i \del_{\tau}n - {1\over m}\nabla \cdot (n\nabla \theta) + \nabla \cdot (nv) = 0 \nonumber \\ &\Rightarrow& -i \del_{\tau}n +\nabla n \cdot \left( v- {1\over m} \nabla \theta \right) +n \nabla \cdot \left( v - {1\over m}\nabla \theta \right) = 0 \label{eqn:statph1}\\ &\Rightarrow& -i\del_{\tau}n +\nabla n \cdot \left( v- {1\over m} \nabla \theta \right) - {1\over m} n \nabla^{2}\theta = 0
\label{eqn:statphase2} \\ &\Rightarrow & -i \del_{\tau}n +\nabla n \cdot \left( v- {1\over m} \nabla \theta \right) = 0
\label{eqn:statphase3}\end{aligned}$$
where in passing from Eq.(\[eqn:statph1\]) to Eq.(\[eqn:statphase2\]), we have used assumption 1 and in passing from Eq.(\[eqn:statphase2\]) to Eq.(\[eqn:statphase3\]) we have used assumption 2 because $(\theta_{0},n_{0})$ is defined to be a solution to these coupled equations.
The second-order functional derivatives of $S$ are:
$$\begin{aligned}
{\delta^{2} S \over \delta n(x,\tau)\delta n(x',\tau')} &=& V_{0} \delta(x-x')\delta(\tau-\tau ') ,\nonumber \\
{\delta^{2} S \over \delta n(x,\tau)\delta \theta(x',\tau')} &=& -i \del_{\tau}\delta(x-x')\delta(\tau-\tau ') -{1\over m}\nabla \theta_{0}\cdot \nabla \delta(x-x')\delta(\tau-\tau ') - {1\over m}\delta(x-x')\delta(\tau-\tau ')\nabla^{2}\theta_{0} \nonumber \\ &{}& + v\cdot \nabla \delta(x-x')\delta(\tau-\tau '), \nonumber \\ {\delta^{2} S \over \delta \theta(x,\tau)\delta n(x',\tau')}&=&i \del_{\tau}\delta(x-x')\delta(\tau-\tau ') + {1\over m} \nabla \theta_{0} \cdot \nabla \delta(x-x')\delta(\tau-\tau ') - v\cdot \nabla \delta(x-x')\delta(\tau-\tau '), \nonumber \\ {\delta^{2} S \over \delta \theta(x,\tau)\delta \theta(x',\tau') } &=& -{1\over m}\nabla n_{0} \cdot \nabla \delta(x-x')\delta(\tau-\tau ') - {1\over m}n_{0}\nabla^{2}\delta(x-x')\delta(\tau-\tau ') .
\label{eqn:usualsecond}\end{aligned}$$
Note that in the first line of Eq.(\[eqn:usualsecond\]), the condition $V_{0}\neq 0$ is necessary for the field $n_{d}$ to appear at quadratic order in the action and thereby promote the nonrelativistic dynamics of the bosonic field $\theta_{d}$ to dynamics on ACS. Introducing the fluctuation fields $n_{d}(x,\tau)$ and $\theta_{d}(x,\tau)$, the action expanded to one-loop order is given by $S=S^{(0)} +S^{(2)}$ where $S^{(0)}$ is the tree-order action, $$\begin{aligned}
S^{(2)} &=&{1\over 2}\int_{0}^{\beta \hbar}{d\tau \over \hbar}\int_{\Omega}d^{3}x \, \left[ V_{0}n_{d}^{2} + n_{d}\left( -2i\del_{\tau}\theta_{d} - {2\over m}\del_{j}\theta_{d}\del_{j}\theta_{0} + 2 v_{j}\del_{j}\theta_{d} \right)
+ {1 \over m}n_{0}\del_{j}\theta_{d}\del_{j}\theta_{d} \right]\end{aligned}$$ and where repeated summation over spatial index $j$ is implied. Gaussian integration over the real fluctuation $n_{d}$ field results in the following expression: $$S^{(2)} = -{1\over 2V_{0}}\int_{0}^{\beta \hbar}{d\tau \over \hbar}\int_{\Omega}d^{3}x \, \left[ \left( -i\del_{\tau}\theta_{d} + \nabla\theta_{d}\cdot(v-{1\over m}\nabla \theta_{0}) \right)^{2} -V_{0}{1\over m}n_{0}\nabla\theta_{d}\cdot \nabla\theta_{d}\right].$$
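The Gaussian integration over $n_{d}$ is a completion of the square, up to the field-independent fluctuation determinant. In the sympy sketch below, $c$ stands for the combination $-i\del_{\tau}\theta_{d}+\nabla\theta_{d}\cdot(v-{1\over m}\nabla\theta_{0})$ and $G$ for ${n_{0}\over m}\nabla\theta_{d}\cdot\nabla\theta_{d}$, both treated as ordinary symbols; eliminating $n_{d}$ at its stationary value reproduces the integrand quoted above.

```python
# Completing the square behind the Gaussian integration over the amplitude fluctuation n_d.
import sympy as sp

nd, c, V0, G = sp.symbols('n_d c V_0 G')

quadratic = sp.Rational(1, 2)*(V0*nd**2 + 2*nd*c + G)
nd_star = sp.solve(sp.diff(quadratic, nd), nd)[0]       # stationary value of n_d

result = sp.simplify(quadratic.subs(nd, nd_star))
claimed = -sp.Rational(1, 2)/V0*(c**2 - V0*G)

print(sp.simplify(result - claimed))   # -> 0
```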
Taking $(x_{1},x_{2},x_{3})\in \Omega$ and $x_{0} = -i\tau$ so that $\del_{0}\theta_{d} = i\del_{\tau}\theta_{d}$ gives the action up to one-loop order $$S=S[n_{0},\theta_{0}]+{1\over 2}\int_{0}^{\beta \hbar}{d\tau \over \hbar}\int_{\Omega}d^{3}x\, f^{\mu \nu}\del_{\mu}\theta_{d}\del_{\nu}\theta_{d}
\label{eqn:masslessact}$$ where $$\begin{aligned}
f^{00}=-{1\over V_{0}},\qquad
f^{0j}={1\over V_{0}}\left( v_{j}(x)-{1\over m}\del_{j}\theta_{0}(x) \right),\qquad
f^{j0}=f^{0j}, \nonumber \\
V_{0}f^{ij} = {V_{0}\over m}n_{0}(x)\delta_{ij} -\left( v_{i}(x)-{1\over m}\del_{i}\theta_{0}(x) \right)\left( v_{j}(x)-{1\over m}\del_{j}\theta_{0}(x) \right) .\end{aligned}$$
Equation (\[eqn:masslessact\]) can be written in the canonical form of integration over a compact Riemannian manifold by finding $g^{\mu \nu}$ such that $\sqrt{-g}g^{\mu\nu}=f^{\mu\nu}$, where $g=\det g_{\mu \nu}$ is the determinant of the covariant metric. Because $\det f^{\mu \nu} = -c_{s}^{6}/V_{0}^{4}$ where $c_{s}:=\left( {V_{0}n_{0} / m} \right)^{1/2}$ is the local speed of sound in the WIBG, one finds that $\sqrt{-g}={n_{0}^{2}/ m^{2}c_{s}}$. Therefore, the contravariant and covariant expressions for the metric (written as 4$\times$4 matrices) are given by
$$g^{\mu\nu} = {m \over n_{0}c_{s}}\left[
\begin{array}{c|c}
-1 & \left(v - {1 \over m}\nabla \theta_{0}\right) \\
\hline
\left(v - {1 \over m}\nabla \theta_{0}\right)^{T} & {V_{0}n_{0} \over m}\mathbb{I}_{3\times 3}-\left(v - {1 \over m}\nabla \theta_{0}\right)^{T}\left(v - {1 \over m}\nabla \theta_{0}\right)
\end{array}
\right]
\label{eqn:contravar}$$
and $$\begin{aligned}
g_{\mu\nu}
&=&{n_{0}\over mc_{s}} \left[
\begin{array}{c|c}
-c_{s}^{2}+\left(v - {1 \over m}\nabla \theta_{0}\right)\cdot\left(v - {1 \over m}\nabla \theta_{0}\right) & \left(v - {1 \over m}\nabla \theta_{0}\right) \\
\hline
\left(v - {1 \over m}\nabla \theta_{0}\right)^{T} & \mathbb{I}_{3\times 3}
\end{array}
\right]
\label{eqn:covar}\end{aligned}$$ respectively.
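The two forms of the metric, and the tensor density $f^{\mu\nu}$ defined above, can be checked against one another symbolically. The sketch below verifies $g^{\mu\alpha}g_{\alpha\nu}=\delta^{\mu}_{\ \nu}$ and $\sqrt{-g}\,g^{\mu\nu}=f^{\mu\nu}$, with $w_{1},w_{2},w_{3}$ standing for the components of $v-{1\over m}\nabla\theta_{0}$ at a point and all other symbols treated as positive constants.

```python
# Consistency check of Eqs. (contravar), (covar) and the f^{mu nu} defined above.
import sympy as sp

n0, m, V0 = sp.symbols('n_0 m V_0', positive=True)
w1, w2, w3 = sp.symbols('w_1 w_2 w_3', real=True)

w = sp.Matrix([[w1, w2, w3]])          # row vector: components of v - grad(theta_0)/m
cs = sp.sqrt(V0*n0/m)
I3 = sp.eye(3)

g_up = (m/(n0*cs))*sp.BlockMatrix([[sp.Matrix([[-1]]), w],
                                   [w.T, (V0*n0/m)*I3 - w.T*w]]).as_explicit()
g_dn = (n0/(m*cs))*sp.BlockMatrix([[sp.Matrix([[-cs**2 + (w*w.T)[0, 0]]]), w],
                                   [w.T, I3]]).as_explicit()

sqrt_minus_g = n0**2/(m**2*cs)
f = sp.BlockMatrix([[sp.Matrix([[-1/V0]]), w/V0],
                    [w.T/V0, (n0/m)*I3 - (w.T*w)/V0]]).as_explicit()

print((g_up*g_dn - sp.eye(4)).applyfunc(sp.simplify))    # -> zero matrix
print((sqrt_minus_g*g_up - f).applyfunc(sp.simplify))    # -> zero matrix
```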
All $n$-loop contributions to the action vanish for $n\ge 3$. The functional derivatives contributing to the two loop action scale as $\mathcal{O}(n_{0}(x)^{0})$ and are given by: $$\begin{aligned}
{\delta^{3} S \over \delta n(x'',\tau '')\delta n(x',\tau') \delta\theta(x,\tau)}&=& 0 ,\nonumber \\
{\delta^{3} S \over \delta n(x'',\tau '')\delta \theta(x',\tau') \delta n(x,\tau)}&=& 0 ,\nonumber \\
{\delta^{3} S \over \delta \theta(x'',\tau '')\delta n(x',\tau') \delta n(x,\tau)}&=& 0 ,\nonumber \\
{\delta^{3} S \over \delta \theta (x'',\tau '')\delta n(x',\tau') \delta \theta(x,\tau)} &=& -{1\over m}\nabla \delta(x''-x)\cdot \nabla \delta (x-x') - {1\over m}\delta(x-x')\nabla^{2}\delta(x''-x) ,\nonumber \\
{\delta^{3} S \over \delta \theta(x'',\tau '')\delta \theta(x',\tau') \delta n(x,\tau)}&=&{1\over m}\nabla \delta(x''-x)\cdot \nabla \delta (x-x'), \nonumber \\
{\delta^{3} S \over \delta n(x'',\tau '')\delta \theta(x',\tau') \delta \theta(x,\tau)} &=& -{1\over m}\nabla \delta(x''-x)\cdot \nabla \delta (x-x') - {1\over m}\delta(x''-x)\nabla^{2}\delta(x'-x).
\label{eqn:usualthird}\end{aligned}$$
There is a single nonvanishing two-loop contribution representing the scattering of two phase fluctuations to produce an amplitude fluctuation. The contribution of this term to the effective action is $-{1\over 3!}\int_{[0,\beta \hbar]}\int_{\Omega}{1\over m}n_{d}\left( \nabla \theta_{d}\cdot \nabla \theta_{d} - \theta_{d}\nabla^{2}\theta_{d}\right)$. Integration over $n_{d}$ would produce an action that contains third-order and fourth-order derivatives of $\theta_{d}$. These terms should be included if one aims to deduce the short wavelength spectrum of the phase fluctuations, but can be neglected if one restricts to the long wavelength limit of the dynamics. Assuming that this restriction is made, the theory is exact at one-loop order.
By utilizing the imaginary time coherent state path integral, we can understand the effect of nonzero temperature on the effective ACS dynamics of $\theta_{d}$. From a physical perspective, it is clear that increasing the temperature of the nonrelativistic weakly imperfect Bose gas will destroy the analogue curved spacetime description of its phonon modes for two reasons: (1) at high temperatures, the spectrum of the gas approximates that of nonrelativistic free particles, so that there is no local Lorentz symmetry for any degree of freedom, and (2) the fluctuations that propagate on the analogue curved spacetime are fluctuations of the phase of the bosonic field, which has a nonzero expectation value only at low temperatures. Mathematically, when the temperature is larger than any energy scale of the Hamiltonian, only the zeroth Matsubara frequency of the Fourier transformed phase fluctuation field $\tilde{\theta_{d}}(x,\omega_{n})$ contributes to the partition function [@tsvelikbook]. Therefore, there is no analogue curved spacetime in this case because the phase fluctuation field is time independent.
[^1]: This condition is one of four assumptions that, taken together, are sufficient for the derivation of a metric tensor that defines the local spacetime on which phase fluctuations of the WIBG propagate. The four assumptions are discussed in Appendix A.
---
abstract: 'Grover’s quantum search algorithm evolves a quantum system from a known source state $|s\rangle$ to an unknown target state $|t\rangle$ using the selective phase inversions, $I_{s}$ and $I_{t}$, of these two states. In one of the generalizations of Grover’s algorithm, $I_{s}$ is replaced by a general diffusion operator $D_{s}$ having $|s\rangle$ as an eigenstate and $I_{t}$ is replaced by a general selective phase rotation $I_{t}^{\phi}$. A fast quantum search is possible as long as the operator $D_{s}$ and the angle $\phi$ satisfies certain conditions. These conditions are very restrictive in nature. Specifically, suppose $|\ell\rangle$ denote the eigenstates of $D_{s}$ corresponding to the eigenphases $\theta_{\ell}$. Then the sum of the terms $|\langle \ell|t\rangle|^{2}\cot(\theta_{\ell}/2)$ over all $\ell \neq s$ has to be almost equal to $\cot(\phi/2)$ for a fast quantum search. In this paper, we show that this condition can be significantly relaxed by introducing appropriate modifications of the algorithm. This allows access to a more general class of diffusion operators for fast quantum search.'
author:
- |
Avatar Tulsi\
[Department of Physics, IIT Bombay, Mumbai-400076, India]{}
title: On the class of diffusion operators for fast quantum search
---
INTRODUCTION
============
Grover’s quantum search algorithm, or more generally quantum amplitude amplification, evolves a quantum system from a known *source* state $|s\rangle$ to an unknown but desired *target* state $|t\rangle$ [@grover; @qaa1; @qaa2]. It does so by using selective phase inversion operators, $I_{s}$ and $I_{t}$, of these two quantum states. The algorithm iteratively applies the operator $\mathcal{A}(s,t) = -I_{s}I_{t}$ on $|s\rangle$ to get $|t\rangle$. The required number of iterations is $O(1/\alpha)$ where $\alpha = |\langle t|s\rangle|$. For the search problem, $|s\rangle$ is chosen to be the uniform superposition of all $N$ basis states to be searched, i.e. $|s\rangle = \sum_{i}|i\rangle/\sqrt{N}$. In the case of a unique solution, the target state $|t\rangle$ is a unique basis state and $\alpha = |\langle t|s\rangle| = 1/\sqrt{N}$. Thus Grover’s algorithm outputs a solution in just $O(\sqrt{N})$ time steps whereas *classical* search algorithms take $O(N)$ time steps to do so. The quantum search algorithm and amplitude amplification have been proved to be strictly optimal [@optimal].
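For concreteness, the original algorithm is simple to simulate with dense linear algebra. The sketch below, in which the search space size $N=64$ and the target index are arbitrary illustrative choices, iterates $\mathcal{A}(s,t)=-I_{s}I_{t}$ on $|s\rangle$ for the expected $O(\sqrt{N})$ number of steps and prints the resulting success probability.

```python
# Direct state-vector simulation of Grover search: iterate A = -I_s I_t on |s>.
# N and the target index are arbitrary illustrative choices.
import numpy as np

N, target = 64, 3
s = np.full(N, 1/np.sqrt(N))            # uniform source state
t = np.zeros(N); t[target] = 1.0        # unknown target (used only inside the oracle)

I_s = np.eye(N) - 2*np.outer(s, s)      # selective phase inversion of |s>
I_t = np.eye(N) - 2*np.outer(t, t)      # selective phase inversion of |t> (the oracle)
A = -I_s @ I_t

alpha = abs(t @ s)                      # = 1/sqrt(N)
iterations = int(round(np.pi/(4*np.arcsin(alpha)) - 0.5))   # ~ (pi/4) sqrt(N), i.e. 6 here

state = s.copy()
for _ in range(iterations):
    state = A @ state

print("iterations:", iterations)
print("success probability:", abs(state[target])**2)       # close to 1
```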
A generalization of quantum search algorithm was presented in [@general]. The general quantum search algorithm (hereafter referred to as general algorithm) replaces $I_{s}$ by a more general diffusion operator $D_{s}$ with the only restriction of having $|s\rangle$ as an eigenstate. This restriction is reasonable as the diffusion operator should have some special connection with the source state. Let the normalized eigenspectrum of $D_{s}$ be given by $D_{s}|\ell\rangle = e^{\imath\theta_{\ell}}|\ell\rangle$ with $|\ell\rangle$ as the eigenstates and $e^{\imath\theta_{\ell}}$ ($\theta_{\ell}$) as the corresponding eigenvalues (eigenphases). We choose $D_{s}|s\rangle = |s\rangle$, i.e. $\theta_{\ell=s} = 0$. The general algorithm also replaces $I_{t}$ by a general selective phase rotation $I_{t}^{\phi}$ which multiplies $|t\rangle$ by a phase factor of $e^{\imath \phi}$ but leaves all states orthogonal to $|t\rangle$ unchanged. When $\phi$ is $\pi$, $I_{t}^{\phi}$ becomes $I_{t}$. Thus the general algorithm iterates the operator $\mathcal{S} = D_{s}I_{t}^{\phi}$ on $|s\rangle$ and its dynamics can be understood by analyzing the eigenspectrum of $\mathcal{S}$.
This analysis was done in [@general] and we found that the performance of the general algorithm depends upon the moments $\Lambda_{1}$ and $\Lambda_{2}$ given by $$\Lambda_{p} = \sum_{\ell \neq s}|\langle \ell|t\rangle|^{2}\cot^{p}\frac{\theta_{\ell}}{2}. \label{momentdefine}$$ Thus $\Lambda_{p}$ is the $p^{\rm th}$ moment of $\cot\frac{\theta_{\ell}}{2}$ with respect to the distribution $|\langle \ell |t\rangle|^{2}$ over all $\ell \neq s$. Using these moments, we can define two quantities $A$ and $B$ as $$A = \Lambda_{1} - \cot \frac{\phi}{2},\ \ B = \sqrt{1+\Lambda_{2}}. \label{ABdefine}$$ It has been shown in [@general] that a fast general algorithm is possible if and only if $A = O(\alpha B) \approx 0$ as typically $\alpha \ll 1$. Thus $\sum_{\ell \neq s}|\langle \ell|t\rangle|^{2}\cot\frac{\theta_{\ell}}{2}$ must be almost equal to $\cot \frac{\phi}{2}$ for a fast quantum search. This is a very restrictive condition. In this paper, we present a modification of the general algorithm which does not require this kind of restrictive condition for its success. With this modification, the general algorithm becomes significantly more flexible and works with a more general class of diffusion operators. Thus it allows for a successful quantum search in more general situations.
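As a rough numerical illustration of these definitions (a toy sketch of our own, not taken from [@general]; the eigenphases, overlaps and parameters below are arbitrary assumed values), the moments and the quantities $A$ and $B$ can be evaluated as follows:

```python
import numpy as np

# Assumed toy eigenspectrum of D_s (the l != s part) and squared overlaps |<l|t>|^2.
theta = np.array([0.3, -0.5, 1.2, -2.0])      # eigenphases theta_l in radians
overlap2 = np.array([0.4, 0.3, 0.2, 0.1])     # |<l|t>|^2 for the same eigenstates
phi = np.pi                                   # selective phase rotation angle
alpha = 1.0e-3                                # alpha = |<t|s>|

# Lambda_p = sum_{l != s} |<l|t>|^2 cot^p(theta_l / 2), Eq. (momentdefine)
cot_half = 1.0 / np.tan(theta / 2.0)
Lambda1 = np.sum(overlap2 * cot_half)
Lambda2 = np.sum(overlap2 * cot_half ** 2)

A = Lambda1 - 1.0 / np.tan(phi / 2.0)         # Eq. (ABdefine)
B = np.sqrt(1.0 + Lambda2)

print("A =", A, " B =", B, " fast search needs |A| of order alpha*B =", alpha * B)
```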
In the next section, we present a brief review of the general algorithm. In Section III, we present the modification of the general algorithm. In Section IV, we discuss possible applications and conclude the paper.
GENERAL ALGORITHM
=================
We briefly review the general algorithm [@general]. It iterates the operator $\mathcal{S} = D_{s}I_{t}^{\phi}$ on $|s\rangle$. For simplicity, $|s\rangle$ is assumed to be a non-degenerate eigenstate of $D_{s}$ with eigenvalue $1$. The normalized eigenspectrum of $D_{s}$ is given by $D_{s}|\ell\rangle = e^{\imath\theta_{\ell}}|\ell\rangle$. We have $\theta_{\ell = s} = 0$. Let the other eigenvalues satisfy $$|\theta_{\ell \neq s}| \geq \theta_{\rm min} > 0,\ \ \theta_{\ell} \in [-\pi,\pi]. \label{othereigenvalues}$$ We need to find the eigenspectrum of $\mathcal{S}$ to analyse its iteration on $|s\rangle$. The secular equation was found in [@general] to be $$\sum_{\ell}|\langle \ell|t\rangle|^{2}\cot\frac{\lambda_{k}-\theta_{\ell}}{2} = \cot\frac{\phi}{2}. \label{secular}$$ Any eigenvalue $e^{\imath \lambda_{k}}$ of $\mathcal{S}$ has to satisfy the above equation.
Since $\cot x$ varies monotonically with $x$ except for the jump from $-\infty$ to $\infty$ when $x$ crosses zero, there is a unique solution $\lambda_{k}$ between each pair of consecutive $\theta_{\ell}$’s. As $\theta_{\ell = s} = 0$, there can be at most two solutions $\lambda_{k}$ in the interval $[-\theta_{\rm min},\theta_{\rm min}]$. Let these two solutions be $\lambda_{\pm}$. We have $|\lambda_{\pm}| < \theta_{\rm min}$. The two eigenstates $|\lambda_{\pm}\rangle$ corresponding to these two eigenvalues $e^{\imath \lambda_{\pm}}$ are the only relevant eigenstates for our algorithm as the initial state $|s\rangle$ is almost completely spanned by these two eigenstates provided we assume $|\lambda_{\pm}| \ll \theta_{\rm min}$.
As shown in [@general], the eigenphases $\lambda_{\pm}$ are given by $$\lambda_{\pm} = \pm\frac{2\alpha}{B}(\tan \eta)^{\pm 1}\ ;\ \cot 2\eta = \frac{A}{2\alpha B}\ .\label{solutions}$$ where $\eta$ is chosen to be within the interval $[0,\pi/2]$ and the quantities $A$ and $B$ are as defined in Eq. (\[ABdefine\]).
Eq. (20) of [@general] gives us the target state $|t\rangle$ in terms of two relevant eigenstates $|\lambda_{\pm}\rangle$. We have $$|t\rangle = \frac{|w\rangle}{B|\sin \frac{\phi}{2}|} + |\lambda_{\perp}\rangle,\ \ |w\rangle = \sin \eta |\lambda_{+}\rangle + \cos \eta |\lambda_{-}\rangle,$$ where $|w\rangle$ is the normalized projection of $|t\rangle$ on the $|\lambda_{\pm}\rangle$-subspace, and $|\lambda_{\perp}\rangle$ is a state orthogonal to this subspace.
Eqs. (23) and (24) of [@general] give us the initial state $|s\rangle$ and the effect of iterating $\mathcal{S}$ on $|s\rangle$ in terms of two relevant eigenstates $|\lambda_{\pm}\rangle$. We have $$|s\rangle = e^{-\imath \phi/2}[e^{\imath \lambda_{+}/2}\cos \eta|\lambda_{+}\rangle - e^{\imath \lambda_{-}/2} \sin \eta|\lambda_{-}\rangle], \label{slambdapmexpansion}$$ and $$\mathcal{S}^{q}|s\rangle = e^{-\imath \phi/2} [e^{\imath q'\lambda_{+}} \cos \eta|\lambda_{+}\rangle - e^{\imath q'\lambda_{-}}\sin \eta|\lambda_{-}\rangle],
\label{stateexpand}$$ where $q' = q+\frac{1}{2}$.
The success probability of the algorithm is the probability of obtaining $|t\rangle$ upon measuring $\mathcal{S}^{q}|s\rangle$, which is $|\langle t|\mathcal{S}^{q}|s\rangle|^{2}$. Let this probability attain its first maximum for $q=q_{\rm m}$. Let us define a state $|u\rangle$ such that $|u\rangle = \mathcal{S}^{q_{\rm m}}|s\rangle$. Then, by definition, the maximum success probability is $$P_{\rm m} = \beta^{2},\ \ \beta = |\langle t|u\rangle|.$$ Eq. (27) of [@general] gives $q_{\rm m}$ and $\beta$ as $$q_{\rm m} \approx \frac{\pi B\sin 2\eta}{4\alpha},\ \ \beta = \frac{\sin 2\eta}{B \sin \frac{\phi}{2}}. \label{qmb}$$ The target state can be obtained with constant probability by $O(1/P_{\rm m})$ repetitions of the general algorithm. Hence the total query complexity of the algorithm becomes $$Q = \frac{q_{\rm m}}{P_{\rm m}} = \frac{q_{\rm m}}{\beta^{2}} = \frac{\pi}{4\alpha}\frac{B^{3}\sin^{2}\frac{\phi}{2}}{\sin 2\eta}\ .$$ The minimum required number of queries by any quantum algorithm is $O(1/\alpha)$, and hence the general algorithm becomes inferior to the optimal algorithm if and only if $\sin 2\eta \ll 1$, which is true if $\cot 2\eta \gg 1$. By definition, this is true when $A \gg 2\alpha B$. Thus $A = O(\alpha B)$ is a necessary condition for the success of the algorithm and, as typically $\alpha \ll 1$, $A$ must be close to zero. This is a very restrictive condition. In the next section, we introduce a modification of the general algorithm which helps in getting rid of this condition.
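The scaling of $q_{\rm m}$, $P_{\rm m}$ and $Q$ can be made concrete with a small sketch (again our own illustration with assumed toy values of $\alpha$, $A$, $B$ and $\phi$, not values from [@general]):

```python
import numpy as np

alpha, A, B, phi = 1.0e-3, 0.2, 1.5, np.pi    # assumed toy parameters

# cot(2 eta) = A / (2 alpha B), with eta chosen in [0, pi/2], Eq. (solutions)
eta = 0.5 * np.arctan2(2.0 * alpha * B, A)

q_m = np.pi * B * np.sin(2.0 * eta) / (4.0 * alpha)   # iterations to the first maximum
beta = np.sin(2.0 * eta) / (B * np.sin(phi / 2.0))    # sqrt of the maximum success probability
Q = q_m / beta ** 2                                   # query complexity with classical repetition

print("q_m ~", q_m, " P_m ~", beta ** 2, " Q ~", Q,
      " optimal scale ~", np.pi / (4.0 * alpha))
```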
MODIFIED GENERAL ALGORITHM
==========================
To get the basic idea behind the modification, we note that performing $O(1/P_{\rm m})$ repetitions of the general algorithm to boost the success probability to a constant value is basically a classical and inefficient process. In the quantum setting, a far more efficient method is available in the form of quantum amplitude amplification (hereafter referred to as QAA). In QAA, the $|u\rangle$ state is evolved to the target state $|t\rangle$ by $O(1/\beta)$ iterations of the QAA operator $I_{t}I_{u}$ on $|u\rangle$. By definition of the $|u\rangle$ state, we have $$I_{u} = \mathcal{S}^{q_{\rm m}}I_{s} \mathcal{S}^{-q_{\rm m}}. \label{uintermofs}$$ Thus implementation of $I_{u}$ requires implementation of $I_{s}$. The issue is that, although $I_{t}$ can be implemented easily, the same is not true for $I_{s}$: the entire motivation behind the construction of the general algorithm started with the hypothesis that we have only the operator $D_{s}$ available and not $I_{s}$. This is what prevents us from using QAA to get the target state $|t\rangle$. We point out that $I_{s}$ is not easily implementable in cases of physical interest [@spatial; @fastersearch; @fastergeneral; @clause; @kato; @shenvi; @realambainis].
To understand the modification of the general algorithm, we closely examine the possibility of implementing $I_{s}$. The $|s\rangle$ state is an eigenstate of the $D_{s}$ operator with a known eigenvalue $1$. Thus the phase estimation algorithm [@phase] (hereafter referred to as PEA) can be used to approximate $I_{s}$, the selective phase inversion of the $|s\rangle$ state, using multiple applications of the operator $D_{s}$.
In a recent paper (see Section III of [@postprocessing]), we have presented a detailed algorithm for the approximate implementation of the selective phase inversion of unknown eigenstates. There, the algorithm is presented to implement the operator $I_{\lambda_{\pm}}$ which is the selective phase inversion of the $|\lambda_{\pm}\rangle$ subspace of the operator $\mathcal{S}$. Note that there we have considered only a special case of the operator $\mathcal{S} = D_{s}I_{t}^{\phi}$ when $\phi$ is $\pi$ and $D_{s}$ is such that $\Lambda_{1}$ is zero. We have shown there that due to the assumption $|\lambda_{\pm}| \ll \theta_{\rm min}$, the operator $I_{\lambda_{\pm}}$ can be approximated with an error of $\epsilon$ using $O(\ln \epsilon^{-1}/\theta_{\rm min})$ applications of $\mathcal{S}$.
It is straightforward to extend the same ideas to the approximate implementation of $I_{s}$ since, by definition, all eigenstates of $D_{s}$ orthogonal to $|s\rangle$ have eigenphases no smaller in magnitude than $\theta_{\rm min}$ and $|s\rangle$ is the only eigenstate satisfying $\theta_{s} = 0 \ll \theta_{\rm min}$. Thus $I_{s}$ can be implemented with an error of $\epsilon$ using $O(\ln \epsilon^{-1}/\theta_{\rm min})$ applications of $D_{s}$.
In the general algorithm, the application of $I_{t}$ as well as $D_{s}$ takes one time step. Thus the operator $\mathcal{S}$ takes two time steps and Eq. (\[uintermofs\]) implies that $T[I_{u}]$ is $2q_{\rm m} + T[I_{s}]$ where $T[X]$ denotes the time steps needed to implement the operator $X$. The discussion of the previous paragraph implies that $$T[I_{t}I_{u}] = 2q_{\rm m} + 1+T[I_{s}] = O\left(q_{\rm m } +\frac{\ln \epsilon^{-1}}{\theta_{\rm min}}\right).$$ Here $\epsilon$ is the desired error in implementation of $I_{s}$. For QAA, we need $O(1/\beta)$ applications of the operator $I_{t}I_{u}$ and hence the same number of approximate implementations of $I_{s}$. Thus the desired error in each approximate implementation of $I_{s}$ is $O(\beta)$. The total time complexity of the algorithm is then $$\frac{1}{\beta}O\left(q_{\rm m} + \frac{\ln \beta^{-1}}{\theta_{\rm min}}\right). \label{totaltime1}$$
Let us assume for a moment that $$\frac{\ln \beta^{-1}}{\theta_{\rm min}} \not\gg q_{\rm m}. \label{assumption}$$ Then the second term in Eq. (\[totaltime1\]) can be ignored and the total time complexity of the algorithm becomes $$O\left(\frac{q_{\rm m}}{\beta}\right) = \frac{\pi B^{2}}{4\alpha} \sin \frac{\phi}{2},$$ where we have used Eq. (\[qmb\]). Thus as desired, the time complexity is completely independent of $\eta$ and $A$. As typically $B$ and $\phi$ are $\Omega(1)$, the time complexity is close to the optimal performance of $O(1/\alpha)$.
The only condition to be satisfied by the algorithm is the assumption (\[assumption\]). Ignoring the logarithmic factor and using Eq. (\[qmb\]), the assumption becomes $$\theta_{\rm min} \not\ll \frac{1}{q_{\rm m}} = \frac{4\alpha}{\pi B}\frac{1}{\sin 2\eta} \approx \frac{4\alpha}{\pi B}\frac{A}{2\alpha B} = \frac{2A}{\pi B^{2}},$$ where we have used Eq. (\[solutions\]) and the fact that $1/\sin 2\eta \approx \cot 2\eta$ whenever $\sin 2\eta \ll 1$. Note that if $\sin 2\eta \not\ll 1$ then there is no need to modify the general algorithm as the original general algorithm is also fast enough. The above condition can be rewritten as $$A \not\gg 1.57 B^{2}\theta_{\rm min}.$$ We compare it with the condition $A \not\gg 2\alpha B$ required for the success of the original general algorithm. As typically $B$ is $\Theta(1)$ and $\theta_{\rm min} \gg \alpha$, the condition for the modified general algorithm is significantly relaxed compared to that for the original general algorithm.
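As a rough numerical comparison of the two conditions (with assumed values of $\alpha$, $B$ and $\theta_{\rm min}$, chosen only for the sake of the example):

```python
import numpy as np

alpha = 1.0e-3        # |<t|s>|, typically very small
B = 1.5               # assumed Theta(1)
theta_min = 0.3       # assumed spectral gap of D_s, typically much larger than alpha

bound_original = 2.0 * alpha * B                     # original algorithm: A must not exceed ~ this
bound_modified = (np.pi / 2.0) * B ** 2 * theta_min  # modified algorithm: ~ 1.57 B^2 theta_min

print("original bound on A :", bound_original)
print("modified bound on A :", bound_modified)
print("relaxation factor   :", bound_modified / bound_original)
```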
DISCUSSION AND CONCLUSION
=========================
We have shown that a modification of the general algorithm allows us to get a successful quantum search algorithm using a more general class of diffusion operators. The modification crucially depends upon the phase estimation algorithm and hence the quantum Fourier transform.
A very important application of this modification is in tackling errors in diffusion operators. The original condition $A \not\gg 2\alpha B$ is a very restrictive condition as typically $\alpha$ is a very small quantity. Hence even minor deviations in the diffusion operator can cause failure of the original general algorithm. But the modified general algorithm is robust to such small errors, as it allows $A$ to be as large as $O(\theta_{\rm min}B^{2})$. This is a big relief as, for typical diffusion operators, the quantity $\theta_{\rm min}$ is much bigger than $\alpha$.
We hope that this modification will help us in designing fast quantum search algorithms under more general situations.
[99]{} L.K. Grover, Phys. Rev. Lett. **79**, 325 (1997). L.K. Grover, Phys. Rev. Lett. **80**, 4329 (1998). G. Brassard, P. Hoyer, M. Mosca, and A. Tapp, Contemporary Mathematics (American Mathematical Society, Providence), **305**, 53 (2002) \[arXiv.org:quant-ph/0005055\]. C. Bennett, E. Bernstein, G. Brassard, and U. Vazirani, SIAM J. Computing [**26**]{}, 1510 (1997) \[arXiv.org:quant-ph/9701001\]. A. Tulsi, Phys. Rev. A [**86**]{}, 042331 (2012). A. Ambainis, J. Kempe, and A. Rivosh, Proc. 16th ACM-SIAM SODA, p. 1099 (2005) \[arXiv.org:quant-ph/0402107\]. A. Tulsi, Phys. Rev. A [**78**]{}, 012310 (2008). A. Tulsi, Phys. Rev. A [**91**]{}, 052307 (2015). A. Tulsi, Phys. Rev. A [**91**]{}, 052322 (2015). G. Kato, Phys. Rev. A **72**, 032319 (2005). N. Shenvi, J. Kempe, and K. B. Whaley, Phys. Rev. A **67**, 052307 (2003). A. Ambainis, SIAM J. Computing, [**37**]{}, 210 (2007) \[arXiv.org:quant-ph/0311001\]. M.A. Nielsen and I.L. Chuang, Quantum Computation and Quantum Information (Cambridge University Press, Cambridge, England, 2000). A. Tulsi, Phys. Rev. A [**92**]{}, 022353 (2015).
|
---
abstract: 'One of the unique features of non-Hermitian Hamiltonians is the non-Hermitian skin effect, namely that the eigenstates are exponentially localized at the boundary of the system. For open quantum systems, a short-time evolution can often be well described by the effective non-Hermitian Hamiltonians, while long-time dynamics calls for the Lindblad master equations, in which the Liouvillian superoperators generate time evolution. In this Letter, we find that Liouvillian superoperators can exhibit the non-Hermitian skin effect, and uncover its unexpected physical consequences. It is shown that the non-Hermitian skin effect dramatically shapes the long-time dynamics, such that the damping in a class of open quantum systems is algebraic under periodic boundary condition but exponential under open boundary condition. Moreover, the non-Hermitian skin effect and non-Bloch bands cause a chiral damping with a sharp wavefront. These phenomena are beyond the effective non-Hermitian Hamiltonians; instead, they belong to the non-Hermitian physics of full-fledged open quantum dynamics.'
author:
- Fei Song
- Shunyu Yao
- Zhong Wang
bibliography:
- 'dirac.bib'
title: 'Non-Hermitian skin effect and chiral damping in open quantum systems'
---
Non-Hermitian Hamiltonians provide a natural framework for a wide range of phenomena such as photonic systems with loss and gain[@feng2017non; @el2018non; @ozawa2018rmp; @peng2014lossinduced; @feng2013experimental], open quantum systems[@rotter2009non; @zhen2015spawning; @diehl2011topology; @verstraete2009quantum; @malzard2015open; @Dalibard1992; @carmichael1993; @Anglin1997; @choi2010coalescence; @diehl2008quantum; @bardyn2013topology], and quasiparticles with finite lifetimes[@kozii2017; @papa2018bulk; @shen2018quantum; @Zhou2018arc; @Yoshida2018heavy]. Recently, the interplay of non-Hermiticity and topological phases has been attracting growing attention. Considerable attention has been focused on non-Hermitian bulk-boundary correspondence[@shen2017topological; @lee2016anomalous; @yao2018edge; @yao2018chern; @Alvarez2018; @leykam2017; @kunst2018biorthogonal; @xiong2017; @alvarez2017; @Yokomizo2019; @jin2019; @Zirnstein2019; @herviou2018restoring], new topological invariants[@yao2018edge; @yao2018chern; @esaki2011; @leykam2017; @gong2018nonhermitian; @liu2019second; @lieu2018ssh; @yin2018ssh; @Yokomizo2019; @Deng2019; @Ghatak2019; @jiang2018invariant], generalizations of topological insulators[@harari2018topological; @yuce2015topological; @yao2018chern; @zhu2014PT; @lieu2018bdg; @yuce2016majorana; @menke2017; @wang2019nonQuantized; @Philip2018loss; @chen2018hall; @hirsbrunner2019topology; @klett2017sshkitaev; @Zeng2019quasicrystal; @zhou2018floquet; @kawabata2018PT] and semimetals[@xu2017weyl; @wang2019nodal; @cerjan2018weyl; @budich2019; @yang2019hopf; @Carlstrom2018; @Okugawa2019; @Moors2019; @zyuzin2018flat; @yoshida2018exceptional; @zhou2018exceptional], and novel topological classifications[@kawabata2019unification; @zhou2019periodic; @kawabata2018symmetry], among other interesting theoretical[@rudner2009topological; @McDonald2018phase; @hu2017exceptional; @Silveirinha2019; @kawabata2019exceptional; @Louren2018kondo; @rui2019; @Bliokh2019; @Luo2019higher; @Turker2019open; @Hatano1996] and experimental[@zeuner2015bulk; @xiao2017observation; @poli2015selective; @weimann2017topologically; @cerjan2018experimental; @zhan2017detecting] investigations.
One of the remarkable phenomena of non-Hermitian systems is the *non-Hermitian skin effect* (NHSE)[@yao2018edge; @Alvarez2018], namely that the majority of eigenstates of a non-Hermitian operator are localized at boundaries, which suggests the non-Bloch bulk-boundary correspondence[@yao2018edge; @kunst2018biorthogonal] and non-Bloch band theory based on the generalized Brillouin zone[@yao2018edge; @yao2018chern; @liu2019second; @Yokomizo2019; @Deng2019]. Broader implications of NHSE have been under investigation[@yao2018chern; @jiang2019interplay; @lee2018anatomy; @lee2018hybrid; @jin2019; @Zirnstein2019; @kunst2018transfer; @liu2019second; @Edvardsson2019; @Borgnia2019; @wang2019nodal; @wang2018phonon; @ezawa2018higher; @Ezawa2018; @ezawa2018electric; @yang2019non; @ge2019topological]. Very recently, NHSE has been observed in experiments[@Helbig2019NHSE; @Xiao2019NHSE; @Ghatak2019NHSE].
In open quantum systems, non-Hermiticity naturally arises in the Lindblad master equation that governs the time evolution of density matrix (see e.g., Refs.[@diehl2011topology; @verstraete2009quantum]): $$\frac{d\rho}{dt}=\mathcal{L}\rho= -i[H,\rho]+\sum_\mu\left(2L_\mu\rho L_\mu^\dag-\{L_\mu^\dag L_\mu,\rho\}\right), \label{master}$$ where $H$ is the Hamiltonian, $L_\mu$’s are the Lindblad dissipators describing quantum jumps due to coupling to the environment, and $\mathcal{L}$ is called the Liouvillian superoperator. Before the occurrence of a jump, the short-time evolution follows the effective non-Hermitian Hamiltonian $H_\text{eff}=H -i\sum_\mu L^\dag_\mu L_\mu$ as $d\rho/dt = -i(H_\text{eff}\rho -\rho H^\dag_\text{eff})$[@Dalibard1992; @carmichael1993; @Nakagawa2018kondo].
It is generally believed that when the system size is not too small, the effect of boundary condition is insignificant. As such, periodic boundary condition is commonly adopted, though open-boundary condition is more relevant to experiments. In this paper, we show that the long-time Lindblad dynamics of an open-boundary system differs dramatically from that of a periodic-boundary system. Furthermore, this is related to the NHSE of the damping matrix derived from the Liouvillian. Notable examples are found in which the long-time damping is algebraic (i.e., power law) under periodic boundary condition while exponential under open boundary condition. Moreover, NHSE implies that the damping is unidirectional, which is dubbed the “chiral damping”. Crucially, the theory is based on the full Liouvillian. Although $H_\text{eff}$ may be expected to play an important role, it is in fact inessential here (e.g., its having NHSE or not does not matter).
![ (a) SSH model with staggered hopping $t_1$ and $t_2$, with the ovals indicating the unit cells. The Bloch Hamiltonian is $h(k)=(t_1+t_2\cos k)\sigma_x +t_2\sin k\sigma_y$. The fermion loss and gain are described by the dissipators $L^l$ and $L^g$ \[Eq.(\[dissipator\])\] in the master equation framework. (b) A different realization of the same model. The hopping Hamiltonian $h(k)=(t_1+t_2\cos k)\sigma_x +t_2\sin k\sigma_z$, and the dissipators are $L^l_x=\sqrt{\gamma_l}c_{xA}$ and $L^g_x=\sqrt{\gamma_g}c^\dag_{xA}$. (b) is equivalent to (a) via a basis change $\sigma_y\leftrightarrow \sigma_z$. Because the gain and loss is on-site, (b) is more feasible experimentally. []{data-label="illustration"}](sketch.eps){width="7cm" height="3.7cm"}
*Model.–*The system is illustrated in Fig.\[illustration\](a). Our Hamiltonian $H=\sum_{ij}h_{ij}c^{\dag}_{i}c_{j}$, where $c^\dag_i,c_i$ are fermion creation and annihilation operators at site $i$ (including additional degrees of freedom such as spin is straightforward). We will consider single particle loss and gain, with loss dissipators $L_\mu^l=\sum_i D_{\mu i}^lc_i$ and gain dissipators $L_\mu^g=\sum_i D_{\mu i}^gc_i^\dag$, respectively. For concreteness, we take $h$ to be the Su-Schrieffer-Heeger (SSH) Hamiltonian, namely $h_{ij}=t_1$ and $t_2$ on adjacent links. A site is also labelled as $i=xs$, where $x$ refers to the unit cell, and $s=A,B$ refers to the sublattice. For simplicity, let each unit cell contain a single loss and gain dissipator (namely, $\mu$ is just $x$): $$L_x^l=\sqrt{\frac{\gamma_l}{2}}(c_{xA}-ic_{xB}),\ \ L_x^g=\sqrt{\frac{\gamma_g}{2}}(c_{xA}^\dag+ic_{xB}^\dag); \label{dissipator}$$ in other words, $D^l_{x,xA}=iD^l_{x,xB}=\sqrt{\gamma_l/2}, D^g_{x,xA}=-iD^g_{x,xB}=\sqrt{\gamma_g/2}$. We recognized in Eq.(\[dissipator\]) that the $\sigma_y=+1$ states are lost to or gained from the bath. A seemingly different but essentially equivalent realization of the same model is shown in Fig.\[illustration\](b), which can be obtained from the initial model \[Fig.\[illustration\](a)\] after a basis change $\sigma_y\leftrightarrow\sigma_z$. Accordingly, the dissipators in Fig.\[illustration\](b) are $\sigma_z=1$ states. As the gain and loss are on-site, its experimental implementation is easier. Keeping in mind that Fig.\[illustration\](b) shares the same physics, hereafter we focus on the setup in Fig.\[illustration\](a).
To see the evolution of density matrix, it is convenient to monitor the single-particle correlation $\Delta_{ij}(t)=\text{Tr}[c^{\dag}_{i}c_{j}\rho(t)]$, whose time evolution is $d\Delta_{ij}/dt=\text{Tr}[c^{\dag}_{i}c_{j}d\rho/dt]$. It follows from Eq.(\[master\]) that (see the Supplemental Material) $$\frac{d\Delta(t)}{dt}=i[h^T,\Delta(t)]-\{M^T_l+M_g,\Delta(t)\}+2M_g, \label{evolution-1}$$ where $(M_g)_{ij}=\sum_\mu D_{\mu i}^{g*}D_{\mu j}^g$ and $(M_l)_{ij}=\sum_\mu D_{\mu i}^{l*}D_{\mu j}^l$, and both $M_l$ and $M_g$ are Hermitian matrices. Majorana versions of Eq.(\[evolution-1\]) appeared in Refs.[@diehl2011topology; @Prosen2011; @bardyn2013topology]. We can define the damping matrix $$X=ih^T-(M_l^T+M_g), \label{damping}$$ which recasts Eq.(\[evolution-1\]) as $$\frac{d\Delta(t)}{dt}= X\Delta(t) +\Delta(t) X^\dag+2M_g. \label{evolution2}$$ The steady state correlation $\Delta_s=\Delta(\infty)$, to which long time evolution of any initial state converges, is determined by $d\Delta_s/dt=0$, or $X\Delta_s+\Delta_sX^\dag+2M_g=0$. In this paper, we are concerned mainly about the dynamics, especially the speed of converging to the steady state, therefore, we shall focus on the deviation $\tilde{\Delta}(t)=\Delta(t)-\Delta_s$, whose evolution is $d \tilde{\Delta}(t)/dt= X\tilde{\Delta}(t) +\tilde{\Delta}(t) X^\dag$, which is readily integrated to $$\tilde{\Delta}(t)=e^{Xt}\tilde{\Delta}(0)e^{X^\dag t}. \label{evolve}$$ We can write $X$ in terms of right and left eigenvectors[^1], $$X= \sum_n \lambda_n |u_{Rn}\rangle\langle u_{Ln}|, \label{Xexpansion}$$ and express Eq.(\[evolve\]) as $$\tilde{\Delta}(t)=\sum_{n,n'}e^{(\lambda_n+\lambda^*_{n'})t}|u_{Rn}\rangle\langle u_{Ln}|\tilde{\Delta}(0)|u_{Ln'}\rangle\langle u_{Rn'}|. \label{expansion}$$ By the dissipative nature, $\text{Re}(\lambda_n)\leq 0$ always holds true. The Liouvillian gap $\Lambda=\text{min}[2\text{Re}(-\lambda_n)]$ is decisive for the long-time dynamics. A finite gap implies an exponential converging rate towards the steady state, while a vanishing gap implies algebraic convergence[@Cai2013algebraic].
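To make the use of Eqs.(\[damping\])-(\[evolve\]) concrete, the following is a minimal numerical sketch (our own illustration, not part of the original derivation): it builds the damping matrix from given $h$, $M_l$ and $M_g$, solves for the steady state, and evolves the deviation $\tilde{\Delta}(t)$. The two-site parameters at the end are arbitrary toy values.

```python
import numpy as np
from scipy.linalg import expm, solve_sylvester

def damping_evolution(h, Ml, Mg, Delta0, times):
    """Evolve Delta(t) for a quadratic Lindblad model and return the site occupations.

    h, Ml, Mg : single-particle Hamiltonian and dissipation matrices (N x N)
    Delta0    : initial correlation matrix Delta_ij(0) = Tr[c_i^dag c_j rho(0)]
    """
    X = 1j * h.T - (Ml.T + Mg)                           # damping matrix, Eq. (damping)
    # steady state from X Delta_s + Delta_s X^dag + 2 Mg = 0 (a Sylvester equation)
    Delta_s = solve_sylvester(X, X.conj().T, -2.0 * Mg)
    dev0 = Delta0 - Delta_s
    occupations = []
    for t in times:
        U = expm(X * t)
        dev = U @ dev0 @ U.conj().T                      # Eq. (evolve)
        occupations.append(np.real(np.diag(dev + Delta_s)))
    return np.array(occupations), Delta_s

# usage sketch: a two-site toy model with uniform loss and gain (illustrative numbers only)
h = np.array([[0.0, 1.0], [1.0, 0.0]])
Ml, Mg = 0.10 * np.eye(2), 0.05 * np.eye(2)
n_t, Delta_s = damping_evolution(h, Ml, Mg, np.eye(2), [0.0, 1.0, 5.0, 20.0])
print(n_t)
```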
![(a) The damping of fermion number towards the steady state of a periodic boundary chain with length $L=150$ (unit cell). The damping is algebraic for cases $A,B$ with $t_1\leq t_2$, while exponential for $C,D$ with $t_1>t_2$. The initial state is the completely filled state $\prod_{x,s}c^\dag_{xs}|0\ra$. (b) The eigenvalues of the damping matrix $X$. Blue: periodic-boundary; Red: open-boundary. The Liouvillian gap of the periodic-boundary chain vanishes for $A$ and $B$, while it is nonzero for $C$ and $D$. For the open-boundary chain, the Liouvillian gap is nonzero in all four cases. This drastic spectral distinction between open and periodic boundary comes from the NHSE (see text). []{data-label="fig2"}](fig2a.eps "fig:"){width="7cm" height="4.6cm"} ![(a) The damping of fermion number towards the steady state of a periodic boundary chain with length $L=150$ (unit cell). The damping is algebraic for cases $A,B$ with $t_1\leq t_2$, while exponential for $C,D$ with $t_1>t_2$. The initial state is the completely filled state $\prod_{x,s}c^\dag_{xs}|0\ra$. (b) The eigenvalues of the damping matrix $X$. Blue: periodic-boundary; Red: open-boundary. The Liouvillian gap of the periodic-boundary chain vanishes for $A$ and $B$, while it is nonzero for $C$ and $D$. For the open-boundary chain, the Liouvillian gap is nonzero in all four cases. This drastic spectral distinction between open and periodic boundary comes from the NHSE (see text). []{data-label="fig2"}](fig2b.eps "fig:"){width="7cm" height="6.3cm"}
![(a) The particle number damping of a periodic boundary chain (solid curve) and open-boundary chains for several chain length $L$. The long time damping of a periodic chain follows a power law, while the open boundary chain follows an exponential law after an initial power law stage. (b) The site-resolved damping. The left end ($x=1$) enters the exponential stage from the very beginning, followed sequentially by other sites. For both (a) and (b), the initial state is the completely filled state $\prod_{x,s}c^\dag_{xs}|0\ra$, therefore, $\Delta(0)$ the identity matrix $I_{2L\times 2L}$. $t_1=t_2=1,\gamma_g=\gamma_l=0.2$. []{data-label="fig3"}](fig3a.eps "fig:"){width="7.0cm" height="4.3cm"} ![(a) The particle number damping of a periodic boundary chain (solid curve) and open-boundary chains for several chain length $L$. The long time damping of a periodic chain follows a power law, while the open boundary chain follows an exponential law after an initial power law stage. (b) The site-resolved damping. The left end ($x=1$) enters the exponential stage from the very beginning, followed sequentially by other sites. For both (a) and (b), the initial state is the completely filled state $\prod_{x,s}c^\dag_{xs}|0\ra$, therefore, $\Delta(0)$ the identity matrix $I_{2L\times 2L}$. $t_1=t_2=1,\gamma_g=\gamma_l=0.2$. []{data-label="fig3"}](fig3b.eps "fig:"){width="7.0cm" height="4.3cm"}
*Periodic chain.–*Let us study the periodic boundary chain, for which going to momentum space is more convenient. It can be readily found that $h(k)=(t_1+t_2\cos k)\sigma_x+t_2\sin k\sigma_y$ and $$M_l(k)=\frac{\gamma_l}{2}(1+\sigma_y),\ \ M_g(k)=\frac{\gamma_g}{2}(1-\sigma_y).$$ These $M(k)$ matrices are $k$-independent because the gain and loss dissipators are intra-cell. The Fourier transformation of $X$ is $X(k)=ih^T(-k)-M_l^T(-k)-M_g(k)$ (the minus sign in $-k$ comes from matrix transposition), therefore, the damping matrix in momentum space reads $$X(k) = i\left[(t_1+t_2\cos k)\sigma_x+ \left(t_2\sin k-i\frac{\gamma}{2}\right)\sigma_y\right] -\frac{\gamma}{2}I, \label{Xexpression}$$ where $\gamma\equiv\gamma_l+\gamma_g$. If we take the realization in Fig.\[illustration\](b) instead of Fig.\[illustration\](a), the only modification to $X(k)$ is a basis change $\sigma_y\rw\sigma_z$ in Eq.(\[Xexpression\]), with the physics unchanged. Diagonalizing $X(k)$, we find that the Liouvillian gap $\Lambda=0$ for $t_1\leq t_2$, while the gap opens for $t_1>t_2$ \[Fig.\[fig2\](b)\]. The damping rate is therefore expected to be algebraic and exponential in each case, respectively. To confirm this, we numerically calculate the site-averaged fermion number deviation from the steady state, defined as $\tilde{n}(t)=\sqrt{\sum_x \tilde{n}^2_x(t)/L}$, where $\tilde{n}_x(t)=n_x(t)-n_x(\infty)$ with $n_x(t)=n_{xA}(t)+n_{xB}(t)$, $n_{xs}\equiv\Delta_{xs,xs}$ being the fermion number at site $xs$. The results are consistent with the vanishing (nonzero) gap in the $t_1\leq t_2$ ($t_1> t_2$) case \[Fig.\[fig2\](a)\].
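The gap structure quoted above can be checked with a short script (our own sketch; the parameter values are assumptions chosen only to contrast the $t_1\leq t_2$ and $t_1>t_2$ regimes, and are not necessarily those of Fig.\[fig2\]):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

def liouvillian_gap(t1, t2, gamma_l, gamma_g, nk=2001):
    """Liouvillian gap of the periodic chain from the Bloch damping matrix X(k)."""
    gamma = gamma_l + gamma_g
    gap = np.inf
    for k in np.linspace(-np.pi, np.pi, nk):
        Xk = (1j * ((t1 + t2 * np.cos(k)) * sx
                    + (t2 * np.sin(k) - 1j * gamma / 2.0) * sy)
              - gamma / 2.0 * np.eye(2))
        gap = min(gap, np.min(2.0 * (-np.linalg.eigvals(Xk).real)))
    return gap

for t1 in (0.5, 1.0, 1.5, 2.0):        # t2 = 1: gapless for t1 <= 1, gapped for t1 > 1
    print(t1, liouvillian_gap(t1, 1.0, 0.1, 0.1))
```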
Although our focus here is the damping dynamics, we also give the steady state. In fact, our $M_l$ and $M_g$ satisfy $M_l^T+M_g=M_g \gamma/\gamma_g $, which guarantees that $\Delta_s=(\gamma_g/\gamma)I_{2L\times 2L}$ is the steady state solution. It is independent of boundary conditions.
Now we show the direct relation between the algebraic damping and the vanishing gap of $X$. The eigenvalues of $X(k)$ are $$\lambda_{\pm}(k)=-\frac{\gamma}{2}\pm i\sqrt{(t_1+t_2\cos k)^2+\left(t_2\sin k-i\frac{\gamma}{2}\right)^2}. \label{lambda}$$ Let us consider $t_1=t_2\equiv t_0$ for concreteness (case $A$ in Fig.\[fig2\]), then $\lambda_{-}(\pi)=0$ and the expansion in $\delta k\equiv k-\pi$ reads $$\lambda_{-}(\pi+\delta k)\approx -it_0\delta k-\frac{t_0^2}{4\gamma}(\delta k)^4.$$ Now Eq.(\[Xexpansion\]) becomes $X=\sum_{k,\alpha=\pm}\lambda_{\alpha}(k)|u_{Rk\alpha}\rangle\langle u_{Lk\alpha}|$, and Eq.(\[expansion\]) reads $$\tilde{\Delta}(t)= \sum_{kk',\alpha\alpha'}e^{[\lambda_{\alpha}(k)+\lambda_{\alpha'}^{*}(k')]t}\, |u_{Rk\alpha}\rangle\langle u_{Lk\alpha}|\tilde{\Delta}(0) |u_{Lk'\alpha'}\rangle\langle u_{Rk'\alpha'}|.$$ For the initial state with translational symmetry, we have $\langle u_{Lk\alpha}|\tilde{\Delta}(0)|u_{Lk'\alpha'}\rangle =\delta_{kk'}\langle u_{Lk\alpha}|\tilde{\Delta}(0)|u_{Lk\alpha'}\rangle$. The long-time behavior of $\tilde{\Delta}(t)$ is dominated by the $\alpha=\alpha'=-$ sector, which provides a decay factor $\sum_{\delta k} \exp\left(2\text{Re}[\lambda_-(\pi+\delta k)]t\right) \approx \int d(\delta k)\exp[- \frac{t_0^2}{2\gamma} (\delta k)^4t]\sim t^{-1/4}$. Similarly, for $t_1<t_2$ we have $\tilde{\Delta}(t)\sim t^{-1/2}$.
*Chiral damping.–*Now we turn to the open boundary chain. Although the physical interpretation is quite different, our $X$ matrix resembles the non-Hermitian SSH Hamiltonian[@yao2018edge; @yin2018ssh], as can be appreciated from Eq.(\[Xexpression\]). Remarkably, all the eigenstates of $X$ are exponentially localized at the boundary (i.e., NHSE[@yao2018edge]). As such, the eigenvalues of open boundary $X$ cannot be obtained from $X(k)$ with real-valued $k$; instead, we have to take complex-valued wavevectors $k+i\kappa$. In other words, the usual Bloch phase factor $e^{ik}$ living in the unit circle is replaced by $\exp[i(k+i\kappa)]$ inhabiting a generalized Brillouin zone[@yao2018edge], whose shape can be precisely calculated in the non-Bloch band theory[@yao2018edge; @yao2018chern; @Yokomizo2019; @liu2019second; @Deng2019].
From the non-Bloch band theory[@yao2018edge], we find that $\kappa=-\ln \sqrt{\left|\frac{t_1+\gamma/2}{t_1-\gamma/2}\right|}$, and that the eigenvalues of $X$ of an open boundary chain are $\lambda_{\pm}(k+i\kappa)$, where $\lambda_\pm$ are the $X(k)$ eigenvalues given in Eq.(\[lambda\]). We can readily check that, for $|\gamma|<2|t_1|$, $$\lambda_{\pm}(k+i\kappa)= -\frac{\gamma}{2} \pm i E(k), \label{openlambda}$$ where $E(k)=\sqrt{t_1^2+t_2^2-\frac{\gamma^2}{4}+2 t_2\sqrt{t_1^2-\frac{\gamma^2}{4}}\cos k}$, which is real. We have also numerically diagonalized $X$ for a long open chain \[red dots in Fig.\[fig2\](b)\], which confirms Eq.(\[openlambda\]). An immediate feature of Eq.(\[openlambda\]) is that the real part is a constant, $-\gamma/2$, which is consistent with the numerical spectra \[Fig.\[fig2\](b)\]. We note that the analytic results based on the generalized Brillouin zone produce the continuum bands only, and the isolated topological edge modes \[Fig.\[fig2\](b), A and B panels\] are not contained in Eq.(\[openlambda\]), though they can be inferred from the non-Bloch bulk-boundary correspondence[@yao2018edge; @kunst2018biorthogonal]. Here, we focus on bulk dynamics, and these topological edge modes do not play important roles[^2].
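The open-boundary statement can also be verified directly by a small sketch (our own illustration): it constructs the real-space damping matrix of a finite open chain with the dissipators of Eq.(\[dissipator\]) and inspects the real parts of its eigenvalues.

```python
import numpy as np

def open_chain_X(L, t1, t2, gamma_l, gamma_g):
    """Real-space damping matrix X = i h^T - (M_l^T + M_g) for an open SSH chain.

    Sites are ordered (1A, 1B, 2A, 2B, ...); the dissipators are the intra-cell
    L_x^l and L_x^g of Eq. (dissipator).
    """
    N = 2 * L
    h = np.zeros((N, N))
    for x in range(L):
        a, b = 2 * x, 2 * x + 1
        h[a, b] = h[b, a] = t1                    # intra-cell hopping
        if x < L - 1:
            h[b, a + 2] = h[a + 2, b] = t2        # inter-cell hopping
    Ml = np.zeros((N, N), dtype=complex)
    Mg = np.zeros((N, N), dtype=complex)
    for x in range(L):
        a, b = 2 * x, 2 * x + 1
        Dl = np.zeros(N, dtype=complex); Dl[a], Dl[b] = 1.0, -1.0j
        Dg = np.zeros(N, dtype=complex); Dg[a], Dg[b] = 1.0, 1.0j
        Ml += (gamma_l / 2.0) * np.outer(Dl.conj(), Dl)
        Mg += (gamma_g / 2.0) * np.outer(Dg.conj(), Dg)
    return 1j * h.T - (Ml.T + Mg)

lam = np.linalg.eigvals(open_chain_X(60, 1.0, 1.0, 0.1, 0.1))
print("Re(lambda) in [", lam.real.min(), ",", lam.real.max(), "]  vs  -gamma/2 =", -0.1)
```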
It follows from Eq.(\[openlambda\]) that the Liouvillian gap $\Lambda=\gamma$, therefore, we expect an exponential long-time damping of $\tilde{\Delta}(t)$. This exponential behavior has been confirmed by numerical simulation \[Fig.\[fig3\](a)\]. Before entering the exponential stage, there is an initial period of algebraic damping, whose duration grows with chain length $L$ \[Fig.\[fig3\](a)\]. To better understand this feature, we plot the damping in each unit cell \[Fig.\[fig3\](b)\]. We find that the left end ($x=1$) enters the exponential damping immediately, and other sites enter the exponential stage sequentially, according to their distances to the left end. As such, there is a “damping wavefront” traveling from the left (“upper reach”) to right (“lower reach”). This is dubbed a “chiral damping”, which can be intuitively related to the fact that all eigenstates of $X$ are localized at the right end[@yao2018edge].
![ Time evolution of $\tilde{n}_x(t)=n_x(t)-n_x(\infty)$, which shows damping of particle number $n_x(t)$ towards the steady state. (a) $\tilde{n}_x(t)$ of the main model with dissipators given by Eq.(\[dissipator\]) (referred to as “model I”). Left: periodic boundary; Right: open boundary. The chiral damping is clearly seen in the open boundary case. The dark region corresponds to the exponential damping stage seen in Fig.\[fig3\]. (b) $\tilde{n}_x(t)$ of model II, whose damping matrix $X$ \[Eq.(\[XII\])\] has no NHSE. The Liouvillian gap is nonzero and the same for periodic and open boundary chains. Common parameters: $t_1=t_2=1; \gamma_g=\gamma_l=0.2$. \[chiral\] ](skin.eps "fig:"){width="9.6cm" height="4cm"} ![ Time evolution of $\tilde{n}_x(t)=n_x(t)-n_x(\infty)$, which shows damping of particle number $n_x(t)$ towards the steady state. (a) $\tilde{n}_x(t)$ of the main model with dissipators given by Eq.(\[dissipator\]) (referred to as “model I”). Left: periodic boundary; Right: open boundary. The chiral damping is clearly seen in the open boundary case. The dark region corresponds to the exponential damping stage seen in Fig.\[fig3\]. (b) $\tilde{n}_x(t)$ of model II, whose damping matrix $X$ \[Eq.(\[XII\])\] has no NHSE. The Liouvillian gap is nonzero and the same for periodic and open boundary chains. Common parameters: $t_1=t_2=1; \gamma_g=\gamma_l=0.2$. \[chiral\] ](noskin.eps "fig:"){width="9.6cm" height="4cm"}
More intuitively, the damping of $\tilde{n}_x(t)= n_x(t)-n_x(\infty)$ is shown in Fig.\[chiral\](a). In the periodic boundary chain it follows a slow power law. In the open boundary chain, a right-moving wavefront is seen. After the wavefront passes by $x$, the algebraically decaying $\tilde{n}_x(t)$ enters the exponential decay stage and rapidly diminishes.
The wavefront can be understood as follows. According to Eq.(\[evolve\]), the damping of $\tilde{\Delta}(t)$ is determined by the evolution under $\exp(Xt)$, which is just the evolution under $\exp(-\gamma t/2)\exp(-iH_\text{SSH}t)$, where $H_\text{SSH}$ is the non-Hermitian SSH Hamiltonian[@yao2018edge] (with an unimportant sign difference). Now the propagator $\la xs|\exp(-iH_\text{SSH}t)|x's'\ra$ can be decomposed as propagation of various momentum modes with velocity $v_k=\partial E/\partial k$. Due to the presence of an imaginary part $\kappa$ in the momentum, propagation from $x'$ to $x$ acquires an $\exp[-\kappa(x-x')]$ factor. If this factor can compensate $\exp(-\gamma t/2)$, exponential damping can be evaded, giving way to a power law damping. For simplicity we take $\gamma$ small, so that $\kappa\approx -\gamma/2t_1$, therefore $\exp[-\kappa(x-x')]\approx \exp[v_k(\gamma/2t_1)t]$ and the damping of propagation from $x'$ to $x$ is $\exp[(-\gamma/2+v_k\gamma/2t_1)t]$ for the $k$ mode. By a straightforward calculation, we have $\max(v_k)=t_2$ (for $t_1>t_2$) or $\sqrt{|t_1^2-\gamma^2/4|}\approx t_1$ (for $t_1\leq t_2$). Let us consider $t_1\leq t_2$ first. When $x>\max(v_k)t$, the propagation from $x'=x-\max(v_k)t$ to $x$ carries a factor $\exp[(-\gamma/2+\max(v_k)\gamma/2t_1)t]=1$; while for $x<\max(v_k)t$, we need the nonexistent $x'=x-\max(v_k)t<0$, therefore, compensation is impossible and we have exponential damping. This indicates a wavefront at $x=\max(v_k)t$. For $t_1=t_2=1$, we have $\max(v_k)\approx 1$, which is consistent with the wavefront velocity ($\approx 1$) in Fig.\[chiral\](a).
As a comparison, we introduce the “model II” (the model studied so far is referred to as “model I”) that differs from model I only in $L_x^g$, which is now $L_x^g=\sqrt{\frac{\gamma_g}{2}}(c_{xA}^\dag-ic_{xB}^\dag)$ \[compare it with Eq.(\[dissipator\])\]. The damping matrix is $$X(k) = i\left[(t_1+t_2\cos k)\sigma_x+ \left(t_2\sin k-i\frac{\gamma_l-\gamma_g}{2}\right)\sigma_y\right] -\frac{\gamma}{2}I, \label{XII}$$ which has no NHSE when $\gamma_l=\gamma_g$. Accordingly, the open and periodic boundary chains have the same Liouvillian gap, and chiral damping is absent \[Fig.\[chiral\](b)\].
In realistic systems, there may be disorder, parameter fluctuations, and other imperfections. Fortunately, the main results here are based on the presence of NHSE, which is a quite robust phenomenon unchanged by modest imperfections. As such, it is expected that our main predictions are robust and observable.
*Final remarks.–*(i) The chiral damping originates from the NHSE of the damping matrix $X$ rather than the effective non-Hermitian Hamiltonian. Unlike the damping matrix, the effective non-Hermitian Hamiltonian describes short time evolution. It is found to be $H_\text{eff} = \sum_{ij}c^\dag_i (h_\text{eff})_{ij} c_j -i \gamma_g L$, where $h_\text{eff}$, written in momentum space, is $h_\text{eff}(k) =(t_1+t_2\cos k)\sigma_x+(t_2\sin k-i\frac{\gamma_l-\gamma_g}{2})\sigma_y -i\frac{\gamma_l-\gamma_g}{2}I$. For $\gamma_g=\gamma_l$, $h_\text{eff}$ has no NHSE, though $X$ has. Although damping matrices with NHSE can arise quite naturally (e.g., in Fig.\[illustration\]), none of the previous models (e.g., Ref.[@Ashida2018]) we have checked has NHSE.
\(ii) The periodic-open contrast between the slow algebraic and fast exponential damping has important implications for experimental preparation of steady states (e.g. in cold atom systems). In the presence of NHSE, approaching the steady states in open-boundary systems can be much faster than estimations based on periodic boundary condition.
\(iii) It is interesting to investigate other rich aspects of non-Hermitian physics such as PT symmetry breaking[@peng2014parity] in this platform (Here, we have focused on the cases that the open-boundary $iX$ is essentially PT-symmetric, meaning that the real parts of $X$ eigenvalues are constant).
\(iv) When fermion-fermion interactions are included, higher-order correlation functions are coupled to the two-point ones, and approximations (such as truncations) are called for. Moreover, the steady states can be multiple[@Albert2018; @Zhou2017dissipative], in which case the damping matrix depends on the steady state approached, leading to even richer chiral damping behaviors. These possibilities will be left for future investigations.
This work is supported by NSFC under grant No. 11674189.
Derivation of the differential equation of correlation functions
================================================================
We shall derive Eq.(3) in the main article, which is reproduced as follows: $$\frac{d\Delta(t)}{dt}=i[h^T,\Delta(t)]-\{M^T_l+M_g,\Delta(t)\}+2M_g. \label{supp}$$
In fact, after inserting the Lindblad master equation into $d\Delta_{ij}/dt=\text{Tr}[c^{\dag}_{i}c_{j}d\rho/dt]$ and reorganizing the terms, we have $$\begin{aligned}
\frac{d\Delta_{ij}}{dt} &= i\,\text{Tr}\left(\left[H,c^\dagger_i c_j\right]\rho(t)\right)\\
&\quad+\sum_\mu\text{Tr}\left[\left(2L_\mu^{\dagger}\left[c^\dagger_i c_j,L_\mu\right]+\left[L_\mu^{\dagger}L_\mu,c^\dagger_i c_j\right]\right)\rho(t)\right]. \label{EOM}\end{aligned}$$ By a straightforward calculation, we have $$\begin{aligned}
\left[H,c_i^\dagger c_j\right] &=\sum_{mn}h_{mn}\left[c^\dagger_m c_n,c_i^\dagger c_j\right]\\
&=\sum_{mn}h_{mn}\left(-\delta_{mj}c_i^\dagger c_n+\delta_{in}c_m^\dagger c_j\right)\\
&=\sum_n \left(-h_{jn} c_i^\dagger c_n + h_{ni}c_n^\dagger c_j\right),\end{aligned}$$ therefore, the Hamiltonian commutator term in Eq.(\[EOM\]) is reduced to $i[h^T,\Delta(t)]_{ij}$, which is the first term of Eq.(\[supp\]).
The commutator terms from the loss dissipators $L_\mu^l=\sum_i D_{\mu i}^lc_i$ are $$\begin{aligned}
2\sum_\mu L_\mu^{l \dagger}\left[c^\dagger_i c_j,L_\mu^l\right] &= 2\sum_{\mu m}D^{l*}_{\mu m}c^\dagger_m\left[c_i^\dagger c_j,\sum_{n}D^l_{\mu n}c_n\right]\\
&=2\sum_{\mu mn}D^{l*}_{\mu m}D^l_{\mu n}c^\dagger_m\left[c_i^\dagger c_j,c_n\right]\\
&=2\sum_{\mu mn}D^{l*}_{\mu m}D^l_{\mu n}\left(-\delta_{in}c^\dagger_mc_j\right)\\
&=-2\sum_m (M_l)_{mi}c_m^\dagger c_j,\end{aligned}$$ and $$\begin{aligned}
\sum_\mu \left[L_\mu^{l\dagger}L_\mu^{l},c^\dagger_i c_j\right]&=\sum_{\mu mn}D_{\mu m}^{l*} D_{\mu n}^l \left[c^\dagger_mc_n,c_i^\dagger c_j\right]\\
&= \sum_{\mu mn}D_{\mu m}^{l*} D_{\mu n}^l \left(-\delta_{mj}c_i^\dagger c_n+\delta_{in}c_m^\dagger c_j\right)\\
&= \sum_n \left(-(M_l)_{jn} c^\dagger_i c_n +(M_l)_{ni}c_n^\dagger c_j\right).\end{aligned}$$ The corresponding terms in Eq.(\[EOM\]) sum to $-\{M^T_l,\Delta(t)\}_{ij}$.
Similarly, for the commutators from the gain dissipators, we have $$\begin{aligned}
2\sum_\mu L_\mu^{g \dagger}[c^\dagger_i c_j,L_\mu^g]
=2\sum_m (M_g)_{mj}c_m c_i^{\dagger},
\end{aligned}$$ and $$\begin{aligned}
\sum_\mu [L_\mu^{g\dagger}L_\mu^{g},c^\dagger_i c_j ]=\sum_n \left( -(M_g)_{nj}c_n c_i^\dagger +(M_g)_{in}c_j c^\dagger_n \right).
\end{aligned}$$ Writing $c_n c_i^\dag=\delta_{ni}-c^\dag_i c_n$, we see that the corresponding terms in Eq.(\[EOM\]) sum to $2(M_g)_{ij}\text{Tr}(\rho)-\{M_g,\Delta(t)\}_{ij} =2(M_g)_{ij} -\{M_g,\Delta(t)\}_{ij}$. Therefore, all terms at the right hand side of Eq.(\[EOM\]) sum to that of Eq.(\[supp\]).
[^1]: As a non-Hermitian matrix, $X$ can have exceptional points, and we have checked that our main results are qualitatively similar therein.
[^2]: Topological edge modes have been investigated recently in Ref. [@Caspel2019; @Kastoryano2019] in models without NHSE.
|
---
abstract: 'We study the prospects for observing H$_2$ and HD emission during the assembly of primordial molecular cloud cores. The primordial molecular cloud cores, which resemble those at the present epoch, can emerge around $1+z \sim 20$ according to recent numerical simulations. A core typically contracts to form the first generation of stars and the contracting core emits H$_2$ and HD line radiation. These lines show a double peak feature. The higher peak is the H$_2$ line of the $J=2-0$ (v=0) rotational transition, and the lower peak is the HD line of the $J=4-3$ (v=0) rotational transition. The ratio of the peaks is about 20, this value characterising the emission from primordial galaxies. The expected emission flux at a redshift of $1+z \sim 20$ (e.g. $\Omega_m = 0.3$ and $\Omega_\Lambda =0.7$) is $\sim 2 \times 10^{-7}$ Jy in the $J=2-0$ (v=0) line of H$_2$, and $\sim 8 \times 10^{-9}$ Jy in the $J=4-3$ (v=0) line of HD. The former line is observed at a frequency of 5.33179$\times 10^{11}$ Hz and the latter at 5.33388 $\times 10^{11}$ Hz. Since the frequency resolution of ALMA is about 40 kHz, the double peak is resolvable. While an individual object is not observable even by ALMA, the expected assembly of primordial star clusters on subgalactic scales can result in fluxes at the 2000-50 $\mu$Jy level. These are marginally observable. The first peak of H$_2$ is produced when the core gas cools due to HD cooling, while the second peak of HD occurs because the medium maintains thermal balance by H$_2$ cooling which must be enhanced by three-body reactions to form H$_2$ itself.'
author:
- |
Hideyuki Kamaya$^{1}$[^1] and Joseph Silk$^{2}$\
$^{1}$ Department of Astronomy, School of Science, Kyoto University, Kyoto, 606-8502, Japan\
$^{2}$ Astrophysics, Department of Physics, Denys Wilkinson Building, University of Oxford, Oxford OX1 3RH
title: ' On the possibility of observing the double emission line feature of H$_2$ and HD from primordial molecular cloud cores'
---
cosmology: observations — galaxies: formation — ISM: molecules — submillimeter
INTRODUCTION
============
Observation of the first generation of stars presents one of the most exciting challenges in astrophysics and cosmology. Hydrogen molecules (H$_2$) play an important role as a cooling agent of the gravitationally contracting primordial gas (Saslaw & Zipoy 1967) in the process of primordial star formation. It has been argued that H$_2$ is an effective coolant for the formation of globular clusters (Peebles & Dicke 1969), if such objects precede galaxy formation. The contraction of primordial gas was also studied in the pioneering work of Matsuda, Sato, & Takeda (1969). Thus, we expect the detection of H$_2$ line emission from primordial star-forming regions (Shchekinov 1991; Kamaya & Silk 2002; Ripamonti et al. 2002). Observational feedback from such Population III objects was examined in Carr, Bond, & Arnett (1984). In the context of the CDM (Cold Dark Matter) scenario for cosmic structure formation, Couchman & Rees (1986) suggested that the feedback from the first structures is not negligible for e.g. the reionisation of the Universe and the Jeans mass at this epoch (e.g. Haiman, Thoul, & Loeb 1996; Gnedin & Ostriker 1997; Ferrara 1998; Susa & Umemura 2000). Recent progress on the role of primordial star formation in structure formation is reviewed in Nishi et al. (1998) and its future strategy is discussed in Silk (2000).
More recently, the first structures in the Universe have been studied by means of very high resolution numerical simulations (Abel, Bryan, & Norman 2000). The numerical resolution is sufficient to study the formation of the first generation of molecular clouds. According to their results, a molecular cloud emerges with a mass of $\sim 10^5$ solar masses as a result of the merging of small clumps which trace the initial perturbations for cosmic structure formation. Due to the cooling of H$_2$, a small and cold prestellar object appears inside the primordial molecular cloud. It resembles the cores of molecular clouds at the present epoch. These numerical results are consistent with other numerical simulations by Bromm, Coppi, & Larson (1999; 2001). All the simulations predict that the primordial molecular clouds and their cores appear at an epoch of $1+z \sim 20$ ($z$ is redshift).
According to these results, the first generation of young stellar objects has a mass of $\sim$ 200 solar masses, a density of $\sim 10^5$ cm$^{-3}$, and a temperature of $\sim$ 200 K. We stress that these are cloud cores, and not stars. Inside the cores, a very dense structure appears. We call this a [*kernel*]{} for clarity of presentation (Kamaya & Silk 2001). Its density increases to a value as high as $\sim 10^8$ cm$^{-3}$ where three-body reactions for H$_2$ formation occur (Palla, Salpeter, & Stahler 1983, Omukai 2000). Further evolution is by fragmentation (Palla, Salpeter, & Stahler 1983) and/or collapse (Omukai & Nishi 1998). In the previous paper, we considered the collapsing kernel and the possibility for its observation by future facilities. H$_2$ line emission tracing the temperature structure of the kernel could potentially be detected by ASTRO-F[^2] and ALMA[^3] if the kernels are collectively assembled, as might be expected in a starburst. We also comment here that if dust grains are present in the star-forming region, the situation changes (e.g. Hirashita, Hunt, Ferrara 2002). For example, if dust grains of the present-day type are present, the three-body reaction dominates over grain-surface formation only when the gas density is above $10^{11}$ cm$^{-3}$.
Now in the low temperature regime of the typical core, HD is also an important coolant (e.g. Shchekinov 1986). Indeed, Uehara & Inutsuka (1998) show numerically that the core collapses at a relatively low temperature if HD exists. According to them, the primordial filament can reach a temperature of 50 K. Thus, the effect of HD should be examined. Indeed as long as HD is a dominant coolant, it is useful to consider the observational possibility of HD emission from primordial molecular cloud cores, as we do below.
Furthermore, because of the large electric dipole moment, LiH is also potentially an important coolant (Lepp & Shull 1984). Despite the very low abundance of LiH, which means that it is never the dominant coolant (as shown in Lepp & Shull), it can still be important if its lines are optically thin. We also comment on line cooling by LiH in this paper.
In addition to a contracting primordial cloud, another source of molecular emission lines is discussed by Ciardi & Ferrara (2001), who predict the detection of mid-infrared emission from a primordial supernova shell by NGST[^4]. According to Ciardi & Ferrara, the shell can cool and radiate in rotational-vibrational emission lines, and then mid-infrared emission is expected. In particular, the shell can also emit in the ground rotational line of H$_2$ ($J=2-0$ and $v=0$). In our first paper, we also predict the primordial emission of the ground rotational line of H$_2$. If and when primordial emission in the ground rotational transition is detected, it will be important to determine its origin. In this paper, we present a very simple answer to this question.
In §2, we formulate the cooling function for HD and LiH. For H$_2$, this was presented in a previous paper (Kamaya & Silk 2001). We summarise the cooling function of H$_2$ in the Appendix. In §3, we present a model structure for the primordial molecular cloud cores. In §4, we estimate that the luminosity from the cores is primarily due to the emission of H$_2$, with a secondary role played by HD, and that LiH is not important. In §5, the observational feasibility of detection is reviewed (differently from the discussion in our previous paper). The theory underlying the observational predictions is presented in §6, and our paper is summarised in the final section.
DESCRIPTION OF MOLECULAR EMISSION
=================================
Primordial molecular cloud cores are expected to be low temperature objects (Abel et al. 2000), where HD (Shchekinov 1986) and LiH (Lepp & Shull 1984) line cooling are frequently considered to be important. In this section, first of all, we present a formalism for the molecular line emission to estimate the luminosity from the cores. The lower rotational transition emission of H$_2$ is also important. Since a detailed description has been presented in the previous paper (Kamaya & Silk 2001), only a brief summary is given in the Appendix.
HD
--
HD has a weak electric dipole moment, since the proton in the molecule is more mobile than the deuteron; the electron then does not follow exactly the motion of the positive charge, producing a dipole (Combes & Pfenniger 1997). This moment has been measured in the ground vibrational state from the intensity of the pure rotational spectrum by Trefler & Gush (1968). The measured value is about 5.85 $\times 10^{-4}$ Debye (1 Debye $= 10^{-18}$ in cgs units). Since the first rotational level is about 128 K, the corresponding wavelength is 112 $\mu $m. Since the temperature of the primordial molecular cloud cores (Abel et al. 2000) is about 200 K, HD has sufficient potential to be the dominant coolant (Shchekinov 1986). In this paper, we set the D abundance to be $4\times 10^{-5}$, keeping our structural model of the cores consistent with the thermal conditions predicted by Uehara & Inutsuka (2000), which is the most recent and detailed analysis of a primordial cloud undergoing HD cooling.
For the cooling function, we adopt the formalism of Hollenbach & McKee (1979). According to this, $$\Lambda_{\rm HD,thin} = n \times n_{\rm HD} \times
\frac{4 (kT)^2 A_0}{n E_0 ( 1 + (n_{\rm cr}/n)
+ 1.5 (n_{\rm cr}/n)^{0.5})}
~~~{\rm erg}~{\rm cm}^{-3}~{\rm s}^{-1}
\eqno(1)$$ where $k$ is the Boltzmann constant, $n$ is the gas number density, $n_{\rm cr}$ is the critical density, $n_{\rm HD}$ is HD number density, $A_0$ is $5.12 \times 10^{-8}$ s$^{-1}$ (Abgrall et al. 1992), and $E_0 = 64 k$ erg for HD. Determining $ n_{\rm cr}$ exactly is a complex calculation, while for our purpose it is sufficient to consider $ n_{\rm cr}$ for the dominant cooling transition if we are interested in the dominant process. Hence, we estimate $n_{\rm cr}$ via the formalism of Hollenbach & McKee (1979) as $n_{\rm cr} = 7.7
\times 10^3$ cm$^{-3}$ $(T_{\rm g}/1000.0)^{0.5}$, where $T_{\rm g}$ is the thermal temperature of the gas. For the reader’s convenience, we note that our $\Lambda_{\rm HD,thin}$ differs from $L$ of Eq. (6.23) of Hollenbach & McKee (1979) by the factor $n \times n_{\rm HD}$, while the two are always consistent.
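For orientation, Eq.(1) can be written down directly in code (a sketch of our own; the prefactors are those quoted above, and the example conditions are merely illustrative of a core with the adopted D abundance):

```python
import numpy as np

K_BOLTZ = 1.380649e-16        # erg/K
A0_HD = 5.12e-8               # s^-1 (Abgrall et al. 1992)
E0_HD = 64.0 * K_BOLTZ        # erg

def lambda_hd_thin(n, n_hd, T):
    """Optically thin HD cooling rate of Eq.(1), in erg cm^-3 s^-1.

    n    : total gas number density [cm^-3]
    n_hd : HD number density [cm^-3]
    T    : gas temperature [K]
    """
    n_cr = 7.7e3 * (T / 1000.0) ** 0.5      # critical density adopted in the text
    x = n_cr / n
    return (n * n_hd * 4.0 * (K_BOLTZ * T) ** 2 * A0_HD
            / (n * E0_HD * (1.0 + x + 1.5 * x ** 0.5)))

# example: core-like conditions, n = 1e5 cm^-3 and an HD abundance of 4e-5
print(lambda_hd_thin(1.0e5, 4.0e-5 * 1.0e5, 200.0))
```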
However, the above formula is inadequate if the temperature is low since the population of the rotationally excited level is small. We consider the cooling function of HD in the optically thin and low temperature regime below 255 K (Galli & Palla 1998) if the density is below the critical density for a given temperature; $$\Lambda_{\rm HD;n\to 0} =
2\gamma_{10}E_{10} {\rm exp}(-E_{10}/kT_{\rm g})
+(5/3)\gamma_{21}E_{21} {\rm exp}(-E_{21}/kT_{\rm g}).
\eqno(2)$$ Here, $E_{10} = 128k$ erg, $E_{21} = 255k$ erg, $\gamma_{10} = 4.4 \times 10^{-12} + 3.6 \times 10^{-13} T_{\rm g}^{0.77}$ cm$^3$ s$^{-1}$, and $\gamma_{21} = 4.1 \times 10^{-12} + 2.1 \times 10^{-13} T_{\rm g}^{0.92}$ cm$^3$ s$^{-1}$. If the gas density is above the critical density, we estimate the cooling function by two methods. The first applies to the temperature range from $128$ K to $255$ K. In this range, we simply use Eq.(1) multiplied by exp($-256~{\rm K}/T_{\rm g}$), because the population of $J=2$ relative to that of $J=1$ is suppressed by the factor exp($-256~{\rm K}/T_{\rm g}$). The second applies if the temperature is below 128 K. In such a low-temperature case, a correction as simple as that of the first method is not applicable. Fortunately, the two-level system is then a good approximation, and we estimate the cooling function from the spontaneous de-excitation rate ($A$) of the most probable rotational level, $J=1$, and the collisional de-excitation rate. The de-excitation rate is estimated from the collisional cross section of de-excitation multiplied by the sound speed. The collisional cross section of de-excitation is typically $2.0\times 10^{-16}$ cm$^2$ for HD (Schaefer 1990). Our approximation breaks down at the outer edge of the core, where we extrapolate smoothly, maintaining thermal balance. Fortunately, this correction does not alter the main conclusion or the dominant cooling level of $J$.
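The low-temperature, sub-critical-density branch of Eq.(2) can be sketched in the same way (our own illustration; to place it on the same volumetric footing as Eq.(1), the sketch multiplies the rate of Eq.(2) by $n \times n_{\rm HD}$, which we take to be the intended normalisation):

```python
import numpy as np

K_BOLTZ = 1.380649e-16   # erg/K

def lambda_hd_lowT(n, n_hd, T):
    """HD cooling of Eq.(2) (T < 255 K, density below critical), times n * n_HD."""
    E10, E21 = 128.0 * K_BOLTZ, 255.0 * K_BOLTZ
    g10 = 4.4e-12 + 3.6e-13 * T ** 0.77      # cm^3 s^-1
    g21 = 4.1e-12 + 2.1e-13 * T ** 0.92      # cm^3 s^-1
    per_pair = (2.0 * g10 * E10 * np.exp(-E10 / (K_BOLTZ * T))
                + (5.0 / 3.0) * g21 * E21 * np.exp(-E21 / (K_BOLTZ * T)))
    return n * n_hd * per_pair               # erg cm^-3 s^-1

print(lambda_hd_lowT(1.0e3, 4.0e-5 * 1.0e3, 100.0))
```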
On the other hand, to find the cooling function in the optically thick regime, we always need the Einstein A coefficient. We calculate it approximately in the standard way. The coefficient is $$A_{J'J} = \frac{64\pi^4\nu_{J'J}^3}{3hc^3}D^2\frac{J}{2J+1}
\eqno(3)$$ where $\nu_{J'J}$ is the frequency of the $J'\to J$ transition, $h$ the Planck constant, $c$ the speed of light, $D$ the electric dipole moment, and $J$ the rotational quantum number. The results of Eq.(3) reproduce the exact values of Abgrall et al. (1982) for HD very well.
Once $J_{\rm d}(T(r))$, the dominant cooling level of $J$, is determined at each position of the core, we can modify the optically thin cooling function into the optically thick one by multiplying by the escape probability, $\epsilon$. Hence, to obtain the cooling function for the thick case, we define: $$\epsilon_{\nu_{J'J}}
= \frac{1 - {\rm exp}(-\tau_{\nu_{J'J}}) }{\tau_{\nu_{J'J}}}
\eqno(4)$$ and $$\tau_{\nu_{J'J}} =
\frac{A_{J'J} c^3}{8 \pi \nu^3_{J'J}}
\left(\frac{g_{J'}}{g_J} - \frac{n_{J'}}{n_{J}}\right) n_J
\frac{R_J}{\delta v}
\eqno(5)$$ where $R_J$ is the Jeans length, $\delta v$ is the velocity dispersion, and $g_J$ the statistical weight of $2J+1$. The velocity dispersion corresponds to Doppler broadening, and is estimated to be given by the sound speed. The inferred optically thick cooling function is then $$\Lambda_{\rm HD,thick} \equiv
\Lambda_{\rm HD,thin} \times \epsilon_{\nu_{J'J}} .
\eqno(6)$$ The procedure of Jeans length shielding in (5) is useful for a simple analytical analysis (Low & Lynden-Bell 1976; Silk 1977).
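To make the prescription of Eqs.(3)-(6) concrete, a brief sketch follows (our own illustration; the level populations, Jeans length and velocity dispersion in the example are simplified placeholders rather than the full core model of the next section):

```python
import numpy as np

H_PLANCK = 6.62607015e-27    # erg s
C_LIGHT = 2.99792458e10      # cm s^-1
DEBYE = 1.0e-18              # esu cm

def einstein_A(nu, D_debye, J):
    """Eq.(3): spontaneous emission coefficient, with J the rotational quantum
    number appearing in the formula and D the dipole moment in Debye."""
    D = D_debye * DEBYE
    return (64.0 * np.pi ** 4 * nu ** 3 / (3.0 * H_PLANCK * C_LIGHT ** 3)
            * D ** 2 * J / (2.0 * J + 1.0))

def escape_probability(tau):
    """Eq.(4): escape probability for a line of optical depth tau."""
    return (1.0 - np.exp(-tau)) / tau

def tau_line(A, nu, nJ, nJp, gJ, gJp, R_jeans, dv):
    """Eq.(5): optical depth of the J' -> J line over a Jeans length R_jeans."""
    return (A * C_LIGHT ** 3 / (8.0 * np.pi * nu ** 3)
            * (gJp / gJ - nJp / nJ) * nJ * R_jeans / dv)

# illustrative numbers only: the HD J=1-0 line near 112 micron (nu ~ 2.7e12 Hz),
# dipole moment 5.85e-4 Debye, and placeholder level densities [cm^-3]
A10 = einstein_A(2.7e12, 5.85e-4, 1)
tau = tau_line(A10, 2.7e12, nJ=4.0, nJp=1.0, gJ=1.0, gJp=3.0, R_jeans=3.0e17, dv=1.0e5)
print(A10, tau, escape_probability(tau))
```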
LiH
---
The LiH molecule has a much larger dipole moment of 5.9 Debye and the first rotational level is only at 21 K. Thus, although its abundance is very small ($\sim 10^{-10}$; we adopt this value), there is ample possibility for LiH to be an important coolant (Lepp & Shull 1984). For LiH, fortunately, the formalism of Hollenbach & McKee (1979) is a very good approximation. Then, we use Eq.(1) with parameters for LiH in the thin regime, and the escape probability correction is applied if the optical depth of the dominant line emission is above unity. The adopted parameters are $A_0 = 0.0113$ s$^{-1}$ and $E_0/k = 21.0$ K, and $n_{\rm LiH}$ instead of $n_{\rm HD}$. For the corresponding de-excitation rates, we use the results of detailed balance analysis (Bougleux & Galli 1997). The expected collisional de-excitation cross section is found to be approximately $5.6 \times 10^{-16}$ cm$^{2}$; the de-excitation rate then becomes $2\times 10^{-10}$ cm$^{3}$ s$^{-1}$ at $T_{\rm g}=3000$ K. Since the de-excitation cross sections tend to be nearly constant at low energies for collisions with neutral particles, the approximate de-excitation rate at lower temperature can be obtained by multiplying by the factor $(T_{\rm g}/3000 ~{\rm K})^{0.5}$. If a comparison with the case of collisions with He is of interest, a reduction factor of $3^{0.5}$ is adopted to account for the difference in the reduced mass (Bougleux & Galli 1997). Their appendix B is useful for further details. If only the optically thin cooling function is needed, appendix A.3 of Galli & Palla (1998) is useful as long as the gas density is below any critical density.
STRUCTURAL MODEL OF PRIMORDIAL MOLECULAR CLOUD CORE
===================================================
According to recent numerical simulations, molecular cloud cores appear prior to the formation of population III objects. Cores contain a dense and cool kernel. In our previous paper we considered how much H$_2$ line luminosity the kernel emits. In the current paper we consider the cores themselves. Since the cores have a lower temperature than the kernels (e.g. Uehara & Inutsuka 2000), HD and LiH must be considered.
The emission properties are determined by the density and temperature structure of the cores. Fortunately, a reasonably simple model is possible according to Uehara & Inutsuka (2000) and Omukai (2000). We find fitting formulae for the distribution of H$_2$ and for the temperature which, by trial and error, approximately satisfy thermal balance (see the next section). For $f_2(r)$ (solid line in Fig.1); $$f_2(r) = 0.0001 + 0.495 \times
\frac{ {\rm exp}\left( \frac{n(r)}{10^{11.0} {\rm cm^{-3}}} \right)
-{\rm exp}\left( -\frac{n(r)}{10^{11.0} {\rm cm^{-3}}} \right)}
{ {\rm exp}\left( \frac{n(r)}{10^{11.0} {\rm cm^{-3}}} \right)
+{\rm exp}\left( -\frac{n(r)}{10^{11.0} {\rm cm^{-3}}} \right)}
\eqno(7)$$ where $n(r) = 10^{8}~{\rm cm^{-3}} (r/0.01~{\rm pc})^{-2.2}$ (the maximum of $f_2(r)$ is set to be 0.5 by definition). This describes approximately the effect of the three-body reaction forming H$_2$. For $T(r)$ (dashed line in Fig.1); $$T(r) = 50 ~{\rm K}
\left( \frac{n(r)}{10^{4.0} {\rm cm^{-3}}} \right)^{\frac{1}{5}}
. \eqno(8)$$ We hypothesise that all D and Li are in molecular form above a density of $10^4$ cm$^{-3}$. This is partially supported by Uehara & Inutsuka (2000).
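A minimal Python sketch of the adopted core structure (the fitting formulae above, with the hyperbolic-tangent form written explicitly) is given below; function names are ours.

```python
import numpy as np

def n_h(r_pc):
    """Number density (cm^-3) as a function of radius in pc."""
    return 1.0e8 * (r_pc / 0.01) ** (-2.2)

def f2(r_pc):
    """H2 fraction; tends to 0.5 once the three-body reaction sets in."""
    return 0.0001 + 0.495 * np.tanh(n_h(r_pc) / 1.0e11)

def temperature(r_pc):
    """Gas temperature in K."""
    return 50.0 * (n_h(r_pc) / 1.0e4) ** 0.2
```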
To examine the total emission energy, we need to determine the mass distribution around the centre of the core, where the first star emerges. We also assume a spherical configuration for the mass distribution. According to Omukai & Nishi (1998), a high accretion rate is realised if a self-similar collapse occurs with a specific heat ratio of 1.1 (e.g. Suto & Silk 1988). The density distribution is described by $$\frac{\partial {\rm ln} \rho (r)}{\partial {\rm ln} r}
= \frac{-2}{2-\gamma} .
\eqno(9)$$ Here, $r$ is the radial distance from the centre, $\rho (r)$ is the mass density of atomic H, H$_2$ and He, and $\gamma $ is the specific heat ratio. We set the mass-density distribution of a protostellar core with $\gamma = 1.1$ as $\rho(r) = \rho_{\rm
0}(r/r_0)^{-2.2}$ where $\rho_{\rm 0}$ is 2.0$\times 10^{-20}$ g cm$^{-3}$ and $r_0$ is 0.63 pc. These values are appropriate for fitting a typical protostellar core of Abel et al. (2000).
MOLECULE EMISSION LUMINOSITY
============================
We are interested in the dominant rotational emission, and hence calculate $$J_{\rm max} =
\frac
{\int_{\rm core} 4\pi r^2 \Lambda_{i, {\rm thick}}(r) J_{\rm d}(T(r)) dr}
{\int_{\rm core} 4\pi r^2 \Lambda_{i, {\rm thick}}(r) dr}.
\eqno(10)$$ Here, for HD ($i$=HD) $$J_{\rm d}(T(r)) = J_{\rm d, HD}(T(r)) \equiv
\left( \frac72 \times \frac{T(r)}{64.0~{\rm K}} \right)^{0.5}
, \eqno(11)$$ and for LiH ($i$=LiH) $$J_{\rm d}(T(r)) = J_{\rm d, LiH}(T(r)) \equiv
\left( \frac72 \times \frac{T(r)}{21.0~{\rm K}} \right)^{0.5}
, \eqno(12)$$ and for H$_2$ ($i$=H$_2$) $$J_{\rm d}(T(r)) = J_{\rm d, H_2} (T(r)) \equiv
\left( \frac72 \times \frac{T(r)}{85.0~{\rm K}} \right)^{0.5}
. \eqno(13)$$ Here, each $J_{\rm d}(T(r))$ denotes the $J$-level that contributes dominantly to the statistical weight at temperature $T(r)$ (Silk 1983). Using our fitting formulae for $f_2(r)$ and $T(r)$, we find that $J_{\rm max}$ is about 4.0 for HD, 17.0 for LiH, and 2.3 for H$_2$. Then, to estimate the dominant line luminosity $L_{\rm thick}$ of Eq.(6), we use $J=4$ for HD, $J=17$ for LiH, and $J=2$ for H$_2$. The exceptional treatment for H$_2$ (reviewed in Kamaya & Silk 2001) is given in the Appendix, according to which the $J=2-0$ transition of H$_2$ can be discussed independently of any other transition.
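The cooling-weighted average of Eq.(10) can be sketched as follows (Python); the optically thick cooling function entering the weight is assumed to be tabulated elsewhere, so it is passed in as an array, and the function names are ours.

```python
import numpy as np

E_ROT = {"HD": 64.0, "LiH": 21.0, "H2": 85.0}   # K, as in Eqs.(11)-(13)

def j_dominant(t_gas, species):
    """Eqs.(11)-(13): J-level dominating the statistical weight at T."""
    return np.sqrt(3.5 * t_gas / E_ROT[species])

def j_max(radii_cm, cooling_thick, t_of_r, species):
    """Eq.(10): cooling-weighted average of j_dominant over the core.
    `cooling_thick` and `t_of_r` are arrays tabulated on `radii_cm`."""
    w = 4.0 * np.pi * radii_cm**2 * cooling_thick
    jd = j_dominant(t_of_r, species)
    return np.trapz(w * jd, radii_cm) / np.trapz(w, radii_cm)
```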
The estimated total luminosity of a single core ($\int_{\rm core} 4\pi r^2 \Lambda_{\rm thick} dr$), $L_{\rm single}$, is $4.2 \times 10^{35}$ erg s$^{-1}$. The individual contributions are $4.1 \times 10^{35}$ erg s$^{-1}$ for H$_2$, $2.1 \times 10^{34}$ erg s$^{-1}$ for HD, and $6.9 \times 10^{30}$ erg s$^{-1}$ for LiH. First of all, we find that the contribution of LiH is very small. This contradicts the conclusion of Lepp & Shull (1984), but the reason is simple: although Lepp & Shull regarded all LiH lines as optically thin, in our case they are optically thick precisely in the region where LiH would be important as suggested by Lepp & Shull.
The line broadening, in units of velocity, is estimated to be $\sim \delta v_{\rm D} = \left( {2k T}/{\mu (r) m_{\rm H}} \right)^{0.5}$, where $\mu (r)$ is the mean molecular weight at each position. Adopting this, the luminosity per Hz ($T=1000$ K is assumed) is as follows: the H$_2$ rotational emission of $J=2-0$ (1.1$\times 10^{13}$ Hz; 2.8$\times 10^{-2}$ mm) is 4.0 $\times 10^{27}$ erg s$^{-1}$ Hz$^{-1}$, the HD rotational emission of $J=4-3$ (1.1$\times 10^{13}$ Hz; 2.8$\times 10^{-2}$ mm) is 2.0 $\times 10^{26}$ erg s$^{-1}$ Hz$^{-1}$, and the LiH rotational emission of $J=17-16$ (7.4$\times 10^{12}$ Hz; 4.0$\times 10^{-2}$ mm) is 9.8 $\times 10^{22}$ erg s$^{-1}$ Hz$^{-1}$. Here, we consider the dominant $J_{\rm max}$ to $J_{\rm max}-1$ transitions for HD and LiH, and the $J_{\rm max}$ to $J_{\rm max}-2$ transition for H$_2$.
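The conversion from line luminosity to luminosity per Hz described above can be sketched as follows; the assumed $T=1000$ K and mean molecular weight $\mu\simeq 2$ (our choice for the H$_2$-dominated gas) reproduce the quoted $\sim 4\times 10^{27}$ erg s$^{-1}$ Hz$^{-1}$ for H$_2$.

```python
import numpy as np

K_BOLTZ = 1.381e-16     # erg K^-1
M_H     = 1.673e-24     # g
C_LIGHT = 2.998e10      # cm s^-1

def doppler_width(t_gas, mu):
    """Thermal velocity dispersion (cm s^-1)."""
    return np.sqrt(2.0 * K_BOLTZ * t_gas / (mu * M_H))

def luminosity_per_hz(l_line, nu0, t_gas=1000.0, mu=2.0):
    """Line luminosity per unit frequency (erg s^-1 Hz^-1); the line width is
    the Doppler width at the assumed temperature and molecular weight."""
    return l_line / (nu0 * doppler_width(t_gas, mu) / C_LIGHT)

# H2 J=2-0: 4.1e35 erg/s at 1.06e13 Hz gives ~4e27 erg/s/Hz, as in the text
print(luminosity_per_hz(4.1e35, 1.06e13))
```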
The total cooling rate at each position is summarised in figure 2. Below $10^8$ cm$^{-3}$, HD is the dominant coolant, while H$_2$ dominates above this density. This confirms the results of Lepp & Shull (1984). We also check whether our model of the molecular cloud core is reasonable. To do so, we estimate the heating rate. In a contracting cloud without dust, compressional heating is generally dominant. When the three-body reactions for H$_2$ formation occur, chemical heating is also important. We consider these two heating mechanisms. For compressional heating, we estimate $c_s^2/t_{\rm ff}$ (e.g. Omukai 2000). The chemical energy released per H$_2$ molecule formed by the three-body reaction is 4.48 eV. Our result is displayed in figure 3. The ratio of cooling to heating is given in figure 4. As clearly shown in this figure, the deviation from thermal balance is within a factor of three. Thus, our model structure for the core is consistent with thermal balance between molecular line cooling and the expected heating over all of the density range considered in this paper.
OBSERVATIONAL FEASIBILITY OF DOUBLE PEAK EMISSION
=================================================
According to our first paper (Kamaya & Silk 2001), some bright emission lines from an assembly of primordial young stellar objects typically have sub-mJy fluxes as long as the redshift $z$ is about 10–40. Since we are interested in this range of redshift, it is sufficient to discuss only the typical case of $z=19$. The same emission lines from other redshifts are expected to have a similar flux level. This is because the flux per unit frequency benefits from the redshift effect (e.g. Eq.(6) of Ciardi & Ferrara 2001) and because the sound speed in the primordial star-forming region should also be redshifted (Toleman 1966, Kamaya & Silk 2001). Furthermore, our estimate is reasonable as long as we do not consider redshifts smaller than $z\sim 6$, at which the reionisation of the Universe occurs and the adopted assumptions break down.
In the current paper we need the redshifted observed frequency and wavelength. For the three lines, we obtain 0.53$\times 10^{12}$ Hz; 5.6$\times 10^{-1}$ mm for $J=2-0$ of H$_2$, 0.53$\times 10^{12}$ Hz; 5.6$\times 10^{-1}$ mm for $J=4-3$ of HD, and 0.37$\times 10^{12}$ Hz; 8.1$\times 10^{-1}$ mm for $J=17-16$ of LiH. The adopted $1+z$ is 20, according to the results of recent numerical simulations. The predicted frequencies are located in the range covered by ALMA. Hence, we discuss the observational possibility of detection by ALMA in the current paper. ALMA is a ground-based radio interferometric facility and will consist of many 12-m antennas. A detailed recent review is found in Takeuchi et al. (2001). According to this, the 5$\sigma $ sensitivities at 350, 450, 650 and 850 $\mu$m and at 1.3 and 3.0 mm are expected to be 390, 220, 120, 16, 7.5 and 4.6 $\mu$Jy, respectively (8-GHz bandwidth).
The most prominent feature is predicted to be a double peak of HD and H$_2$ emission. A schematic view is presented in figure 5, drawn in the rest frame of the core (i.e. the lines are not redshifted). The most obvious feature is the difference in the peak intensities of the two lines: the ratio of H$_2$ to HD is about 20. Detection of this double peak feature would confirm the presence of primordial molecules in forming galaxies. The difference in frequency between the two lines is about $10^8$ Hz; the precise values are 5.3317 $\times 10^{11}$ Hz for H$_2$ (J=2-0) and 5.3338 $\times 10^{11}$ Hz for HD (J=4-3). This frequency difference is easily resolved by ALMA, which has a frequency resolution of about $4
\times 10^4$ Hz.
To estimate the observed flux, we determine the distance to the object. Then, we calculate it numerically from the following standard formula: $$D_{19} = \frac{c}{H_0}
\int_0^{19} \frac{dz}{(\Omega_\Lambda +
\Omega_{\rm M}(1+z)^{3.0})^{0.5}}
\eqno(14)$$ where $H_0$ is the Hubble parameter, taken to be 75 km sec$^{-1}$ Mpc$^{-1}$, $\Omega_\Lambda$ is the cosmological constant parameter, and $\Omega_{\rm M}$ is the density parameter. We consider the case $\Omega_\Lambda + \Omega_{\rm M} =1$, since our discussion is based on the numerical results (e.g. Abel et al. 2000). Adopting $D_{19}$, we obtain the observed fluxes of each of the lines. The results are summarised in table 1. Although the line-broadening is estimated from $\nu_0 \delta v_{\rm D} /c$, we also correct it for the redshift effect ($\nu_0$ is the central frequency). For the three parameter sets of ($\Omega_\Lambda,\Omega_{\rm M}$), we obtain $D_{19} = 0.62 \times 10^{10}$ pc ($\Omega_\Lambda=0,\Omega_{\rm M}=1$), $D_{19} = 1.00\times 10^{10}$ pc ($\Omega_\Lambda=0.7,\Omega_{\rm M}=0.3$), and $D_{19} = 1.48 \times 10^{10}$ pc ($\Omega_\Lambda=0.9,\Omega_{\rm M}=0.1$). According to Table 1, the rotational line fluxes are 0.16 $\mu$Jy for $J=2-0$ (v=0) of H$_2$, 0.0083 $\mu$Jy for $J=4-3$ (v=0) of HD, and 0.000001 $\mu$Jy for $J=17-16$ (v=0) of LiH for ($\Omega_{\rm M},\Omega_\Lambda)=(0.3,0.7)$. Thus, we conclude that a single core is not easily observable even by ALMA.
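The distance integral of Eq.(14) is easily evaluated numerically; the sketch below (our own, with a simple trapezoidal integration) reproduces the three values of $D_{19}$ quoted above and in Table 1.

```python
import numpy as np

C_KM_S = 2.998e5   # km s^-1
H0     = 75.0      # km s^-1 Mpc^-1

def comoving_distance_pc(z, omega_m, omega_l, n=10000):
    """Eq.(14): line-of-sight comoving distance to redshift z (flat universe), in pc."""
    zz = np.linspace(0.0, z, n)
    integrand = 1.0 / np.sqrt(omega_l + omega_m * (1.0 + zz) ** 3)
    return (C_KM_S / H0) * np.trapz(integrand, zz) * 1.0e6   # Mpc -> pc

for om, ol in [(1.0, 0.0), (0.3, 0.7), (0.1, 0.9)]:
    print(om, ol, comoving_distance_pc(19.0, om, ol) / 1e10)
# ~0.62, 1.00 and 1.48 (in units of 1e10 pc), matching Table 1
```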
However, we note that if the cores are collectively assembled on a sub-galactic scale (Shchekinov 1991), the agglomeration can be detected by ALMA (Kamaya & Silk 2001). We now estimate the number of cores in a primordial galaxy. Firstly, we must consider the lifetime of a core able to show double peak emission. The required density and temperature distribution for such a core is possible when the accretion rate of the contracting gas is about 0.01 $M_\odot $ year$^{-1}$ (Omukai & Nishi 1998; Kamaya & Silk 2001). The lifetime of a core with double peak emission is then about $10^4$ years, as long as a massive star of $\sim 100 M_\odot$ forms inside the core. A $10^6 M_\odot$ cloud (Abel et al. 2000) is expected to form 1000 such cores during its lifetime at a plausible mass efficiency of 0.1. Taking its lifetime to be $\sim 10^5$ years, the free-fall time of a $10^6 M_\odot$ cloud, we find that its luminosity can reach 100 $L_{\rm single}$.
Next, we consider an entire primordial galaxy of $10^{11} M_\odot$. It may form $10^9$ supernovae over its entire lifetime. We assume that it forms $10^7$ primordial massive stars, since this number of massive stars provides enrichment to roughly 1 percent of the solar metallicity. If the burst of formation of primordial molecular cloud cores occurs in the central region of the primordial galaxy (we postulate 1 kpc as the size of the core-forming region), then there are $10^4$ such giant molecular clouds of $10^6 M_\odot$ in the core-forming region, as long as the massive stars form with a high accretion rate of 0.01 $M_\odot$ year$^{-1}$ (Kamaya & Silk 2001). During this phase, the cumulative luminosity would be $10^6$ $L_{\rm single}$. However, the dynamical time-scale in the core-forming region may be $\sim 10^7$ years (e.g. the duration of the starburst). We then obtain $10^4~L_{\rm single}$ for the luminosity of the primordial galaxy undergoing its first star formation burst, conservatively assuming $\sim 10^5$ years as the lifetime of the giant molecular clouds. Finally, for molecular-line emitting proto-galaxies, we obtain 1.6 mJy for $J=2-0$ of H$_2$, 0.083 mJy for $J=4-3$ of HD and 0.00001 mJy for $J=17-16$ of LiH ($\Omega_{\rm M} = 0.3$ and $\Omega_\Lambda = 0.7$). The estimated flux levels of H$_2$ and HD are consistent with Shchekinov (1991), in which only the lowest transition lines of both molecules were discussed. These values should, moreover, be regarded as optimistic; under more realistic conditions our simple time-dependent summation scheme might break down.
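The chain of factors leading from $L_{\rm single}$ to the proto-galaxy luminosity can be summarised as follows (a sketch using only the numbers quoted in the text; the function name is ours).

```python
def luminosity_chain():
    """Bookkeeping of the factors quoted in the text, in units of L_single."""
    cores_per_cloud = 0.1 * 1.0e6 / 100.0          # mass efficiency 0.1, 100 Msun per core
    per_cloud = cores_per_cloud * (1.0e4 / 1.0e5)  # core lifetime / cloud lifetime -> 100
    n_clouds = 1.0e7 / cores_per_cloud             # 1e7 massive stars / cores per cloud -> 1e4
    per_galaxy = per_cloud * n_clouds * (1.0e5 / 1.0e7)  # cloud lifetime / starburst duration
    return per_cloud, per_galaxy                   # (100, 1e4)

print(luminosity_chain())
```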
According to the current status of the instrumentation for ALMA, the detectable range is 80-890 GHz, which covers our predicted redshifted emission. Unfortunately, however, the atmospheric transmission is poor for the feature predicted at 530 GHz. It may therefore be necessary to detect the H$_2$ and HD emission from protogalaxies forming at $1+z$ larger than 20. Since $\Omega_M < 1$, which seems reasonable if $\Omega_\Lambda = 0.7$, the formation of the cores is promoted to earlier epochs. We can then expect to detect the emission at a frequency lower than 530 GHz, and detection of the H$_2$ and HD double emission from the assembly of primordial molecular cores should be feasible. Of course, we also expect emission later than $1+z=20$, since the Universe was reionised at $z \sim 6$; the latter case is also favourable for ALMA because of the better sensitivity.
In the previous paper we advocated a deep blank-field survey. In the current paper we propose another observational strategy to detect the double emission feature from the primordial molecular cloud cores. Measurement of the number counts of submillimeter sources is planned as one key galaxy evolution project for ALMA (Takeuchi et al. 2001). According to the predictions, many submillimeter sources are expected below sub-mJy levels. Once the survey is operating, we can utilise the submillimeter source count data. Firstly, we select the faint sources around QSOs, since QSOs are expected at high density peaks in the usual hierarchical model of cosmic structure formation from primordial Gaussian-distributed density fluctuations; primordial galaxies are concentrated around QSOs. Secondly, we roughly check the spectra. Primordial galaxies do not have dust, while evolved systems do. This means the continuum flux level at submillimeter wavelengths is significantly different between primordial and evolved galaxies. Unfortunately, it seems that the faint sources will be observed in the low frequency-resolution mode according to Takeuchi et al. (2001). However, one can expect to find unresolved double peaks as a bump in the observed spectral energy distribution, which is a feature distinct from the blackbody emission of dust. Finding the bump due to the double emission lines should not be a difficult task. After this simple selection, we re-observe the candidate primordial submillimeter sources in the high frequency-resolution mode. Finally, it is expected that the primordial cores can be discovered through their double peak spectral features due to H$_2$ and HD, in which the ratio of the two peaks, about 20, confirms the hypothesis of primordial emission.
Finally, we stress that we are able to distinguish the H$_2$ emission from an assembly of primordial molecular clouds from that of supernova shells in primordial star-forming regions (Ciardi & Ferrara 2001). According to figure 1 of Ciardi & Ferrara, the emission of the ground rotational line of H$_2$ $(v=0)$, which is never the dominant cooling line because of the high temperature of the shell, can have a flux level of sub-mJy to mJy at $z\sim $ 10. This flux level is obtained by adopting a sound velocity of 10 km sec$^{-1}$ in the shell gas. Thus, both our predictions and those of Ciardi & Ferrara yield similar flux levels for the same molecular line of H$_2$. The difference between the two predictions is the HD line associated with the H$_2$ line in our case. To conclude, if and when the double peak emission is found, it would strongly indicate primordial emission from the assembly of the first star-forming molecular clouds.
THEORY OF DOUBLE PEAK EMISSION
==============================
How is the double-peak emission produced? We describe a theory for double-peak emission from primordial molecular cloud cores. In the entire region of the core, the temperature is significantly lower than 515 K, which corresponds to the transition energy of H$_2$ (J=2-0). Such low temperatures are possible only when HD cooling is efficient. Then, the large volume of the low temperature region makes the first and larger peak a rotational emission line of H$_2$ (J=2-0). This is possible because of the thermalisation of the gas as long as the gas temperature is above 100 K (i.e. $\sim 2^2 \times 85 \times 2/7$ K for $J=2$), since the rotational level $J=2$ then has a chance of being excited.
The situation of the second peak is a little more complicated. We define a characteristic density for the onset of the three-body reaction as $n_{\rm three} \sim 10^8$ cm$^{-3}$. Around this density, the cooling rates of HD and H$_2$ are comparable. Once it is confirmed that the temperature is determined by the balance of HD and H$_2$ cooling with adiabatic heating, we find that a temperature of $\sim 300$ K is realised at a density of $n_{\rm three}$. In other words, if the cooling of H$_2$ were neglected, the temperature would be lower than 300 K, since the corresponding heating would not be permitted there. Since a temperature of 300 K permits the thermal excitation of $J=4$ of HD, the rotational emission of HD (J=4-3) becomes possible. This emission occurs because the gas temperature can reach 300 K owing to the H$_2$ cooling balancing the adiabatic heating of the gravitationally contracting core. This efficiency of H$_2$ cooling is realised only when the three-body reaction among H atoms occurs. Thus, the second peak is possible owing to the three-body reaction forming H$_2$.
SUMMARY
=======
One of the main goals of cosmology is to find the first generation of stars. When they form, strong H$_2$ emission and weak HD emission are expected as a double peak feature. The weak HD emission is important since it distinguishes the H$_2$ emission of the primordial molecular clouds from that of the primordial supernova shells (Ciardi & Ferrara 2001). We have examined the observational feasibility of the detection of the double peak emission. According to our analysis, the double peak feature is marginally detectable by ALMA. However, the atmospheric transmissivity for the expected typical emission is low for the ALMA project, as pointed out in our previous paper. If future telescopes are able to detect the double peak emission from primordial molecular cloud cores, it can be established whether primordial cores and the first stars have formed at a redshift different from $z=19$.
ACKNOWLEDGEMENT {#acknowledgement .unnumbered}
===============
H.K. is grateful to Profs. S.Inagaki, S.Mineshige, and Yu.Shchekinov for their encouragement. We have appreciated the referee’s careful reading very much.
APPENDIX {#appendix .unnumbered}
========
We re-formulate the energy loss due to line emission of H$_2$. The details are found in Kamaya & Silk (2001). Line emission of H$_2$ occurs through transitions among rotational and vibrational states. We can basically use the formulation of Hollenbach & McKee (1979) for the rotational and vibrational emission of H$_2$. Adopting their notation, we get: $$L_{\rm r} \equiv
\left(
\frac{9.5\times 10^{-22} T_3^{3.76}}{1+0.12T_3^{2.1}}
{\rm exp}\left[ -\left( \frac{0.13}{T_3}\right)^3 \right]
\right)$$ $$+
3.0 \times 10^{-24}{\rm exp}\left( -\frac{0.51}{T_3} \right)
~~~{\rm erg}~{\rm s}^{-1}, \eqno({\rm A}1)$$ then, we estimate the cooling rate as $$\Lambda ({\rm rot}) =
n_{\rm H_2} L_{\rm r} ( 1 + \zeta_{\rm Hr} )^{-1}
+ n_{\rm H_2} L_{\rm r} ( 1 + \zeta_{\rm H_2r} ) ^{-1}
~~~{\rm erg}~{\rm cm}^{-3}~{\rm s}^{-1} ,\eqno({\rm A}2)$$ where $\zeta_{\rm Hr} = n_{\rm Hcd}({\rm rot})/n_{\rm H}$, $\zeta_{\rm H_2r} = n_{\rm H_2cd}({\rm rot})/n_{\rm H_2}$, $n_{\rm Hcd}({\rm rot}) = A_J/\gamma_J^{\rm H}$, $n_{\rm H_2cd}({\rm rot}) = A_J/\gamma_J^{\rm H_2}$, and $A_J$ is the Einstein $A$ value for the $J$ to $J-2$ transition; $\gamma_J^{\rm H}$ is the collisional de-excitation rate coefficient due to neutral hydrogen; and $\gamma_J^{\rm H_2}$ is that due to molecular hydrogen. The first term of $L_{\rm r}$ denotes the cooling coefficient due to the higher rotational levels ($J>2$) and the second one that due to the $J = 2 \to 0$ transition. The vibrational quantum number is zero in both terms.
For the vibrational transitions; $$L_{\rm v} =
6.7 \times 10^{-19} {\rm exp}\left[ -\left( \frac{5.86}{T_3}\right)^3 \right]
+
1.6 \times 10^{-18}{\rm exp}\left( -\frac{11.7}{T_3} \right)
~~{\rm erg}~{\rm s}^{-1}, \eqno({\rm A}3)$$ then, we get the cooling rate as being $$\Lambda ({\rm vib}) =
n_{\rm H_2} L_{\rm v} ( 1 + \zeta_{\rm Hv} )^{-1}
+ n_{\rm H_2} L_{\rm v} ( 1 + \zeta_{\rm H_2v} )^{-1}
~~{\rm erg}~{\rm cm}^{-3}~{\rm s}^{-1} ,\eqno({\rm A}4)$$ where $\zeta_{\rm Hv} = n_{\rm Hcd}({\rm vib})/n_{\rm H}$, and $\zeta_{\rm H_2v} = n_{\rm H_2cd}({\rm vib})/n_{\rm H_2}$. Here, $n_{\rm Hcd}({\rm vib}) = A_{ij}/\gamma_{ij}^{\rm H}$, and $n_{\rm H_2cd}({\rm vib}) = A_{ij}/\gamma_{ij}^{\rm H_2}$ where $A_{ij}$ is the Einstein $A$ value for the $i$ to $j$ transition; $\gamma_{ij}^{\rm H}$ is the collisional de-excitation rate coefficient due to neutral hydrogen; and $\gamma_{ij}^{\rm H_2}$ is that due to molecular hydrogen. In our formula, only the levels $v=0,1$ and 2 are considered. This is sufficient since the temperature is lower than 2000 K. The first term of $L_{\rm v}$ is the cooling coefficient for $\Delta v =1$ and the second term is that for $\Delta v = 2$. The second term affects only the central region of our structural model and thus does not contribute significantly to our conclusion. Combining Eq.(A2) and Eq.(A4), we obtain the total cooling rate as $\Lambda^{\rm thin} = \Lambda ({\rm rot}) + \Lambda ({\rm vib}) $ erg cm$^{-3}$ s$^{-1}$ in the optically thin regime. When we need a cooling function which can be used in the optically thick regime, $\Lambda^{\rm thin}$ is multiplied by the escape probability like the other cooling functions in the main text.
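A minimal Python transcription of Eqs.(A1)-(A4) is given below; it implements the formulae exactly as written, with the $\zeta$ ratios supplied by the caller, and the function names are ours.

```python
import numpy as np

def l_rot(t3):
    """Eq.(A1): H2 rotational cooling coefficient (erg s^-1), t3 = T/1000 K."""
    high_j = (9.5e-22 * t3**3.76 / (1.0 + 0.12 * t3**2.1)
              * np.exp(-(0.13 / t3) ** 3))
    j20 = 3.0e-24 * np.exp(-0.51 / t3)
    return high_j + j20

def l_vib(t3):
    """Eq.(A3): H2 vibrational cooling coefficient (erg s^-1)."""
    return 6.7e-19 * np.exp(-(5.86 / t3) ** 3) + 1.6e-18 * np.exp(-11.7 / t3)

def lambda_h2_thin(n_h2, t_gas, zeta_h_rot, zeta_h2_rot, zeta_h_vib, zeta_h2_vib):
    """Eqs.(A2)+(A4): optically thin H2 cooling rate (erg cm^-3 s^-1);
    the zeta's are the critical-density ratios defined in the text."""
    t3 = t_gas / 1000.0
    rot = n_h2 * l_rot(t3) * (1.0 / (1.0 + zeta_h_rot) + 1.0 / (1.0 + zeta_h2_rot))
    vib = n_h2 * l_vib(t3) * (1.0 / (1.0 + zeta_h_vib) + 1.0 / (1.0 + zeta_h2_vib))
    return rot + vib
```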
It may be better to comment on the continuum absorption. Below 2000 K, the effect of continuum absorption is to multiply the cooling rate by ${\rm
exp} (-\tau_{\rm cont})$, in which $$\tau_{\rm cont} = \rho (r) \lambda_{\rm J}
\left[
4.1\left(\frac{1}{\rho (r)}-\frac{1}{\rho_0}\right)^{-0.9} {T_3}^{-4.5}
+
0.012\rho^{0.51}(r) T_3^{2.5}
\right]
\eqno({\rm A}5)$$ according to the estimate of Lenzuni, Chernoff, & Salpeter (1991), who obtain a fitting formula for the Rosseland mean opacity in a zero-metallicity gas. Here, $\lambda_{\rm J}$ is the Jeans length and $\rho_0$ is 0.8 g cm$^{-3}$. Their fitting formula is reasonable in the temperature range $T>1000$ K, whereas the lowest temperature of our collapsing core is about 50 K. Their formula may then not be appropriate, but it gives a sufficient upper limit if we evaluate $\tau_{\rm cont}$ at 1000 K instead of at the actual temperature below 1000 K. For all of our estimates, $\tau_{\rm cont}$ is much smaller than unity, so we can neglect the effect of continuum absorption.
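Eq.(A5) can be transcribed as follows (a sketch; the function name is ours).

```python
RHO_0 = 0.8   # g cm^-3

def tau_continuum(rho, t_gas, jeans_length):
    """Eq.(A5): continuum optical depth over a Jeans length, using the
    Lenzuni, Chernoff & Salpeter (1991) zero-metallicity Rosseland fit."""
    t3 = t_gas / 1000.0
    kappa = (4.1 * (1.0 / rho - 1.0 / RHO_0) ** (-0.9) * t3 ** (-4.5)
             + 0.012 * rho ** 0.51 * t3 ** 2.5)
    return rho * jeans_length * kappa
```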
We summarise the parameters in our calculation. For rotational transitions with $v=0$, $A_{2,0} = 2.94 \times 10^{-11}$ sec$^{-1}$ and $A_{J,J-2} = 5A_{2,0}/162 \times J(J-1)(2J-1)^4/(2J+1) $ sec$^{-1}$ are adopted. For vibrational transitions, $A_{10} = 8.3 \times 10^{-7}$ sec$^{-1}$, $A_{21} = 1.1 \times 10^{-6}$ sec$^{-1}$, and $A_{20} = 4.1 \times 10^{-7}$ sec$^{-1}$ are adopted. In the main text we find that the rotational line cooling of H$_2$ is dominated by the $J=2-0$ transition ($v=0$), so we regard the second term of Eq.(A1) as the important one. This means our estimate for the $J=2-0$ transition is very reliable, while the other estimates carry some uncertainty.
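For convenience, the rotational $A$ values quoted above can be generated as follows (a sketch; the scaling reduces to $A_{2,0}$ at $J=2$, as it should).

```python
A_20 = 2.94e-11   # s^-1

def a_rot_h2(j):
    """Quadrupole A value (s^-1) for the H2 J -> J-2 rotational transition (v=0),
    from the scaling quoted in the text."""
    return 5.0 * A_20 / 162.0 * j * (j - 1) * (2 * j - 1) ** 4 / (2 * j + 1)

print(a_rot_h2(2))   # 2.94e-11
```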
Finally, we comment on the uncertainty of the cooling function owing to rot-vibrational transitions of H$_2$. The formula for the cooling function of H$_2$ is examined by Martin et al. (1996), Forrey et al. (1997), Galli & Palla (1998), and Fuller & Couchman (2000). Fuller & Couchman especially stress that there is uncertainty in the H$_2$ cooling function because of the difficulty in calculating the interaction potential at low temperatures. Then, different choices for rotational and vibrational H-H$_2$ rate coefficients will produce differences in the cooling function. Fortunately, we can consider our cooling function to be applicable since the cooling rate via H$_2$ transitions balances the release of gravitational potential energy consistently as shown in our first paper (Kamaya & Silk 2001).
Abel T., Bryan G. L., Norman M. L., 2000, ApJ, 540, 39
Abgrall H., Roueff R., Roueff E., 1992, A&AS, 50, 505
Bougleux E., Galli D., 1997, MNRAS, 288, 638
Bromm V., Coppi P. S., Larson R. B., 1999, ApJ, 527, L5
Bromm V., Coppi P. S., Larson R. B., 2001, ApJ, 564, 23
Carr B. J., Bond J. R., Arnett W. D., 1984, ApJ, 277, 445
Ciardi B., Ferrara A., 2001, MNRAS, 324, 648
Combes F., Pfenniger D., 1997, A&A, 327, 453
Couchman H., Rees M., 1986, MNRAS, 221, 445
Ferrara A., 1998, ApJ, 499, L17
Forrey R. C., Balakrishnan N., Dalgarno A., Lepp S., 1997, ApJ, 489, 100
Fuller T. M., Couchman H. M. P., 2000, ApJ, 544, 6
Galli D., Palla F., 1998, A&A, 335, 403
Gnedin N. Y., Ostriker J. P., 1997, ApJ, 486, 581
Haiman Z., Thoul A. A., Loeb A., 1996, ApJ, 464, 523
Hirashita H., Hunt L. K., Ferrara A., 2002, MNRAS, 330, L19
Hollenbach D., McKee C. F., 1979, ApJS, 41, 555
Kamaya H., Silk J., 2001, MNRAS, 332, 251
Lenzuni P., Chernoff D. F., Salpeter E. E., 1991, ApJS, 76, 759
Lepp D., Shull J. M., 1984, ApJ, 270, 578
Low C., Lynden-Bell D., 1976, MNRAS, 176, 367
Martin P. G., Schwarz D. H., Mandy M. E., 1996, ApJ, 461, 265
Matsuda T., Sato H., Takeda H., 1969, Prog.Theor.Phys., 41, 840
Nishi R., Susa H., Uehara H., Yamada M., Omukai K., 1998, Prog.Theor.Phys., 100, 881
Omukai K., Nishi R., 1998, ApJ, 508, 141
Omukai K., 2000, ApJ, 534, 809
Palla F., Salpeter E. E., Stahler W., 1983, ApJ, 271, 632
Peebles P. J., Dicke R. H., 1968, ApJ, 154, 891
Ripamonti E., Haardt F., Ferrara A., Colpi M., 2002, MNRAS, 334, 401
Saslaw W. C., Zipoy D., 1968, Nature, 216, 976
Schaefer J., 1990, ApJS, 85, 1101
Shchekinov Iu., 1991, Ap&SS, 175, 57
Shchekinov Yu., 1986, SvAL, 12, 211
Silk J., 1977, ApJ, 214, 152
Silk J., 1983, MNRAS, 205, 705
Silk J., 2000, PASP, 112, 1003
Susa H., Umemura M., 2000, MNRAS, 316, L17
Suto Y., Silk J., 1988, ApJ, 326, 527
Takeuchi T. T., Kawabe R., Kohno K., Nakanishi K., Ishii T. T., Hirashita H., Yoshikawa K., 2001, PASP, 113, 586
Toleman R. C., 1966, [*Relativity, thermodynamics and cosmology*]{}, (Oxford; Clarendon Press), section 10
Trefler M., Gush H.P., 1968, Phys.Rev.Let., 20, 703
Uehara H., Inutsuka S., 2000, ApJ, 531, L91
---------------------------------------------- ----------- ----------- ------------- ------------ ------------------ -------------------------
— H$_2$ HD LiH $\Omega_m$ $\Omega_\Lambda$ $D_{19}$ ($10^{10}$ pc)
($J=2-0$) ($J=4-3$) ($J=17-16$)
$\nu$ (10$^{13}$ Hz; $z=0$) 1.06 1.06 0.74 – – –
$\nu$ (10$^{12}$ Hz; $z=19$) 0.53 0.53 0.37 – – –
$\lambda$ (10$^{-3}$ mm; $z=0$) 28.29 28.29 40.30 – – –
$\lambda$ (10$^{-3}$ mm; $z=19$) 565.8 565.8 806.1 – – –
$L_\nu$ (10$^{26}$ erg sec$^{-1}$ Hz$^{-1}$) 40.1 2.01 0.001 – – –
$f_\nu$ (10$^{-8}$ Jy; $z=19$) 43.18 2.16 0.0003 1.0 0.0 0.62
$f_\nu$ (10$^{-8}$ Jy; $z=19$) 16.60 0.83 0.0001 0.3 0.7 1.00
$f_\nu$ (10$^{-8}$ Jy; $z=19$) 7.58 0.37 0.00004 0.1 0.9 1.48
---------------------------------------------- ----------- ----------- ------------- ------------ ------------------ -------------------------
: Expected Emission Lines
[^1]: Email:kamaya@kusastro.kyoto-u.ac.jp
[^2]: http://www.ir.isas.ac.jp/ASTRO-F/index-e.html
[^3]: http://www.alma.nrao.edu
[^4]: http://ngst.gsfc.nasa.gov
---
abstract: 'A D-5-brane bound state with a self-dual field strength on a 4-torus is considered. In a particular case this model reproduces the D5-D1 brane bound state usually used in the string theory description of 5-dimensional black holes. In the limit where the brane dynamics decouples from the bulk, the Higgs and Coulomb branches of the theory on the brane decouple. In contrast with the usual instanton moduli space approximation to the problem, the Higgs branch describes fundamental excitations of the gauge field on the brane. Upon reduction to two dimensions it is associated with the so-called instanton strings. Using the Dirac-Born-Infeld action for the D-5-brane we determine the coupling of these strings to a minimally coupled scalar in the black hole background. The supergravity calculation of the cross section is found to agree with the D-brane absorption probability rate calculation. We consider the near horizon geometry of our black hole and elaborate on the corresponding duality with the Higgs branch of the gauge theory in the large $N$ limit. A heuristic argument for the scaling of the effective string tension is given.'
---
DAMTP-1998-75\
[**Black hole dynamics from instanton strings**]{}
Miguel S. Costa[^1]\
*D.A.M.T.P.\
University of Cambridge\
Cambridge CB3 9EW\
UK*
Introduction
============
Over the past two years several issues in black hole physics have been successfully addressed within the framework of string theory (see [@Youm; @Peet] for reviews and complete lists of references). The black hole dynamics may be recovered from an effective string description [@StroVafa; @Mald1; @DasMath]. In the dilute gas approximation [@MaldStro], i.e. when the left- and right-moving modes on the effective string are free and when anti-branes are suppressed, the Bekenstein-Hawking entropy is correctly reproduced. Further, assuming that this effective string couples to the bulk fields with a Dirac-Born-Infeld type action it has been possible to find agreement with the classical cross section calculations for scalar and fermionic bulk fields \[5-18\]. These calculations provide a highly non-trivial test of the effective string model. However, the derivation of the effective string action including its coupling to the bulk fields requires several assumptions. In other words, we would like to deduce this action from first principles, as is the case for similar calculations involving the D-3-brane [@Kleb; @Kleb..; @KlebGubs]. One of the purposes of this paper is to fill in this gap.
We shall consider the D-5-brane bound state with a constant self-dual worldvolume field strength on a compact $T^4$ studied in [@CostaPerry]. This configuration includes as a special case the D5-D1 brane bound state used in the original derivation of the Bekenstein-Hawking entropy formula [@StroVafa]. The gauge theory fluctuating spectrum associated with this bound state was found to agree with the spectrum derived from open strings ending on the D-5-brane bound state [@Polc; @CostaPerry; @CostaPerry1]. For this reason the modes associated with the worldvolume fields should be regarded as fundamental excitations of the D-brane system. This includes some modes of the gauge field that are self-dual on $T^4$ and may be called instantons but should not be interpreted as solitons. We shall see that in the limit where the brane dynamics decouples from the bulk we may define two supersymmetric branches of the theory on the brane corresponding to the self-dual modes and to the modes associated with the movement of the brane system in the transverse directions. They define the Higgs and Coulomb branches of the theory that are shown to decouple in the above limit. The Higgs branch is the one associated with the dynamics of the black hole. We derive from first principles the action for the bosonic fields in the Higgs branch, which we call the instanton string action rather than the effective string action. We also consider the coupling of these instanton strings to a minimally coupled scalar in the black hole background, finding agreement with the scattering cross section calculation on the supergravity side. This agreement follows because both string and classical calculations have an overlapping domain of validity (this will be our analogue of the double scaling limit introduced by Klebanov [@Kleb]), giving a rationale for why both descriptions yield the same result. A deeper explanation is uncovered by Maldacena’s duality proposal [@Mald2] and subsequent works [@Gubs..; @Witt3]. We shall elaborate on this proposal. In particular, we argue that [*the Higgs branch of the large $N$ limit of 6-dimensional super Yang-Mills theory with a ’t Hooft twist on a compact $T^4$ is dual to supergravity on $AdS_3\times S^3\times T^4$*]{}. Based on this interpretation of the duality conjecture the effective string action should be associated with this large $N$ limit of the theory. We shall give a heuristic derivation of the effective string tension which agrees with previous results [@Math; @Gubs; @HassWadia].
The paper is organised as follows: In section two we shall review the model studied in [@CostaPerry] and analyse the brane dynamics when it decouples from the bulk. The regions of validity of both D-brane and supergravity approximations are explained. In section three we shall find a minimally coupled scalar in our black hole background and derive the corresponding coupling to the instanton strings. Section four is devoted to the supergravity calculation of the scattering cross section as well as the corresponding D-brane absorption probability rate. In section five we shall describe the double scaling limit where both calculations are expected to agree. After analysing the near horizon geometry associated with our black hole we consider Maldacena’s duality proposal. We give our conclusions in section six.
The model
=========
In this section we shall review the D-brane model associated with our five-dimensional black hole. The dynamics of the D-brane system will be derived by starting from the super Yang-Mills (SYM) action. We shall comment on the validity of such an approximation. We review the fluctuating spectrum, study the decoupling of the Higgs and Coulomb branches of the theory when the brane dynamics decouples from the bulk and derive the action for the instanton strings determining the black hole dynamics. We then write the supergravity solution describing the geometry of our black hole and comment on the validity of the supergravity approximation.
Because we are claiming that our model also describes the D5-D1 brane bound state we shall keep referring to this special case as we proceed.
D-brane phase
-------------
We consider a bound state of two D-5-branes wrapped on $S^1\times T^4$ with coordinates $x^1,...,x^5$ (the generalisation to the case of $n$ D-5-branes is straightforward). Each D-5-brane has winding numbers $N_i$ along $S^1$, $p_i$ along the $x^2$-direction and $\bar{p}_i$ along the $x^4$-direction. Thus, the worldvolume fields take values on the $U(N_1p_1\bar{p}_1+N_2p_2\bar{p}_2)$ Lie algebra [@Witt4]. In order to have a non-trivial D-5-brane configuration we turn on the worldvolume gauge field such that the corresponding field strength is diagonal and self-dual on $T^4$. The non-vanishing components are taken to be (we assume without loss of generality that $\tan{\theta_1}>\tan{\theta_2}$) $$G^0_{23}=G^0_{45}=\frac{1}{2\pi\alpha'}\,{\rm diag}\big(\underbrace{\tan\theta_1,\ldots,\tan\theta_1}_{N_1p_1\bar{p}_1\ {\rm times}},\underbrace{\tan\theta_2,\ldots,\tan\theta_2}_{N_2p_2\bar{p}_2\ {\rm times}}\big)\ , $$ \[2.1\] where $$\frac{\tan\theta_i}{2\pi\alpha'}=\frac{2\pi q_i}{p_iL_2L_3}=\frac{2\pi\bar{q}_i}{\bar{p}_iL_4L_5}\ , $$ \[2.2\] with $q_i$ and $\bar{q}_i$ integers and $L_{\hat{\alpha}}=2\pi R_{\hat{\alpha}}$ the length of each $T^4$ circle ($\hat{\alpha}=2,...,5$). This vacuum expectation value for the gauge field breaks the gauge invariance to $U(N_1p_1\bar{p}_1)\otimes U(N_2p_2\bar{p}_2)$. Because the branes are wrapped along the $x^1$-, $x^2$- and $x^4$-directions the gauge invariance is further broken to $U(1)^{N_1p_1\bar{p}_1+N_2p_2\bar{p}_2}$. Each D-5-brane carries $Q_{5_i}=N_ip_i\bar{p}_i$ units of D-5-brane charge. Thus, the total D-5-brane charge is $$Q_5=N_1p_1\bar{p}_1+N_2p_2\bar{p}_2\ . $$ \[Q5\] Each brane carries fluxes in the $x^2x^3$ and $x^4x^5$ 2-tori. The total fluxes are $${\cal F}_{23}=\frac{1}{2\pi}\int_{T^2_{(23)}} {\rm tr}\, G^0= N_1q_1\bar{p}_1+N_2q_2\bar{p}_2\ ,\qquad {\cal F}_{45}=\frac{1}{2\pi}\int_{T^2_{(45)}} {\rm tr}\, G^0= N_1p_1\bar{q}_1+N_2p_2\bar{q}_2\ . $$ \[fluxes\] These fluxes induce a ’t Hooft twist on the fields \[30-33\], i.e. the worldvolume fields obey twisted boundary conditions on $T^4$. Also, due to this vacuum expectation value for the field strength the D-5-branes carry other D-brane charges. There are $Q_3={\cal F}_{45}$ D-3-brane charge units associated with D-3-branes parallel to the $(123)$-directions, and $Q_{3'}={\cal F}_{23}$ D-3-brane charge units associated with D-3-branes parallel to the $(145)$-directions. Furthermore, the instanton number associated with the background field strength is non-zero. As a consequence the bound state carries the D-string charge [@Doug] $$Q_1=N_{\rm ins}=\frac{1}{8\pi^2}\int_{T^4}{\rm tr}\, (G^0\wedge G^0)= N_1q_1\bar{q}_1+N_2q_2\bar{q}_2\ . $$ \[Q1\] It is now clear how we can obtain a bound state with the same charges as the D5-D1 brane system. We just have to set the fluxes in (\[fluxes\]) to zero; the charges $Q_5$ and $Q_1$ are then given by (\[Q5\]) and (\[Q1\]), respectively. For example, if we set $q_1=\bar{q}_1=1$ and $q_2=\bar{q}_2=-1$, then $N_1p_1=N_2p_2$, $N_1\bar{p}_1=N_2\bar{p}_2$ and the D-string charge is $Q_1=N_1+N_2$.
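The charge bookkeeping of (\[Q5\]), (\[fluxes\]) and (\[Q1\]) is elementary integer arithmetic and can be sketched as follows (Python; the function name and the sample numbers are ours, chosen to realise the D5-D1 example just given).

```python
def brane_charges(n, p, pbar, q, qbar):
    """Charge bookkeeping for the two D-5-branes; all arguments are
    length-2 sequences of integers."""
    q5  = sum(n[i] * p[i] * pbar[i] for i in range(2))
    f23 = sum(n[i] * q[i] * pbar[i] for i in range(2))   # D-3-branes along (145)
    f45 = sum(n[i] * p[i] * qbar[i] for i in range(2))   # D-3-branes along (123)
    q1  = sum(n[i] * q[i] * qbar[i] for i in range(2))   # instanton number
    return q5, f23, f45, q1

# D5-D1 example: q_1 = qbar_1 = 1, q_2 = qbar_2 = -1,
# with N1 p1 = N2 p2 and N1 pbar1 = N2 pbar2, so both fluxes vanish
print(brane_charges(n=(2, 1), p=(1, 2), pbar=(1, 2), q=(1, -1), qbar=(1, -1)))
# -> (Q5, F23, F45, Q1) = (6, 0, 0, 3), i.e. Q1 = N1 + N2
```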
Now we consider the region of validity of the D-brane description of our bound state. Throughout this paper we shall always assume that $g\ll 1$ so closed string effects beyond tree level are suppressed. Also, we assume that the size of $T^4$ is small, i.e. $L_{\hat\alpha}\sim\sqrt{\alpha'}$. The effective coupling constant for D-brane string perturbation theory is usually $gN$ for $N$ D-branes on top of each other. However, the presence of a condensate on the D-brane worldvolume induces a factor $\sqrt{1+(2\pi\alpha'G^0)^2}$ in the effective coupling [@Abou..]. Thus, in our case the effective string coupling reads $$g_{\rm eff}=gN_ip_i\bar{p}_i\sqrt{1+(2\pi\alpha'G^0)^2}\ . $$ \[2.3\] The length scales $r_i$ will enter the supergravity solution below and we assume for simplicity $r_1\sim r_2$. D-brane perturbation theory is valid for [@MaldStro] $$r_i\ll 1\ , $$ \[2.4\] where the $r_i$ are now written in string units. In this region open string loop corrections may be neglected and the dynamics for the low lying modes on the brane is determined by the Dirac-Born-Infeld (DBI) action. Our tool to study this region of parameters is the ten-dimensional SYM action reduced to six dimensions. The corresponding bosonic action is $$S_{\rm YM}= -\frac{1}{4g^2_{\rm YM}}\int d^6x\ {\rm tr}\left\{(G_{\alpha\beta})^2+ (\partial_{\alpha}\phi_m+i[B_{\alpha},\phi_m])^2- [\phi_m,\phi_n]^2\right\}\ , $$ \[2.5\] where $\alpha,\beta=0,...,5$ and $m,n=6,...,9$. We are taking the fields to be hermitian matrices with the field strength given by $G_{\alpha\beta}=\partial_{\alpha}B_{\beta}-\partial_{\beta}B_{\alpha}+i[B_{\alpha},B_{\beta}]$. The Yang-Mills coupling constant is related to the D-5-brane tension $T_5$ by $$g_{\rm YM}^2=\frac{1}{(2\pi\alpha')^2T_5}=(2\pi)^3g\alpha'\ . $$ \[coupling\] Note that in our conventions both $B_{\alpha}$ and $\phi_m$ have the dimension of length$^{-1}$. This action is the leading term in the $\alpha'$ expansion of the DBI action. In this approximation we have $$(2\pi\alpha' G_{\hat\alpha\hat\beta}^0)^2\ll 1\ \Leftrightarrow\ |\tan\theta_i|\ll 1\ . $$ \[2.6\] Physically this condition may be obtained from the requirement $$M_{5_i}\gg M_{3_i},\ M_{3'_i},\ M_{1_i}\ , $$ \[2.7\] where $M_{3_i}$ and $M_{3'_i}$ are the masses of the D-3-branes dissolved in the D-5-brane with mass $M_{5_i}$ and similarly for $M_{1_i}$. If this condition does not hold we expect the D-5-branes to be bent or deformed [@Mald3]. Because we are assuming that $L_{\hat\alpha}\sim \sqrt{\alpha'}$ we see from eqn. (\[2.2\]) that the condition (\[2.6\]) gives $p_i\gg |q_i|$ and $\bar{p}_i\gg |\bar{q}_i|$. We remark that in this limit there is perfect agreement between the string and the SYM spectrum derived in [@CostaPerry].
Next we review the fluctuating spectrum of the SYM theory. The starting point is to expand the action around the background (\[2.1\]). The result is $$S_{\rm SYM}= -\frac{1}{4g^2_{\rm YM}}\int d^6x\ {\rm tr}\Big\{-2A^{\alpha}D^2A_{\alpha} -4iA^{\alpha}[G_{\alpha\beta}^0,A^{\beta}]-2\phi^mD^2\phi_m +2i(D_{\alpha}A_{\beta}-D_{\beta}A_{\alpha})[A^{\alpha},A^{\beta}] +4i\phi^mD_{\alpha}[\phi_m,A^{\alpha}] -[A_{\alpha},A_{\beta}]^2-2[A_{\alpha},\phi_m]^2 -[\phi_m,\phi_n]^2\Big\}\ , $$ \[2.8\] where we have performed the following splitting of the gauge field $$B_{\alpha}=B_{\alpha}^0+A_{\alpha}\ ,\qquad G_{\alpha\beta}=G^0_{\alpha\beta}+F_{\alpha\beta}\ ,$$ $$G^0_{\alpha\beta}=\partial_{\alpha}B_{\beta}^0-\partial_{\beta}B_{\alpha}^0+i[B_{\alpha}^0,B_{\beta}^0]\ ,$$ $$F_{\alpha\beta}=D_{\alpha}A_{\beta}-D_{\beta}A_{\alpha}+i[A_{\alpha},A_{\beta}]\ , $$ \[2.9\] with $D_{\alpha}\equiv \partial_{\alpha}+i[B_{\alpha}^0,\ \ ]$. The quantum fields $A_{\alpha}$ and $\phi_m$ are in the adjoint representation of $U(N_1p_1\bar{p}_1+N_2p_2\bar{p}_2)$ and $A_{\alpha}$ satisfies the background gauge fixing condition $D_{\alpha}A^{\alpha}=0$. These fields obey twisted boundary conditions on $S^1\times T^4$ \[30-33\]. We have $(\beta\ne 0)$ $$A_{\alpha}(x^{\beta}+L_{\beta})=\Omega_{\beta}A_{\alpha}(x^{\beta})\Omega_{\beta}^{-1}\ , $$ \[2.10\] and similarly for $\phi_m$. The $\Omega$’s are called multiple transition functions and take values on $U(N_1p_1\bar{p}_1+N_2p_2\bar{p}_2)$. They are given by $$\Omega_{\alpha}={\rm Diag}(\Omega^{(1)}_{\alpha},\Omega^{(2)}_{\alpha})\ ,$$ where $\Omega^{(i)}_{\alpha}\in\ U(N_ip_i\bar{p}_i)$ and in terms of $U(p_i)\otimes U(\bar{p}_i)\otimes U(N_i)$ matrices reads $$\Omega_1^{(i)}={\bf 1}_{p_i}\otimes{\bf 1}_{\bar{p}_i}\otimes {\bf V}_{N_i}\ ,$$ $$\Omega_2^{(i)}={\bf V}_{p_i}\otimes{\bf 1}_{\bar{p}_i}\otimes{\bf 1}_{N_i}\ ,$$ $$\Omega_3^{(i)}= ({\bf U}_{p_i})^{q_i}\otimes{\bf 1}_{\bar{p}_i}\otimes{\bf 1}_{N_i}\ ,$$ $$\Omega_4^{(i)}={\bf 1}_{p_i}\otimes{\bf V}_{\bar{p}_i}\otimes{\bf 1}_{N_i}\ ,$$ $$\Omega_5^{(i)}={\bf 1}_{p_i}\otimes({\bf U}_{\bar{p}_i})^{\bar{q}_i}\otimes{\bf 1}_{N_i}\ .$$ The matrices ${\bf V}_{N_i}$ are the $N_i\times N_i$ shift matrices and similarly for ${\bf V}_{p_i}$ and ${\bf V}_{\bar{p}_i}$. The matrices ${\bf U}_{p_i}$ are given by $${\bf U}_{p_i}={\rm diag}\left( 1,e^{2\pi i/p_i},\ldots,e^{2\pi i(p_i-1)/p_i}\right)\ ,$$ and similarly for ${\bf U}_{\bar{p}_i}$. The tensors $n^i_{\hat\alpha\hat\beta}$ are called the twist tensors and are given by $$n^i_{\hat\alpha\hat\beta}=\left(\begin{array}{cccc} 0&q_i/p_i&0&0\\ -q_i/p_i&0&0&0\\ 0&0&0&\bar{q}_i/\bar{p}_i\\ 0&0&-\bar{q}_i/\bar{p}_i&0 \end{array}\right)\ .$$ In order to analyse the spectrum it is convenient to decompose the fields $A_{\alpha}$ and $\phi_m$ as $$A_{\alpha}=\left(\begin{array}{cc} a_{\alpha}^1&b_{\alpha}\\ b_{\alpha}^{\dagger}&a_{\alpha}^2\end{array}\right)\ ,\qquad \phi_m=\left(\begin{array}{cc} c_m^1&d_m\\ d_m^{\dagger}&c_m^2\end{array}\right)\ . $$ \[2.11\] The fields $a^i_{\alpha}$ and $c^i_m$ are in the adjoint representation of $U(N_ip_i\bar{p}_i)$ and the fields $b_{\alpha}$ and $d_m$ in the fundamental representation of $U(N_1p_1\bar{p}_1)\otimes U(\overline{N_2p_2\bar{p}_2})$. Substituting the ansatz (\[2.11\]) in the action (\[2.8\]) and keeping only the quadratic terms in the fields, together with the boundary conditions (\[2.10\]), we may derive the spectrum of the theory. The result is summarised in table 1. Note that we are using the complex coordinates $z_k=(x^{2k}-ix^{2k+1})/\sqrt{2}$ with $k=1,2$ to express the fields $b_{\hat\alpha}$. It is important to realise that the functions $\chi^r_{m_1m_2}$ determining the mode expansion of the fields $b_{\alpha}$ and $d_m$ on $T^4$ have the same “status” as the usual modes $e^{ik_{\hat\alpha}x^{\hat\alpha}}$. They form a basis for functions satisfying the twisted boundary conditions obeyed by these fields on $T^4$ and are expressed in terms of $\Theta$-functions. Also, the quadratic operator $\hat{M}$ is the analogue of $(\partial_{\hat\alpha})^2$. Each eigenvalue of this operator has a degeneracy $n_L\bar{n}_L$ associated with the number of Landau levels in the system.
[Fields]{} [Quadr. operators]{} [Modes]{} [On-shell cond.]{} [No. of d.o.f.]{}
----------------------- ------------------------------------------------------ -------------------------------------------------------------- ----------------------------------------- --------------------
$a_{\hat\alpha}^i$ $-(\partial_{\sigma})^2-(\partial_{\hat\alpha})^2$ $e^{ik_{\sigma}x^{\sigma}+ik_{\hat\alpha}x^{\hat\alpha}}$ $k_{\sigma}^2+k_{\hat\alpha}^2=0$ $4N_ip_i\bar{p}_i$
$b_{z_k}$ $-(\partial_{\sigma})^2+(\hat{M}-4\pi f)$ $e^{ik_{\sigma}x^{\sigma}}\chi^r_{m_1m_2}(x^{\hat\alpha})$ $k_{\sigma}^2+\lambda^-_{m_1m_2}=0$ $4n_L\bar{n}_L$
$b_{\bar{z}_k}$ $-(\partial_{\sigma})^2+(\hat{M}+4\pi f)$ $e^{ik_{\sigma}x^{\sigma}}\chi^r_{m_1m_2}(x^{\hat\alpha})$ $k_{\sigma}^2+\lambda^+_{m_1m_2}=0$ $4n_L\bar{n}_L$
$c_m^i$ $-(\partial_{\sigma})^2-(\partial_{\hat\alpha})^2$ $e^{ik_{\sigma}x^{\sigma}+ik_{\hat\alpha}x^{\hat\alpha}}$ $k_{\sigma}^2+k_{\hat\alpha}^2=0$ $4N_ip_i\bar{p}_i$
$d_m$ $-(\partial_{\sigma})^2+\hat{M}$ $e^{ik_{\sigma}x^{\sigma}}\chi^r_{m_1m_2}(x^{\hat\alpha})$ $k_{\sigma}^2+\lambda_{m_1m_2}=0$ $8n_L\bar{n}_L$
: [Spectrum of the theory presented in a form suitable for reduction to two dimensions. We have imposed the Coulomb gauge condition $A_0=0$ and used the fact that $D_{\alpha}A^{\alpha}=0$ to fix $A_1$ (for the mode of $b_{z_k}$ with $\lambda^-=0$ this gives $A_1=0$). The operator $\hat{M}$ is given by $\hat{M}=(i\partial_{\hat\alpha}+\pi J_{\hat\alpha\hat\beta}x^{\hat\beta})^2$ with $J_{\hat\alpha\hat\beta}=(n^1_{\hat\alpha\hat\beta}-n^2_{\hat\alpha\hat\beta})/(L_{\hat\alpha}L_{\hat\beta})$. The functions $\chi^r_{m_1m_2}$ are eigenfunctions of the operator $\hat{M}$ with eigenvalues $\lambda_{m_1m_2}=4\pi f(m_1+m_2+1)$ where $4\pi f=(\tan{\theta_1}-\tan{\theta_2})/(\pi\alpha')$ and $\lambda^{\pm}_{m_1m_2}=\lambda_{m_1m_2}\pm 4\pi f$. The index $r$ in the functions $\chi^r_{m_1m_2}$ runs from $1$ to $n_L\bar{n}_L$ with $n_L=|p_1q_2-p_2q_1|$ and $\bar{n}_L=|\bar{p}_1\bar{q}_2-\bar{p}_2\bar{q}_1|$. The index $\sigma$ runs from $0$ to $1$.]{}
In table 1 we wrote the mode expansion for the various fields, but some care is necessary because each field carries Lie algebra indices. Consider first the case of the fields $b_{\hat\alpha}$ and $d_m$. The corresponding modes in the table are defined on a $S^1_{eff}\times T^4_{eff}$ 5-torus, while a given $a\bar{b}$ Lie algebra component of these fields takes values on the $S^1\times T^4$ determined by a given segment of $S^1_{eff}\times T^4_{eff}$. The different Lie algebra components are then related by the boundary conditions. In the case of these fields we have $S^1_{eff}$ with a length $L_{eff}=N_1N_2L_1$ and $T^4_{eff}$ with radii ($p_1p_2R_2,R_3,\bar{p}_1\bar{p}_2R_4,R_5$). A similar comment applies to the $a^i_{\hat\alpha}$ and $c^i_m$ fields, but now $S^1_{eff}$ has radius $N_iR_1$ and $T^4_{eff}$ has radii ($p_iR_2,R_3,\bar{p}_iR_4,R_5$). To be more explicit we consider the modes on $T^4$ of the fields $b_{z_k}$ with the lowest eigenvalue $\lambda^-=0$ (i.e. $m_1=m_2=0$). These are the only modes coming from the fields $b_{\hat\alpha}$ and $d_m$ that are associated with massless particles in two dimensions. We write a given Lie algebra component of the fields, say the $1\bar{1}$ component, as $(b_{\bar{z}_k}^{1\bar{1}}=0)$ $$b_{z_1}^{1\bar{1}}=\sum_{r=1}^{n_L\bar{n}_L} \xi^r_1(x^{\sigma})\chi^r(x^{\hat\alpha})\ ,\qquad b_{z_2}^{1\bar{1}}=\sum_{r=1}^{n_L\bar{n}_L} \xi^r_2(x^{\sigma})\chi^r(x^{\hat\alpha})\ . $$ \[2.12\] The complex fields $\xi^r_k$ are defined on an effective circle with radius $R_{eff}=N_1N_2R_1$ and $\chi^r(x^{\hat\alpha})$ takes values on the $T^4_{eff}$ defined above. The fields $b_{z_k}^{1\bar{1}}$ take values on $S^1\times T^4$ and all the other Lie algebra components may be obtained from this one by using the boundary conditions (\[2.10\]).[^2] It is important to realise that the fields $b_{z_k}^{a\bar{b}}$ are operators in the quantum theory and therefore the fields $\xi^r_k$ are also quantum operators.
### Theory on the brane in the decoupling limit
We consider the limit where the brane dynamics decouples from the bulk. This limit corresponds to taking $\alpha'\rightarrow 0$ and $N_i\rightarrow\infty$, i.e. we are considering the large $N$ theory in the infrared limit. Noting that the radii of $T^4$ scale as $\sqrt{\alpha'}$, we conclude that the massive Kaluza-Klein modes on $T^4$ associated with the fields $a^i_{\hat\alpha}$ and $c_m^i$ decouple from the theory in the above limit. Also, with the exception of the massless modes associated with the fields $b_{z_k}$ in (\[2.12\]), all the excitations associated with $b_{\hat\alpha}$ and $d_m$ decouple. Thus, we are left with the massless excitations associated with the fields $c_m^i(x^{\sigma})$, $a_{\hat\alpha}^i(x^{\sigma})$ and $\xi^r_k(x^{\sigma})$.
To analyse the resulting theory it is convenient to consider the T-dual six-dimensional theory with worldvolume given by the string directions $x^{\sigma}$ and by the transverse space to the D-5-brane system, $\mathbb{E}^4$. We follow very closely the analysis given in [@Mald3]. In fact, our model provides an explicit realisation of the results derived there. We start with $N=2$ SUSY in $D=6$, but the self-dual background field strength on the $T^4$ breaks half of the supersymmetries, leaving $N=1$ SUSY in $D=6$. There are two possible multiplets, the vector multiplet and the hypermultiplet [@Mald5]. The fields $c_m^i$ correspond to the gauge independent degrees of freedom of gauge bosons and therefore fall into a vector multiplet. The fields $a_{\hat\alpha}^i$ are $4$ scalars and fall into a hypermultiplet. The resulting theory is just two copies of $10D$ SYM compactified on $T^4$, each copy with gauge group $U(N_ip_i\bar{p}_i)$. So far we have the field content of $N=2$ SUSY in $D=6$. The fields $c_m^i$ in the vector multiplets are left invariant under the $SO(4)_I$ rotational symmetry associated with the $T^4$, while the fields $a_{\hat\alpha}^i$ in the hypermultiplets transform as $\bf{\bar{4}}$ under this symmetry. The theory is not $SO(4)_I\simeq SU(2)_L\otimes SU(2)_R$ invariant because the background field strength breaks this symmetry. If this field were not (anti) self-dual we would be left with a $U(1)\otimes U(1)$ symmetry corresponding to rotations in the $x^2, x^3$ and $x^4, x^5$ directions. For (anti) self-dual fields this symmetry gets enhanced to $U(2)\simeq SU(2)\otimes U(1)$ [@Baal]. The action of this group on the $z_k, \bar{z}_k$ coordinates is generated by $i\,\sigma_3\otimes{\bf 1}$, $i\,\sigma_3\otimes\sigma_3$, $i\,{\bf 1}\otimes\sigma_2$ and $i\,\sigma_3\otimes\sigma_1$, where the first generator corresponds to the $U(1)$ factor and the $\sigma$’s are the Pauli matrices. Thus, the resulting $N=1$ theory has a $U(2)$ $R$-symmetry. We still have to consider the complex fields $\xi^r_k$. For each $r$, they describe $4$ scalar fields and therefore fall into a hypermultiplet [@Dijk..]. The fields $\xi^r=(\xi^r_1\ \xi^r_2)$ transform as $\bf{\bar{2}}$ under the $U(2)$ $R$-symmetry.
The reduction of the theory to two dimensions results in a theory with $N=4$ SUSY in $2D$. Now both the hypers and the vectors have $4$ scalar fields. They are distinguished by the different transformation properties under $R$-symmetries. The theory has an extra $SO(4)_E\simeq SU(2)_{\tilde{L}}\otimes SU(2)_{\tilde{R}}$ $R$-symmetry that leaves the scalars in the hypermultiplets unchanged but acts on the scalars in the vector multiplets. This theory has two supersymmetric branches, the Higgs branch where the hypers are excited and the Coulomb branch where the vectors are excited. Supersymmetry implies that there is no coupling between vector and hyper multiplets [@Mald3]. We shall describe the different branches of the theory in the next subsection, for now let us just note that in the Higgs phase the fields $a_{\ha}^i$ condense and the only independent degrees of freedom are associated with the fields $\xi^r_k$.
All this resembles the moduli space approximation to the dynamics of the D5-D1 brane bound state, with the Higgs branch describing the moduli of instantons on $T^4$ and the Coulomb branch the fluctuations of the system in the transverse space. However, there is a crucial difference in our description. The modes of the quantum fields $b_{z_k}$ that survived the decoupling limit are self-dual on $T^4$ and in that sense deserve to be called instantons, but rather than being interpreted as solitons they should be interpreted as fundamental modes of the fields (just like the standard $e^{ik_{\hat\alpha}x^{\hat\alpha}}$ modes). In other words, we are [*not*]{} quantising the collective coordinates of a soliton (instanton). There are two reasons for this: Firstly, they are the field theory realisation of the low lying modes corresponding to open strings with ends on the D-5-branes with a different background field strength. Thus, they are as fundamental as the other modes corresponding to the $a^i_{\alpha}$ and $c^i_m$ fields associated with open strings ending on the same D-5-branes. Secondly, these instantons do not really have a size, in the sense that there are no moduli associated with their size. In fact, all the dependence of $b_{z_k}$ on $T^4$ is through $x^{\hat\alpha}/L_{\hat\alpha}$. Thus, if the volume of $T^4$ is scaled the fields scale uniformly [@Doug..]. Also, this means that we can take the limit $L_{\hat\alpha}\rightarrow 0$ and the field configuration remains well defined.
There is a potential problem when we take the size of $T^4$ to be of order one in string units. Because the fields $b_{z_k}$ are $x^{\hat\alpha}$ dependent, we could expect string derivative corrections to the DBI (or SYM) action to become important [@Kita; @AndrTsey]. It turns out that for $\Delta\theta\equiv\theta_1-\theta_2\ll 1$, which holds when the DBI corrections are suppressed, the derivative corrections are also suppressed. To see this, recall that the wave functions $\chi_{m_1m_2}$ in table 1 may be generated by the creation operators $a_k^{\dagger}$ with [@CostaPerry] $$a_k^{\dagger}= \left(\partial_{z_k}-f\bar{z}_k\right)\ ,\ k=1,2\ ,\qquad a_l= \left(\partial_{\bar{z}_l}+fz_l\right)\ ,\ l=1,2\ , $$ \[operators\] and $[a_l,a_k^{\dagger}]=\delta_{lk}$. We then have $\langle z,\bar{z}|m_1m_2\rangle=\chi_{m_1m_2}(z,\bar{z})$, where $|m_1m_2\rangle$ is normalised to unity. Considering for example a typical derivative correction term like $\sqrt{\alpha'}\partial_2\chi_0$, we obtain from (\[operators\]) a combination of $\chi_{1,0}$ and $\chi_0$ which is negligible for $\Delta\theta\ll 1$ (note that $4\pi f=\Delta\theta/(\pi\alpha')$). Thus, our field theory description is valid for $V_4\sim\alpha'^2$. As an aside, note that the disagreement between the string and gauge theory normalisations for the masses of the excitations on the brane bound state may be due to these derivative corrections, a fact that has been suggested in [@HashTayl].
To summarise, rather than performing a moduli space approximation we have a vacuum state defined by the background field strength $G^0_{\hat\alpha\hat\beta}$, which is a (constant) instanton on $T^4$. The quantum fluctuations around this vacuum are well defined by open strings ending on the D-5-brane bound state, and their low energy field theory realisation is summarised in table 1. We have a quantum mechanical description of the excitations around the instanton vacuum state, much as in the description of the D-5-brane/D-string configuration given by Callan and Maldacena [@CallMald]. In the decoupling limit $\alpha'\rightarrow 0$ we end up with the quantum fields $c_m^i(x^{\sigma})$, $a_{\hat\alpha}^i(x^{\sigma})$ and $\xi^r_k(x^{\sigma})$.
### Higgs and Coulomb branches
Now we describe the Higgs and Coulomb branches of the theory. We shall see that in the decoupling limit here considered these branches decouple [@Ahar; @Witt]. We want to define a supersymmetric branch of the theory on the brane such that it will describe the dynamics of the black holes considered in the next section. These black holes will be appropriately identified with some state of the theory on the brane which may or may not preserve some supersymmetry.
Since the fields $b_{z_k}^{a\bar{b}}$ originate a self-dual field strength on $T^4$ the resulting compactified theory is supersymmetric. However, when we consider the interactions between these fields it is seen that the fluctuating field strength $F_{\hat\alpha\hat\beta}$ is no longer self-dual. To next order in the fields $b_{z_k}^{a\bar{b}}$ the self-duality condition holds if the fields $a^i_{\hat\alpha}$ condense. They are determined by $$a^i_{\hat\alpha}={\Box}^{-1}\partial^{\hat\beta}S^i_{\hat\alpha\hat\beta}\ , $$ \[2.13\] where ${\Box}\equiv \partial^2_{\hat\alpha}$ and the source terms $(S^1_{\hat\alpha\hat\beta})^{ab}$ and $(S^2_{\hat\alpha\hat\beta})^{ab}$ are bilinear in the $b$ fields. We could have $N_ip_i\bar{p}_i$ commuting components of the free fields $a^i_{\hat\beta}$ and still have self-duality. The boundary conditions (\[2.10\]) imply that these components would have to be on the diagonal or on a shifted diagonal when the fields are expressed in terms of $U(p_i)\otimes U(\bar{p}_i)\otimes U(N_i)$ matrices. This would give $4$ massless particles defined on an effective length $N_iL_1$. This contribution is subleading in the large $N$ limit considered here. The condensation of the fields is a familiar fact. It corresponds to requiring the D-term $[A_{\hat\alpha},A_{\hat\beta}]^2$ in the action (\[2.8\]) to vanish, which does not happen when we consider just the fields $b_{z_k}^{a\bar{b}}$ (the cubic term in $A_{\hat\alpha}$ vanishes). Further, when the fields $\xi^r_k$ are excited the commutator term $[A_{\hat\alpha},\phi_m]^2$ in the action gives a mass term to the $c^i_m$ fields and vice-versa. Thus, in the low energy limit we have the Higgs branch with the fields $\xi^r_k$ excited and the Coulomb branch with the fields $c^i_m$ excited.
A more careful analysis is as follows: We start by considering a classical field configuration that defines a supersymmetric branch of the theory, i.e. we consider the moduli space of supersymmetric classical vacua. This corresponds to setting all the D-terms of the theory to zero. The D-terms are V\_1=-[tr]{}\[\_m,\_n\]\^2 , V\_2=-[tr]{}\[A\_,\_m\]\^2 , V\_3=-[tr]{}\[A\_,A\_\]\^2 . The unusual minus signs are because we took our fields to be Hermitian. $V_3$ vanishes because the $a^i_{\ha}$ fields condense; therefore we are just left with $V_1$ and $V_2$. They become &V\_1=-[tr]{}\[c\^1\_m,c\^1\_n\]\^2-[tr]{}\[c\^2\_m,c\^2\_n\]\^2 ,\
\
&V\_2=-[tr]{}\[a\^1\_,c\^1\_m\]\^2-[tr]{}\[a\^2\_,c\^2\_m\]\^2 +2[tr]{}\[(c\^1\_m)\^2b\_b\^\_+ (c\^2\_m)\^2b\^\_b\_-2c\^2\_mb\^\_c\^1\_mb\_\] . Now there are only two possibilities (apart from the trivial case $\phi_m\sim {\bf 1}$): (1) $c^i_m=0$ and then $\xi^r_k$ may be generic. This is the Higgs branch. (2) The $c^i_m$ are generic but all commute. Because the branes are wrapped, the boundary conditions require that these fields take the form c\^i\_m\~(V\_[p\_i]{})\^r(V\_[|[p]{}\_i]{})\^s (V\_[N\_i]{})\^t , \[diag\] where the $V$’s are the shift matrices and $r,s,t$ are integers. The claim is that in order for $V_2$ to vanish we need $\xi^r_k=0$. This may be seen by noting that if $\xi^r_k\ne 0$, the condensate formed by the fields $a^i_{\ha}$ will give a non-vanishing ${\rm tr}[a^i_{\ha},c^i_m]^2$.
We conclude that classically we either have a Higgs or a Coulomb branch. Quantum mechanically we consider fluctuations of the fields around the classical vacua that obey the D-flatness conditions. Each branch defines a different superconformal field theory. This has to be the case because a $(4,4)$ superconformal field theory has an $SU(2)\otimes SU(2)$ group of left- and right-moving symmetries that must leave the scalars in the theory invariant [@Witt1]. In the Higgs branch this group originates from the $SO(4)_E$ symmetry, while in the Coulomb branch it originates from the $SO(4)_I$ symmetry (which is broken to $U(2)$ in the Higgs branch).
### Instanton strings action
The action for the fields $\xi^r_k$ in the Higgs branch may be obtained by replacing the field configurations corresponding to (\[2.12\]) and (\[2.13\]) in the action (\[2.8\]). We normalise the functions $\chi^r$ according to \_[T\_[eff]{}\^4]{} d\^4x(\^r)\^\*\^s= (2)\^4\^[rs]{} , r,s=1,...,n\_L|[n]{}\_L , \[norm\] which is well defined in the limit $\a'\rightarrow 0$ because $R_{\ha}\sim\sqrt{\a'}$. Defining $4n_L\bar{n}_L$ real fields $\z^r$ from the $\xi^r_k$ complex fields and replacing the field configuration corresponding to (\[2.12\]) in the action (\[2.8\]), we obtain, after some algebra, the following $1+1$-dimensional free action S=-dt\_0\^[L\_[eff]{}]{}dx\^1 \_[r=1]{}\^[4n\_L|[n]{}\_L]{}\_\^r\^\^r , \[action\] where T\_[ins]{}= , L\_[eff]{}=2R\_1N\_1N\_2 , f=4n\_L|[n]{}\_L , \[tension\] are the instanton string tension, the effective length and the number of bosonic (and fermionic) species in our model, respectively. In order to compare these results with the effective string model for the D5-D1 system used in the literature, let us write the corresponding quantities T\_[eff]{}= , L\_[eff]{}=2R\_1Q\_5Q\_1 , f=4 . \[tension1\] The particular combination of $Q_1$ and $Q_5$ in $T_{eff}$ has been derived in [@Math; @Gubs; @HassWadia]. We shall argue in section 5 that, by taking the large $N$ limit of our field theory, the instanton string tension gets normalised, reproducing an effective string tension which agrees with this prediction. Using the result $Q_5Q_1=N_1N_2n_L\bar{n}_L$, which holds for the D5-D1 system, we see that our results for $L_{eff}$ and $f$ are not necessarily in contradiction with (\[tension1\]). Note that for the D5-D1 brane bound state described by our field configuration we always have $f\ge 16$.[^3] The case $f=16$ corresponds to the example given after equation (\[Q1\]) if we set $p_i=\bar{p}_i=1$. If this is the case, one could argue that our results are not reliable because the DBI corrections are important. This is certainly true but things may not be as bad as they look. The reason is that the supersymmetric configuration that we have found depends exclusively on the boundary conditions satisfied by the fields and on the self-duality condition. The former depends only on the gauge invariance of the theory and it is certainly independent of the specific Lagrangian describing the dynamics of the system. The latter is sufficient to show that our field configuration preserves a fraction of the supersymmetries and it is also independent of the specific Lagrangian. In fact, the self-duality condition was shown in [@Brec] to be a sufficient condition to minimise the non-abelian DBI action proposed by Tseytlin [@Tsey]. These arguments, together with the string analysis given in [@CostaPerry], provide evidence for the validity of the supersymmetric field configuration even when the DBI corrections are expected to be important. Thus, we do not expect $L_{eff}$ and $f$ to be altered. What changes is the interacting theory and not the free action (\[action\]).
We should now worry about the supersymmetry completion of the action (\[action\]). In [@Mald3] it was shown that this action takes the form S=-d\^2x ( G\_[rs]{}()\_\^r\^\^s + ) , where $G_{rs}$ is a hyperkähler metric. This defines the superconformal field theory describing the Higgs phase. From our knowledge of the $b_{z_k}$ and $a^i_{\ha}$ field configurations corresponding to (\[2.12\]) and (\[2.13\]) one could in principle attempt to find the $\z^r$ corrections to the flat metric $G_{rs}=\d_{rs}$ in (\[action\]).
We end this subsection by considering the dilute gas regime. Stability of the D-brane bound state requires that the energy associated with the string modes should be much smaller than all the energy scales associated with the D-brane bound state. This gives the condition (a similar derivation of the dilute gas regime was given in [@Mald4]) M\_[1\_i]{} (M\_[3\_i]{}, M\_[3’\_i]{}M\_[5\_i]{}\^) , \[dilute\] where $N_{L,R}$ are the left- and right-moving momenta carried by the instanton strings along the $x^1$-direction (note that, e.g. $N_R'=N_1N_2N_R$ is the level of the right-moving sector because $L_{eff}=2\pi R_1N_1N_2$). Condition (\[dilute\]) gives r\_0, r\_nr\_i , \[2.14\] where we define the length scales $r_n$ and $r_0$ according to [@Horo..] r\_n\^2=r\_0\^2\^2 , N\_[L,R]{}=r\_0\^2e\^[2]{} , \[2.15\] with $V_4$ the volume of $T^4$. The condition (\[2.14\]) defines the dilute gas regime derived in [@MaldStro].
Supergravity phase
------------------
The supergravity solution associated with our D-brane bound state is a solution of the type IIB supergravity equations of motion. The corresponding bosonic action is S\_[IIB]{}&=& {d\^[10]{}x - \_4 \_3 } ,where $\kappa_{10}$ is the ten-dimensional gravitational coupling, ${\cal F}_5'=d{\cal A}_4+\frac{1}{2}({\cal B} \wedge {\cal F}_3
-{\cal A}_2\wedge {\cal H})$ is a self-dual 5-form, ${\cal H}=d{\cal B}$ and ${\cal F}_3 = d{\cal A}_2$. The fields $\chi$, ${\cal A}_{2}$ and ${\cal A}_{4}$ are the 0-, 2- and 4-form ${\rm R}\otimes {\rm R}$ potentials and the field ${\cal B}$ the 2-form ${\rm NS}\otimes{\rm NS}$ potential. $\phi_{10}$ is the dilaton field with its zero mode subtracted. The ${\rm NS}\otimes{\rm NS}$ background fields describing our bound state are ds\^[2]{}&=& H\^ ,\
\
e\^[2]{}&=&H\^[-2]{} , \[2.17\]\
\
[B]{}&=&- \_ir\_i\^2 (dx\_2\^[1]{}dx\_3+dx\_4dx\_5) , where &&H=1++ ,\
\[2.18\]\
&&=1+ , with $r$ the radial coordinate on $\bE^4$. The constants $\th_i$ and $r_i$ are defined in (\[2.2\]) and (\[2.3\]), respectively. The exact form of the ${\rm R}\otimes {\rm R}$ fields is rather complicated because the Chern-Simons terms for this solution do not vanish. We write all non-vanishing components of the ${\rm R}\otimes {\rm R}$ fields keeping only the corresponding leading order terms at infinity. The result is \_[at1]{}&\~&d( )\_a \_ir\_i\^2\^2[\_i]{}+[O]{}() ,\
\
[F]{}\_[abc]{}&\~&-d()\_[abc]{} \_ir\_i\^2\^2[\_i]{}+[O]{}() ,\
\[2.17a\]\
[F]{}\_[at123]{}=[F]{}\_[at145]{}&\~& d()\_a \_ir\_i\^2 +[O]{}() ,\
\
[F]{}\_[abc23]{}=[F]{}\_[abc45]{}&\~& -d()\_[abc]{} \_ir\_i\^2 +[O]{}() , where $\star$ is the dual operation with respect to the Euclidean metric on $\bE^4$. This solution corresponds to the vacuum state of our D-brane bound state.
Next we obtain the D5-D1 brane solution as a special case. All we have to do is to require that the D-3-brane charges vanish, i.e. r\_1\^2+r\_2\^2=0 , and redefine the parameters $r_5$ and $r_1$ as &&r\_5\^2r\_1\^2\^2[\_1]{}+r\_2\^2\^2[\_2]{} ,\
\[parameters\]\
&&r\_1\^2r\_1\^2\^2[\_1]{}+r\_2\^2\^2[\_2]{} . We have then that $H=H_1H_5$, $\tilde{H}=H_5$ (note that $r_1r_2\sin{\D\th}\equiv r_5r_1$) and the resulting solution simplifies dramatically (especially the ${\rm R}\otimes {\rm R}$ fields) to the well-known D5-D1 solution. Note that by taking $\th_1=0$ and $\th_2=\pi/2$ we also obtain the D5-D1 solution. However, our field theory description does not hold because the gauge field diverges. In this case the correct description is given by the D-5-brane/D-string picture [@CallMald].
We may add some momentum along the string direction in (\[2.17\]). In the D-brane picture this corresponds to exciting the left- and right-moving sectors of the instanton string theory. If we stay in the dilute gas region defined in (\[2.14\]) and further assume that r\_0\^2, r\_n\^2r\_1r\_2 , \[neglmass\] then all the fields in (\[2.17\]), (\[2.17a\]) remain unchanged except for the metric, which becomes ds\^[2]{}&=& H\^ , where $r_0$, $r_n$ and $\b$ are defined in (\[2.15\]). Note that in the case of the D5-D1 system the condition (\[neglmass\]) follows from (\[2.14\]) and (\[2.6\]). For given values of $r_n$ and $r_0$ the total left- and right-moving momenta along the strings are completely fixed. This means that the state of the instanton strings is described by the microcanonical ensemble. Using the asymptotic density of states for a conformal field theory with $4n_L\bar{n}_L$ species of bosons and fermions we obtain the usual matching with the Bekenstein-Hawking entropy S\_[BH]{}=( r\_nr\_1r\_2)= 2(+) . \[2.20\] This agreement occurs for $N_{L,R}N_1N_2\gg n_L\bar{n}_L$. This fact may be interpreted in the following way. We may approximate the microcanonical ensemble by the canonical ensemble with the left- and right-moving temperatures [@MaldStro] T\_[L,R]{}== . \[2.21\] The occupation number for a given mode is then easily calculated in the canonical ensemble. This approximation is valid for $T_{L,R}\gg E_g$, where $E_g\sim (R_1N_1N_2)^{-1}$ is the energy gap on the field theory side. Physically this means that in the thermodynamical description the energy spectrum may be regarded as continuous. Substituting the values of $T_{L,R}$ we obtain precisely the condition $N_{L,R}N_1N_2\gg n_L\bar{n}_L$. Thus, the non-extreme case (\[2.19\]) is associated with a thermal state of the instanton strings.
Now we comment on the region of validity of the supergravity approximation. We keep $g\ll 1$ in order to suppress closed string loop effects. We are also making an $\a'$ expansion. Thus, the length scales in our solution have to be much larger than one in string units, i.e. r\_0, r\_n, r\_i, 1 . \[2.22\] The supergravity approximation is valid for processes involving energy scales such that $\o l_{max}\ll 1$ where $l_{max}$ is the maximal length scale [@Mald3]. We conclude that the D-brane and supergravity phases are mutually exclusive. Considering the last condition in (\[2.22\]) for the region of validity of the supergravity phase, we have in terms of the D-brane system $g\sqrt{N_1N_2n_L\bar{n}_L}\gg 1$. We shall see below that consistency between the supergravity and D-brane phases requires $n_L\bar{n}_L$ not to be very large. Hence we have (for $N_1\sim N_2$) gN\_i1 . \[largeN\] Since $g$ is small we conclude that $N_i\gg 1$. Thus, the supergravity phase is associated with a large $N_i$ D-brane system.
Next we show that to compare with the supergravity phase it is perfectly consistent to neglect the massive string states on the field theory side. This corresponds to the $\a'\rightarrow 0$ decoupling limit where these states become infinitely massive. The condition to neglect such modes is T\_[L,R]{} r\_n\^2, r\_0\^2(r\_1r\_2)\^2 . \[2.23\] Using the conditions (\[neglmass\]) and (\[2.22\]) it is seen that (\[2.23\]) holds. Thus, on the supergravity side we do not expect to find effects caused by these fields and it is consistent to drop them in the field theory approach (note that in this case we are not protected by supersymmetry as it was the case in [@CostaPerry]).
Another check of consistency between both descriptions concerns the mass gap. In the field theory description this equals $(N_1N_2R_1)^{-1}$, while on the supergravity side it is given by the inverse of the temperature such that the specific heat is of order unity [@MaldSuss; @Pres..; @HolzWilc; @KrausWilc]. This condition gives M\~ \~(N\_1N\_2n\_L|[n]{}\_LR\_1)\^[-1]{} . \[massgap\] Thus, we cannot have $n_L\bar{n}_L$ very large. For the D5-D1 brane system this fact brings us back to the case, discussed in subsection 2.1.3, where the DBI corrections are important. In the more general case we may have $n_L\bar{n}_L\sim 1$ while keeping $p_i\gg |q_i|$ and $\bar{p}_i\gg |\bar{q}_i|$ (for example this happens for $q_i=\bar{q}_i=1$ and $p_1=p_2-1$, $\bar{p}_1=\bar{p}_2-1$ while keeping $p_i\gg 1$ and $\bar{p}_i\gg 1$).
Minimally coupled scalar
========================
In this section we shall find a minimally coupled scalar in the supergravity backgrounds of section 2.2. We shall follow the same strategy as [@Call..], reducing the type IIB action to five dimensions. Then we linearise the DBI action and generalise the result to the non-abelian case in order to determine the coupling of the minimally coupled scalar to the instanton strings.
Reduction to five dimensions
----------------------------
To find a minimally coupled scalar in our black hole backgrounds, we reduce the action (\[2.16\]) with the following metric ansatz [@Call..] ds\^2=e\^[\_5]{}g\_[ab]{}dx\^adx\^b+ e\^[2\_5]{}( dx\^1+[A]{}\_a\^[(K)]{}dx\^a)\^2+ e\^[2]{}\_dx\^dx\^ , \[3.1\] where $g_{ab}$ is the five-dimensional Einstein metric[^4]. Truncating the action (\[2.16\]) such that the only non-vanishing form fields are those appearing in the solution (\[2.17\]), (\[2.17a\]), and assuming, as is the case, that ${\cal A}_{a 1}$, ${\cal A}_{a 1\ha\hb}$ and ${\cal A}_a^{(K)}$ are electric, we obtain the following five-dimensional action ($S_1$ and $S_2$ vanished in the case considered in [@Call..]) S&=&d\^5x +S\_1+S\_2 ,\
\
S\_1&=&d\^5x ,\
\
S\_2&=&d\^5x \^\^[abcde]{}( \_[a1]{}[F]{}\_[bcd]{}[H]{}\_[e]{} -\_[ab]{}[F]{}\_[cd1]{}[H]{}\_[e]{} ) , where $\kappa_5$ is the five-dimensional gravitational coupling and =\_[10]{}-2=\_5+ , =\_5-=\_5- . \[3.3\] The 5-form ${\cal F}'_5$ reduces to &&[F]{}’\_[ab1]{}=[F]{}\_[ab1]{}+ ([B]{}\_[F]{}\_[ab1]{} -2[A]{}\_[1\[a]{}[H]{}\_[b\]]{}) ,\
\[3.4\]\
&&[F]{}’\_[abc]{}=[F]{}\_[abc]{}+ ([B]{}\_[F]{}\_[abc]{} -3[A]{}\_[\[ab]{}[H]{}\_[c\]]{}) , and the ten-dimensional self-duality condition $\star{\cal F'}={\cal F'}$ becomes ’\_[abc]{}=e\^[-]{} \_[abcde]{}\_[F’]{}\^[de]{}\_[ 1]{} . \[3.5\] We conclude that the field $\Phi$ (dilaton field in the six-dimensional theory) is minimally coupled.
Coupling to the instanton strings
---------------------------------
The coupling of the scalar field $\Phi$ to the instanton strings may be found following a similar approach to the D-3-brane case [@Kleb; @Kleb..]. Start with the DBI action for the D-5-brane written in the static gauge &&S\_[DBI]{}=-T\_5d\^6x e\^[-\_[10]{}]{} +[ RR couplings]{} ,\
\
&&[G]{}\_=2’G\_-\_ , \[3.6\]\
\
&&\_= g\_+2g\_[m(]{}\_[)]{}X\^m+g\_[mn]{}\_X\^m\_X\^n . As for $\hat{g}_{\a\b}$ the field $\hat{{\cal B}}_{\a\b}$ is the pull-back to the D-5-brane worldvolume of the ${\rm NS}\otimes{\rm NS}$ 2-form potential. We set ${\cal B}$ to zero and expand the metric around flat space: $g_{ab}=\eta_{ab}+h_{ab}$. Then we expand the action (\[3.6\]) keeping the quadratic terms in the worldvolume fields and the linear terms in the bulk fields. Defining the scalar fields $\phi^m=X^m/(2\pi\a')$ the result is S\_[DBI]{}&\~&-(2’)\^2T\_5d\^6x , \[3.7\]\
\
T\_&=&G\_\^[ ]{}G\_ -\_(G\_)\^2 +\_\^m\_\_[m]{} -\_\_\^m\^\_m , where the indices are raised and lowered with respect to the Minkowski metric and $T_{\a\b}$ is the energy-momentum tensor of the abelian YM action (free terms in (\[3.7\])). The coupling between the fields $\Phi$ and $B_{\a}$ is determined by the coupling of $\phi_{10}$ and $h^{\a\b}$ to $B_{\a}$. Therefore we drop the last term in the action (\[3.7\]). The obvious generalisation of the interacting action to the $U(N)$ case is S\_[int]{}&=&d\^6x,\
\
T\_&=&[tr]{}( G\_\^[ ]{}G\_ -\_(G\_)\^2 +(\_\^m+i\[B\_,\^m\])(\_\_m+i\[B\_,\_m\]). \[3.8\]\
\
&&. -\_(\_\_m+i\[B\_,\_m\])\^2 +\_\[\_m,\_n\]\^2) . The situation is analogous to the calculation involving the D-3-brane [@Kleb; @Kleb..]. Note that it is straightforward to write the supersymmetric completion of (\[3.8\]) because $\phi_{10}$ couples to the SYM action and $h^{\a\b}$ to the corresponding energy-momentum tensor. We are just writing the interacting terms that follow from the SYM action, but there will be DBI-type corrections, and there may also be modifications to the energy-momentum tensor imposed by conformal invariance [@KlebGubs].
In the linear approximation the scalar fields in the ansatz (\[3.1\]) are identified with the tensor $h_{\a\b}$ according to h=h\^\_[ ]{}=8 , h\_[00]{}=-\_[5]{} , h\_[11]{}=2\_5 . \[3.9\] Keeping only the interacting terms with the field $\Phi$ that are quadratic in the worldvolume fluctuating field $A_{\a}$ we have (note that $G_{\a\b}=G^0_{\a\b}+F_{\a\b}$) S\_[int]{}=d\^6x [tr]{}( F\_F\^) . \[3.10\] As explained before we are just considering processes involving the massless excitations on the brane and keeping only the fields associated with the instanton strings. A similar calculation to the one in subsection 2.1.3 gives the following interacting term between $\Phi$ and the instanton strings S\_[int]{}=-dt\_0\^[L\_[eff]{}]{}dx\^1 \_[r=1]{}\^[4n\_L|[n]{}\_L]{}\_\^r\^\^r , \[3.11\] where $T_{ins}$ and $L_{eff}$ are given in (\[tension\]). The factor multiplying the integral is important.
Cross section
=============
In this section we calculate the cross section for the scattering of the D-brane bound state by the scalar particle $\Phi$ both in the supergravity and D-brane picture. We shall consider the supergravity solutions corresponding to the D-brane system vacuum and thermal states.
Classical calculation
---------------------
The equation of motion satisfied by the scalar field in the background (\[2.17\]) is, for the s-wave mode, (r)=0 . \[4.1\] Writing $\Phi=\rho^{-3/2}\Psi(\rho)$, where $\rho=\o r$, we have ()=0 . \[4.2\] For $\rho\gg\o r_i\sin{\D\th}$ (i.e. $r\gg r_i\sin{\D\th}$) we may neglect the ${\cal O}(1/\rho^4)$ term in comparison with the ${\cal O}(1/\rho^2)$ terms. In the low energy limit $\o r_i\ll 1$ we are considering, the differential equation satisfied by $\Psi(\rho)$ becomes ()=0 , \[4.3\] which is solved in terms of Bessel functions of order one.[^5]
If we perform instead the coordinate transformation $y=\o r_1r_2\sin{\D\th}/(2r^2)$ the differential equation (\[4.1\]) becomes (y)=0 . \[4.4\] The last term may be neglected for $y\gg\o \sin{\D\th}$ (i.e. $r\ll r_i$). In the coordinate $z=\sqrt{2y\o r_1r_2\sin{\D\th}}$ and with $\Phi=z\Upsilon(z)$ we have in the limit $\o r_i\ll 1$ (z)=0 , \[4.5\] which is again solved in terms of Bessel functions of order one. Since $\D\th\ll 1$ we conclude that the two equations (\[4.3\]) and (\[4.5\]) have a large overlapping domain and the corresponding solutions may be patched together.[^6]
The cross section may be calculated by using the flux method [@MaldStro]. In the near zone we require a purely infalling solution at $r=0$ and match it to the solution in the far zone. The result is & =z( J\_1(z)+N\_1(z)) ,\
\[4.6\]\
[Far region:]{} & =-J\_1() , where $J_1$ and $N_1$ are Bessel and Neumann functions, respectively. The cross section is obtained from the ratio between the flux at the horizon and the incoming flux at infinity \_[abs]{}== \^3ø( r\_1r\_2)\^2 . \[4.7\]
The calculation of the cross section for the non-extreme case is similar to the calculation presented in [@MaldStro]. In the far region the solution is the same as in the previous case, and in the near region $\Phi$ is expressed in terms of hypergeometric functions. The result is =A\_h , \[4.8\] where $A_h$ is the horizon area, the left- and right-moving temperatures were defined in (\[2.21\]) and =(+) , \[4.9\] is the inverse of the Hawking temperature.
D-brane calculation
-------------------
Now we calculate the absorption probability for incoming scalar particles when the D-brane system is in the vacuum state. The canonically normalised fields are $\tilde{\Phi}=\Phi/\kappa_6$ and $\tilde{\z}^r=\sqrt{T_{ins}}\z^r$. These fields have the following mode expansion \^r&=&\_q (\^r\_qe\^[iq\_x\^]{}+ \_q\^[r]{}e\^[-iq\_x\^]{}) ,\
\[4.10\]\
&=&\_[k\_1,]{} (\_ke\^[ikx]{}+\_k\^e\^[-ikx]{}) , where $q$ and $k_1$ are the corresponding momenta along the string direction and $\vec{k}$ the momentum in the transverse space with volume ${\cal V}_4$. Note that we are considering a six-dimensional free action for the field $\Phi$ that arises from compactification of the IIB theory on $T^4$ and not the five-dimensional action (\[3.2\]). The dependence on the string direction will in fact be irrelevant because we shall consider modes of the field satisfying $k_1=0$, i.e. we do not consider charged particles [@MaldStro; @KlebGubs1]. However, it is important to realize that the scalar particle is defined on a length $L_1=2\pi R_1$ while the instanton strings on a length $L_{eff}=L_1N_1N_2$. We have normalised the states such that $|q\rangle=\z_q^{r\dagger}|0\rangle$ represents a single particle with momentum $q$ in the length $L_{eff}$ and $|k\rangle=\Phi_k^{\dagger}|0\rangle$ a single particle with momentum $k$ in the volume $L_1{\cal V}_4$. Thus, from the spacetime perspective (i.e. integrating over the string direction) a state $|k\rangle$ carries a flux $1/{\cal V}_4$.
In terms of the canonically normalised fields the interacting vertex (\[3.11\]) becomes S\_[int]{}=dt\_0\^[L\_[eff]{}]{}dx\^1 \_[r=1]{}\^[4n\_L|[n]{}\_L]{}\_\^r\^\^r . \[4.11\] The initial and final states for the process considered are |i=\_k\^|0 , & k=(k\_0,0,) ,\
\[4.12\]\
|f=\_q\^[r]{}\_p\^[r]{}|0 , & q=(q\_0,q\_1) , p=(p\_0,p\_1) . The amplitude for this process is then T\_[fi]{}=- 2 . \[4.13\] The reason the supersymmetric completion of (\[4.11\]) was not considered is that on-shell fermions give a vanishing contribution to this amplitude. The final factor of two is because either of the $\z$’s in (\[4.11\]) may annihilate either of the final particle states in (\[4.12\]) [@DasMath]. The probability per unit of time for this transition to occur is then \_[fi]{}=L\_[eff]{}(2)\^2(k\_0-p\_0-q\_0)(p\_1+q\_1)|T\_[fi]{}|\^2 . \[4.14\] To obtain the total probability rate we have to sum over the $4n_L\bar{n}_L$ species of particles and integrate over the final momenta, dividing by two due to particle identity. The result is \_[abs]{}=4n\_L|[n]{}\_L\_[p,q]{}\_[fi]{}= L\_[eff]{}n\_L|[n]{}\_L\_5\^2ø . \[4.15\] Since the state $|i\rangle=|k\rangle$ carries the flux $1/{\cal V}_4$ we have that $\s_{abs}={\cal V}_4\Gamma_{abs}$, which agrees exactly with (\[4.7\]). Besides the rather successful D-3-brane case [@Kleb; @Kleb..; @KlebGubs] this is the first example where this calculation has been done by deducing from first principles the coupling between the bulk and the worldvolume fields.
The calculation when the D-brane system is described by a thermal state of left- and right-movers is done in the following way. We consider a unit normalised state $|n_R,n_L\rangle$ of the instanton strings with $n(p_R)$ and $n(p_L)$ right- and left-mover occupation numbers. Now the initial and final states for the process are |i=\_k\^|n\_R,n\_L , & k=(k\_0,0,) ,\
\[4.16\]\
|f=\_q\^[r]{}\_p\^[r]{}|n\_R,n\_L , & q=(q\_0,q\_1) , p=(p\_0,p\_1) . The amplitude for this process is T\_[fi]{}=- 2 (n(p)+1)(n(q)+1) . \[4.17\] The total probability rate is obtained by summing over all final states and averaging over all initial states in the thermal ensemble [@Dhar..]. This gives the desirable Bose-Einstein thermal factors. Agreement with (\[4.8\]) is found by using $\s_{abs}=V_4(\Gamma_{abs}-\Gamma_{emis})$, where $\Gamma_{emis}$ is the probability rate for the time reversed process [@Call..].
CFT/AdS duality
===============
In this section we start by analysing the region of validity of the previous cross section calculations. We shall define a double scaling limit [@Kleb] where the supergravity cross section calculation and our gauge theory calculation of the D-brane absorption probability should in fact agree. The last subsections are concerned with the near horizon geometry and Maldacena’s duality proposal [@Mald2].
Double scaling limit
--------------------
Consider the ground state of the D-brane system. We argued in subsection 2.2 that the supergravity approximation holds if the length scales in the solution are big in string units. In particular we have r\_1r\_21 g\_[eff]{}1 , \[5.1\] where $g_{eff}$ is the D-brane effective string coupling defined in (\[2.3\]). In this limit string corrections to the metric are suppressed (the string loop corrections are also suppressed because we are considering $g\ll 1$). The curvature of this background is bounded by its value at $r=0$ where \~ \~ . \[5.2\] The classical cross section is naturally expanded in powers of $\o^4{\rm curv}^{-2}$. Thus, for energies such that ø\^4(g\_[eff]{}’)\^21 , \[5.3\] we expect the classical approximation to the scattering process to be good. Both conditions (\[5.1\]) and (\[5.3\]) are satisfied in the double scaling limit [@Kleb] g\_[eff]{} , ø\^4’\^20, \[5.4\] such that (g\_[eff]{})\^2 (ø\^4’\^2) , \[5.5\] is held fixed and small, i.e. (\[5.3\]) holds. The second condition in (\[5.4\]) implies that the massive excitations on the D-5-brane may be neglected when comparing with the supergravity cross section calculation, i.e. it corresponds to the decoupling limit of the brane theory. These states have a mass that scales as $1/\sqrt{\a'}\gg \o$ (note that some of the massive states have a mass proportional to $\sqrt{\D\th}$ which is held fixed in the limit (\[5.4\])). This is the reason why they were dropped in the field theory description. They may be neglected in the double scaling limit.
Now we show that in the limit (\[5.4\]) the D-brane calculation may in fact be trusted (even if $g_{eff}\rightarrow\infty$). The only scale in the scattering calculation is given by the gravitational coupling $\kappa_6$, as may be seen in the interacting Lagrangian when written in terms of the canonically normalised fields. The cross section is then an expansion in powers of ø\^4\_6\^2n\_L|[n]{}\_L . \[5.6\] The $n_L\bar{n}_L$ factor is because we sum over all different species in the final state and the $L_{eff}/L_1=N_1N_2$ factor because the scalar particles live in a length $L_1$ while the instanton strings live in a length $L_{eff}$ (the state $|k\rangle=\Phi_k^{\dagger}|0\rangle$ corresponds to a single particle in the volume $L_1{\cal V}_4$ or $L_{eff}/L_1$ particles in the volume $L_{eff}{\cal V}_4$). The $\o^4$ factor follows from dimensional analysis. Thus, the perturbative string calculation is valid for ø\^4N\_1N\_2n\_L|[n]{}\_L\_6\^21 ø\^4(’g\_[eff]{})\^21 , \[5.7\] which holds in the limit (\[5.4\]).
We conclude that both the classical and string cross section calculations have an overlapping domain of validity and it is therefore not surprising that agreement is found.
Near horizon geometry
---------------------
As is the case for the D5-D1 brane configuration [@Hyun; @Boon..; @SfetSken], the near horizon geometry associated with our D-brane bound state is $AdS_3\times S^3\times M$, where $M$ is a compact manifold ($T^4$ in our case). Taking the limit $r\rightarrow 0$, we obtain the following fields describing the near horizon geometry ds\_[10]{}\^2&\~& ds\_3\^2+ ds\^2(T\^4)+R\^2dØ\_3\^2 ,\
\
ds\_3\^2&=& -d\^2+d\^2+\^2d\^2 , \[5.8\]\
\
e\^[2]{}&\~& ,\
\
[F]{}\_3&\~& 2(r\_1\^2\^2[\_1]{}+r\_2\^2\^2[\_2]{})\_[3]{} , where $R^2=r_1r_2\sin{\D\th}$, $\tau=\frac{R}{R_1}t$, $\varphi=\frac{x_1}{R_1}$, $\rho=\frac{R_1}{R}r$ and $\e_{3}$ is the unit 3-sphere volume form. To obtain the horizon value for ${\cal F}_3$ we used the behaviour at the horizon of the Chern-Simons terms in the solution. Note that the electric terms in ${\cal F}_3$ as well as ${\cal H}$ and ${\cal F}_5$ vanish at the horizon. $R$ is the 3-sphere radius and the $AdS_3$ cosmological constant is given by $\Lambda=-R^{-2}$. This geometry is interpreted as the ${\rm R}\otimes {\rm R}$ ground state of string theory on the $AdS_3\times S^3\times T^4$ background [@CousHenn; @Stro].
For later convenience we express the parameters in the solution (\[5.8\]) in terms of the field theory quantities R\^2&=&g’ \~g’g’ ,\
\
[F]{}\_3&\~&2g’(N\_1p\_1|[p]{}\_1+N\_2p\_2|[p]{}\_2)\_[3]{} 2g’Q\_5\_[3]{} , \[5.9\]\
\
v\_f&&= . The last identifications in these equations correspond to the D5-D1 brane case. The $T^4$ volume has its fixed value at the horizon while the six-dimensional string coupling $g_6$ has the same value as in the original solution where it was constant [@Mald2].
For the non-extreme solution the three-dimensional geometry in (\[5.8\]) valid in the near horizon region is replaced by the BTZ black hole [@Bana..]. In the previous $(\tau,\varphi)$ coordinates and defining a new radial coordinate $\rho^2=R_1^2(r^2+r_0^2\sinh^2{\b})/R^2$ [@BalaLars] the resulting metric reads &&ds\_3\^2=-N\^2d\^2+N\^[-2]{}d\^2+ \^2(d-N\_d)\^2 ,\
\[5.10\]\
&&N\^2=- + , N\_= . The derivation of this metric assumes that we are in the dilute gas regime and that the condition to neglect the massive string states is satisfied, i.e. r\^2, r\_n\^2, r\_0\^2r\_i\^2\^2[\_i]{}, r\_1r\_2 , \[5.11\] where we are assuming that we are close enough to the horizon such that $r$ satisfies these conditions. This geometry corresponds to an excited (thermal) state of string theory on the $AdS_3\times S^3\times T^4$ background [@Mald2]. Using the formulation of quantum gravity in $2+1$ dimensions as a topological Chern-Simons theory [@AchuTown; @Witt2], Carlip found that the degrees of freedom at the horizon are described by a $1+1$-dimensional conformal field theory reproducing the entropy formula for the BTZ black hole [@Carl]. A different approach, originally due to Strominger [@Stro; @BalaLars], is based on the fact that any quantum theory of gravity on $AdS_3$ has an asymptotic algebra of diffeomorphisms given by the Virasoro algebra [@CousHenn].[^7] Physical states will form representations of this algebra and the correct entropy formula follows (for the correct central charge). Of course all these results are valid in our model because the near horizon geometry is similar to the D5-D1 brane case. The only difference is the way we parametrise the solutions.
CFT/AdS correspondence
----------------------
The motivation for the conjectured duality between the decoupled theory on the brane and supergravity on the $AdS_3\times S^3\times T^4$ background [@Mald2] relies in part on the agreement between the entropy calculations and the scattering calculations in the double scaling limit (\[5.4\]). The region of validity of the supergravity description of the near horizon geometry is given in eqn. (\[5.1\]) which reads \~g1 . This may be accomplished by taking the $\a'\rightarrow 0$ limit g\_[YM]{}\^2=(2)\^3g’0 , N\_1\~N\_2 , \[5.12\] such that R\^2\~g\^2\_[YM]{} , \[5.13\] is held fixed. Note that in this limit all the fields in (\[5.8\]) are held fixed as may be seen from (\[5.9\]). This limit is equivalent to the double scaling limit (\[5.4\]), the difference is that the energies are held fixed and $\a'\rightarrow 0$.
Now on the D-brane side the limit (\[5.12\]) is just the ’t Hooft large $N$ limit. The advantage of formulating Maldacena’s duality conjecture using this model is that we know, at least in principle, the action for the D-5-brane and its coupling to the bulk fields. As in the analysis given by Alwis for the D-3-brane [@Alwis], we consider the ’t Hooft scaling for the D-5-brane action and see what conclusions we may draw. Schematically, ’t Hooft scaling may be analysed by writing (the factor $\sqrt{N_1N_2}$ replaces the usual factor of $N$ because we are considering the Higgs branch of the theory and the fields $b_{\a}$ are in the fundamental representation of $U(N_1p_1\bar{p}_1)\otimes U(\overline{N_2p_2\bar{p}_2})$) S\~- d\^6x . \[5.14\] Rescaling the fields as G\~ \~ , =d+\[,\] , \[5.15\] we obtain the action S\~- d\^6x . \[5.16\] Note that the background field $\tilde{G}^0$ remains finite, i.e. $\tilde{G}^0\sim {\rm diag}(\tan{\th_1},...)/R$. The $1/\a'^2$ factor in front of the action is important because we are compactifying the theory on $T^4$ with a volume $V_4\sim\a'^2$ and therefore it ensures that the action remains well defined in the limit $\a'\rightarrow 0$. We are keeping $|\tan{\th_i}|\ll 1$ such that our gauge theory fluctuating spectrum does not suffer from DBI and derivative corrections. However, for processes involving energies such that $E\sim 1/R$ there will be DBI corrections [@Alwis]. In the infrared limit $E\ll 1/R$ we recover the SYM description and after reduction to $1+1$ dimensions we recover the superconformal limit in the original derivation of the duality [@Mald2]. Also, this limit corresponds on the supergravity side to the $r\rightarrow 0$ limit and we recover the near horizon geometry (moving in $r$ corresponds to moving in the energy scales on the field theory side [@Mald2]).
Our model gives a definite proposal for the conformal theory and for the coupling of the conformal fields to the bulk fields on the $AdS_3$ boundary. In other words, Maldacena’s duality proposal may be recast in the following form: [*The Higgs branch of the large $N$ limit of 6-dimensional SYM theory compactified on $T^4$ with a ’t Hooft twist is dual to supergravity on $AdS_3\times S^3\times T^4$*]{}. The parameters relating the dual theories have already been explained. The coupling to the bulk fields is determined by the DBI action.
A more precise formulation of the duality conjecture was given by means of calculating conformal field theory correlators using the supergravity near horizon geometry [@Gubs..; @Witt3]. Unfortunately, the number of calculations that may be done to test this conjecture is very limited because in the overlapping domain of validity of the dual theories the ’t Hooft coupling of the gauge theory is very large. Also, one would like to investigate whether this duality extends away from the conformal and near horizon limits. In this case we need the full strongly coupled DBI action. Our model is a starting point to perform such computations in parallel with the D-3-brane case [@Gubs..1; @GubsHash] (see also [@Teo; @Mari] for work on the D5-D1 brane system).
In the following we shall argue that in the ’t Hooft limit the tension of the instanton strings in (\[tension\]) gets normalised and it scales as T\_[eff]{}\~\~ , \[5.17\] confirming the results in [@Math; @Gubs; @HassWadia]. The argument is rather heuristic and by no means rigorous. The compactification of the action (\[5.16\]) gives a bosonic action of the type S=- d\^2x ( G\_[rs]{}() \_\^r\^\^s+ ...) , \[5.18\] where ... denote the DBI corrections and the fields $\tilde{\z}$ are dimensionless. Now it is hoped that in the limit $N_i\rightarrow \infty$ the Feynman rules that follow from this action define an effective action that reproduces them. Of course this will involve very large Feynman graphs because the ’t Hooft coupling is becoming very large. Such an effective action should be identified with the rather successful effective string action used in the computations of scattering amplitudes. In this limit the only scale in the problem is $R$. We are forced to conclude that the effective string tension scales as in (\[5.17\]). Note that we are arguing here that it is the gauge field $A_{\a}$ that is associated with the effective string description. An opposite point of view was advocated in [@Alwis]. One could argue that a tension like T\_[eff]{}\~\~ , \[5.19\] would remain finite in the large $N$ limit. This is in fact true. However, it is difficult to see how it would arise when considering the Feynman rules that should give rise to the effective string action defined above. The reasons are: Firstly, the fields $b_{\ha}$ associated with the instanton strings transform under the fundamental representation of $U(N_1p_1\bar{p}_1)\otimes U(\overline{N_2p_2\bar{p}_2})$, therefore the trace of any gauge invariant combination of these fields depends on $N_i$ through a power of $N_1N_2$ only; Secondly, all couplings in the action (\[5.16\]) depend on $N_i$ in the same way. It follows that any scattering amplitude is bound to depend on $N_i$ through the particular combination $N_1N_2$.
Conclusion
==========
Let us start by summarising our results. We have argued that a model based on D-5-branes with a constant self-dual field strength on $T^4$ describes 5-dimensional black holes within a perturbative string theory framework and that the D5-D1 system constitutes a special case of this model. The fluctuating spectrum of this bound state is described by Polchinski’s open strings ending on the D-5-branes. This means that the Higgs branch of the theory, which describes the “internal” excitations of the bound state, is associated with fundamental modes of the worldvolume fields. We may take the volume of the $T^4$ to satisfy $V_4\sim \a'^2$ while string derivative corrections are negligible. The explicit knowledge of the microscopics of the D-brane bound state allowed us to make a definite proposal for the conformal field theory governing the Higgs branch of the theory. This means that we may deduce from first principles the coupling of the bulk fields to the worldvolume fields. We have done this for a minimally coupled scalar and found agreement with the supergravity scattering cross section calculation. Also, the explicit knowledge of this conformal theory is relevant for Maldacena’s duality proposal. We think that our model could be a starting point for the investigation of the field theory side of this duality conjecture.
Regrettably, we have left for the future a number of calculations that should prove or disprove the validity of our approach to black hole dynamics. One should calculate the coupling of the instanton strings to the minimally coupled scalars arising from the internal metric on the $T^4$ as well as to the fixed scalars. In these cases we expect that the fermions will contribute to the corresponding scattering processes. We may use fermionisation of the bosonic action or find the supertorons, i.e. the fermionic partners of the toronic excitations associated with the fields $b_{\a}$ (and $d_m$). An alternative approach to this problem is to use the string description of the D-brane bound state and to calculate the corresponding scattering amplitude using string techniques. This will involve the usual disk diagram with three vertices (two of which are on the disk boundary). One should also generalise the D-5-brane bound state so that it allows $f=4$ in the D5-D1 brane bound state case. It would also be interesting to reproduce this bound state while suppressing the DBI corrections. Another interesting problem is to find the hyperkähler metric $G_{rs}$ determining the superconformal field theory of the instanton strings. This would provide us with a better understanding of the interacting theory. Another problem is to consider a D-5-brane configuration with instantons and anti-instantons [@Dijk..] (in this case there are tachyonic modes in the spectrum which signal the instability of the configuration). Knowledge of the interacting theory could be relevant to understand the entropy formula away from extremality and the dilute gas regime. One would also like to generalise the field theory description with a ’t Hooft twist to other compact manifolds, for example $K3$.
Hopefully, a better understanding of this model will shed light on the string theory approach to black hole physics.
Acknowledgements {#acknowledgements .unnumbered}
================
I would like to thank Jerome Gauntlett and Bobby Acharya for a very informative discussion and Malcolm Perry for discussions and for reading the paper. The financial support of FCT (Portugal) under programme PRAXIS XXI is gratefully acknowledged.
Note added {#note-added .unnumbered}
==========
After this work appeared as a pre-print I learned that a similar calculation to the one presented in sections 3 and 4 for the D-brane emission rates using the DBI action was carried out by S.D. Mathur [@Math1].
[40]{} D. Youm, [*Black holes and solitons in string theory*]{}, hep-th/9710046. A.W. Peet, [*The Bekenstein formula and string theory (N-brane theory)*]{}, hep-th/9712253. A. Strominger and C. Vafa, Phys. Lett. [**B379**]{} (1996) 99. J.M. Maldacena, Nucl. Phys. [**B477**]{} (1996) 168. S.R. Das and S.D. Mathur, Nucl. Phys. [**B478**]{} (1996) 561; Nucl. Phys. [**B482**]{} (1996) 153. J.M. Maldacena and A. Strominger, Phys. Rev. [**D55**]{} (1997) 861. A. Dhar, G. Mandal and S.R. Wadia, Phys. Lett. [**B388**]{} (1996) 51. S.S. Gubser and I.R. Klebanov, Nucl. Phys. [**B482**]{} (1996) 173; Phys. Rev. Lett. [**77**]{} (1996) 4491. C.G. Callan, S.S. Gubser, I.R. Klebanov and A.A. Tseytlin, Nucl. Phys. [**B489**]{} (1997) 65. I.R. Klebanov and S.D. Mathur, Nucl.Phys. [**B500**]{} (1997) 115. I.R. Klebanov and M. Krasnitz, Phys. Rev. [**D55**]{} (1997) 3250. J.M. Maldacena and A. Strominger, Phys. Rev. [**D56**]{} (1997) 4975. S.W. Hawking and M.M. Taylor-Robinson, Phys. Rev. [**D55**]{} (1997) 7680. I.R. Klebanov, A. Rajaraman and A.A. Tseytlin, Nucl.Phys. [**B503**]{} (1997) 157. S.D. Mathur, Nucl. Phys. [**B514**]{} (1998) 204. S.S. Gubser, Phys. Rev. [**D56**]{} (1997) 4984. M. Cvetič and F. Larsen, Phys. Rev. [**D56**]{} (1997) 4994; Nucl. Phys. [**B506**]{} (1997) 107. K. Hosomichi, Nucl. Phys. [**B524**]{} (1998) 312. I.R. Klebanov, Nucl. Phys. [**B496**]{} (1997) 231. S.S. Gubser, I.R. Klebanov and A.A. Tseytlin, Nucl. Phys. [**B499**]{} (1997) 217. S.S. Gubser and I.R. Klebanov, Phys. Lett. [**B413**]{} (1997) 41. J. Polchinski, Phys. Rev. Lett. [**75**]{} (1995) 4724; [*TASI Lectures on D-branes*]{}, hep-th/9611050. M.S. Costa and M.J. Perry, Nucl. Phys. [**B524**]{} (1998) 333. M.S. Costa and M.J. Perry, Nucl. Phys. [**B520**]{} (1998) 205. J.M. Maldacena, [*The Large N limit of superconformal field theories and supergravity*]{}, hep-th/9711200. S.S. Gubser, I.R. Klebanov and A.M. Polyakov, Phys. Lett. [**B428**]{} (1998) 105. E. Witten, [*Anti De Sitter Space And Holography*]{}, hep-th/9802150. S.F. Hassan and S.R. Wadia, [*Gauge theory description of D-brane black holes: Emergence of the effective SCFT and Hawking radiation*]{}, hep-th/9712213. E. Witten, Nucl. Phys. [**B460**]{} (1995) 335. G. ’t Hooft, Nucl. Phys. [**B153**]{} (1979) 141; Commun. Math. Phys. [**81**]{} (1981) 267. P. Van Baal, Commun. Math. Phys. [**85**]{} (1982) 529; Commun. Math. Phys. [**94**]{} (1984) 397. Z. Guralnik and S. Ramgoolam, Nucl. Phys. [**B499**]{} (1997) 241; Nucl. Phys. [**B521**]{} (1998) 129. A. Hashimoto and W. Taylor, Nucl. Phys. [**B503**]{} (1997) 193. M. Douglas, [*Branes within Branes*]{}, hep-th/9512077. A. Abouelsaood, C.G. Callan, C.R. Nappi and S.A. Yost, Nucl. Phys. [**B280**]{} (1987) 599. J.M. Maldacena, Phys. Rev. [**D55**]{} (1997) 7645. J.M. Maldacena, [*Black Holes in String Theory*]{}, Ph.D. Thesis, hep-th/9607235. R. Dijkgraaf, E. Verlinde and H. Verlinde, Nucl. Phys. [**B506**]{} (1997) 121. M. Douglas, J. Polchinski and A. Strominger, J. High Energy Phys. [**12**]{} (1997) 003. Y. Kitazawa, Nucl. Phys. [**B289**]{} (1987) 599. O.D. Andreev and A.A. Tseytlin, Nucl. Phys. [**B311**]{} (1988) 205. C. Callan and J.M. Maldacena, Nucl. Phys. [**B472**]{} (1996) 591. O. Aharony, M. Berkooz, S. Kachru, N. Seiberg and E. Silverstein, Adv. Theor. Math. Phys. [**1**]{} (1998) 148. E. Witten, J. High Energy Phys. [**07**]{} (1997) 3. E. Witten, Strings ’95 (World Scientific, 1996), ed. I. Bars et al., 501. D. 
Brecher, [*BPS States of the Non-Abelian Born-Infeld Action*]{}, hep-th/9804180. A.A. Tseytlin, Nucl. Phys. [**B501**]{} (1997) 41. J.M. Maldacena, [*Branes probing black holes*]{}, hep-th/9710014. J.M. Maldacena and L. Susskind, Nucl. Phys. [**B475**]{} (1996) 679. G.T. Horowitz, J.M. Maldacena and A. Strominger, Phys. Lett. [**B383**]{} (1996) 151. J. Preskill, P. Schwarz, A. Shapere, S. Trivedi and F. Wilczek, Mod. Phys. Lett. [**A6**]{} (1991) 2353. C. Holzhey and F. Wilczek, Nucl. Phys. [**B380**]{} (1992) 447. P.Kraus and F. Wilczek, Nucl. Phys. [**B433**]{} (1995) 403. S. Hyun, [*U-duality between Three and Higher Dimensional Black Holes*]{}, hep-th/9704005. H.J. Boonstra, B. Peeters and K. Skenderis, Phys. Lett. [**B411**]{} (1997) 59. K. Sfetsos and K. Skenderis, Nucl. Phys. [**B517**]{} (1998) 179. O. Coussaert and M. Henneaux, Phys. Rev. Lett. [**72**]{} (1994) 183. A. Strominger, J. High Energy Phys. [**02**]{} (1998) 009. M. Banados, C. Teitelboim and J. Zanelli, Phys. Rev. Lett. [**69**]{} (1992) 1849. A. Achúcarro and P.K. Townsend, Phys. Lett. [**B180**]{} (1986) 89. E. Witten, Nucl. Phys. [**B311**]{} (1989) 46. S. Carlip, Nucl. Phys. [**B362**]{} (1991) 111. V. Balasubramanian and F. Larsen, [*Near Horizon Geometry and Black Holes in Four Dimensions*]{}, hep-th/9802198. D. Birmingham, I. Sachs and S. Sen, Phys. Lett. [**B424**]{} (1998) 275. T. Lee, [*Topological Ward identity and anti-de Sitter space/CFT correspondence*]{}, hep-th/9805182; [*The Entropy of the BTZ black hole and AdS/CFT correspondence*]{}, hep-th/9806113. K. Behrndt, [*Branes in N=2, D = 4 supergravity and the conformal field theory limit*]{}, hep-th/9801058. M. Banados, T. Brotz and M.E. Ortiz, [*Boundary dynamics and the statistical mechanics of the (2+1)-dimensional black hole*]{}, hep-th/9802076. N. Kaloper, [*Entropy count for extremal three-dimensional black strings*]{}, hep-th/9804062. J.M. Maldacena and A. Strominger, [*AdS(3) black holes and a stringy exclusion principle*]{}, hep-th/9804085. S. Deger, A. Kaya, E. Sezgin and P. Sundell, [*Spectrum of D = 6, N=4b supergravity on $AdS_3\times S^3$*]{}, hep-th/9804166. K. Behrndt, I. Brunner and I. Gaida, [*Entropy and conformal field theories of AdS(3) models*]{}, hep-th/9804159; [*AdS(3) gravity and conformal field theories*]{}, hep-th/9806195. M. Cvetič and F. Larsen, [*Near Horizon Geometry of Rotating Black Holes in Five Dimensions*]{}, hep-th/9805097; [*Microstates of Four-Dimensional Rotating Black Holes from Near-Horizon Geometry*]{}, hep-th/9805146. M. Banados, K. Bautier, O. Coussaert, M. Henneaux M. Ortiz, [*Anti-de Sitter/CFT correspondence in three-dimensional supergravity*]{}, hep-th/9805165. J.M. Evans, M.R. Gaberdiel and M.J. Perry, [*The no ghost theorem for AdS(3) and the stringy exclusion principle*]{}, hep-th/9806024. M. Banados and M.E. Ortiz, [*The Central charge in three-dimensional anti-de Sitter space*]{}, hep-th/9806089. F. Larsen, [*The Perturbation Spectrum of Black Holes in N=8 Supergravity*]{}, hep-th/9805208; [*Anti-DeSitter Spaces and Nonextreme Black Holes*]{}, hep-th/9806071. J. de Boer, [*Six-dimensional supergravity on $S^3\times AdS_3$ and 2-D conformal field theory*]{}, hep-th/9806104. S. Hwang. [*Unitarity of strings and noncompact Hermitian symmetric spaces*]{}, hep-th/9806049. R. Emparan and I. Sachs, [*Quantization of AdS(3) black holes in external fields*]{}, hep-th/9806122. A. Giveon, D. Kutasov and N. Seiberg, [*Comments on String Theory on $AdS_3$*]{}, hep-th/9806194. S.P. 
Alwis, [*Supergravity the DBI Action and Black Hole Physics*]{}, hep-th/9804019. S.S. Gubser, A. Hashimoto, I.R. Klebanov and M. Krasnitz, [*Scalar absorption and the breaking of the world volume conformal invariance*]{}, hep-th/9803023. S.S. Gubser and A. Hashimoto, [*Exact absorption probabilities for the D3-brane*]{}, hep-th/9805140. E. Teo, [*Black hole absorption cross-sections and the anti-de Sitter-conformal field theory correspondence*]{}, hep-th/9805014. M.M. Taylor-Robinson, [*The D1-D5 brane system in six dimensions*]{}, hep-th/9806132. S.D. Mathur, private communication.
[^1]: M.S.Costa@damtp.cam.ac.uk
[^2]: We are assuming here that $p_1, p_2$ and $\bar{p}_1, \bar{p}_2$ are co-prime. It is not difficult to drop this condition [@CostaPerry].
[^3]: We remark that this fact may be related to the fact that our field strength background does not have a minimal integer instanton number. $N_{ins}$ is always a multiple of $2$. Generalising our results to arbitrary integer instanton number may allow the possibility $f=4$.
[^4]: In this subsection the indices $a,b,...$ run over 0,6,...9, otherwise they are ten-dimensional spacetime indices.
[^5]: An alternative resolution is to keep the $\o r_i$ terms, solve the differential equation in terms of Bessel functions of order $\pm\sqrt{1-\o^2\left( r_1^2+r_2^2\right)}$ and take the limit $\o r_i\ll 1$ at the end. Within our approximation the final result is the same.
[^6]: If $\Delta\th\sim1$ which happens for the D5-D1 brane system (with $f=16$) the near and far regions do not overlap. In this case there are $\omega r_i$ corrections which are suppressed within our approximation [@MaldStro].
[^7]: See refs. \[64-80\] for recent work on the subject.
|
---
abstract: 'We provide an algorithm that uses Bayesian randomized benchmarking in concert with a local optimizer, such as SPSA, to find a set of controls that optimizes the average gate fidelity. We call this method Bayesian ACRONYM tuning as a reference to the analogous ACRONYM tuning algorithm. Bayesian ACRONYM distinguishes itself in its ability to retain prior information from experiments that use nearby control parameters, whereas traditional ACRONYM tuning does not use such information and can require many more measurements as a result. We prove that such information reuse is possible under the relatively weak assumption that the true model parameters are Lipschitz-continuous functions of the control parameters. We also perform numerical experiments that demonstrate that over-rotation errors in single qubit gates can be automatically tuned from $88\%$ to $99.95\%$ average gate fidelity using less than $1kB$ of data and fewer than $20$ steps of the optimizer.'
author:
- John Gamble
- Chris Granade
- Nathan Wiebe
bibliography:
- 'apsrev-control.bib'
- 'bayesian-acronym.bib'
date: authors in alphabetical order
nocite: '[@apsrev41Control]'
title: Bayesian ACRONYM Tuning
---
Introduction
============
Tuning gates in quantum computers is a task of fundamental importance to building a quantum computer. Without tuning, most quantum computers would have insufficient accuracy to implement a simple algorithm, let alone achieve the stringent requirements on gate fidelity imposed by quantum error correction [@fowler2009high; @cross_comparative_2007]. Historically, qubit tuning has largely been done by experimentalists refining an intelligent initial guess for the physical parameters by hand to account for the idiosyncrasies of the device. Recently, alternatives have been invented that allow devices to be tuned in order to improve performance on real-world estimates of gate quality. These methods, often based on optimizing quantities such as average gate fidelities, are powerful but come with two drawbacks. At present, all such methods require substantial input data to compute the average gate fidelity and estimate its gradient, and no method can use information from the history of an optimization procedure to reduce such data needs. Our approach, which we call Bayesian ACRONYM tuning (or BACRONYM), addresses these problems.
BACRONYM is based strongly on the ACRONYM protocol invented by Ferrie and Moussa [@fm_robust_2015]. There are two parts to the ACRONYM gate tuning protocol. The first uses randomized benchmarking [@magesan_characterizing_2012] to obtain an estimate of gate fidelity as a function of the controls. The second optimizes the average gate fidelity using a local optimizer such as Nelder-Mead or stochastic gradient descent. While many methods can be used to estimate the average gate fidelity, randomized benchmarking is of particular significance because of its ability to give an efficient estimate of the average gate fidelity under reasonable assumptions [@pry+_what_2017], and because of its amenability to experimental application [@heeres_implementing_2016]. The algorithm then uses a protocol, similar to SPSA [@spa_multivariate_1992], to optimize the estimate of the gate fidelity by changing the experimental controls and continues to update the parameters until the desired tolerance is reached.
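To make the optimization step concrete, the following sketch shows a single SPSA-style ascent step on a noisy estimate of the average gate fidelity. It is an illustration only: `estimate_agf` is a hypothetical stand-in for whatever fidelity-estimation routine (such as the randomized benchmarking procedure above) is used, and the gain constants `a` and `c` are placeholders that would in practice be decayed with the iteration number as in [@spa_multivariate_1992].

```python
import numpy as np

def spsa_step(theta, estimate_agf, a=0.1, c=0.05, rng=None):
    """One SPSA ascent step on a noisy objective (here, an estimated AGF).

    theta        : current vector of control parameters
    estimate_agf : callable returning a (noisy) scalar estimate of AGF(theta)
    a, c         : gain constants (placeholders; decayed with iteration in practice)
    """
    rng = np.random.default_rng() if rng is None else rng
    # Simultaneous random perturbation with +/-1 entries.
    delta = rng.choice([-1.0, 1.0], size=theta.shape)
    f_plus = estimate_agf(theta + c * delta)
    f_minus = estimate_agf(theta - c * delta)
    # Every component of the gradient is estimated from only two evaluations.
    grad_est = (f_plus - f_minus) / (2.0 * c) * delta  # delta_i = +/-1, so 1/delta_i = delta_i
    return theta + a * grad_est  # ascend, since we wish to maximize fidelity
```

Only two fidelity estimates are needed per step regardless of the number of control parameters, which is what makes an SPSA-style optimizer attractive when each estimate is experimentally expensive.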
The optimization used in ACRONYM simply involves varying a parameter slightly and applying the fidelity estimation protocol from scratch every time. When a quantum system is evaluated at two nearby points in parameter space, an operation performed repeatedly in descent algorithms, the objective function does not typically change much in practice. Since ACRONYM does not take this into account, it requires more data than is strictly needed. Thus, if ACRONYM could be modified to use prior information extracted from the previous iteration in SPSA, the data needed to obtain an estimate of the gradient could be reduced.
Bayesian methods provide a natural means to use prior information within parameter estimation and have been used previously to analyze randomized benchmarking experiments. These methods yield estimates of the average gate fidelity based on prior beliefs about the randomized benchmarking parameters as well as the evidence obtained experimentally [@gfc_accelerated_2015]. To use a Bayesian approach, we begin by taking as input a probability distribution for the average gate fidelity ($\AGF$) as a function of the control parameters $\vec{\theta}$, $\Pr(\AGF|\vec{\theta})$. This is our prior belief about the average gate fidelity. In addition to a prior, we need a method for computing the likelihood of witnessing a set of experimental evidence $E$. This is known as the likelihood function; in the case of Bayesian randomized benchmarking, it is $\Pr(E|\AGF;\vec{\theta})$. Given these as input, we then seek to output an approximation to the posterior probability distribution, *i.e.,* the probability with which the AGF takes a specific value conditioned on our prior belief and $E$. To accomplish this, we use Bayes’ theorem, which states that $$\Pr(\AGF|E;\vec{\theta}) = \frac{\Pr(\AGF|\theta) \Pr(E|\AGF;\theta)}{\Pr(E|\theta)},\label{eq:Bayes}$$ where $\Pr(E|\theta)$ is just a normalization constant. From the posterior distribution $\Pr(\AGF|E;\vec{\theta})$ we can then extract a point estimate of the $\AGF$ (by taking the mean) or estimate its uncertainty (by computing the variance).
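As a minimal illustration of Eq. (\[eq:Bayes\]), the sketch below performs the update on a discretized grid of $\AGF$ values. The data model is an assumption made only for the example: survival counts for a sequence of length $m$ are taken to be binomial with survival probability $A p^m + B$, where $p = (d\,\AGF - 1)/(d - 1)$ and the SPAM constants $A$ and $B$ are treated as known.

```python
import numpy as np
from math import comb

def rb_survival_prob(agf, m, A=0.5, B=0.5, d=2):
    """Survival probability for an RB sequence of length m (zeroth-order model)."""
    p = (d * agf - 1.0) / (d - 1.0)
    return A * p**m + B

def bayes_update(agf_grid, prior, m, n_shots, n_survived, A=0.5, B=0.5, d=2):
    """Posterior Pr(AGF | E; theta) on a grid, for binomial RB evidence E."""
    q = rb_survival_prob(agf_grid, m, A, B, d)
    likelihood = comb(n_shots, n_survived) * q**n_survived * (1 - q)**(n_shots - n_survived)
    posterior = prior * likelihood
    return posterior / np.trapz(posterior, agf_grid)   # division by Pr(E | theta)

# Example: flat prior on AGF in [0.5, 1]; one length-50 sequence, 80 of 100 shots survive.
agf_grid = np.linspace(0.5, 1.0, 2001)
prior = np.ones_like(agf_grid)
posterior = bayes_update(agf_grid, prior, m=50, n_shots=100, n_survived=80)
mean_agf = np.trapz(agf_grid * posterior, agf_grid)                   # point estimate
var_agf = np.trapz((agf_grid - mean_agf) ** 2 * posterior, agf_grid)  # uncertainty
```

The mean and variance of the returned posterior then serve as the point estimate and uncertainty referred to above.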
Our work combines these two ideas to show that, provided the quantum channels that describe the underlying gates are continuous functions of the control parameters, the uncertainty in parameters like $\AGF$ incurred by transitioning from $\vec{\theta}$ to $\vec{\theta'}$ in the optimization process is also a continuous function of $\|\vec{\theta} - \vec{\theta'}\|$. This gives us a rule that we can follow to argue how much uncertainty we have to add to our posterior distribution $\Pr(\AGF|E;\vec{\theta})$ to use it as a prior $\Pr(\AGF|\vec{\theta'})$ at the next step of the gradient optimization procedure.
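Continuing the grid representation from the previous sketch, one simple way to implement such a rule numerically is to broaden the posterior by an amount proportional to the distance moved in control space before reusing it as a prior. The Lipschitz bound and the Gaussian broadening kernel below are modelling assumptions made for illustration; they are not the specific construction proven later in the text.

```python
import numpy as np

def broaden_prior(agf_grid, posterior, step_size, lipschitz_bound):
    """Turn Pr(AGF | E; theta) into a prior Pr(AGF | theta') by injecting uncertainty
    proportional to the distance ||theta - theta'|| moved in control space."""
    sigma = lipschitz_bound * step_size          # extra standard deviation to inject
    dx = agf_grid[1] - agf_grid[0]
    if sigma < dx:                               # negligible move: reuse the posterior as-is
        return posterior
    offsets = np.arange(-4 * sigma, 4 * sigma + dx, dx)
    kernel = np.exp(-0.5 * (offsets / sigma) ** 2)
    kernel /= kernel.sum()
    prior = np.convolve(posterior, kernel, mode="same")
    return prior / np.trapz(prior, agf_grid)     # renormalize on the grid
```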
Notation
--------
The notation that we use in this paper necessarily spans several fields, most notably Bayesian inference and randomized benchmarking theory. Here we will introduce the necessary notation from these fields in order to understand our results. For any distribution $\Pr(\vec{x})$ over a vector $\vec{x}$ of random variables, we write $\supp(\Pr(\vec{x}))$ to mean the set of vectors $\vec{x}$ such that $\Pr(\vec{x})>0$. When it is clear from context, we will write $\supp(\vec{x} | \vec{y})$ in place of $\supp(\Pr(\vec{x} | \vec{y}))$.
Let ${\mathcal{H}}= {\mathbb{C}}^{d}$ be a finite-dimensional Hilbert space describing the states of a quantum system of interest, and let ${\mathrm{L}}({\mathcal{H}})$ be the set of linear operators acting on ${\mathcal{H}}$. Let $\Herm({\mathcal{H}}) \subsetneq {\mathrm{L}}({\mathcal{H}})$ and ${\mathrm{U}}({\mathcal{H}}) \subsetneq {\mathrm{L}}({\mathcal{H}})$ be the sets of Hermitian and unitary operators acting on ${\mathcal{H}}$, respectively. For the most part, however, we are not concerned directly with pure states $\ket{\psi} \in {\mathcal{H}}$, but with classical distributions over such states, described by density operators $\rho \in {\mathrm{D}}({\mathcal{H}}) \subsetneq \Herm({\mathcal{H}}) \subsetneq {\mathrm{L}}({\mathcal{H}})$. Whereas ${\mathcal{H}}$ transforms under ${\mathrm{U}}({\mathcal{H}})$ by left action, ${\mathrm{D}}({\mathcal{H}})$ transforms under ${\mathrm{U}}({\mathcal{H}})$ by the *group action* $\bullet : {\mathrm{U}}({\mathcal{H}}) \times {\mathrm{L}}({\mathcal{H}}) \to {\mathrm{L}}({\mathcal{H}})$, given by $U \bullet \rho \defeq U\rho U^\dagger$. We note that $\bullet$ is linear in its second argument, such that for a particular $U \in {\mathrm{U}}({\mathcal{H}})$, $U \bullet : {\mathrm{L}}({\mathcal{H}}) \to {\mathrm{L}}({\mathcal{H}})$ is a linear function. We thus write that $U \bullet {} \in {\mathrm{L}}({\mathrm{L}}({\mathcal{H}}))$. Moreover, since $U \bullet {}$ is a completely positive and trace preserving map on ${\mathrm{L}}({\mathcal{H}})$, we say that $U \bullet {}$ is a *channel* on ${\mathcal{H}}$, written ${\mathrm{C}}({\mathcal{H}}) \subsetneq {\mathrm{L}}({\mathrm{L}}({\mathcal{H}})) \subsetneq {\mathrm{L}}({\mathcal{H}}) \to {\mathrm{L}}({\mathcal{H}})$. More generally, we take ${\mathrm{C}}({\mathcal{H}})$ to be the set of all such completely positive and trace preserving maps acting on ${\mathrm{L}}({\mathcal{H}})$.
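The notation above can be made concrete in a few lines of code. The following sketch (purely illustrative) represents the group action $U \bullet \rho = U\rho U^\dagger$ both directly and as a linear map on the column-stacked vectorization of $\rho$, which is a convenient representation for composing channels later.

```python
import numpy as np

def bullet(U, rho):
    """Group action U . rho = U rho U^dagger on density operators."""
    return U @ rho @ U.conj().T

def unitary_superop(U):
    """U . as an element of L(L(H)): a matrix acting on vec(rho) (column-stacking)."""
    return np.kron(U.conj(), U)

def vec(rho):
    """Column-stacking vectorization of a matrix."""
    return rho.reshape(-1, order="F")

# Quick self-consistency check on a qubit state.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
rho = np.array([[0.7, 0.2 - 0.1j], [0.2 + 0.1j, 0.3]])
assert np.allclose(unitary_superop(H) @ vec(rho), vec(bullet(H, rho)))
```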
Problem Description
-------------------
Before proceeding further, it is helpful to carefully define the problem that we address with BACRONYM. In particular, let $G = \langle V_0, \dots, V_{\ell - 1} \rangle \subsetneq {\mathrm{U}}({\mathcal{H}})$ be a group and a unitary 2-design [@dankert_exact_2006], such that $G$ is appropriate for use in standard randomized benchmarking. Often, $G$ will be the Clifford group acting on a Hilbert space of dimension $d$, but smaller twirling groups may be chosen in some circumstances [@ian_pc]. We consider the generator $T$ to be a gate, which we would like to tune to be $V_0$ (without loss of generality), as a function of a vector $\vec{\theta}$ of control parameters, such that $T = T(\vec{\theta})$. We write that $V_i {\perp\!\!\!\!\perp}\vec{\theta}$ for all $i \ge 0$ to indicate that the generators $\{V_0, \dots, V_{\ell - 1}\}$ are not functions of the controls $\vec{\theta}$ (note that $V_0$ is manifestly not a function of the controls because it represents the ideal action). Nonetheless, it is often convenient to write that $V_i = V_i(\vec{\theta})$ with the understanding that $\partial_{\theta_j} V_i = 0$ for all $i \ge 0$ and for all control parameters $\theta_j$.
In order to reason about the errors in our implementation of each generator, we will write that the imperfect implementation $\tilde{V} \in {\mathrm{C}}({\mathcal{H}})$ of a generator $V \in \{V_0, \dots, V_{\ell - 1}\}$ is defined as $$\begin{aligned}
\tilde{V} & = \Lambda_V (V \bullet {}) \label{eq:Lambda_V_def} \\
\intertext{which acts on $\rho$ as }
\tilde{V}[\rho] & = \Lambda_V[V\rho V^{\dagger}],\end{aligned}$$ where $\Lambda_V$ is the *discrepancy channel* describing the errors in $V$. Note that for an ideal implementation, $\Lambda_V$ is the identity channel.
We extend this definition to arbitrary elements of $G$ in a straightforward fashion. Let $U \defeq \prod_{i \in \vec{i}(U)} V_{i}$, where $\vec{i}(U)$ is the sequence of indices of each generator in the decomposition of $U$. For instance, if $G = \langle H, S \rangle$ for the phase gate $S = \diag(1, \ii)$, then $\sqrt{X} = HSH$ is represented by $\vec{i}(U) = (0, 1, 0)$. Combining the definition of $U$ with Eq. (\[eq:Lambda\_V\_def\]), the imperfect composite action $\tilde{U}$ is $$\begin{aligned}
\tilde{U} &= \prod_{i \in \vec{i}(U)} \tilde{V_i}
= \prod_{i \in \vec{i}(U)} \Lambda_{V_i} (V_i \bullet {})
\defeq \Lambda_U (U \bullet),\end{aligned}$$ where the final point defines the composite discrepancy channel $\Lambda_U$. By rearranging the equation above, we obtain $$\begin{aligned}
\label{eq:discrepancy-u}
\Lambda_U = \tilde{U} (U^\dagger \bullet {}) =
\left(\prod_{i \in \vec{i}(U)} \Lambda_{V_i} (V_i \bullet {}) \right)
\left(U^\dagger \bullet {} \right).\end{aligned}$$ Returning to the example $\sqrt{X} = HSH$, we thus obtain that $$\begin{aligned}
\Lambda_{\sqrt{X}} = \Lambda_H (H \bullet {}) \Lambda_S (S \bullet {}) \Lambda_H (H \bullet {})
((H^\dagger S^\dagger H^\dagger) \bullet {})\end{aligned}$$ is the discrepancy channel describing the noise incurred if we implement $\widetilde{\sqrt{X}}$ as the sequence $\tilde{H}\tilde{S}\tilde{H}$.
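To make this composition concrete, here is a small numerical sketch (our own illustration) that represents each channel by its Pauli transfer matrix and assembles the composite discrepancy channel $\Lambda_{\sqrt{X}}$ exactly as in Eq. (\[eq:discrepancy\-u\]); the depolarizing error strengths attached to $H$ and $S$ are hypothetical placeholders.

```python
import numpy as np

# Pauli basis and a helper that maps a channel (a function on 2x2 density
# matrices) to its 4x4 Pauli transfer matrix (PTM): R_ij = Tr(P_i L(P_j)) / 2.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
PAULIS = [I2, X, Y, Z]

def ptm(channel):
    return np.array([[np.trace(Pi @ channel(Pj)).real / 2
                      for Pj in PAULIS] for Pi in PAULIS])

def unitary_channel(U):
    return lambda rho: U @ rho @ U.conj().T

def depolarizing(p):
    # Hypothetical discrepancy channels, used only to have something nontrivial.
    return lambda rho: (1 - p) * rho + p * np.trace(rho) * I2 / 2

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
S = np.diag([1.0, 1j])

R_H, R_S = ptm(unitary_channel(H)), ptm(unitary_channel(S))
R_LH, R_LS = ptm(depolarizing(0.01)), ptm(depolarizing(0.02))

# tilde{sqrt(X)} = Lambda_H (H .) Lambda_S (S .) Lambda_H (H .); rightmost acts first.
R_sqrtX_tilde = R_LH @ R_H @ R_LS @ R_S @ R_LH @ R_H

# Lambda_{sqrt(X)} = tilde{sqrt(X)} (U^dagger .) with U = HSH.
U = H @ S @ H
R_Lambda = R_sqrtX_tilde @ ptm(unitary_channel(U.conj().T))
print(np.round(R_Lambda, 3))   # reduces to the identity PTM if both error strengths are zero
```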
Equipped with the discrepancy channels for all elements of $G$, we can now concretely state the parameters of interest to randomized benchmarking over $G$. Standard randomized benchmarking without sequence reuse [@gfc_accelerated_2015], in the limit of long sequences [@wal_randomized_2017], depends only on the state preparation and measurement (SPAM) procedure and on the average gate fidelity $\AGF(\Lambda_{{\mathrm{ref}}})$, where $$\begin{aligned}
\label{eq:ref-channel-defn}
\Lambda_{{\mathrm{ref}}} \defeq \expect_{U \sim \Uni(G)} [\Lambda_U] = \frac{1}{|G|} \sum_{U \in G} \Lambda_U\end{aligned}$$ is the reference discrepancy channel, obtained by taking the expectation value of the discrepancy channel $\Lambda_U$ over $U$ sampled uniformly at random from $G$, and where the average gate fidelity is given by the expected action of a channel $\Lambda$ over the Haar measure $\dd\psi$, $$\begin{aligned}
\AGF(\Lambda) \defeq \int \dd{\psi} \braket{
\psi \mid
\Lambda(\ket{\psi}\bra{\psi})
\mid \psi
}.\end{aligned}$$ When discussing the quality of a particular generator, say $T\defeq \tilde{V_0}$, we unfortunately cannot directly access $\AGF(\Lambda_{T})$ experimentally. However, interleaved randomized benchmarking allows us to rigorously estimate $\AGF(\Lambda_{T} \Lambda_{{\mathrm{ref}}})$ in the limit of long sequences and without sequence reuse.
Our goal here is to find a set of control parameters that optimizes $\AGF(\Lambda_{T} \Lambda_{{\mathrm{ref}}})$. To state this more formally, recall that $T$ is a function of a vector $\vec{\theta}$ of control parameters such that $T = T(\vec{\theta})$, while the ideal generators satisfy $V_i {\perp\!\!\!\!\perp}\vec{\theta}$ for all $i \ge 0$. We also assume that $\tilde V_i {\perp\!\!\!\!\perp}\vec{\theta}$ for all $i > 0$, so that $T(\vec{\theta}) = \Lambda_{V_0}(\vec{\theta}) (V_0 \bullet {})$ is the sole imperfect generator we are optimizing. We therefore aim to find $\vec{\theta}$ such that $\vec{\theta} = {\rm argmax} \left(\AGF(\Lambda_{T(\vec{\theta})} \Lambda_{{\mathrm{ref}}})\right)$.
This problem has previously been considered by @ew_adaptive_2014 and later by @kelly2014optimal, who proposed the use of interleaved randomized benchmarking with least-squares fitting to implement an approximate oracle for $\AGF(\Lambda_T(\vec{\theta}) \Lambda_{{\mathrm{ref}}}(\vec{\theta}))$. Taken together with the bounds shown by @mgj+_efficient_2012 and later improved by @kdr+_robust_2014, this approximate oracle provides an approximate lower bound on $\AGF(\Lambda_{{\mathrm{ref}}}(\vec{\theta}))$. This lower bound can then be taken as an objective function for standard optimization routines such as Nelder–Mead to yield a “fix-up” procedure that improves gates based on experimental evidence. @fm_robust_2015 improved this procedure by using an optimization algorithm that is more robust to the approximations incurred by the use of finite data in the underlying randomized benchmarking experiments. In particular, the simultaneous perturbation stochastic approximation (SPSA) [@spa_multivariate_1992], while less efficient for optimizing exact oracles, can provide dramatic improvements in approximate cases such as that considered by @fm_robust_2015. This advantage has been further shown in other areas of quantum information, such as in tomography [@fer_self_2014; @cfa_experimental_2016].
We improve this result still further by using a Lipschitz continuity assumption on the dependence of $\Lambda_T$ on $\vec{\theta}$ to propagate prior information between optimization iterations. This assumption is physically well-motivated: it reflects a desire that our control knobs have a smooth (but not known) influence on our generators. Since small gradient steps cannot greatly modify the average gate fidelity of interest under such a continuity assumption, the prior distribution for each randomized benchmarking experiment is closely related to the posterior distribution from the previous optimization iteration.
Recent work has shown, however, that this approach faces two significant challenges. First, the work of @pry+_what_2017 has shown explicit counterexamples in which reconstructing $\AGF(\Lambda_{T}(\vec{\theta}))$ from $\AGF(\Lambda_{T}(\vec{\theta}) \Lambda_{{\mathrm{ref}}}(\vec{\theta}))$ can yield very poor estimates due to the gauge dependence of this inverse problem. Second, the work of @hwf+_bayesian_2018 has shown that the statistical inference problem induced by randomized benchmarking becomes considerably more complicated with sequence reuse, and in particular, depends on higher moments such as the unitarity [@wghf_estimating_2015]. While the work of @hwf+_bayesian_2018 provides the first concrete algorithm that allows for learning randomized benchmarking parameters with sequence reuse, we will consider the single-shot limit to address the @pry+_what_2017 argument, as this is the unique randomized benchmarking protocol that provides gauge invariant estimates of $\AGF(\Lambda_{T}(\vec{\theta}) \Lambda_{{\mathrm{ref}}}(\vec{\theta}))$ [@rpz_gauge_2017], and as this model readily generalizes to include the effects of error correction [@cgff_logical_2017].
In this work, we adopt as our objective function $$\begin{aligned}
{F}(\vec{\theta}) \defeq \AGF(\Lambda_{T}(\vec{\theta}) \Lambda_{{\mathrm{ref}}}(\vec{\theta})).\end{aligned}$$ This choice of objective reflects that we want to see improvements in the interleaved average gate fidelity, regardless of whether they arise from a more accurate target gate or a more accurate reference channel. In practice, these two contributions to our objective function can be teased apart by the use of more complete protocols such as gateset tomography [@merkel2013self; @blume2017demonstration]. We proceed in three steps. First, we demonstrate that the Lipschitz continuity of $\Lambda_T(\vec{\theta})$ implies the Lipschitz continuity of ${F}(\vec{\theta})$. We then proceed to show that this implies an upper bound on $\Var[{F}(\vec{\theta} + \vec{\delta\theta}) | \text{data}]$ in terms of $\Var[{F}(\vec{\theta}) | \text{data}]$, such that we can readily produce estimates $\hat{{F}}(\vec{\theta})$ at each step of an optimization procedure, while reusing much of our data to accelerate the process. Finally, we conclude by presenting a numerical example for a representative model to demonstrate how BACRONYM may be used in practice.
Lipschitz Continuity of ${F}(\vec{\theta})$
===========================================
Proving Lipschitz continuity of the objective function is an important first step towards arguing that we can reuse information during BACRONYM’s optimization process. We need this fact because if the objective function were to vary unpredictably at adjacent values of the controls, then finding the optimum would reduce to an unstructured search problem, which cannot be solved efficiently. Our aim is to first argue that continuity of $\Lambda$ implies continuity of ${F}$. We will then use this fact to argue about the maximum amount that the posterior variance can grow as the control parameters are updated, which will allow us to quantify how to propagate uncertainties of ${F}$ at adjacent points later. We begin by recalling the definition of Lipschitz continuity for functions acting on vectors.
\[def:lipschitz-functions\] Given a Euclidean metric space $S$, a function $f : S \to \mathbb{R}$ is said to be Lipschitz continuous if there exists ${\mathcal{L}}\ge 0$ such that for all $\vec{x}, \vec{y} \in S$, $$\begin{aligned}
|f(\vec{x}) - f(\vec{y})| \le {\mathcal{L}}\|\vec{x} - \vec{y}\|.
\end{aligned}$$ If not otherwise stated, we will assume $\| \cdot \|$ on vectors to be the Euclidean norm $\| \cdot \|_2$.
As an example, $f(x) = \sqrt{x}$ is not Lipschitz continuous on \[0,1\], but any continuously differentiable function on a closed, bounded interval of the real line is. We now generalize the notion of Lipschitz continuity to channels. Let ${\mathrm{L}}({\mathcal{H}})$ be the set of all linear operators acting on the Hilbert space ${\mathcal{H}}$, and let ${\mathrm{L}}({\mathrm{L}}({\mathcal{H}}))$ be the set of linear operators acting on all such linear operators (often referred to as *superoperators*).
\[def:lipschitz-channels\] Given a metric space $S$ and a Hilbert space ${\mathcal{H}}$, we say that a function $\Lambda : S \to {\mathrm{L}}({\mathrm{L}}({\mathcal{H}}))$ is ${\mathcal{L}}$-continuous or Lipschitz continuous in the $\star$ distance if there exists ${\mathcal{L}}\ge 0$ such that for all $\vec{x}, \vec{y} \in S$ and $\rho \in {\mathrm{D}}({\mathcal{H}})$, $$\begin{aligned}
\| \Lambda(\vec{x})[\rho] - \Lambda(\vec{y})[\rho] \|_\star \le {\mathcal{L}}\|\vec{x} - \vec{y}\|.
\end{aligned}$$ If not specified explicitly, the trace norm $\| \cdot \| = \| \cdot \|_{\Tr}$ is assumed for operators in ${\mathrm{L}}({\mathcal{H}})$.
From the definition, we immediately can show the following:
\[lem:compose-chan-lip\] Let $\Lambda, \Phi : S \to {\mathrm{L}}({\mathrm{L}}({\mathcal{H}}))$ be Lipschitz continuous in the trace distance with constants ${\mathcal{L}}$ and $\mathcal{M}$, respectively. Then, $(\Phi \Lambda) : \vec{x} \mapsto \Phi(\vec{x}) \Lambda(\vec{x})$ is Lipschitz continuous in the trace distance with constant ${\mathcal{L}}+ \mathcal{M}$.
The proof of the lemma follows immediately after a few applications of the triangle inequality under the assumption of continuity of the individual channels. $$\begin{aligned}
\| (\Phi \Lambda)(\vec{x})[\rho] - (\Phi \Lambda)(\vec{y})[\rho] \|_{\Tr} & =
\| \Phi(\vec{x}) [\Lambda(\vec{x}) [\rho]] - \Phi(\vec{y}) [\Lambda(\vec{y}) [\rho]] \|_{\Tr} \\
& =
\| \Phi(\vec{x}) [\Lambda(\vec{x}) [\rho]] - \Phi(\vec{x}) [\Lambda(\vec{y}) [\rho]] + \Phi(\vec{x}) [\Lambda(\vec{y}) [\rho]] - \Phi(\vec{y}) [\Lambda(\vec{y}) [\rho]] \|_{\Tr} \\
& \le
\| \Phi(\vec{x}) [\Lambda(\vec{x}) [\rho]] - \Phi(\vec{x}) [\Lambda(\vec{y}) [\rho]] \|_{\Tr} +
\| \Phi(\vec{x}) [\Lambda(\vec{y}) [\rho]] - \Phi(\vec{y}) [\Lambda(\vec{y}) [\rho]] \|_{\Tr} \\
& \le
\| \Phi(\vec{x}) [\Lambda(\vec{x}) [\rho]] - \Phi(\vec{x}) [\Lambda(\vec{y}) [\rho]] \|_{\Tr} +
\mathcal{M} \|\vec{x} - \vec{y}\| \\
& \le
\| \Lambda(\vec{x}) [\rho] - \Lambda(\vec{y}) [\rho] \|_{\Tr} +
\mathcal{M} \|\vec{x} - \vec{y}\| \\
& \le
{\mathcal{L}}\|\vec{x} - \vec{y}\| +
\mathcal{M} \|\vec{x} - \vec{y}\|,
\end{aligned}$$ where the second-to-last line follows from the contractivity of the trace distance under quantum channels, which can be argued by contradiction using Helstrom’s theorem [@wat_theory_2018].
We note that the above lemma immediately implies that if $\Lambda(\vec{\theta})$ is Lipschitz continuous in the trace distance with constant ${\mathcal{L}}$, then so is $(\Phi \Lambda)(\vec{\theta})$ for any channel $\Phi {\perp\!\!\!\!\perp}\vec{\theta}$, since $\Phi$ can be written as a channel that is Lipschitz continuous in the trace distance with constant $0$.
\[cor:comp-mult-channels\] Let $\Lambda_0, \Lambda_1, \dots, \Lambda_k : S \to {\mathrm{L}}({\mathrm{L}}({\mathcal{H}}))$ be Lipschitz continuous in the trace distance with constants ${\mathcal{L}}_i$ for $i \in \{0, 1, \dots, k\}$. Then, $(\Lambda_0 \Lambda_1 \cdots \Lambda_k) : \vec{x} \mapsto \Lambda_0(\vec{x}) \Lambda_1(\vec{x})\cdots \Lambda_k(\vec{x}) $ is Lipschitz continuous in the trace distance with constant $\sum_{i = 0}^k {\mathcal{L}}_i$.
\[lem:convex-chan-lip\] Let $\Lambda : S \to {\mathrm{L}}({\mathrm{L}}({\mathcal{H}}))$ be a convex combination of channels, $$\begin{aligned}
\Lambda(\vec{\theta}) & = \sum_i p_i \Lambda_i(\vec{\theta}),
\end{aligned}$$ where $\{p_i\}$ are nonnegative real numbers such that $\sum_i p_i = 1$, and where each $\Lambda_i : S \to {\mathrm{L}}({\mathrm{L}}({\mathcal{H}}))$ is Lipschitz continuous in a norm $\|\cdot\|_\star$ with constant ${\mathcal{L}}_i$. Then, $\Lambda$ is Lipschitz continuous with constant $\bar{{\mathcal{L}}} = \sum_i p_i {\mathcal{L}}_i$.
Consider an input state $\rho \in {\mathrm{D}}({\mathcal{H}})$. Then, $$\begin{aligned}
\| \Lambda(\vec{\theta})[\rho] - \Lambda(\vec{\theta}')[\rho] \|_\star & =
\left\|
\sum_i p_i \left(
\Lambda_i(\vec{\theta})[\rho] - \Lambda_i(\vec{\theta}')[\rho]
\right)
\right\|_\star \\
& \le
\sum_i p_i \left(
\left\|
\Lambda_i(\vec{\theta})[\rho] - \Lambda_i(\vec{\theta}')[\rho]
\right\|_\star
\right) \\
& \le
\sum_i p_i {\mathcal{L}}_i \| \vec{\theta} - \vec{\theta}' \| \\
& = \bar{{\mathcal{L}}} \| \vec{\theta} - \vec{\theta}' \|.
\end{aligned}$$
The above lemmas can then be used to show that $\AGF(\Lambda_T(\vec{\theta}))$ is Lipschitz continuous with constant ${\mathcal{L}}$ when $\Lambda_T(\vec{\theta})$ is Lipschitz continuous in the trace distance with constant ${\mathcal{L}}$, as we formally state in the following theorem.
\[thm:agf-continuity\] Let $\Lambda(\vec{\theta})$ be Lipschitz continuous in the trace distance with constant ${\mathcal{L}}$. Then $\AGF(\Lambda(\vec{\theta}))$ is Lipschitz continuous with constant ${\mathcal{L}}$.
Recall that $$\begin{aligned}
\AGF(\Lambda(\vec{\theta})) &
\defeq \int \dd\psi \braket{
\psi | \Lambda\left[ \ket{\psi}\bra{\psi} \right] | \psi
}, \\
\intertext{so}
|\AGF(\Lambda(\vec{\theta})) - \AGF(\Lambda(\vec{\theta}'))| & =
\left|
\int \dd\psi \braket{
\psi |
\Lambda(\vec{\theta}) [\ket{\psi} \bra{\psi}] -
\Lambda(\vec{\theta}') [\ket{\psi} \bra{\psi}]
| \psi
}
\right| \\
& \le
\int \dd\psi \left|
\braket{
\psi |
\Lambda(\vec{\theta}) [\ket{\psi} \bra{\psi}] -
\Lambda(\vec{\theta}') [\ket{\psi} \bra{\psi}]
| \psi
}
\right| \\
& \le
\int \dd\psi \left\|
\Lambda(\vec{\theta}) [\ket{\psi} \bra{\psi}] -
\Lambda(\vec{\theta}') [\ket{\psi} \bra{\psi}]
\right\|_{\Tr} \\
& \le
\int \dd\psi\,{\mathcal{L}}\|\vec{\theta} - \vec{\theta}'\| \\
& = {\mathcal{L}}\|\vec{\theta} - \vec{\theta}'\|.
\end{aligned}$$
As noted in the introduction, we do not have direct access to $\AGF(\Lambda_T(\vec{\theta}))$, but rather to $\AGF(\Lambda_T(\vec{\theta}) \Lambda_{{\mathrm{ref}}}(\vec{\theta}))$. In particular, ${F}(\vec{\theta}) \defeq \AGF(\Lambda_T(\vec{\theta}) \Lambda_{{\mathrm{ref}}}(\vec{\theta}))$ may be estimated from the interleaved randomized benchmarking parameters:
\[eq:rb-parameter-defns\] $$\begin{aligned}
p(\vec{\theta}) & \defeq \frac{d{F}(\vec{\theta}) - 1}{d - 1}, \label{eq:pDef}\\
A(\vec{\theta}) & \defeq \Tr(E \Lambda_{{\mathrm{ref}}}(\vec{\theta})[\rho - \frac{\id}{d}]), \\
\text{and }
B(\vec{\theta}) & \defeq \Tr(E \Lambda_{{\mathrm{ref}}}(\vec{\theta})[\frac{\id}{d}]),
\end{aligned}$$
where $d = \operatorname{dim}({\mathcal{H}})$, $\rho$ is the state prepared at the start of each sequence, and $E$ is the measurement at the end of each sequence. We consider $A$ and $B$ later, but note for now that up to a factor of $d / (d - 1)$, Lipschitz continuity of ${F}(\vec{\theta})$ immediately implies Lipschitz continuity of $p(\vec{\theta})$. Thus, we can follow the same argument as above, but using the channel $\Lambda_T (\vec{\theta}) \Lambda_{{\mathrm{ref}}}(\vec{\theta})$ instead to argue the Lipschitz continuity of experimentally accessible estimates.
We proceed to show the Lipschitz continuity of ${F}$ and hence of $p$ by revisiting the definition of $\Lambda_{{\mathrm{ref}}}$. In particular, we partition the twirling group as $G = \bigcup_{n = 0}^{\infty} G_n$, where $G_n$ is the set of elements of $G$ whose decomposition into generators $\{T, V_1, \dots, V_{\ell - 1}\}$ uses exactly $n$ instances of the target gate $T$ when written with as few copies of $T$ as possible. For instance, if $G = \langle S, H \rangle$ and the target gate is $T = S$, then $Z \in G_2$ since $Z = SS$ is the decomposition of $Z$ requiring the fewest copies of $S$. The full partition of $G$ for this example can be enumerated directly, as in the sketch below.
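The following sketch (our own; it is not taken from the paper’s Supplementary Material) enumerates the 24 single-qubit Cliffords generated by $H$ and $S$ and computes, for each, the minimum number of $S$ gates needed, from which an estimate of $\bar{n}$ follows. Note that the resulting value depends on which decompositions the benchmarking implementation actually uses, so it need not coincide with the value quoted later in the text.

```python
import numpy as np
from collections import deque

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
S = np.diag([1.0, 1j])

def canon(U):
    """Canonicalise a unitary up to global phase so it can be used as a dict key."""
    flat = U.flatten()
    idx = np.argmax(np.abs(flat) > 1e-9)      # first entry with non-negligible modulus
    phase = flat[idx] / abs(flat[idx])
    return tuple(np.round(flat / phase, 6))

# 0-1 BFS over words in {H, S}: applying H costs 0 uses of T = S, applying S costs 1.
start = np.eye(2, dtype=complex)
cost = {canon(start): 0}
queue = deque([(start, 0)])
while queue:
    U, c = queue.popleft()
    if cost.get(canon(U), np.inf) < c:        # stale queue entry
        continue
    for gate, dc in ((H, 0), (S, 1)):
        V, cv = gate @ U, c + dc
        key = canon(V)
        if cv < cost.get(key, np.inf):
            cost[key] = cv
            (queue.appendleft if dc == 0 else queue.append)((V, cv))

counts = sorted(cost.values())
print("group size:", len(cost))               # 24 single-qubit Cliffords (mod phase)
print("n_bar =", sum(counts) / len(counts))   # average minimal number of S gates
```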
Using this partitioning of $G$, we can define an analogous partition on the terms occurring in the definition of $\Lambda_{{\mathrm{ref}}}(\vec{\theta})$, $$\begin{aligned}
\Lambda_{{\mathrm{ref}}}(\vec{\theta}) & = \sum_{n = 0}^{\infty} \frac{| G_n |}{| G |} \Lambda_{{\mathrm{ref}}, n}(\vec{\theta}), \\
\text{where }
\label{eq:rref-n-expansion}
\Lambda_{{\mathrm{ref}}, n}(\vec{\theta}) & \defeq \frac{1}{| G_n |} \sum_{U \in G_n} \Lambda_{U}(\vec{\theta}).\end{aligned}$$
\[thm:ref-continuity\] If $\Lambda_T(\vec{\theta})$ is Lipschitz continuous in the trace distance with constant ${\mathcal{L}}$, then $\Lambda_{{\mathrm{ref}},n}(\vec{\theta})$ is Lipschitz continuous in the trace distance with constant $n{\mathcal{L}}$. Furthermore, $\Lambda_{{\mathrm{ref}}}(\vec{\theta})$ is Lipschitz continuous in the trace distance with constant $\bar{n}{\mathcal{L}}$, where $\bar{n} \defeq \sum_{n = 0}^{\infty} n \frac{|G_n|}{|G|}$.
Consider one of the summands from Eq. (\[eq:rref-n-expansion\]), and without loss of generality let $U = V_{i_0} V_{i_1} \cdots V_{i_k}$ for the sequence of integer indices $\vec{i} = (i_0, i_1, \dots, i_k)$. Then, by Eq. (\[eq:discrepancy-u\]), $$\begin{aligned}
\Lambda_U(\vec{\theta}) & =
\Lambda_{V_{i_0}}(\vec{\theta})
(V_{i_0} \bullet {})
\cdots
\Lambda_{V_{i_k}}(\vec{\theta})
(V_{i_k} \bullet {})
(U^\dagger \bullet {}).\end{aligned}$$ Note that, $\forall i$, $V_i {\perp\!\!\!\!\perp}\vec \theta$ since these are ideal channels and hence independent of the control vector $\vec \theta$; these channels are Lipschitz continuous in the trace distance with constant $0$. Further, each $\Lambda_{V_{i}} {\perp\!\!\!\!\perp}\vec \theta$ for $i>0$; these channels are also Lipschitz continuous in the trace distance with constant $0$. By assumption, we have $\Lambda_{V_{0}}$ is Lipschitz continuous in the trace distance with constant ${\mathcal{L}}$. Hence, each factor in $\Lambda_U$ is Lipschitz continuous in the trace distance with constant ${\mathcal{L}}$ or $0$, as detailed above.
By Corollary \[cor:comp-mult-channels\], $\Lambda_U$ is Lipschitz continuous in the trace distance with constant $m{\mathcal{L}}$, where $m$ counts the number of $0$s in $\vec i$ (corresponding to the number of times the target gate occurs in the decomposition of $U$). For $U \in G_n$ decomposed using as few copies of $T$ as possible, $m = n$, so $\Lambda_U$ is Lipschitz continuous in the trace distance with constant $n{\mathcal{L}}$.
Using Lemma \[lem:convex-chan-lip\], we now have that $\Lambda_{{\mathrm{ref}},n}(\vec{\theta})$ is Lipschitz continuous in the trace distance with constant $\frac{1}{| G_n |} \sum_{U \in G_n} n{\mathcal{L}} = n{\mathcal{L}}$, which is what we wanted to show.
We thus have that $\Lambda_{{\mathrm{ref}}}(\vec{\theta})$ is Lipschitz continuous in the trace distance with constant $\bar{n}{\mathcal{L}}$, wherein $$\begin{aligned}
\bar{n} \defeq \sum_{n = 0}^{\infty} n \frac{|G_n|}{|G|}\end{aligned}$$ is the average number of times that the target gate $T$ appears in decompositions of elements of the twirling group $G$.
\[cor:cont-ref-channel\] Let $$\begin{aligned}
\bar{n} \defeq \sum_{n = 0}^{\infty} n \frac{|G_n|}{|G|}\end{aligned}$$ be the average number of times that the target gate $V_0$ appears in decompositions of elements of the twirling group $G$. Then, $\Lambda_{{\mathrm{ref}}}(\vec{\theta})$ is Lipschitz continuous in the trace distance with constant $\bar{n}{\mathcal{L}}$.
Combining Corollary \[cor:cont-ref-channel\] with the previous argument, we thus have our central theorem.
\[thm:pab-continuity\] Let $\Lambda_T(\vec{\theta})$ be Lipschitz continuous in the trace distance with constant ${\mathcal{L}}$. Then ${F}(\vec{\theta})=\AGF(\Lambda_{T}(\vec{\theta}) \Lambda_{{\mathrm{ref}}}(\vec{\theta}))$ is Lipschitz continuous with constant $(1 + \bar{n}) {\mathcal{L}}$, $p(\vec{\theta})$ is Lipschitz continuous with constant $d (1 + \bar{n}) {\mathcal{L}}/ (d - 1)$, and $A(\vec{\theta})$ and $B(\vec{\theta})$ are Lipschitz continuous with constant $\bar{n} {\mathcal{L}}$.
First, ${F}(\vec{\theta})=\AGF(\Lambda_T (\vec{\theta}) \Lambda_{{\mathrm{ref}}}(\vec{\theta}))$. By assumption, $\Lambda_T(\vec{\theta})$ is Lipschitz continuous with constant ${\mathcal{L}}$, and by Corollary \[cor:cont-ref-channel\], $\Lambda_{{\mathrm{ref}}}(\vec{\theta})$ is Lipschitz continuous with constant $\bar{n} {\mathcal{L}}$. Hence, by Lemma \[lem:compose-chan-lip\] and Theorem \[thm:agf-continuity\], ${F}(\vec{\theta})$ is Lipschitz continuous with constant $(1 + \bar{n}) {\mathcal{L}}$.
Next, recall that $p(\vec{\theta}) = \frac{d {F}(\vec{\theta})- 1}{d - 1}$. Then, it follows that $p(\vec{\theta})$ is Lipschitz continuous with constant $\frac{d (1+\bar n) {\mathcal{L}}}{d - 1}$.
For $B(\vec{\theta})$, we have $$\begin{aligned}
\label{eq:pab-continuity-step0}
| B(\vec{\theta}') - B(\vec{\theta}) | & =
\left|
\Tr(E \Lambda_{{\mathrm{ref}}}(\vec{\theta}')[\id / d]) -
\Tr(E \Lambda_{{\mathrm{ref}}}(\vec{\theta}) [\id / d])
\right| \\
& = \left| \Tr(E (\Lambda_{{\mathrm{ref}}}(\vec{\theta}') - \Lambda_{{\mathrm{ref}}}(\vec{\theta})) [\id / d])\right|.
\end{aligned}$$ Letting $(\epsilon_1,\ldots,\epsilon_d)$ be the ordered singular values of $E$ and $(\lambda_1,\ldots,\lambda_d)$ be the ordered singular values of $C \defeq (\Lambda_{{\mathrm{ref}}}(\vec{\theta}') - \Lambda_{{\mathrm{ref}}}(\vec{\theta})) [\id / d]$, we have $$\begin{aligned}
\label{eq:pab-continuity-step1}
| B(\vec{\theta}') - B(\vec{\theta}) |
\leq \sum_{i=1}^{d} \epsilon_i \lambda_i
\leq \max (\epsilon) \sum_{i=1}^{d}\lambda_i
= \max(\epsilon) \|(\Lambda_{{\mathrm{ref}}}(\vec{\theta}') - \Lambda_{{\mathrm{ref}}}(\vec{\theta})) [\id / d]\|_{\Tr}
\leq \bar{n} {\mathcal{L}} \|\vec{\theta}' - \vec{\theta}\|,
\end{aligned}$$ where the first inequality is von Neumann’s trace inequality, the second holds because the singular values are nonnegative, and the final step uses both that $E$ is a POVM effect, so that $\max(\epsilon) = \|E\|_{\spec} \le 1$, and that $\Lambda_{{\mathrm{ref}}}(\vec{\theta})$ is Lipschitz continuous in the trace distance with constant $\bar{n}{\mathcal{L}}$ by Corollary \[cor:cont-ref-channel\]. The same bound also follows from $|\Tr(EC)| \le \|EC\|_{\Tr}$ together with Hölder’s inequality [@wat_theory_2018], which states that for all $X$ and $Y$, $\|XY\|_{\Tr} \le \|X\|_{\spec} \|Y\|_{\Tr}$, where $\|\cdot\|_{\spec}$ is the spectral norm (*a.k.a.* the induced $(2 \to 2)$-norm or Schatten $\infty$-norm), so that $\|EC\|_{\Tr} \le \|C\|_{\Tr} \le \bar{n}{\mathcal{L}}\|\vec{\theta}' - \vec{\theta}\|$.
Finally, we note that this argument goes through identically with the traceless operator $\rho - \frac{\id}{d}$ in place of $\frac{\id}{d}$, as we did not use any special properties of $\frac{\id}{d}$. Hence, we also have that $| A(\vec{\theta}') - A(\vec{\theta}) | \leq \bar{n} {\mathcal{L}} \|\vec{\theta}' - \vec{\theta}\|$.
We are thus equipped to return to the problem of estimating ${F}(\vec{\theta} + \vec{\delta\theta})$ from experimental data concerning ${F}(\vec{\theta})$.
\[thm:lipshitz\] Suppose that $f(\vec{\theta}, \vec{y})$ is a Lipschitz continuous function of $\vec{\theta}$ with constant ${\mathcal{L}}$, where $\vec{y}$ is a variable in a measurable set $S$ with corresponding probability distribution $\Pr(\vec{y})$ on that set. For any function $g:S\mapsto \mathbb{R}$, define $\mathbb{E}_{\vec{y}}(g(\vec{y})) = \int_{S} g(\vec{y}) \Pr(\vec{y})\dd\vec{y}$ and $\Var_{\vec{y}}(g(\vec{y})) = \mathbb{E}_{\vec{y}} \big(g(\vec{y}) - \mathbb{E}_{\vec{y}}(g(\vec{y})) \big)^2$. For all $\vec{\theta}$ and $\vec{\theta}'$ such that ${\mathcal{L}}\|\vec{\theta}' - \vec{\theta}\| < \sqrt{\Var_{\vec{y}}(f(\vec{\theta}, \vec{y}))}$, it holds that $$\begin{aligned}
\Var_{\vec{y}}[f(\vec{\theta}', \vec{y})]
& \le
\Var_{\vec{y}}[f(\vec{\theta}, \vec{y})]
\left(
1 + \frac{
2 {\mathcal{L}}\|\vec{\theta}' - \vec{\theta}\|
}{
\sqrt{\Var_{\vec{y}}[f(\vec{\theta}, \vec{y})]}
}
\right).
\end{aligned}$$
Note that since $f$ is Lipschitz continuous as a function of $\vec{\theta}$, $$\begin{aligned}
\left| f(\vec{\theta}', \vec{y}) - f(\vec{\theta}, \vec{y}) \right| \leq {\mathcal{L}}\|\vec{\theta}' - \vec{\theta}\|,
\end{aligned}$$ so there exists a function $c$ such that $|c(\vec{\theta}, \vec{\theta}', \vec{y})| \le 1$ for all $\vec{\theta}$, $\vec{\theta}'$ and $\vec{y}$: $$\begin{aligned}
f(\vec{\theta}', \vec{y})
& =
f(\vec{\theta}, \vec{y}) +
{\mathcal{L}}\|\vec{\theta}' - \vec{\theta}\| c(\vec{\theta}, \vec{\theta}', \vec{y}).
\end{aligned}$$ Thus, $\Var_{\vec{y}}[c] \le 1$, and by addition of variance, we have that $$\begin{aligned}
\Var_{\vec{y}}[f(\vec{\theta}', \vec{y})]
& = \Var_{\vec{y}}[f(\vec{\theta}, \vec{y})]+{\mathcal{L}}^2 \|\vec{\theta} - \vec{\theta}'\|^2 \Var_{\vec{y}}(c(\vec{\theta}, \vec{\theta}', \vec{y}))\! +\! 2{\mathcal{L}}\|\vec{\theta} - \vec{\theta}'\| \text{Cov}_{\vec{y}}(f(\theta,\vec{y}),c(\theta,\theta',\vec{y}))\nonumber\\
& \le
\Var_{\vec{y}}[f(\vec{\theta}, \vec{y})] +
{\mathcal{L}}^2 \|\vec{\theta} - \vec{\theta}'\|^2 +
{\mathcal{L}}\|\vec{\theta} - \vec{\theta}'\| \sqrt{\Var_{\vec{y}}(f(\vec{\theta}, \vec{y}))}.\nonumber\\
& \le
\Var_{\vec{y}}[f(\vec{\theta}, \vec{y})] +
2 {\mathcal{L}}\|\vec{\theta} - \vec{\theta}'\| \sqrt{\Var_{\vec{y}}(f(\vec{\theta}, \vec{y}))}.
\end{aligned}$$ The result then follows from elementary algebra.
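A quick Monte Carlo sanity check of this bound is given below (our own sketch; the distribution and the function $f$ are arbitrary choices made only so that the hypotheses of the theorem are satisfied).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy check of the variance-growth bound: y ~ N(0, 1) and f(theta, y) = sin(theta + y)
# is Lipschitz in theta with constant L = 1 for every fixed y.
L = 1.0
y = rng.normal(size=200_000)

def f(theta):
    return np.sin(theta + y)

theta, dtheta = -0.3, 0.01
var_before = f(theta).var()
var_after = f(theta + dtheta).var()
bound = var_before * (1 + 2 * L * abs(dtheta) / np.sqrt(var_before))

print(var_after <= bound, var_after, bound)   # the inequality should hold here
```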
Examples
--------
![ \[fig:unitary-overrotation-objective\] The objective function ${F}(\theta)$ and the average gate fidelity versus the overrotation angle $\theta$ for the unitary overrotation model of Example \[ex:unitary-overrotation\] are given in the left figure. The right figure gives the calculated RB parameters as a function of $\theta$, where the optimal solution $\theta=0$ is unknown to the optimizer a priori. ](\figurefolder/unitary-overrotation-objective.pdf){width="0.95\linewidth"}
\[ex:unitary-overrotation\]
Consider $G = \langle S, H \rangle$, where $T = S$ is the target gate. For a control parameter vector consisting of a single overrotation parameter $\vec{\theta} = (\delta\theta)$, suppose that $\Lambda_T[\rho] = (\e^{-\ii\,\delta\theta\,\sigma_z}) \bullet \rho$. Since this is a unitary channel, its Choi–Jamiołkowski rank[^1] is 1. Thus, the AGF of $\Lambda_T$ can be calculated as the trace [@nie_simple_2002; @hhh_general_1999; @emerson_scalable_2005] $$\begin{aligned}
\AGF(\Lambda_T(\delta \theta)) = \frac{ |\Tr(e^{-\ii\,\delta\theta\,\sigma_z})|^2 + 2 }{4 + 2}
= \frac23 + \frac13 \cos(2\,\delta\theta).
\end{aligned}$$ On the other hand, ${F}(\delta\theta)$ isn’t as straightforward, and so we will consider its Lipschitz continuity instead. To do so, we note that for all $\rho \in {\mathrm{D}}(\mathbb{C}^2)$, we wish to bound the trace norm $$\begin{aligned}
\Delta & = \| \Lambda_T(\delta\theta)[\rho] - \Lambda_T(\delta\theta')[\rho] \|_{\Tr}.
\intertext{
Expanding $\rho$ in the unnormalized Pauli basis as $\rho = \id / 2 + \vec{r} \cdot \vec{\sigma} / 2$, we note that since $\Lambda_T(\delta\theta)[\id] = \id$ and $\Lambda_T(\delta\theta)[\sigma_z] = \sigma_z$ for all $\delta\theta$, the above becomes
}
\Delta & =
\frac12 \| \Lambda_T(\delta\theta)[r_x \sigma_x + r_y \sigma_y +r_z \sigma_z] - \Lambda_T(\delta\theta')[r_x \sigma_x + r_y \sigma_y+r_z \sigma_z] \|_{\Tr} \\
&= \frac12 \| \Lambda_T(\delta\theta)[r_x \sigma_x + r_y \sigma_y] - \Lambda_T(\delta\theta')[r_x \sigma_x + r_y \sigma_y] \|_{\Tr} \\
& \le
4 |\sin(\delta\theta - \delta\theta')| \sqrt{r_x^2 + r_y^2} \\
& \le
4 |\sin(\delta\theta - \delta\theta')| \\
& \le 4 |\delta\theta - \delta\theta'|,
\end{aligned}$$ where we have used that $r_x^2 + r_y^2 \le 1$ for any valid state and, in the last line, that $|\sin(x)| \le |x|$. Thus, we conclude that $\Lambda_T$ is Lipschitz continuous in the trace distance with constant 4.
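As a numerical cross-check of the closed form above (our own sketch), the AGF of a channel with Kraus operators $\{K_i\}$ can be computed as $(\sum_i |\Tr K_i|^2 + d)/(d^2 + d)$ [@nie_simple_2002]; for the unitary overrotation this reproduces $\tfrac23 + \tfrac13\cos(2\,\delta\theta)$, and a Haar-average Monte Carlo estimate agrees.

```python
import numpy as np

rng = np.random.default_rng(1)
Z = np.diag([1.0, -1.0]).astype(complex)

def overrotation(dtheta):
    """Single Kraus operator of the unitary overrotation exp(-i dtheta Z)."""
    return [np.cos(dtheta) * np.eye(2) - 1j * np.sin(dtheta) * Z]

def agf_trace(kraus, d=2):
    """AGF = (sum_i |Tr K_i|^2 + d) / (d^2 + d)."""
    return (sum(abs(np.trace(K)) ** 2 for K in kraus) + d) / (d ** 2 + d)

def agf_haar(kraus, n_samples=20_000, d=2):
    """Monte Carlo estimate of the Haar average <psi| Lambda(|psi><psi|) |psi>."""
    total = 0.0
    for _ in range(n_samples):
        v = rng.normal(size=d) + 1j * rng.normal(size=d)
        v /= np.linalg.norm(v)
        rho = np.outer(v, v.conj())
        out = sum(K @ rho @ K.conj().T for K in kraus)
        total += (v.conj() @ out @ v).real
    return total / n_samples

dtheta = 0.17
print(agf_trace(overrotation(dtheta)))        # exact value via the trace formula
print(2 / 3 + np.cos(2 * dtheta) / 3)         # closed form quoted in the text
print(agf_haar(overrotation(dtheta)))         # Monte Carlo agreement (approximate)
```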
We can then find $\bar{n}$ for occurrences of $T$ in decompositions of elements of $G$ to find the Lipschitz constant for ${F}(\delta\theta)$ in this example. In particular, as shown in the Supplementary Material, $\bar{n} = 13 / 6$ for the presentation of the Clifford group under consideration, such that ${F}$ is Lipschitz continuous with constant $4 \times (19 / 6) = 38 / 3$ and $p$ with constant $(d / (d - 1)) \times 4 \times (19 / 6) = 76 / 3$ in this case.
We note that a more detailed analysis of the Lipschitz continuity of $\Lambda_T$ or a presentation of $G$ that is less dense in $T$ would both yield smaller Lipschitz constants for ${F}$, and hence better reuse of prior information. Thus, by Theorem \[thm:lipshitz\], a change in overrotation of approximately $1 / 100$ of the current standard deviation in ${F}$ would result in at most a doubling of the current standard deviation.
We can easily include the effects of noise in other generators in numerical simulations. In particular, suppose that $\Lambda_H$ is a depolarizing channel with strength $0.5\%$. Then, simulating ${F}(\vec{\theta})$ for this case shows that ${F}$ is Lipschitz continuous with a constant of approximately , as illustrated in .
Approximate Bayesian Inference
==============================
An important implication of Theorem \[thm:lipshitz\] is that the uncertainty quantified by the variance of the posterior distribution yielded by Bayesian inference grows by at most a constant factor. However, while the theorem specifies how the variance may grow in the worst-case scenario, it does not tell us what form the posterior distribution should take. Our goal in this section is to provide an operationally meaningful way to think about how the posterior distribution evaluated at $\vec{\theta}$ changes as the control parameters transition to $\vec{\theta'}$.
Let the posterior probability distribution for the objective function ${F}$ evaluated at parameters $\vec{\theta}$ be $\Pr\left({F}(\vec{\theta})\right)$. In practice, we do not generally estimate the objective function ${F}$ directly, but rather estimate ${F}$ from a latent variable $\vec{y}$, such as the RB parameters of Eq. (\[eq:rb-parameter-defns\]). Marginalizing over this latent variable, we obtain the Bayesian mean estimator for ${F}$,
$$\label{eq:marginalized-bme-happy}
\hat{{F}} =
\int {F}\Pr\left({F}| \theta \right) \dd{F}=
\int {F}\Pr\left({F}| \theta, \vec{y} \right) \Pr(\vec{y}) \dd\vec{y}.$$
For the RB case in particular, the objective function ${F}$ does not depend on the control parameters $\vec{\theta}$ if we know the RB parameters $\vec{y}$ exactly. That is, we write that ${F}{\perp\!\!\!\!\perp}\vec{\theta} | \vec{y}$ for the RB case, such that $\Pr({F}| \vec{\theta}, \vec{y}) = \Pr({F}| \vec{y})$. Moreover, $\Pr({F}| \vec{y})$ is a $\delta$-distribution supported only at ${F}= ((d - 1) p + 1) / d$ (inverting Eq. (\[eq:pDef\])), where $\vec{y} = (p, A, B)$. We may thus abuse notation slightly and write that ${F}= {F}(\vec{y})$ is a deterministic function. Doing so, our estimator simplifies considerably, such that $$\label{eq:simplified-bme-happy}
\hat{{F}} =
\int {F}\Pr\left({F}| \theta, \vec{y} \right) \Pr(\vec{y}) \dd\vec{y}
=
\int {F}(\vec{y}) \Pr(\vec{y}) \dd \vec{y}.$$
In exact Bayesian inference, the probability density $\Pr(\vec{y})$ is an arbitrary distribution, but computation of the estimator is in general intractable. Perhaps the most easily generalizable distribution is the sequential Monte Carlo (SMC) approximation [@dj_tutorial_2011], also known as a particle filter, which attempts to approximate the probability density as $$\Pr\left({F}| \theta, \vec{y} \right) \Pr(\vec{y} | \vec{\theta}) =
\Pr({F}, \vec{y} | \vec{\theta})
\approx
\sum_{j=1}^{N_p} w_j \delta(\vec{y} - \vec{y}_j) \delta({F}_j - {F}),$$ where $\delta$ is the Dirac-delta distribution and $\sum_j w_j =1$. This representation is convenient for recording on a computer, as it only needs to store $(w_j, \vec{y}_j, {F}_j)$ for each particle. If ${F}= {F}(\vec{y})$ is a deterministic function of the RB parameters then we need not even record ${F}$ with each particle, such that $$\Pr\left({F}| \theta, \vec{y} \right) \Pr(\vec{y} | \vec{\theta}) \approx
\sum_{j=1}^{N_p} w_j \delta(\vec{y} - \vec{y}_j) \delta({F}(\vec{y}) - {F}).$$
More generally, the SMC approximation allows us to approximate expectation values over the probability distribution using a finite number of points, or particles, such that the expectation value of any continuous function can be approximated with arbitrary accuracy as $N_p \rightarrow \infty$. In particular, we can approximate the estimator $\hat{{F}}$ within arbitrary accuracy.
The uncertainty (mean squared error) of this estimator is given by the posterior variance, $$\mathbb{V}({F}) =
\int {F}^2 \Pr({F}| \theta, \vec{y}) \Pr(\vec{y}) \dd\vec{y} -
\hat{{F}}^2.$$ The posterior variance can be computed as the variance over the variable $\vec{y}$ induced from the sequential Monte Carlo approximation to the probability distribution, $$\mathbb{V}({F}) \approx
\sum_i w_i {F}(\vec{y}_i)^2 - \left(\sum_i w_i {F}(\vec{y}_i)\right)^2,$$ where we have assumed that ${F}{\perp\!\!\!\!\perp}\vec{\theta} | \vec{y}$ and that $\Pr({F}| \vec{y})$ is a $\delta$-distribution, as in the RB case. This observation is key to our implementation of Bayesian ACRONYM tuning.
A final note regarding approximate Bayesian inference is that the learning process can be easily implemented. Using the particle approximation above, if $\Pr({F}|\theta, \vec{y}) \Pr(\vec{y}) \approx \sum_{j=1}^{N_p} w_j \delta(\vec{y}-\vec{y}_j)$ and evidence $E$ is obtained in an experiment, then Bayes’ theorem applied to the weights $w_j$ yields $$w_j \gets \frac{\Pr(E | \vec{y}_j) w_j}{\sum_k \Pr(E | \vec{y}_k) w_k}.$$ This update procedure is repeated iteratively over all data collected from a set of experiments. In practice, if an accurate estimate is needed, then an enormous number of particles may be required because the weights shrink exponentially with the number of updates. This causes the effective number of particles in the approximation to shrink exponentially, and with it the accuracy of the approximation to the posterior. We can address this by moving the particles to regions of high probability density. In practice, we use a method proposed by @lw_combined_2001 to move the particles, but other methods exist and we recommend reviewing [@dj_tutorial_2011; @granade2017structured; @hincks2018bayesian] for more details. Here, we will use the implementation of particle filtering and Liu–West resampling provided by the QInfer package [@granade2017qinfer].
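To make the particle-filter update concrete, here is a minimal, self-contained sketch written directly in NumPy rather than with QInfer's API (our own simplification). The prior mirrors the one used in the numerical experiments later in the text (uniform $p$ and $A$, $B \sim \mathcal{N}(0.5, 0.05^2)$), while the synthetic single-shot data, particle count, and resampling threshold are placeholder choices.

```python
import numpy as np

rng = np.random.default_rng(2)

def survival(p, A, B, m):
    """Zeroth-order RB model with the parameter conventions of Eq. (rb-parameter-defns)."""
    return np.clip(A * p ** m + B, 0.0, 1.0)

def likelihood(outcome, particles, m):
    q = survival(particles[:, 0], particles[:, 1], particles[:, 2], m)
    return q if outcome == 1 else 1.0 - q

def liu_west_resample(particles, weights, a=0.98):
    """Liu-West resampling: draw ancestors, shrink towards the mean, add jitter."""
    n = len(weights)
    mean = np.average(particles, weights=weights, axis=0)
    cov = np.cov(particles.T, aweights=weights)
    idx = rng.choice(n, size=n, p=weights)
    mu = a * particles[idx] + (1 - a) * mean
    new = mu + rng.multivariate_normal(np.zeros(3), (1 - a ** 2) * cov, size=n)
    return new, np.full(n, 1.0 / n)

# Prior: p and A uniform on [0, 1]; B ~ N(0.5, 0.05^2).
n_particles = 4000
particles = np.column_stack([
    rng.uniform(0, 1, n_particles),
    rng.uniform(0, 1, n_particles),
    rng.normal(0.5, 0.05, n_particles),
])
weights = np.full(n_particles, 1.0 / n_particles)

# Sequential Bayes updates over synthetic single-shot records (length, outcome).
data = [(2, 1), (8, 1), (32, 0), (64, 1), (128, 0)]
for m, outcome in data:
    weights = weights * likelihood(outcome, particles, m)
    weights /= weights.sum()
    if 1.0 / np.sum(weights ** 2) < n_particles / 2:   # effective sample size check
        particles, weights = liu_west_resample(particles, weights)

p_hat = np.average(particles[:, 0], weights=weights)
print(p_hat, ((2 - 1) * p_hat + 1) / 2)                # F = ((d - 1) p + 1) / d, d = 2
```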
Reusing Priors from Nearby Experiments
--------------------------------------
We have argued above that the objective function, and hence the RB parameters, are Lipschitz continuous, which allows us to reason that the variance of the probability distribution expands by at most a fixed multiplicative constant when transitioning information between different points. Operationally though, it is less clear how we should choose the posterior distribution over the average gate fidelity in Bayesian ACRONYM training given prior information at a single point. Theorem \[thm:pab-continuity\] provides us with an intuition that can be used for this: each element in the support of the probability distribution is shifted by at most a fixed amount that is dictated by the Lipschitz constants for the channels. Here, we build on this intuition by showing that the prior at each step in a Bayesian ACRONYM tuning protocol can be related to the posterior of the previous step in terms of the Minkowski sum and convex hull.
\[def:convex-hull\] Let $A$ be a set of vectors. Then the convex hull of $A$, written ${\mathrm{Conv}}(A)$ is the smallest convex set containing $A$, $$\begin{aligned}
{\mathrm{Conv}}(A) \defeq \left\{
\sum_{i=1}^{m} \lambda_i \vec{a}_i :
m \in \mathbb{N},\ \vec{a}_i \in A,\ \lambda_i \ge 0,\ \sum_{i=1}^{m} \lambda_i = 1
\right\}.
\end{aligned}$$
\[def:minkowski-sum\] Let $A$ and $B$ be sets of vectors. Then the Minkowski sum $A + B$ is defined as the convolution of $A$ with $B$, $$\begin{aligned}
A + B \defeq \left\{
\vec{a} + \vec{b} : \vec{a} \in A, \vec{b} \in B
\right\}.
\end{aligned}$$
With these concepts in place we can now state the following Corollary, which can be used to define a sensible prior distribution for $\vec{y}{(\vec{\theta} + \vec{\delta\theta})}$ given a posterior distribution for $\vec{y}(\vec{\theta})$.
\[cor:prior-reuse\] Let $\Lambda_T(\vec{\theta})$ be Lipschitz continuous in the trace distance with constant ${\mathcal{L}}$, and let $\Pr(\vec{y} | \vec{\theta})$ be a probability distribution over the RB parameters $\vec{y} = (p, A, B)$ for $\Lambda_T$ evaluated at some particular $\vec{\theta}$. Then, for any $\vec{\delta\theta} \in \mathbb{R}^n$, let $$\begin{aligned}
\Delta & \defeq
\|\vec{\delta\theta}\|, \\
D & \defeq
\Biggr\{ \pm \Delta \frac{d {\mathcal{L}}(1 + \bar{n})}{d - 1} \Biggr\} \times
\Biggr\{ \pm \Delta (1 + \bar{n}) {\mathcal{L}}\Biggr\} \times
\Biggr\{ \pm \Delta (1 + \bar{n}) {\mathcal{L}}\Biggr\}, \\
\textrm{and }
\Pr(\vec{y} | \vec{\theta} + \vec{\delta \theta}) & \defeq
\frac{1}{8}\sum_{\vec{s} \in D} \Pr(\vec{y} - \vec{s} | \vec{\theta}).
\end{aligned}$$ The following statements then hold:
1. $\Pr(\vec{y} | \vec{\theta} + \vec{\delta\theta})$ is a valid prior probability distribution for $\vec{y}(\vec{\theta} + \vec{\delta\theta})$.
2. $\hat{y} = \int \vec{y} \Pr(\vec{y} | \vec{\theta}) \dd\vec{y} = \int \vec{y} \Pr(\vec{y} | \vec{\theta} + \vec{\delta\theta}) \dd\vec{y}$.
3. If $\Pr(\vec{y} | \vec{\theta})$ has support only on $A \subset \mathbb{R}^3$, then $\Pr(\vec{y}|\vec{\theta} + \vec{\delta\theta})$ has support only on ${\mathrm{Conv}}(A + D)$.
4. If $\vec{y}_{\true}(\theta) \in A$ then $\vec{y}_{\true}(\theta+\delta\theta)\in {\mathrm{Conv}}(A + D)$.
The proof of the first claim is trivial and follows immediately from the fact that $\Pr(\vec{y} | \vec{\theta})$ is a probability distribution. The proof of the second claim is also straightforward. Note that $$\begin{aligned}
\hat{\vec{y}}
\defeq \int \vec{y}
\Pr(\vec{y} | \vec{\theta} + \vec{\delta\theta}) \dd\vec{y}
= & \frac18 \int \sum_{\vec{s} \in D}
\vec{y} \Pr(\vec{y} - \vec{s} | \vec{\theta}) \dd\vec{y} \nonumber \\
= & \frac18 \int \sum_{\vec{s} \in D}
(\vec{y} + \vec{s}) \Pr(\vec{y} | \vec{\theta}) \dd\vec{y} \nonumber \\
= & \int \vec{y}
\Pr(\vec{y} | \vec{\theta}) \dd\vec{y}.
\end{aligned}$$ where the final equality follows because the eight elements of $D$ sum to zero.
To consider the third claim, let $\vec{c} = (c_p, c_A, c_B)$ be a vector such that $|c_p| \le \Delta d{\mathcal{L}}(1+\bar{n})/(d-1)$ and $\max\{ |c_A|, |c_B|\} \le \Delta (1 + \bar{n}){\mathcal{L}}$. The convex hull ${\mathrm{Conv}}(D)$ is precisely the rectangular box of vectors satisfying these bounds, so it follows that $\vec{c} \in {\mathrm{Conv}}(D)$.
Put differently, we can express the Lipschitz bounds of Theorem \[thm:pab-continuity\] in terms of the Minkowski sum, such that $$\begin{aligned}
\vec{y}(\Lambda_T(\vec{\theta} + \vec{\delta\theta})) \in
{\mathrm{Conv}}\left(
\{\vec{y}(\Lambda_T(\vec{\theta}))\} +
D
\right).
\end{aligned}$$ Taking the union over all vectors $\vec{a}$ in the support of $\Pr(\vec{y} | \vec{\theta})$, we obtain that $$\begin{aligned}
\supp(\vec{y} | \vec{\theta} + \vec{\delta\theta}) \subseteq
{\mathrm{Conv}}\left(
\supp(\vec{y} | \vec{\theta}) +
D
\right).
\end{aligned}$$ From the linearity of convex hulls under Minkowski summation, $${\mathrm{Conv}}(\supp(\vec{y} | \vec{\theta}) + D) = {\mathrm{Conv}}(\supp(\vec{y} | \vec{\theta})) + {\mathrm{Conv}}(D).\label{eq:samething}$$ The fourth and final statement then immediately follows from .
This shows that if we follow the above rule to generate a prior distribution for the RB parameters at $\vec{\theta} + \vec{\delta\theta}$, then the resultant distribution does not introduce any bias into the current estimate of the parameters, which is codified by the mean of the posterior distribution. We also have that if the true model is within the support of the prior distribution at $\vec{\theta}$, then it will also be within the support at $\vec{\theta} + \vec{\delta\theta}$. This is important because it states that we can use the resulting distribution to give a credible region for the RB parameters. Thus this choice of prior is well justified; furthermore, if the measurement process reduces the posterior variance faster than it expands when $\vec{\theta}$ is updated, it allows us to obtain very accurate estimates of the true RB parameters without needing to extract redundant information.
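In a particle-filter representation, this construction amounts to replacing each particle by eight copies shifted to the corners of the box $D$. A minimal sketch of our own follows; the constants passed in would come from whatever Lipschitz bound is assumed for the device at hand, and the values in the usage comment simply reuse numbers quoted elsewhere in the text for the overrotation example.

```python
import numpy as np
from itertools import product

def reuse_prior(particles, weights, delta_norm, L, n_bar, d=2):
    """Broaden an SMC posterior over y = (p, A, B) into a prior at theta + dtheta,
    following the eight-point construction of the corollary above."""
    shift_p = delta_norm * d * L * (1 + n_bar) / (d - 1)
    shift_ab = delta_norm * (1 + n_bar) * L
    shifts = np.array(list(product((-shift_p, shift_p),
                                   (-shift_ab, shift_ab),
                                   (-shift_ab, shift_ab))))
    # Each particle becomes eight particles, one per corner of D, at 1/8 the weight.
    new_particles = (particles[:, None, :] + shifts[None, :, :]).reshape(-1, 3)
    new_weights = np.repeat(weights / 8.0, 8)
    return new_particles, new_weights

# Example usage with illustrative constants (SPSA step 0.05, L = 4, n_bar = 13/6):
# particles, weights = reuse_prior(particles, weights,
#                                  delta_norm=0.05, L=4.0, n_bar=13 / 6)
```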
Numerical Experiments
=====================
The above analysis shows that, under assumptions of Lipschitz continuity of the likelihood function, the posterior distribution found at a given step of the algorithm can be used to provide a prior for the next step. This holds provided that we form the new prior by appropriately expanding the variance of the posterior distribution.
While the above analysis shows that prior information can be reused in theory, we will now show in practice that this ability to re-use prior information can reduce the information needed to calibrate a simulated quantum device. The Clifford gates in the device, which we take to be the generators of the single-qubit Clifford group, are $H$ and $S$. We assume that $H$ can be implemented exactly but that $S$ has an over-rotation error such that $$S(\theta)= e^{-i \theta Z} S,$$ for some value of $\theta$. While this is called an “over-rotation” we make no assumption that $\theta>0$. We further apply depolarizing noise at a per-gate level to the system with strength $0.005$ meaning that we apply the channels $$\begin{aligned}
\Lambda_H &: \rho \mapsto 0.995 H \rho H + 0.005 (\openone/2),\nonumber\\
\Lambda_{S(\theta)} &: \rho \mapsto 0.995 e^{-i \theta Z} S \rho S^\dagger e^{i\theta Z} + 0.005 (\openone/2).\label{eq:channel}\end{aligned}$$
We assume that the user has control over the parameter $\theta$ but we do not assume that they know the functional form and thus do not know that setting $\theta=0$ will yield optimal performance. The goal of our Bayesian ACRONYM algorithm is then to allow the method to discover that $\theta=0$ yields the optimal performance via local search.
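For reference, here is a short sketch of our own that implements the channels of Eq. (\[eq:channel\]) as Kraus operators and evaluates the fidelity of the tuned gate's discrepancy channel as a function of $\theta$; it simply confirms that $\theta = 0$ is the optimum the local search is meant to find.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
S = np.diag([1.0, 1j])

def kraus_noisy_S(theta):
    """Kraus operators of Lambda_{S(theta)}: over-rotated S with probability 0.995,
    replacement by the maximally mixed state with probability 0.005."""
    U = (np.cos(theta) * I2 - 1j * np.sin(theta) * Z) @ S
    return [np.sqrt(0.995) * U] + [np.sqrt(0.005) * P / 2 for P in (I2, X, Y, Z)]

def agf_vs_ideal(kraus, ideal, d=2):
    """AGF of the discrepancy channel tilde{V}(V^dagger .), computed as
    (sum_i |Tr(K_i V^dagger)|^2 + d) / (d^2 + d)."""
    return (sum(abs(np.trace(K @ ideal.conj().T)) ** 2 for K in kraus) + d) / (d * d + d)

for theta in (0.0, 0.04, 0.35):
    print(theta, agf_vs_ideal(kraus_noisy_S(theta), S))
# Maximal (0.9975) at theta = 0, which is the optimum BACRONYM is meant to discover.
```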
![ \[fig:unitary-overrotation-perf\] Observed survival probabilities as a function of sequence lengths using $20$ measurements (shots) per length for an overrotation model with $\theta=0.04$. Solid orange line represents the true value for the survival probability, $(A-B)p^{\mathcal{L}}+B$, as a function of the sequence length ${\mathcal{L}}$ and the dashed line represents the estimate of the survival probability. The prior was set to be uniform for $p$ and $A$ on $[0,1]$ and the prior $B$ was set to be the normal distribution $\mathcal{N}(0.5,0.05^2)$. ](\figurefolder/overrotation-synthetic-rb-data.pdf){width="0.8\linewidth"}
The figure above shows the impact that using Bayesian inference to estimate RB parameters can have in data-limited cases of the over-rotation problem. Specifically, we apply Bayesian ACRONYM training to calibrate the over-rotation to within an error of $0.005$, which is equal to the depolarizing error that we included in the channels in Eq. (\[eq:channel\]). A broad prior was taken, and despite the challenges of learning a good model by least-squares fitting in this regime, we are able to accurately learn the survival probability. We can then learn the parameters $A$, $B$ and $p$, the latter of which gives us the average gate fidelity needed for ACRONYM training via Eq. (\[eq:pDef\]). As the required accuracy for the estimate of $p$ increases, the advantages gleaned from using Bayesian methods relative to fitting disappear [@hwf+_bayesian_2018]. However, in our context this observation is significant because we wish to tune the performance of quantum devices in the small data limit rather than the large data limit and use prior information from previous experiments to compensate.
Local search is implemented using SPSA with learning rate $0.05$, a step of $0.05$ used to compute approximate gradients, and a maximum step size of $0.1$. We repeat the method until the posterior variance in the average gate fidelity is less than $0.005^2$. We use a Lipschitz constant of $1.48$, which was numerically computed as a bound to give an appropriate amount of diffusion for the posterior distribution during an update. Bayesian inference is approximated using a particle filter with $256~000$ particles and Liu–West resampling with a resample threshold of $1/256$ as implemented by QInfer [@granade2017qinfer]. Single shot experiments are used with a maximum number of sequences of $500$ per set of parameters.
Perhaps the key observation is that throughout the tuning process the true parameters for the overrotation error remain within the $70\%$ credible region reported by QInfer, which suggests if anything that the credible region is pessimistic. The estimate of ${F}$ also closely tracks the true value throughout the learning process, and the amount of data required for the tuning process is minimal, less than $1$ kB.
![ \[fig:unitary-overrotation-perf\] Over-rotation angle and objective function values for an over-rotation model with a $0.35$ radian over-rotation initially with a target error of $0.005$ in $F$ as measured by the posterior standard-deviation. (Left) Over-rotation angle as a function of number of iterations of SPSA taken. (Right) Estimated Average gate infidelity as a function of the number of SPSA iterations and the total number of sequences used to achieve that level of infidelity. The shaded region represents a $70\%$ credible region for the infidelity. ](\figurefolder/overrotation-bacronym-perf_alpha.pdf){width="\linewidth"}
Conclusion
==========
The main result of our work is to show that, under weak assumptions of Lipschitz continuity, Bayesian inference can be used to piece together evidence gained from experiments at nearby experimental settings to accelerate learning of optimal control parameters for quantum devices. We further demonstrate the success of this approach numerically by using a Bayesian ACRONYM tuning protocol (BACRONYM) to tune a rotation gate that suffers from an unknown overrotation. We find that by use of evidence from nearby experimental settings for the gate, we can learn optimal controls with fewer than $1$ kilobit of data, which is a reduction of nearly a factor of $20$ relative to the best known non-Bayesian approach [@kelly2014optimal].
Looking forward, there are a number of ways in which this work can be built upon. Firstly, upper bounds on the Lipschitz constant and variance are needed to properly use evidence from nearby points within the optimization loop; however, tight estimates are not known a priori for either quantity. Finding approaches that yield useful empirical bounds would be an important contribution beyond what we provide here. Secondly, an experimental demonstration of Bayesian ACRONYM tuning would be useful to demonstrate the viability of such tuning in real-world applications. Finally, while we have picked SPSA as an optimizer for convenience, there may be better choices within the literature. This raises an interesting issue because the number of times that the objective function needs to be queried is not the best metric when information is reused. This point is important not just for choosing the best optimizer to minimize experimental costs for tuning hardware; it also potentially reveals a new way of optimizing parameters in variational quantum eigensolvers [@peruzzo2014variational], as well as QAOA [@farhi2014quantum] and quantum machine learning algorithms [@schuld2018circuit].
This project was prepared using a reproducible workflow [@granade_reproducible_2017].
Pseudocode for BACRONYM Tuning
==============================
**Inputs:** $\vec{\theta}_0$: initial control parameters; $n_{\text{shots}}$: number of measurements per sequence length; $\sigma_{\text{req}}$: required accuracy for ${F}$; $(a, b, s, t)$: SPSA parameters; the largest allowed step in the parameter $\vec{\theta}$; ${F}_{\text{target}}$: target objective function value; $\pi_0$: initial prior; ${\mathcal{L}}$: Lipschitz constant assumed for ${F}$.

**Procedure:**

1. $\pi \gets \pi_0$, $\vec{\theta} \gets \vec{\theta}_0$.
2. Collect RB data at $\vec{\theta}$ until $\Var[{F}] \le \sigma_{\text{req}}^2$, and set $\hat{{F}} \gets \expect[{F}(\vec{\theta}) | \text{data}]$.
3. $i_{\text{iter}} \gets 0$.
4. Repeat until the target ${F}_{\text{target}}$ is reached, incrementing $i_{\text{iter}}$ each time:
    1. $\vec{\Delta} \gets$ a random $\pm1$ vector the same length as $\vec{\theta}$.
    2. $\mathrm{step} \gets a / (1 + i_{\text{iter}}^{s})$ and $\mathrm{gain} \gets b / (1 + i_{\text{iter}}^{t})$.
    3. $\vec{\delta\theta} \gets \mathrm{step} \cdot \vec{\Delta}$; estimate $\hat{{F}}(\vec{\theta} + \vec{\delta\theta})$, reusing the prior as in Corollary \[cor:prior-reuse\].
    4. $\vec{u} \gets \mathrm{gain} \cdot \vec{\Delta}\, (\hat{{F}}(\vec{\theta} + \vec{\delta\theta}) - \hat{{F}}(\vec{\theta}))$, then $\vec{u} \gets \vec{u} / \max_{u \in \vec{u}} |u|$.
    5. $\vec{\theta} \gets \vec{\theta} + \vec{u}$, capped at the largest allowed step; any trial shift applied for the evaluation in step 3 is undone ($\vec{\theta} \gets \vec{\theta} - \mathrm{step} \cdot \vec{\Delta}$) or re-applied ($\vec{\theta} \gets \vec{\theta} + \mathrm{step} \cdot \vec{\Delta}$) as appropriate.
5. Return $\vec{\theta}$ and $\hat{{F}}$.
[^1]: Sometimes informally called a “Kraus rank.”
|
---
abstract: 'We study general quantum waveguides and establish explicit effective Hamiltonians for the Laplacian on these spaces. A conventional quantum waveguide is an ${\varepsilon}$-tubular neighbourhood of a curve in ${\mathbb{R}}^3$ and the object of interest is the Dirichlet Laplacian [on]{} this tube in the asymptotic limit ${\varepsilon}\to0$. We generalise this by considering fibre bundles $M$ over a $d$-dimensional submanifold $B\subset{\mathbb{R}}^{d+k}$ [with]{} fibres diffeomorphic to $F\subset{\mathbb{R}}^k$, whose total space is embedded into an ${\varepsilon}$-neighbourhood of $B$. From this point of view $B$ takes the role of the curve and $F$ that of the disc-shaped cross-section of a conventional quantum waveguide. Our approach allows, [among other things]{}, for waveguides whose cross-sections $F$ are deformed along $B$ and also the study of the Laplacian on the boundaries of such waveguides. By applying recent results on the adiabatic limit of Schrödinger operators on fibre bundles we show, in particular, that for small energies the dynamics and the spectrum of the Laplacian on $M$ are reflected by the adiabatic approximation associated to the ground state band of the normal Laplacian. We give explicit formulas for the according effective operator on $L^2(B)$ in various scenarios, thereby improving and extending many of the known results on quantum waveguides and quantum layers in ${\mathbb{R}}^3$.'
author:
- 'Stefan Haag, Jonas Lampart, Stefan Teufel'
title: Generalised Quantum Waveguides
---
Eberhard Karls Universität Tübingen, Mathematisches Institut, Auf der Morgenstelle 10, 72076 Tübingen, Germany.
Introduction
============
Quantum waveguides have been studied by physicists, chemists and mathematicians for many years now and the rate at which new contributions appear is still high (see [@BMT07; @dV11; @DE95; @KS12; @KR13; @SS13] and references therein). Mathematically speaking, a conventional quantum waveguide corresponds to the study of the Dirichlet Laplacian on a thin tube around a smooth curve in ${\mathbb{R}}^3$. Of particular interest are effects of the geometry of the tube on the spectrum of and the unitary group generated by the Laplacian. Similarly so-called quantum layers, [i.e. ]{}the Laplacian on a thin layer around a smooth surface, have been studied [@CEK04; @KL12; @KRT13]. The related problem of the constraining of a quantum particle to a neighbourhood of such a curve (or surface) by a steep potential rather than through the boundary condition was studied in [@daC; @deO13; @FrHe; @JK71; @Mar; @Mit; @WaTe]. Recently, progress has also been made on quantum waveguides and layers in magnetic fields [@deO13; @KR13; @KRT13].
There are obvious geometric generalisations of these concepts. One can consider the Dirichlet Laplacian on small neighbourhoods of $d$-dimensional submanifolds of ${\mathbb{R}}^{d+k}$ (see e.g. [@LL06]), or of any $(d+k)$-dimensional Riemannian manifold. Another possibility is to look at the Laplacian on the boundary of such a submanifold, which, in the case of a conventional waveguide, is a cylindrical surface around a curve in ${\mathbb{R}}^3$. Beyond generalising to higher dimension and codimension, one could also ask for waveguides with cross-sections that change their shape and size along the curve, or more generally along the submanifold around which the waveguide is modelled.
In the majority of mathematical works on quantum waveguides, with the exception of [@dV11], such variations of the cross-section along the curve must be excluded. The reason is that, physically speaking, localizing a quantum particle to a thin domain leads to large kinetic energies in the constrained directions, [i.e. ]{}in the directions normal to the curve for a conventional waveguide, and that variations of the cross-section lead to exchange of this kinetic energy between normal and tangent directions. However, the common approaches require that the Laplacian acts only on functions that have much smaller derivatives in the tangent directions than in the normal directions. In this paper we show how to cope with several of the possible generalisations mentioned above: (i) We consider general dimension and codimension of the submanifold along which the waveguide is modelled. (ii) We allow for general variations of the cross-sections along the submanifold and thus necessarily for kinetic energies of the same order in all directions, with possible exchange of energy between the tangent and normal directions. (iii) We also include the case of “hollow” waveguides, [i.e. ]{}the Laplacian on the boundary of a general “massive” quantum waveguide.
All of this is achieved by developing a suitable geometric framework for general quantum waveguides and by the subsequent application of recent results on the adiabatic limit of Schrödinger operators on fibre bundles [@LT2014]. As concrete applications we will mostly emphasise geometric effects and explain, in particular, how the known effects of “bending” and “twisting” of waveguides in ${\mathbb{R}}^3$ manifest themselves in higher dimensional generalised waveguides.
Before going into a more detailed discussion of our results and of the vast literature, let us review the main concepts in the context of conventional quantum waveguides in a geometrical language that is already adapted to our subsequent generalisation. Moreover, it will allow us to explain the adiabatic structure of the problem within a simple example.
Consider a smooth curve $c:{\mathbb{R}}\to {\mathbb{R}}^3$ parametrised by arclength with bounded second derivative $c''$ and its ${\varepsilon}$-neighbourhood $${\mathcal{T}}^{\varepsilon}:=\bigl\{y\in{\mathbb{R}}^3:\ \operatorname{dist}(y,B)\leq {\varepsilon}\bigr\}\subset{\mathbb{R}}^3$$ for some ${\varepsilon}>0$. By $B:=c({\mathbb{R}})$ we denote the image of the curve in ${\mathbb{R}}^3$ and we call ${\mathcal{T}}^{\varepsilon}$ the [tube]{} of a conventional waveguide. The aim is to understand the Laplace operator $\Delta_{\updelta^3}$ on $L^2({\mathcal{T}}^{\varepsilon},{\mathrm{d}}\updelta^3)$ with Dirichlet boundary conditions on ${\mathcal{T}}^{\varepsilon}$ in the asymptotic limit ${\varepsilon}\ll 1$. As different metrics will appear in the course of the discussion, we make the Euclidean metric $\updelta^3$ explicit in the Laplacian.
For ${\varepsilon}$ small enough one can map ${\mathcal{T}}^{\varepsilon}$ diffeomorphically onto the ${\varepsilon}$-tube in the normal bundle of $B$. In order to make the following computations explicit, we pick an orthonormal frame along the curve. A natural choice is to start with an orthonormal basis $(\tau, e_1,e_2)$ at one point in $B$ such that $\tau=c'$ is tangent and $(e_1,e_2)$ are normal to the curve. Then one obtains a unique (in this special case global) frame by parallel transport of $(\tau, e_1,e_2)$ along the curve $B$. This construction is sometimes called the relatively parallel adapted frame [@Bishop1975]. The frame $(\tau(x),e_1(x),e_2(x))$ satisfies the differential equation $$\label{eq:DGLBishop}
\begin{pmatrix} \tau' \\ e'_1 \\ e'_2 \end{pmatrix} = \begin{pmatrix} 0 & \kappa^1 & \kappa^2 \\ - \kappa^1 & 0 & 0 \\ -\kappa^2 & 0 & 0 \end{pmatrix} \begin{pmatrix} \tau \\ e_1 \\ e_2 \end{pmatrix}$$ with the components of the mean curvature vector $\kappa^\alpha:B\to{\mathbb{R}}$ ($\alpha=1,2$) given by $$\kappa^\alpha(x) := {\left \langle \tau'(x) , e_\alpha(x) \right \rangle}_{{\mathbb{R}}^3} = {\left \langle c''(x) , e_\alpha(x) \right \rangle}_{{\mathbb{R}}^3} \, .$$ The two normal vector fields $e_{1,2}:B\to {\mathbb{R}}^3$ form an orthonormal frame of $B$’s normal bundle ${\mathsf{N}}B$. Hence, for ${\varepsilon}>0$ small enough, there is a canonical identification of the ${\varepsilon}$-tube in the normal bundle denoted by $$M^{\varepsilon}:= \left\{ \bigl(x, n^1e_1(x)+n^2e_2(x)\bigr)\in {\mathsf{N}}B:\ (n^1)^2+(n^2)^2\leq{\varepsilon}^2\right\} \subset {\mathsf{N}}B$$ with the original ${\varepsilon}$-tube ${\mathcal{T}}^{\varepsilon}\subset{\mathbb{R}}^3$ via the map $$\label{eq:Phiconv}
\Phi:M^{\varepsilon}\to{\mathcal{T}}^{\varepsilon}\, ,\quad \Phi :\bigl(x,n^1 e_1(x)+n^2 e_2(x)\bigr)\mapsto x + n^1 e_1(x)+n^2 e_2(x)\, .$$ We will refer to $F^{\varepsilon}_x:= M^{\varepsilon}\cap {\mathsf{N}}_x B$ as the cross-section of $M^{\varepsilon}$ and to $\Phi(F^{\varepsilon}_x)$ as the cross-section of ${\mathcal{T}}^{\varepsilon}$ at $x\in B$.
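In practice the relatively parallel frame is obtained by integrating the system $e_\alpha' = -\kappa^\alpha\,\tau = -{\left \langle c'' , e_\alpha \right \rangle}\,c'$ from \eqref{eq:DGLBishop}. The following short Python sketch does this for a concrete helix; the curve, the initial frame and the numerical tolerances are purely illustrative choices and not part of the general construction.

```python
# Numerical sketch: relatively parallel (Bishop) frame along a helix, obtained by
# integrating e_alpha' = -<c'', e_alpha> c'. All parameters are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

a, b = 1.0, 0.5
s0 = np.sqrt(a**2 + b**2)                 # makes x an arclength parameter

def cp(x):   # tangent tau = c'
    return np.array([-a*np.sin(x/s0), a*np.cos(x/s0), b])/s0

def cpp(x):  # acceleration c''
    return np.array([-a*np.cos(x/s0), -a*np.sin(x/s0), 0.0])/s0**2

def rhs(x, e):
    e1, e2 = e[:3], e[3:]
    tau, acc = cp(x), cpp(x)
    return np.concatenate([-np.dot(acc, e1)*tau, -np.dot(acc, e2)*tau])

# orthonormal normal vectors at x = 0
e1_0 = np.array([0.0, 0.0, 1.0])
e1_0 -= np.dot(e1_0, cp(0.0))*cp(0.0)
e1_0 /= np.linalg.norm(e1_0)
e2_0 = np.cross(cp(0.0), e1_0)

sol = solve_ivp(rhs, (0.0, 20.0), np.concatenate([e1_0, e2_0]), rtol=1e-10, atol=1e-12)
e1, e2, x_end = sol.y[:3, -1], sol.y[3:, -1], sol.t[-1]
print("<e1,e2>, |e1|, |e2| :", np.dot(e1, e2), np.linalg.norm(e1), np.linalg.norm(e2))
print("<e1,tau>, <e2,tau>  :", np.dot(e1, cp(x_end)), np.dot(e2, cp(x_end)))
```

The printed inner products remain zero (and the norms remain one) up to the integration error, confirming that parallel transport preserves the orthonormal frame of the normal bundle.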
In order to give somewhat more substance to the simple example, let us generalise the concept of a conventional waveguide already at this point. For a smooth function $f:B\to [f_-,f_+]$ with $0<f_-<f_+<\infty$ let $$M^{\varepsilon}_f := \left\{ \bigl(x, n^1e_1(x)+n^2e_2(x)\bigr)\in {\mathsf{N}}B:\ (n^1)^2+(n^2)^2\leq{\varepsilon}^2 \,f(x)^2\right\}$$ be the tube with varying cross-section $F^{\varepsilon}_x$, a disc of radius $ {\varepsilon}f(x)$. This gives rise to a corresponding tube ${\mathcal{T}}^{\varepsilon}_f := \Phi(M^{\varepsilon}_f)$ in ${\mathbb{R}}^3$. To avoid overburdening the notation, we will drop the subscript $f$ in the following, i.e. put $M^{\varepsilon}:= M^{\varepsilon}_f$ and ${\mathcal{T}}^{\varepsilon}:={\mathcal{T}}^{\varepsilon}_f $.
By equipping $M^{\varepsilon}$ with the pullback metric $g:=\Phi^*\updelta^3$, we can turn $\Phi$ into an isometry. Then the Dirichlet Laplacian $\Delta _{\updelta^3}$ on $L^2({\mathcal{T}}^{\varepsilon},{\mathrm{d}}\updelta^3)$ is unitarily equivalent to the Dirichlet Laplacian $\Delta _g$ on $L^2(M^{\varepsilon},{\mathrm{d}}g)$.
In order to obtain an explicit expression for $\Delta _g$ with respect to the bundle coordinates $(x,n^1,n^2)$ associated with the orthonormal frame $(e_1(x), e_2(x))$, we need to compute the pullback metric $g$ on the tube $M^{\varepsilon}$. For the coordinate vector fields $\partial_x$ and $\partial_{n^\alpha}$, $\alpha\in\{1,2\}$, one finds $$\begin{aligned}
\Phi_*\partial_x|_{(x,n)} &= \tfrac{{\mathrm{d}}}{{\mathrm{d}}x} \Phi\Bigl(\bigl(c(x),n^\alpha e_\alpha\bigl(c(x)\bigr)\Bigr) = \tfrac{{\mathrm{d}}}{{\mathrm{d}}x} \Bigl(c(x) + n^\alpha e_\alpha\bigl(c(x)\bigr) \Bigr) \\
&= \tau(x) - n^\alpha \kappa^\alpha(x) \tau(x) = \bigl(1 - n\cdot \kappa(x)\bigr) \tau(x)\, , \\
\Phi_*\partial_{n^\alpha}|_{(x,n)} &=\tfrac{{\mathrm{d}}}{{\mathrm{d}}n^\alpha} \Phi\Bigl(\bigl(c(x),n^\alpha e_\alpha\bigl(c(x)\bigr)\Bigr) = \tfrac{{\mathrm{d}}}{{\mathrm{d}}n^\alpha} \Bigl(c(x) + n^\alpha e_\alpha\bigl(c(x)\bigr) \Bigr) \\
&= e_\alpha(x)\,.\end{aligned}$$ Here, we used $c'(x)=\tau(x)$ and the differential equations . Knowing that $(\tau,e_1,e_2)$ is an orthonormal frame of ${\mathsf{T}}{\mathbb{R}}^3|_B$ with respect to $\updelta^3$, this yields $$g(x,n):=\begin{pmatrix} (1-n\cdot \kappa(x))^2 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$ The Laplace-Beltrami operator on $M^{\varepsilon}$ associated to $g$ is thus $$-\Delta _g = -\frac{1}{(1-n\cdot\kappa)^2}\left(\partial_x^2 + \frac{n\cdot\kappa'}{1-n\cdot\kappa}\,\partial_x\right) - \Delta_n + \frac{\kappa\cdot\nabla_n}{1-n\cdot\kappa}$$ with $\Delta_n = \nabla_n^2 = \partial_{n^1}^2 + \partial_{n^2}^2$. As the Riemannian volume measure of $g$ on coordinate space reads ${\mathrm{d}}g = (1-n\cdot\kappa(x)) {\mathrm{d}}\updelta^3$, it is convenient to introduce the multiplication operator ${\mathcal{M}}_\rho:\psi\mapsto \rho^{1/2}\psi$ with the density $\rho(x,n):= 1-n\cdot\kappa(x)$ as a unitary operator from $L^2(M^{\varepsilon},{\mathrm{d}}g)$ to $L^2(M^{\varepsilon}, {\mathrm{d}}\updelta^3)$. Here ${\mathrm{d}}\updelta^3$ just denotes Lebesgue measure on the coordinate space. A straightforward computation shows that $$\label{eq:V_rho}
{\mathcal{M}}_\rho\bigl(-\Delta _g\bigr) {\mathcal{M}}_\rho^* = - \Delta_\mathrm{hor} - \Delta_n + V_\rho$$ with $$\Delta_\mathrm{hor} := \partial_x\, \rho^{-2}\,\partial_x = \partial_x^2 + \partial_x (\rho^{-2} - 1) \partial_x$$ and $$V_\rho := - \frac{\kappa^2}{4\rho^2} - \frac{n\cdot\kappa''}{2\rho^3} - \frac{5(n\cdot\kappa')^2}{4\rho^4}\,.$$ The rescaling by $\rho$ thus leads to a simpler splitting of the Laplacian $-\Delta _g$ into a horizontal operator $\Delta_\mathrm{hor}$ and a vertical operator $\Delta_n$ given by the Euclidean Laplace operator with Dirichlet boundary conditions on the cross-sections $F^{\varepsilon}_x$. This simplification is, however, at the expense of an additional potential $V_\rho$.
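The identity \eqref{eq:V_rho} can be checked symbolically. The following sympy sketch does so directly in the bundle coordinates $(x,n^1,n^2)$, where $g=\operatorname{diag}(\rho^2,1,1)$ and $\sqrt{\det g}=\rho$; here $\kappa^2$ is read as $(\kappa^1)^2+(\kappa^2)^2$, and the computation is a plain consistency check of the formulas above rather than an independent derivation.

```python
# Symbolic check of M_rho (-Delta_g) M_rho^* = -d_x rho^{-2} d_x - Delta_n + V_rho
# for rho = 1 - n.kappa and g = diag(rho^2, 1, 1). Expected output: 0.
import sympy as sp

x, n1, n2 = sp.symbols('x n1 n2')
k1, k2 = sp.Function('k1')(x), sp.Function('k2')(x)     # kappa^1(x), kappa^2(x)
phi = sp.Function('phi')(x, n1, n2)                     # arbitrary test function

rho = 1 - n1*k1 - n2*k2
coords = [x, n1, n2]
g = sp.diag(rho**2, 1, 1)
ginv, sqrtg = g.inv(), rho                              # rho > 0 on a thin tube

def laplace_beltrami(f):
    # (1/sqrt|g|) d_i ( sqrt|g| g^{ij} d_j f )
    return sum(sp.diff(sqrtg*ginv[i, j]*sp.diff(f, coords[j]), coords[i])
               for i in range(3) for j in range(3))/sqrtg

lhs = sp.sqrt(rho)*(-laplace_beltrami(phi/sp.sqrt(rho)))

k1p, k2p, k1pp, k2pp = k1.diff(x), k2.diff(x), k1.diff(x, 2), k2.diff(x, 2)
V_rho = (-(k1**2 + k2**2)/(4*rho**2)
         - (n1*k1pp + n2*k2pp)/(2*rho**3)
         - 5*(n1*k1p + n2*k2p)**2/(4*rho**4))
rhs = (-sp.diff(rho**(-2)*sp.diff(phi, x), x)           # -Delta_hor
       - sp.diff(phi, n1, 2) - sp.diff(phi, n2, 2)      # -Delta_n
       + V_rho*phi)

print(sp.simplify(lhs - rhs))                           # expected output: 0
```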
Since $F^{\varepsilon}_x$ is isometric to the disc of radius ${\varepsilon}f(x)$, the eigenvalues and eigenfunctions of the vertical operator $\Delta_n$ are ${\varepsilon}$-dependent and also functions of $x\in B$. In order to arrive at an ${\varepsilon}$-independent vertical operator and an ${\varepsilon}$-independent domain, one dilates the fibres of ${\mathsf{N}}B$ using ${\mathcal{D}}_{\varepsilon}:(x,n)\mapsto (x,{\varepsilon}n)$ and its associated lift to a unitary operator mapping $L^2(M^{{\varepsilon}=1},{\mathrm{d}}\updelta^3)$ to $L^2(M^{\varepsilon},{\mathrm{d}}\updelta^3)$. One then arrives at $${\mathcal{D}}_{\varepsilon}^*{\mathcal{M}}_\rho \bigl(-\Delta _g\bigr) {\mathcal{M}}_\rho^* {\mathcal{D}}_{\varepsilon}= - (\partial_x^2 + {\varepsilon}S^{\varepsilon}) - {\varepsilon}^{-2}\Delta_n + {V_\mathrm{bend}}~$$ with the second order differential operator $$S^{\varepsilon}:= {\varepsilon}^{-1}\partial_x {\mathcal{D}}_{\varepsilon}^* (\rho^{-2} - 1) {\mathcal{D}}_{\varepsilon}\partial_x
= {\varepsilon}^{-1}\partial_x \underbrace{(\rho_{\varepsilon}^{-2} - 1)}_{= 2{\varepsilon}n\cdot \kappa + \mathcal{O}({\varepsilon}^2)} \partial_x\, ,\quad \rho_{\varepsilon}:= 1 - {\varepsilon}n\cdot\kappa~\,$$ and the bending potential $$\label{eq:Vb intro}
{V_\mathrm{bend}}:= {\mathcal{D}}_{\varepsilon}^* V_\rho {\mathcal{D}}_{\varepsilon}= - \frac{\kappa^2}{4\rho_{\varepsilon}^2} - \frac{{\varepsilon}n\cdot\kappa''}{2\rho_{\varepsilon}^3} - \frac{5({\varepsilon}n\cdot\kappa')^2}{4\rho_{\varepsilon}^4} = - \frac{\kappa^2}{4} + {\mathcal{O}}({\varepsilon})\, .$$ These terms account for the bending of the curve in the ambient space, i.e. its extrinsic geometry. In order to make the asymptotic limit ${\varepsilon}\to 0$ more transparent, we rescale units of energy in such a way that the transverse energies are of order one by multiplying the full Laplacian with ${\varepsilon}^2$. In summary one finds that $ -{\varepsilon}^2\Delta _g $ is unitarily equivalent to the operator $$H^{\varepsilon}:= {\mathcal{D}}_{\varepsilon}^* {\mathcal{M}}_\rho\bigl(-{\varepsilon}^2\Delta _g\bigr) {\mathcal{M}}_\rho^* {\mathcal{D}}_{\varepsilon}= - {\varepsilon}^2 \Delta_{\mathsf{H}}- \Delta_{\mathsf{V}}+ {\varepsilon}^2 {V_\mathrm{bend}}- {\varepsilon}^3 S^{\varepsilon}\,$$ acting on the domain $D(H^{\varepsilon}) = W^2(M)\cap W^1_0(M)\subset L^2(M)$ with $M:=M^{{\varepsilon}=1}$. The Hamiltonian $H^{\varepsilon}$ thus splits into the horizontal Laplacian ${\varepsilon}^2\Delta_{\mathsf{H}}:={\varepsilon}^2\partial_x^2$, the vertical Laplacian $\Delta_{\mathsf{V}}:=\Delta_n$ and an additional differential operator ${\varepsilon}H_1:={\varepsilon}^2 {V_\mathrm{bend}}- {\varepsilon}^3 S^{\varepsilon}$ that will be treated as a perturbation. This structure is reminiscent of the starting point for the Born-Oppenheimer approximation in molecular physics. There the $x$-coordinate(s) describe heavy nuclei and ${\varepsilon}^2$ equals the inverse mass of the nuclei. The $n$-coordinates describe the electrons with mass of order one. In both cases the vertical resp. electron operator depends on $x$: the vertical Laplacian $\Delta_{\mathsf{V}}$ in the quantum waveguide Hamiltonian $H^{\varepsilon}$ depends on $x\in B$ through the domain $F_x:=F_x^{{\varepsilon}=1}$ and the electron operator in the molecular Hamiltonian depends on $x$ through an interaction potential. This suggests studying the asymptotics ${\varepsilon}\ll 1$ for quantum waveguide Hamiltonians by the same methods that have been successfully developed for molecular Hamiltonians, namely by adiabatic perturbation theory. The latter allows one to separate slow and fast degrees of freedom in a systematic way. In the context of quantum waveguides the tangent dynamics are slow compared to the frequencies of the normal modes.
To illustrate the adiabatic structure of the problem, let $\lambda_0(x) \sim \frac{1}{f(x)^2}$ be the smallest eigenvalue of $-\Delta_{\mathsf{V}}$ on $F_x$ and denote by $\phi_0(x)\in L^2(F_x)$ the corresponding normalised non-negative eigenfunction, the so-called ground state wave function. Let $$P_0 L^2(M) := \left\{ \Psi(x,n ) = \psi(x) \phi_0(x,n ):\ \psi\in L^2(B)\right\} \subset L^2(M)$$ be the subspace of local product states and $P_0$ the orthogonal projection onto this space. Now the restriction of $H^{\varepsilon}$ to the subspace $P_0 L^2(M)$ is called the adiabatic approximation of $H^{\varepsilon}$ on the ground state band and the associated adiabatic operator is defined by $$\label{eq:H_a def}
H_\mathrm{a} := P_0H^{\varepsilon}P_0\,.$$ A simple computation using $$(P_0\Psi) (x,n)= \langle \phi_0(x,\cdot),\Psi(x,\cdot)\rangle_{L^2(F_x)} \;\phi_0(x,n)$$ and the unitary identification $W: P_0L^2(M) \to L^2(B)$, $ \psi(x) \phi_0(x,n^1,n^2)\mapsto \psi(x)$, shows that the adiabatic operator $H_{\rm a}$ can be seen as an operator acting only on functions on the curve $B$, given by $$\begin{aligned}
(WH_{\rm a} W^*\psi )(x) &= \bigl( - {\varepsilon}^2 \partial_x^2 + \lambda_0(x) + {\varepsilon}^2 V_\mathrm{a}(x) + {\varepsilon}^2 {V_\mathrm{bend}}^0(x) \bigr) \psi(x)\\
& \hphantom{=}\ + {\varepsilon}^3 \int_{F_x} \phi_0(x,n) \bigl(S^{\varepsilon}(\phi_0\psi)\bigr)(x,n)\ {\mathrm{d}}n+ {\mathcal{O}}({\varepsilon}^3)\,,\end{aligned}$$ where $V_\mathrm{a}(x) := {\left \lVert \partial_x \phi_0(x) \right \rVert}_{L^2(F_x)}^2$ and ${V_\mathrm{bend}}^0(x)= - \frac{\kappa^2(x)}{4}$. As such it is a one-dimensional Schrödinger-type operator with potential function $\lambda_0(x)+\mathcal{O}({\varepsilon}^2)$ and the asymptotic limit ${\varepsilon}\ll 1$ corresponds to the semi-classical limit. This analogy shows that, in general, $- {\varepsilon}^2 \partial_x^2 $ cannot be considered small compared to $\lambda_0(x) $, despite the factor of ${\varepsilon}^2$. To see this, observe that all eigenfunctions $\psi^{\varepsilon}$ (and also all solutions of the corresponding time-dependent Schrödinger equation) are necessarily ${\varepsilon}$-dependent with $ \| {\varepsilon}\partial_x \psi^{\varepsilon}\|^2\gg {\varepsilon}^2$, unless $\lambda_0(x)\equiv c$ for some constant $c\in{\mathbb{R}}$. To be more explicit, assume that $\lambda_0(x) \approx \lambda_0(x_0) + {\omega^2} (x-x_0)^2$ near a global minimum at $x_0$. Then the lowest eigenvalues of $H_\mathrm{a}$ are $e_\ell = \lambda_0(x_0) + {\varepsilon}\omega (1 + 2 \ell) + \mathcal{O}({\varepsilon}^2)$ for $\ell=0,1,2,\dots$. While the level spacing of order ${\varepsilon}$ is small compared to $\lambda_0$, it is large compared to the energy scale of order ${\varepsilon}^2$ of the geometric potentials. And for states $\psi^{\varepsilon}$ with $\ell \sim {\varepsilon}^{-1}$ the kinetic energy $ \| {\varepsilon}\partial_x \psi^{\varepsilon}\|^2 $ in the tangential direction is of order one.
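This harmonic-oscillator picture is easy to reproduce numerically. The following sketch discretises $-{\varepsilon}^2\partial_x^2 + \lambda_0(x)$ for an exactly quadratic band by second-order finite differences and compares the lowest eigenvalues with $\lambda_0(x_0) + {\varepsilon}\omega(1+2\ell)$; all parameter values are illustrative.

```python
# Lowest eigenvalues of -eps^2 d_x^2 + lambda_min + omega^2 (x-x0)^2 versus the
# harmonic-oscillator prediction lambda_min + eps*omega*(1+2l). Illustrative values only.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

eps, omega, x0, lam_min = 0.01, 1.3, 0.0, 0.7
x = np.linspace(-2.0, 2.0, 4001)
h = x[1] - x[0]

kin = eps**2*diags([np.full(x.size - 1, -1.0), np.full(x.size, 2.0), np.full(x.size - 1, -1.0)],
                   offsets=[-1, 0, 1])/h**2              # -eps^2 d_x^2, Dirichlet ends
H = (kin + diags(lam_min + omega**2*(x - x0)**2)).tocsc()

evals = np.sort(eigsh(H, k=4, sigma=lam_min, which='LM', return_eigenvectors=False))
for l, e in enumerate(evals):
    print(f"l={l}:  numerical {e:.6f}   prediction {lam_min + eps*omega*(1 + 2*l):.6f}")
```

The level spacing $2{\varepsilon}\omega$ is clearly visible; it is small compared to $\lambda_0(x_0)$ but large compared to ${\varepsilon}^2$.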
However, the majority of mathematical works on the subject considers the situation where $ \| {\varepsilon}\partial_x \psi^{\varepsilon}\|^2 $ is of order ${\varepsilon}^2$. Clearly this only yields meaningful results if one assumes $\lambda_0 \equiv c$ for some constant $c\in{\mathbb{R}}$. But this, in turn, puts strong constraints on the possible geometries of the waveguide, which we avoid in the present paper. Now the obvious mathematical question is: *To what extent and in which sense do the properties of the adiabatic operator $H_\mathrm{a}$ reflect the corresponding properties of $H^{\varepsilon}$?* This question was answered in great generality in [@Lampart2013; @LT2014] and we will translate these results to our setting of generalised quantum waveguides in Section \[sec:mainresults\]. Roughly speaking, Theorem \[thm:low spectrum\] states that the low-lying eigenvalues of $H_\mathrm{a}$ approximate those of $H^{\varepsilon}$ up to errors of order ${\varepsilon}^3$ in general, and up to order ${\varepsilon}^4$ in the special case of $\lambda_0 \equiv c$. In the latter case the order ${\varepsilon}^3$ terms in $H_\mathrm{a}$ turn out to be significant as well.
Our main new contribution in this work is to introduce the concept of generalised quantum waveguides in Section \[chap:general\] and to compute explicitly the adiabatic operator for such generalised waveguides to all significant orders. For massive quantum waveguides, which are basically “tubes” with varying cross-sections modelled over submanifolds of arbitrary dimension and codimension, this is done in Section \[chap:massiveQWG\]. There we follow basically the same strategy as in the simple example given in the present section. We obtain general expressions for the adiabatic operator, from which we determine the relevant terms for different energy scales. Though the underlying calculations of geometric quantities have been long known [@Tolar1988], the contribution of $S^{\varepsilon}$ has usually been neglected, because at the energy scale of ${\varepsilon}^2$ it is of lower order than ${V_\mathrm{bend}}$. This changes however on the natural energy scale of the example $M_f$ with non-constant $f$, where they may be of the same order, as we see in Section \[sect:H\_a alpha\]. The contribution of the bending potential is known to be non-positive in dimensions $d=1$ or $d=2$ [@CEK04; @Kre07], while it has no definite sign in higher dimensions [@Tolar1988]. It was stressed in [@Kre07] that this leads to competing effects of bending and the non-negative “twisting potential” in quantum waveguides whose cross-sections $F_x$ are all isometric but not rotationally invariant and twist along the curve relative to the parallel frame. The generalisation of this twisting potential is the adiabatic potential $V_\mathrm{a}$, which is always non-negative and of the same order as ${V_\mathrm{bend}}$. Using this general framework we generalise the concept of “twisted” waveguides to arbitrary dimension and codimension in Section \[sect:twist\].
In Section \[chap:hollowQWG\] we finally consider hollow waveguides, which are the boundaries of massive waveguides. So far there seem to be no results on these waveguides in the literature and the adiabatic operator derived in Section \[sect:H\_a hollow\] is completely new. For hollow waveguides the vertical operator is essentially the Laplacian on a compact manifold without boundary and thus its lowest eigenvalue vanishes identically, $\lambda_0(x) \equiv 0$. The adiabatic operator on $L^2(B)$ is quite different from that of the massive case. Up to errors of order ${\varepsilon}^3$, it is the sum of the Laplacian on $B$ and an effective potential computed in Section \[sect:H\_a hollow\]. For the special case of the boundary of $M^{\varepsilon}_f$ discussed above this potential is given by $${\varepsilon}^2 \left[\tfrac12 \partial_x^2\log \bigl(2\uppi f(x)\bigr) + \tfrac14 {\bigl \lvert \partial_x \log \bigl(2 \uppi f(x)\bigr) \bigr \rvert}^2 \right]
= {\varepsilon}^2 \left[\tfrac12 \tfrac{f''}{f} - \tfrac14 \big(\tfrac{f'}{f}\big)^2\right]\,,$$ which, in contrast to massive waveguides, is independent of the curvature $\kappa$ and depends only on the rate of change of $\mathrm{Vol}(\partial F_x)= 2\uppi f(x)$. One can check for explicit examples that a local constriction in the tube, e.g. for $f(x) = 2-\frac{1}{1+x^2}$, leads to an effective potential with wells. Thus, constrictions can support bound states on the surface of a tube.
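For this particular profile the claim is easy to check: writing the effective potential as ${\varepsilon}^2 W(x)$ with $W = \tfrac{f''}{2f} - \tfrac14\bigl(\tfrac{f'}{f}\bigr)^2$, one finds a positive barrier at the constriction and negative wells on either side of it. The following short numerical sketch (grid and output purely illustrative) displays this.

```python
# Effective boundary potential, in units of eps^2, for f(x) = 2 - 1/(1+x^2):
#   W(x) = f''/(2f) - (f'/f)^2/4 .
import numpy as np

def W(x):
    f   = 2.0 - 1.0/(1.0 + x**2)
    fp  = 2.0*x/(1.0 + x**2)**2
    fpp = (2.0 - 6.0*x**2)/(1.0 + x**2)**3
    return fpp/(2.0*f) - (fp/f)**2/4.0

x = np.linspace(-6.0, 6.0, 2001)
w = W(x)
print("W(0)   =", W(0.0))                                        # positive barrier at the constriction
print("min(W) =", w.min(), " near |x| =", abs(x[np.argmin(w)]))  # negative wells at |x| ~ 0.9
```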
Generalised Quantum Waveguides {#chap:general}
==============================
In this part we give a precise definition of what we call generalised quantum waveguides. In view of the example discussed in the introduction, the ambient space ${\mathbb{R}}^3$ is replaced by $(d+k)$-dimensional Euclidean space and the role of the curve is played by an arbitrary smooth $d$-dimensional submanifold $B\subset{\mathbb{R}}^{d+k}$. The generalised waveguide $M$ is contained in a neighbourhood of the zero section in ${\mathsf{N}}B$ which can be diffeomorphically mapped to a tubular neighbourhood of $B\subset {\mathbb{R}}^{d+k}$. We will again call $F_x= M \cap {\mathsf{N}}_x B$ the cross-section of the quantum waveguide at the point $x\in B$ and essentially assume that $F_x$ and $F_y$ are diffeomorphic for $x,y\in B$. This allows for general deformations of the cross-sections as one moves along the base, where in the introduction we only considered scaling by the function $f$.
In order to separate the Laplacian into its horizontal and vertical parts, we follow the strategy of the previous section. However, it will be more convenient to adopt the equivalent viewpoint, where we implement the scaling within the metric $g^{\varepsilon}$ on $M=M^{{\varepsilon}=1}\subset {\mathsf{N}}B$ instead of shrinking $M^{\varepsilon}\subset {\mathsf{N}}B$ and keeping $g $ fixed.
For the following considerations, we assume that there exists a tubular neighbourhood $B\subset {\mathcal{T}}\subset{\mathbb{R}}^{d+k}$ with globally fixed diameter, i.e. there is $r>0$ such that normals to $B$ of length less than $r$ do not intersect. More precisely, we assume that the map $$\Phi:{\mathsf{N}}B \to {\mathbb{R}}^{d+k}\,,\quad (x,\nu)\mapsto x + \nu\,,$$ restricted to $${\mathsf{N}}B^r := \bigl\{(x,\nu)\in{\mathsf{N}}B:\ {\left \lVert \nu \right \rVert}_{{\mathbb{R}}^{d+k}} < r\bigr\}\subset{\mathsf{N}}B$$ is a diffeomorphism to its image ${\mathcal{T}}$. Again, this mapping $\Phi$ provides a metric $G := \Phi^* \updelta^{d+k}$ on ${\mathsf{N}}B^r$ and the rescaled version $G^{\varepsilon}:= {\varepsilon}^{-2} {\mathcal{D}}_{\varepsilon}^* G$, where ${\mathcal{D}}_{\varepsilon}:(x,\nu)\mapsto(x,{\varepsilon}\nu)$ is the dilatation of the fibres in ${\mathsf{N}}B$.
\[def:HighDimQWG\] Let $B\subset{\mathbb{R}}^{d+k}$ be a smooth $d$-dimensional submanifold with tubular neighbourhood ${\mathcal{T}}\subset{\mathbb{R}}^{d+k}$ and $F$ be a compact manifold with smooth boundary and $\dim F\leq k$.
Suppose $M\subset{\mathsf{N}}B^r=\Phi^{-1}({\mathcal{T}})$ is a connected subset that is a fibre bundle with projection $\pi_M:M\to B$ and typical fibre $F$ such that the diagram
$$\begin{array}{ccc}
M & \hookrightarrow & {\mathsf{N}}B\\
{\scriptstyle\pi_M}\big\downarrow & & \big\downarrow{\scriptstyle\pi_{{\mathsf{N}}B}}\\
B & \xrightarrow{\;\operatorname{id}_B\;} & B
\end{array}$$
commutes. We then call the pair $(M,g^{\varepsilon})$, with the scaled pullback metric $$g^{\varepsilon}:=G^{\varepsilon}|_{{\mathsf{T}}M}\in{\mathcal{T}}^0_2(M)\,,$$ a *generalised quantum waveguide*.
It immediately follows from the commutative diagram that the cross-sections $F_x$ coincide with the fibres $\pi_M^{-1}(x)$ given by the fibre bundle structure. From now on we will usually refer to this object simply as the fibre of $M$ over $x$. Although other geometries are conceivable, the most interesting examples of generalised waveguides are given by subsets $M\subset {\mathsf{N}}B^r$ of codimension zero and their boundaries. In the following we will only treat these two cases and distinguish them by the following terminology:
\[def:MassiveHollow\] Let $F\to M\xrightarrow{\pi_M}B$ be a generalised quantum waveguide as in Definition \[def:HighDimQWG\].
1. We call $M$ *massive* if $F$ is the closure of an open, bounded and connected subset of ${\mathbb{R}}^k$ with smooth boundary.
2. We call $M$ *hollow* if $\mathrm{dim}(F)>0$ and there exists a massive quantum waveguide $\mathring{F}\to\mathring{M}\xrightarrow{\pi_{{\mathring{M}}}} B$ such that $M=\partial \mathring{M}$.
This definition implies $\pi_M = \pi_{{\mathring{M}}}|_M$, i.e. each fibre $F_x=\pi_M^{-1}(x)$ of a hollow quantum waveguide is the boundary of $\mathring{F}_x$, the fibre of the related massive waveguide $\mathring{M}$.
We denote by ${{\mathsf{V}}M := \ker(\pi_{M*})\subset {\mathsf{T}}M}$ the vertical subbundle of ${\mathsf{T}}M$. Its elements are vectors that are tangent to the fibres of $M$. We refer to the orthogonal complement of ${\mathsf{V}}M$ (with respect to $g:=g^{{\varepsilon}=1}$) as the horizontal subbundle ${\mathsf{H}}M \cong \pi^*_M({\mathsf{T}}B)$. Clearly $$\label{eq:THVM}
{\mathsf{T}}M = {\mathsf{H}}M \oplus {\mathsf{V}}M\,,$$ and this decomposition will turn out to be independent of ${\varepsilon}$. That is, the decomposition ${\mathsf{T}}M = {\mathsf{H}}M \oplus {\mathsf{V}}M$ is orthogonal for every ${\varepsilon}>0$. Furthermore, we will see (Lemma \[lem:pullback\] and the subsequent computations for the massive case, and the corresponding computation for hollow waveguides) that the scaled pullback metric is always of the form $$\label{eq:formg}
g^{\varepsilon}= {\varepsilon}^{-2}(\pi_M^* g_B + {\varepsilon}h^{\varepsilon}) + g_F\,,$$ where
- $g_B:= \updelta^{d+k}|_{{\mathsf{T}}B}\in {\mathcal{T}}^0_2(B)$ is the induced Riemannian metric on the submanifold $B$,
- $h^{\varepsilon}\in {\mathcal{T}}^0_2(M)$ is a symmetric (but not necessarily non-degenerate) tensor with $h^{\varepsilon}(V,\cdot)=0$ for any vertical vector field $V$,
- $g_F:=g^{\varepsilon}|_{{\mathsf{V}}M}$ is the ${\varepsilon}$-independent restriction of the scaled pullback metric to its vertical contribution.
Thus, if we define for any vector field $X\in\Gamma({\mathsf{T}}B)$ its unique horizontal lift $X^{{\mathsf{H}}M}\in\Gamma({\mathsf{H}}M)$ by the relation $\pi_{M*} X^{{\mathsf{H}}M} = X$, we have $$\begin{aligned}
g^{\varepsilon}(X^{{\mathsf{H}}M},Y^{{\mathsf{H}}M}) &= {\varepsilon}^{-2}\bigl(g_B(X,Y)+{\varepsilon}h^{\varepsilon}(X^{{\mathsf{H}}M},Y^{{\mathsf{H}}M})\bigr) \, , \\
g^{\varepsilon}(X^{{\mathsf{H}}M},V) &= 0 \, , \\
g^{\varepsilon}(V,W) &= g_F(V,W)\end{aligned}$$ for all $X,Y\in\Gamma({\mathsf{T}}B)$ and $V,W\in \Gamma({\mathsf{V}}M)$.
\[ex:conv\] In the introduction we considered a massive waveguide $M=M_f$ with $d=1$ and $k=2$. The typical fibre was given by $F={\mathrm{D}}^2\subset{\mathbb{R}}^2$. Using the bundle coordinates $(x,n^1,n^2)$ induced by \eqref{eq:Phiconv}, one easily checks that the scaled pullback metric reads $$g^{\varepsilon}:=\begin{pmatrix} {\varepsilon}^{-2}(1-{\varepsilon}n\cdot \kappa)^2 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix},$$ hence $$g_B={\mathrm{d}}x^2\, ,\quad g_F = {\mathrm{d}}(n^1)^2 + {\mathrm{d}}(n^2)^2\, ,\quad h^{\varepsilon}= \big(-2 (n\cdot\kappa) + {\varepsilon}(n\cdot\kappa)^2\big)\, {\mathrm{d}}x^2\, .$$ Finally, note that $\mathbb{S}^1=\partial {\mathrm{D}}^2\subset{\mathbb{R}}^2$ is the typical fibre of the associated hollow waveguide.
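That the horizontal block indeed has the structure required by \eqref{eq:formg} is, in this example, a one-line algebraic identity; the following trivial sympy check (with $a$ standing for $n\cdot\kappa$) confirms it.

```python
# eps^{-2}(1 - eps*a)^2 = eps^{-2}(1 + eps*h) with h = -2a + eps*a^2. Expected output: 0.
import sympy as sp

eps, a = sp.symbols('eps a')
h = -2*a + eps*a**2
print(sp.simplify(eps**-2*(1 - eps*a)**2 - eps**-2*(1 + eps*h)))
```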
After having introduced the geometry of a generalised waveguide $(M,g^{\varepsilon})$, we now analyse the Laplace-Beltrami operator $\Delta_{g^{\varepsilon}}$ with Dirichlet boundary conditions. The boundary condition is of course vacuous if $M$ is hollow since $\partial M = \emptyset$ in that case. Similarly to the introduction, we apply a unitary ${\mathcal{M}}_{\rho_{\varepsilon}}$ (which equals ${\mathcal{D}}_{\varepsilon}^* {\mathcal{M}}_\rho {\mathcal{D}}_{\varepsilon}$ in the earlier notation). The corresponding scaled density is given by $$\label{eq:rhoepsgen}
\rho_{\varepsilon}:= {\frac{ {{\mathrm{d}}g^{\varepsilon}}}{ {{\mathrm{d}}g^{\varepsilon}_\mathrm{s}}}}\, ,\quad g^{\varepsilon}_\mathrm{s} := {\varepsilon}^{-2}\pi_M^* g_B + g_F\,.$$ We call the metric $g_\mathrm{s}^{\varepsilon}$ the (scaled) submersion metric on $M$, since it turns $\pi_M$ into a Riemannian submersion. The transformed Laplacian then reads $$\begin{aligned}
H^{\varepsilon}={\mathcal{M}}_{\rho_{\varepsilon}}\bigl(-\Delta_{g^{\varepsilon}}\bigr) {\mathcal{M}}_{\rho_{\varepsilon}}^* & = - {\varepsilon}^2\Delta_{\mathsf{H}}- \Delta_{\mathsf{V}}+ {\varepsilon}^2 {V_\mathrm{bend}}- {\varepsilon}^3 S^{\varepsilon}\,.\end{aligned}$$ Here, the horizontal Laplacian is defined by its quadratic form (with $g_\mathrm{s}:=g_\mathrm{s}^{{\varepsilon}=1}$) $$\begin{aligned}
{\left \langle \Psi , - \Delta_{{\mathsf{H}}} \Psi \right \rangle} &= \int_M \pi^*_M g_B\bigl(\operatorname{grad}_{g_\mathrm{s}} \overline\Psi, \operatorname{grad}_{g_\mathrm{s}} \Psi\bigr)\ {\mathrm{d}}g_\mathrm{s}\\
&=\int_M g_\mathrm{s} \bigl(\operatorname{grad}_{g_\mathrm{s}} \overline\Psi, {\operatorname{P}^{{\mathsf{H}}M}}\operatorname{grad}_{g_\mathrm{s}} \Psi\bigr)\ {\mathrm{d}}g_\mathrm{s}\, ,\end{aligned}$$ where ${\operatorname{P}^{{\mathsf{H}}M}}$ denotes the orthogonal projection to ${\mathsf{H}}M$, so integration by parts yields (see also Section \[sect:Laplace\]) $$\label{eq:LaplaceH}
\Delta_{{\mathsf{H}}}= \operatorname{div}_{g_\mathrm{s}} {\operatorname{P}^{{\mathsf{H}}M}}\operatorname{grad}_{g_\mathrm{s}}\, .$$ The vertical operator is given on each fibre $F_x$ by the Laplace-Beltrami operator $$\Delta_{\mathsf{V}}|_{F_x} := \Delta_{g_{F_x}}$$ with Dirichlet boundary conditions. The bending potential $$\begin{aligned}
{\varepsilon}^2 {V_\mathrm{bend}}&= \tfrac{1}{2} \operatorname{div}_{g_\mathrm{s}^{\varepsilon}} \operatorname{grad}_{g^{\varepsilon}}(\log\rho_{\varepsilon}) + \tfrac{1}{4} g^{\varepsilon}({\mathrm{d}}\log\rho_{\varepsilon},{\mathrm{d}}\log\rho_{\varepsilon}) \nonumber \\
&= \tfrac{1}{2} ({\varepsilon}^2\Delta_{\mathsf{H}}+ \Delta_{\mathsf{V}})(\log\rho_{\varepsilon}) + \tfrac{1}{4} g_F({\mathrm{d}}\log\rho_{\varepsilon},{\mathrm{d}}\log\rho_{\varepsilon}) + {\mathcal{O}}({\varepsilon}^4) \label{eq:Vrhoeps}\end{aligned}$$ is a by-product of the unitary transformation ${\mathcal{M}}_{\rho_{\varepsilon}}$ and the second order differential operator $$S^{\varepsilon}: \Psi\mapsto S^{\varepsilon}\Psi:= {\varepsilon}^{-3}\operatorname{div}_{g_\mathrm{s}}(g^{\varepsilon}- g^{\varepsilon}_\mathrm{s})({\mathrm{d}}\Psi,\cdot)$$ accounts for the corrections to $g_\mathrm{s}^{\varepsilon}$.
Adiabatic Perturbation Theory {#sec:mainresults}
=============================
In this section we show that the adiabatic operator $H_{\rm a}$ approximates essential features of the generalised quantum waveguide Hamiltonian $H^{\varepsilon}$, such as its unitary group and its spectrum. This motivates the derivation of explicit expansions of $H_{\rm a}$ in the subsequent sections. In this work we will only consider the ground state band $\lambda_0(x)$ and pay special attention to the behaviour of $H^{\varepsilon}$ for small energies. This, as we will show, allows us to view $H_{\rm a}$ as an operator on $L^2(B)$. The results of this section were derived in [@Lampart2013; @LT2014] in more generality.
For a massive quantum waveguide set $$\label{eq:X1}
\tag{massive}
H_F:=-\Delta_{\mathsf{V}}$$ and for a hollow waveguide $$\label{eq:X2}
\tag{hollow}
H_F:=-\Delta_{\mathsf{V}}+ \tfrac{1}{2} \Delta_{\mathsf{V}}(\log \rho_{\varepsilon}) + \tfrac{1}{4} g_F({\mathrm{d}}\log\rho_{\varepsilon},{\mathrm{d}}\log\rho_{\varepsilon})\, .$$ Let $\lambda_0(x):=\min \sigma(H_{F_x})$ be the smallest eigenvalue of the fibre operator $H_F$ acting on the fibre over $x$. For hollow waveguides we have no boundary and $\lambda_0\equiv 0$ with the eigenfunction $$\phi_0=\sqrt{\rho_{\varepsilon}} \Big(\int_{F_x} \rho_{\varepsilon}\ {\mathrm{d}}g_F\Big)^{-1/2}= \pi_M^*\mathrm{Vol}(F_x)^{-1/2} + \mathcal{O}({\varepsilon})\, .$$ In the massive case we have $\lambda_0>0$ and denote by $\phi_0(x,\cdot)$ the uniquely determined positive normalised eigenfunction of $H_{F_x}$ with eigenvalue $\lambda_0(x)$. Let $P_0$ be the orthogonal projection in $L^2(M)$ defined by $$(P_0 \Psi) ( x,\nu)=\phi_0\bigl(x,\nu\bigr) \int_{F_{x}} \phi_0\bigl(x,\cdot\bigr) \Psi (x,\cdot\bigr)\ {\mathrm{d}}g_F\,.$$ The image of this projection is the subspace $L^2(B)\otimes \mathrm{span}(\phi_0) \cong L^2(B)$ of $L^2(M)$. The function $\phi_0$ and its derivatives, both horizontal and vertical, are uniformly bounded in ${\varepsilon}$. Thus, the action of the horizontal Laplacian $-{\varepsilon}^2\Delta_{\mathsf{H}}$ on $\phi_0$ gives a term of order ${\varepsilon}$ and $$[H^{\varepsilon}, P_0]P_0 = [H^{\varepsilon}- H_F, P_0]P_0=\mathcal{O}({\varepsilon})\label{eq:P0comm}$$ as an operator from $D(H^{\varepsilon})$ to $L^2(M)$. Since this expression equals $(H^{\varepsilon}-H_{\rm{a}})P_0$, this justifies the adiabatic approximation for states in the image of $P_0$. However, the error is of order ${\varepsilon}$, while interesting effects of the geometry, such as the potentials $V_\mathrm{a}$ and $V_\mathrm{bend}$ discussed in the introduction, are of order ${\varepsilon}^2$. Because of this it is desirable to construct also a super-adiabatic approximation, consisting of a modified projection $P_{\varepsilon}=P_0 + \mathcal{O}({\varepsilon})\in \mathcal{L}(L^2(M))\cap\mathcal{L}(D(H^{\varepsilon}))$ and an intertwining unitary $U_{\varepsilon}$ with $P_{\varepsilon}U_{\varepsilon}= U_{\varepsilon}P_0$, such that the effective operator $${H_\mathrm{eff}}:=P_0 U_{\varepsilon}^* H^{\varepsilon}U_{\varepsilon}P_0$$ provides a better approximation of $H^{\varepsilon}$ than $H_\mathrm{a}$ does. It then turns out that the approximation provided by $H_{\rm{a}}$ can also be made more accurate than expected from using the unitary $U_{\varepsilon}$.
Such approximations can be constructed and justified if the geometry of $(M,g^{\varepsilon})$ satisfies some uniformity conditions. Here we only spell out the conditions relevant to our case, for a comprehensive discussion see [@Lampart2013].
The generalised quantum waveguide $(M,g^{\varepsilon})$ is a *waveguide of bounded geometry* if the following conditions are satisfied:
1. The manifold $(B,g_B)$ is of bounded geometry. This means it has positive injectivity radius and for every $k\in {\mathbb{N}}$ there exists a constant $C_k>0$ such that $$g_B(\nabla^k R, \nabla^k R)\leq C_k\,,$$ where $R$ denotes the curvature tensor of $B$ and $\nabla$, $g_B$ are the connections and metrics induced on the tensor bundles over $B$.
2. The fibre bundle $(M,g)\xrightarrow{\pi_M} (B, g_B)$ is uniformly locally trivial. That is, there exists a Riemannian metric $g_0$ on $F$ such that for every $x\in B$ and metric ball $B(x,r)$ of radius $r< r_\mathrm{inj}(B)$ there is a trivialisation $\Omega_{x,r}:(\pi_M^{-1}(B(x,r)), g)\to (B(x,r) \times F, g_B\times g_0)$, and the tensors $\Omega_{x,r}^*$ and $\Omega_{x,r*}$ and all their covariant derivatives are bounded uniformly in $x$.
3. The embeddings $(M,g) \hookrightarrow ({\mathsf{N}}B,G)$ and $(B,g_B)\hookrightarrow {\mathbb{R}}^{d+k}$ are bounded with all their derivatives.
These conditions are trivially satisfied for compact manifolds $M$ and many examples such as “asymptotically straight” or periodic waveguides. The existence result for the super-adiabatic approximation can be formulated as follows.
Let $M$ be a waveguide of bounded geometry and set $\Lambda := \inf_{x\in B} \min (\sigma(H_{F_x})\setminus\lambda_0)$. For every $N\in {\mathbb{N}}$ there exist a projection $P_{\varepsilon}$ and a unitary $U_{\varepsilon}$ in $\mathcal{L}(L^2(M))\cap\mathcal{L}(D(H^{\varepsilon}))$, intertwining $P_0$ and $P_{\varepsilon}$, such that for every $\chi\in \mathcal{C}^\infty_0\big((-\infty, \Lambda), [0,1])$, satisfying $\chi^p\in {\mathcal{C}}^\infty_0\big((-\infty, \Lambda), [0,1])$ for every $p\in(0,\infty)$, we have $${\left \lVert H^{\varepsilon}\chi(H^{\varepsilon}) - U_{\varepsilon}{H_\mathrm{eff}}\chi({H_\mathrm{eff}}) U_{\varepsilon}^* \right \rVert} =\mathcal{O}({\varepsilon}^N)\,.$$ In particular the Hausdorff distance between the spectra of $H^{\varepsilon}$ and ${H_\mathrm{eff}}$ is small, i.e. for every $\delta>0:$ $$\operatorname{dist}\big(\sigma(H^{\varepsilon})\cap (-\infty, \Lambda-\delta],\sigma({H_\mathrm{eff}})\cap (-\infty, \Lambda-\delta]\big)=\mathcal{O}({\varepsilon}^N)\,.$$
For $N=1$ we can choose $P_{\varepsilon}=P_0$, so at first sight the approximation of $H^{\varepsilon}$ by $H_\mathrm{a}$ yields errors of order ${\varepsilon}$. More careful inspection shows that for $N>1$ we have $H_\mathrm{a}-{H_\mathrm{eff}}=\mathcal{O}({\varepsilon}^2)$ as an operator from $W^2(B)$ to $L^2(M)$, so the statement on the spectrum holds for $H_\mathrm{a}$ with an error of order ${\varepsilon}^2$. This improvement over the naive order-${\varepsilon}$ estimate relies on the existence of $U_{\varepsilon}$, which allows for a better choice of trial states. Close to the ground state the approximation is even more accurate.
\[thm:low spectrum\] Let $M$ be a waveguide of bounded geometry, $\Lambda_0:= \inf_{x\in B} \lambda_0(x)$ and ${0< \alpha\leq 2}$. Then for every $C>0$ $$\operatorname{dist}\big(\sigma(H^{\varepsilon})\cap (-\infty, \Lambda_0 + C{\varepsilon}^\alpha],\sigma(H_\mathrm{a})\cap (-\infty, \Lambda_0 + C{\varepsilon}^\alpha]\big)=\mathcal{O}({\varepsilon}^{2+\alpha/2})\,.$$ Assume, in addition, that $\Lambda_0 +C{\varepsilon}^\alpha$ is strictly below the essential spectrum of $H_{\rm a}$ in the sense that for some $\delta>0$ and ${\varepsilon}$ small enough the spectral projection ${\bf 1}_{(-\infty, \Lambda_0 + (C+\delta){\varepsilon}^\alpha]}(H_{\rm a})$ has finite rank. Then, if $\mu_0< \mu_1\leq \ldots \leq \mu_K$ are all the eigenvalues of $H_{\rm a}$ below $\Lambda_0+C{\varepsilon}^\alpha$, $H^{\varepsilon}$ has at least $K+1$ eigenvalues $\nu_0 <\nu_1 \leq \ldots \leq \nu_{K} $ below its essential spectrum and $$|\mu_j-\nu_j| =\mathcal{O}({\varepsilon}^{2+\alpha})$$ for $j\in \{0,\dots, K\}$.
The natural energy scale $\alpha$ at which to apply this theorem is set by the spacing of the eigenvalues of $H_\mathrm{a}$. This of course depends on the specific situation. If $\lambda_0$ is constant we will see that $\alpha=2$ is a natural choice. In the somewhat more generic case in which the eigenband $\lambda_0(x)$ has a global and non-degenerate minimum, as in the example of the waveguide $M_f$ in the introduction, the lowest eigenvalues of $H_\mathrm{a}$ will behave like those of a harmonic oscillator and $\alpha=1$ is the correct choice of scale. In this case the set $(-\infty, \Lambda_0 + C {\varepsilon}^2]\cap \sigma(H_\mathrm{a})$ will just be empty for ${\varepsilon}$ small enough and thus by the theorem there is no spectrum of $H^{\varepsilon}$ in this interval.
We remark that results can be obtained also for energies above $\Lambda$ and for projections onto eigenbands other than $\lambda_0$. The relevant condition is that the band in question is separated from the rest of the spectrum of $H_F$ by a local gap. For $\lambda_0$ this is a consequence of the bounded geometry of $M$ (see [@LT2014 Proposition 4.1]). In this regime the approximation of spectra is no longer mutual, as it is for low energies, but there is always spectrum of $H^{\varepsilon}$ near that of ${H_\mathrm{eff}}$ (see [@LT2014 Corollary 2.4]).
From now on we will focus on analysing the adiabatic operator. In particular we will see how the geometry of the waveguide enters into this operator and its expansion up to order ${\varepsilon}^4$, which, by Theorem \[thm:low spectrum\], is relevant for small energies irrespective of the super-adiabatic corrections. We now give a general expression for $H_\mathrm{a}$ from which we will derive the explicit form for various specific situations. We first group the terms of $H^{\varepsilon}$ in such a way that $$H^{\varepsilon}= -{\varepsilon}^2 \Delta_{\mathsf{H}}+ H_F + {\varepsilon}H_1\, ,$$ by taking $$\begin{aligned}
\tag{massive} H_1&= - {\varepsilon}^2 S^{\varepsilon}+ {\varepsilon}{V_\mathrm{bend}}\, , \\
\tag{hollow} H_1&
\begin{aligned}[t]
&= -{\varepsilon}^2 S^{\varepsilon}+ {\varepsilon}{V_\mathrm{bend}}- {\varepsilon}^{-1}\bigl(\tfrac{1}{2} \Delta_{\mathsf{V}}(\log \rho_{\varepsilon}) + \tfrac{1}{4} g_F({\mathrm{d}}\log\rho_{\varepsilon},{\mathrm{d}}\log\rho_{\varepsilon})\big)\\
&= -{\varepsilon}^2 S^{\varepsilon}+ \underbrace{\tfrac{{\varepsilon}}{2} \Delta_{\mathsf{H}}(\log \rho_{\varepsilon}) + \mathcal{O}({\varepsilon}^3)}_{=:{\varepsilon}\tilde V_{\rm{bend}}}\, .
\end{aligned}\end{aligned}$$ Projecting this expression with $P_0$ as in equation \eqref{eq:H_a def} gives $H_FP_0 =\lambda_0 P_0$ and $$\label{eq:Va}
P_0\Delta_{\mathsf{H}}P_0 = \Delta_{g_B} + \underbrace{\tfrac{1}{2} \mathrm{tr}_{g_B}(\nabla^B \bar\eta) -\int_{F_x} \pi_M^*g_B(\operatorname{grad}_{g_\mathrm{s}}\phi_0, \operatorname{grad}_{g_\mathrm{s}}\phi_0)\ {\mathrm{d}}g_F}_{=: -V_\mathrm{a}}\, ,$$ where $\nabla^B$ is the Levi-Civita connection of $g_B$, $\bar \eta$ is the one-form $$\bar\eta(X):=\int_{F_x} \vert \phi_0 \vert^2 g_B(\pi_{M*}\eta_F, X)\ {\mathrm{d}}g_F$$ and $\eta_F$ is the mean curvature vector of the fibres (see equation \eqref{eq:etaF}). The derivation for the projection of $\Delta_{\mathsf{H}}$ can be found in [@Lampart2013 Chapter 3].
Altogether we have the expression $$\label{eq:Ha}
H_\mathrm{a}= -{\varepsilon}^2\Delta_{g_B} + \lambda_0 + {\varepsilon}^2 V_\mathrm{a} + {\varepsilon}P_0 H_1 P_0$$ for the adiabatic operator as an operator on $L^2(B)$. By analogy with the introduction, we view $$(P_0 S^{\varepsilon}P_0)\psi= \int_{F_x} \phi_0 S^{\varepsilon}(\phi_0\psi) \ {\mathrm{d}}g_F$$ as an operator on $L^2(B)$ via the identification $L^2(B)\cong L^2(B) \otimes \mathrm{span}(\phi_0)$. By the same procedure, projecting the potentials in $H_1$ amounts to averaging them over the fibres with the weight ${\left \lvert \phi_0 \right \rvert}^2$.
Massive Quantum Waveguides {#chap:massiveQWG}
==========================
The vast literature on quantum waveguides is, in our terminology, concerned with the case of massive waveguides. In this section we give a detailed derivation of the effects due to the extrinsic geometry of $B\subset {\mathbb{R}}^{d+k}$. The necessary calculations of the metric $g^{\varepsilon}$ and the bending potential ${V_\mathrm{bend}}$ have been performed in all of the works on quantum waveguides for the respective special cases, and by Tolar [@Tolar1988] for the leading order ${V_\mathrm{bend}}^0$ in the general case. A generalisation to tubes in Riemannian manifolds is due to Wittich [@Wit].
We then discuss the explicit form of the adiabatic Hamiltonian \eqref{eq:Ha}, calculating the adiabatic potential and the projection of $H_1$. In particular we generalise the concept of a “twisted” waveguide (cf. [@Kre07]) to arbitrary dimension and codimension in Section \[sect:twist\]. We also examine the role of the differential operator $S^{\varepsilon}$ in $H_\mathrm{a}$, which is rarely discussed in the literature, and its relevance for the different energy scales ${\varepsilon}^\alpha$.
The Pullback Metric {#sec:pullbackmassive}
-------------------
Let $(x^1,\dots,x^d)$ be local coordinates on $B$ and $\{e_\alpha\}_{\alpha=1}^k$ a local orthonormal frame of ${\mathsf{N}}B$ with respect to $g_B^\bot:=\updelta^{d+k}|_{{\mathsf{N}}B}$ such that every normal vector $\nu(x)\in {\mathsf{N}}_x B$ may be written as $$\label{eq:xn}
\nu(x) = n^\alpha e_\alpha(x)\, .$$ These bundle coordinates yield local coordinate vector fields $$\label{eq:productbasis}
\partial_i|_{(x,n)} := \tfrac{\partial}{\partial x^i}\,,\quad
\partial_{d+\alpha}|_{(x,n)} := \tfrac{\partial}{\partial n^\alpha}$$ on $M$ for $i\in\{1,\dots,d\}$ and $\alpha \in \{1,\dots,k\}$. The aim is to obtain formulas for the coefficients of the unscaled pullback metric $g=\Phi^*\updelta^{d+k}|_{{\mathsf{T}}M}$ with respect to these coordinate vector fields.
Let $I\subset{\mathbb{R}}$ be an open neighbourhood of zero, $b:I\to M$, $s\mapsto b(s) = (c(s),v(s))$ be a curve with $b(0)=(x,n)$ and $b'(0)=\xi\in {\mathsf{T}}_{(x,n)}M$. It then holds that $$\Phi_*\xi = \left.\tfrac{{\mathrm{d}}}{{\mathrm{d}}s}\right|_{s=0} \Phi\bigl(b(s)\bigr) = c'(0) + v'(0)\, .$$ For the case $\xi=\partial_i|_{(x,n)}$, we choose the curve $b:I\to M$ given by $$b(s) = \Bigl(c(s),n^\alpha e_\alpha\bigl(c(s)\bigr)\Bigr)\quad\Rightarrow\quad \Phi\bigl(b(s)\bigr) = c(s) + n^\alpha e_\alpha\bigl(c(s)\bigr)$$ where $c:I\to B$ is a smooth curve with $c(0)=x$ and $c'(0) = \partial_{x^i}\in {\mathsf{T}}_xB$. We then have $$\Phi_*\partial_i|_{(x,n)} = c'(0) + n^\alpha \left.\tfrac{{\mathrm{d}}}{{\mathrm{d}}s}\right|_{s=0} e_\alpha\bigl(c(s)\bigr) = \partial_{x^i} + n^\alpha \nabla^{{\mathbb{R}}^{d+k}}_{\partial_{x^i}} e_\alpha(x)\, .$$ In order to relate the appearing derivative $\nabla^{{\mathbb{R}}^{d+k}}_{\partial_{x^i}} e_\alpha(x)$ to the extrinsic curvature of $B$, we project the latter onto its tangent and normal component, respectively. To this end, we introduce the Weingarten map $${\mathcal{W}}:\Gamma({\mathsf{N}}B)\to {\mathcal{T}}^1_1(B)\, ,\quad e_\alpha \mapsto {\mathcal{W}}(e_\alpha)\partial_{x^i}:=-{\operatorname{P}^{{\mathsf{T}}B}}\nabla^{{\mathbb{R}}^{d+k}}_{\partial_{x^i}} e_\alpha\, ,$$ and the ${\mathfrak{s}}{\mathfrak{o}}(k)$-valued local connection one-form associated to the normal connection $\nabla^{\mathsf{N}}$ with respect to $\{e_\alpha\}_{\alpha=1}^k$, i.e. $$\omega^{\mathsf{N}}(\partial_{x^i})e_\alpha = \nabla^{\mathsf{N}}_{\partial_{x^i}} e_\alpha := {\operatorname{P}^{{\mathsf{N}}B}}\nabla^{{\mathbb{R}}^{d+k}}_{\partial_{x^i}} e_\alpha\, .$$ With these objects we have $$\label{eq:partiali}
\Phi_*\partial_i|_{(x,n)} = \partial_{x^i} + n^\alpha \Bigl(- {\mathcal{W}}\bigl(e_\alpha(x)\bigr)\partial_{x^i} + \omega^{\mathsf{N}}(\partial_{x^i})e_\alpha(x)\Bigr).$$ For the case $\xi=\partial_{d+\alpha}|_{(x,n)}$, one takes the curve $b:I\to M$ with $$b(s) = \bigl(x,\nu(x) + s e_\alpha(x)\bigr)\quad\Rightarrow\quad \Phi\bigl(b(s)\bigr) = x + (n^\beta + s \updelta_\alpha^{\hphantom{\alpha}\beta}) e_\beta(x)\, .$$ Hence, $$\label{eq:isoalpha}
\Phi_*\partial_{d+\alpha}|_{(x,n)} = 0 + \left.\tfrac{{\mathrm{d}}}{{\mathrm{d}}s}\right|_{s=0} (n^\beta + s \updelta_\alpha^{\hphantom{\alpha}\beta}) e_\beta(x) = \updelta_\alpha^{\hphantom{\alpha}\beta} e_\beta(x) = e_\alpha(x)\, .$$ Combining the expressions for the tangent maps, we finally obtain the following expressions for the pullback metric $g$:
\[lem:pullback\] Let $(x,n)$ denote the local bundle coordinates on $M$ introduced in \eqref{eq:xn} and the associated coordinate vector fields \eqref{eq:productbasis}. Then the coefficients of the pullback metric are given by $$\begin{aligned}
g_{ij}(x,n) &= g_B(\partial_{x^i},\partial_{x^j}) - 2\operatorname{II}(\nu)(\partial_{x^i},\partial_{x^j}) + g_B\bigl({\mathcal{W}}(\nu)\partial_{x^i},{\mathcal{W}}(\nu)\partial_{x^j}\bigr) \\
&\hphantom{=}\,\,+g_B^\bot\bigl(\omega^{\mathsf{N}}(\partial_{x^i})\nu,\omega^{\mathsf{N}}(\partial_{x^j})\nu\bigr) \, , \\
g_{i,d+\alpha}(x,n) &= g_B^\bot\bigl(\omega^{\mathsf{N}}(\partial_{x^i})\nu,e_\alpha\bigr)\, , \\
g_{d+\alpha,d+\beta}(x,n) &= g_B^\bot\bigl(e_\alpha,e_\beta\bigr) = \updelta_{\alpha\beta}\end{aligned}$$ for $i,j\in\{1,\dots,d\}$ and $\alpha,\beta\in\{1,\dots,k\}$. Here, ${\operatorname{II}:\Gamma({\mathsf{N}}B)\to {\mathcal{T}}^0_2(B)}$ stands for the second fundamental form defined by $\operatorname{II}(\nu)(\partial_{x^i},\partial_{x^j}):=g_B({\mathcal{W}}(\nu)\partial_{x^i},\partial_{x^j})$.
Let us now consider the scaled pullback metric $g^{\varepsilon}={\varepsilon}^{-2}{\mathcal{D}}_{\varepsilon}^*\Phi^*\updelta^{d+k}|_{{\mathsf{T}}M}$. Observe that for any $\xi\in {\mathsf{T}}_{(x,n)}M$ $$\begin{aligned}
\Phi_*({\mathcal{D}}_{\varepsilon})_*\xi &= (\Phi\circ {\mathcal{D}}_{\varepsilon})_*\xi \\
&= \left.\tfrac{{\mathrm{d}}}{{\mathrm{d}}s}\right|_{s=0} (\Phi\circ {\mathcal{D}}_{\varepsilon})\bigl(b(s)\bigr)
= \left.\tfrac{{\mathrm{d}}}{{\mathrm{d}}s}\right|_{s=0} (\Phi\circ {\mathcal{D}}_{\varepsilon})\bigl(c(s),v(s)\bigr) \\
&=\left.\tfrac{{\mathrm{d}}}{{\mathrm{d}}s}\right|_{s=0} \Phi\bigl(c(s),{\varepsilon}v(s)\bigr) = c'(0) + {\varepsilon}v'(0)\end{aligned}$$ and one immediately concludes from \eqref{eq:partiali} and \eqref{eq:isoalpha} that $$\begin{aligned}
\Phi_*({\mathcal{D}}_{\varepsilon})_*\partial_i|_{(x,n)} &= \partial_{x^i} + {\varepsilon}n^\alpha \Bigl(- {\mathcal{W}}\bigl(e_\alpha(x)\bigr)\partial_{x^i} + \omega^{\mathsf{N}}(\partial_{x^i})e_\alpha(x)\Bigr), \\
\Phi_*({\mathcal{D}}_{\varepsilon})_*\partial_\alpha|_{(x,n)} &= {\varepsilon}e_\alpha(x)\, .\end{aligned}$$ Consequently, the coefficients of the scaled pullback metric are given by $$\begin{aligned}
g_{ij}^{\varepsilon}(x,n) &= {\varepsilon}^{-2} \Bigl[g_B(\partial_{x^i},\partial_{x^j}) - {\varepsilon}2 \operatorname{II}(\nu)(\partial_{x^i},\partial_{x^j}) + {\varepsilon}^2 g_B\bigl({\mathcal{W}}(\nu)\partial_{x^i},{\mathcal{W}}(\nu)\partial_{x^j}\bigr) \\
&\hphantom{=\frac{1}{{\varepsilon}^2} \Bigl(} + {\varepsilon}^2 g_B^\bot\bigl(\omega^{\mathsf{N}}(\partial_{x^i})\nu,\omega^{\mathsf{N}}(\partial_{x^j})\nu\bigr)\Bigr]\, , \\
g_{i,d+\alpha}^{\varepsilon}(x,n) &= g_B^\bot\bigl(\omega^{\mathsf{N}}(\partial_{x^i})\nu,e_\alpha\bigr)\, , \\
g_{d+\alpha,d+\beta}^{\varepsilon}(x,n) &= \updelta_{\alpha\beta}\end{aligned}$$ for $i,j\in\{1,\dots,d\}$ and $\alpha,\beta\in\{1,\dots,k\}$.
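As a concrete cross-check of Lemma \[lem:pullback\], consider the simplest curved case $d=k=1$, the circle of radius $R$ in ${\mathbb{R}}^2$: there the normal connection form vanishes and the Lemma reduces to $g_{xx}=(1-n\kappa)^2$ with $\kappa={\left \langle c'' , e \right \rangle}$. The following sympy sketch compares this with the metric computed directly from the tube map; the example and its parametrisation are purely illustrative.

```python
# Circle of radius R in R^2 with outward unit normal e(x): compare |d Phi/dx|^2 with
# the expression (1 - n*kappa)^2, kappa = <c'', e>, from the Lemma. Expected output: 0.
import sympy as sp

x, n = sp.symbols('x n')
R = sp.symbols('R', positive=True)
c = sp.Matrix([R*sp.cos(x/R), R*sp.sin(x/R)])     # arclength parametrisation
e = sp.Matrix([sp.cos(x/R), sp.sin(x/R)])         # outward unit normal
Phi = c + n*e                                     # tube map (x, n) |-> x + n e(x)

g_xx_direct = (Phi.diff(x).T*Phi.diff(x))[0, 0]
kappa = (c.diff(x, 2).T*e)[0, 0]                  # = -1/R for the outward normal
g_xx_lemma = (1 - n*kappa)**2

print(sp.simplify(g_xx_direct - g_xx_lemma))
```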
We see that $\operatorname{span}\{\partial_i|_{(x,n)}\}_{i=1}^d$ is not orthogonal to $\operatorname{span}\{\partial_{d+\alpha}|_{(x,n)}\}_{\alpha=1}^k={\mathsf{V}}_{(x,n)}M$ with respect to $g^{\varepsilon}$. However, any vector $\partial_i|_{(x,n)}$ can be orthogonalised by subtracting its vertical component. The resulting vector $$\begin{aligned}
\partial_{x^i}^{{\mathsf{H}}M}|_{(x,n)} &:= \partial_i|_{(x,n)} - g_{i,d+\beta}^{\varepsilon}(x,n)\, {\partial_{d+\beta}|}_{(x,n)} \nonumber \\
&\hphantom{:}= \partial_i|_{(x,n)} - g_B^\bot\bigl(\omega^{\mathsf{N}}(\partial_{x^i})\nu,e_\beta\bigr)\, {\partial_{d+\beta}|}_{(x,n)} \label{eq:partialHM} \\
&\hphantom{:}= \partial_i|_{(x,n)} - g_{i,d+\beta}(x,n)\, {\partial_{d+\beta}|}_{(x,n)} \nonumber\end{aligned}$$ is the horizontal lift of $\partial_{x^i}$. Consequently, the orthogonal complement of ${\mathsf{V}}_{(x,n)}M$ with respect to $g^{\varepsilon}$ is given by ${\mathsf{H}}_{(x,n)}M=\operatorname{span}\{\partial_{x^i}^{{\mathsf{H}}M}|_{(x,n)}\}_{i=1}^d$ for all ${\varepsilon}>0$. Finally, a short computation shows that $$\begin{aligned}
& g^{\varepsilon}(\partial_{x^i}^{{\mathsf{H}}M}|_{(x,n)},\partial_{x^j}^{{\mathsf{H}}M}|_{(x,n)}) \nonumber \\
&\ ={\varepsilon}^{-2} g_B\Bigl(\bigl(1-{\varepsilon}{\mathcal{W}}(\nu)\bigr)\partial_{x^i}, \bigl(1-{\varepsilon}{\mathcal{W}}(\nu)\bigr)\partial_{x^j}\Bigr) \label{eq:horblock} \\
&\ ={\varepsilon}^{-2}\left[g_B(\partial_{x^i},\partial_{x^j}) + {\varepsilon}\Bigl(-2\operatorname{II}(\nu)(\partial_{x^i},\partial_{x^j}) + {\varepsilon}g_B\bigl({\mathcal{W}}(\nu)\partial_{x^i},{\mathcal{W}}(\nu)\partial_{x^j}\bigr)\Bigr)\right]. \nonumber\end{aligned}$$ Hence, the scaled pullback metric $g^{\varepsilon}$ actually has the form \eqref{eq:formg} with “horizontal correction” $$\label{eq:heps}
h^{\varepsilon}(\partial_{x^i}^{{\mathsf{H}}M}|_{(x,n)},\partial_{x^j}^{{\mathsf{H}}M}|_{(x,n)}) = - 2 \operatorname{II}(\nu)(\partial_{x^i},\partial_{x^j}) + {\varepsilon}g_B\bigl({\mathcal{W}}(\nu)\partial_{x^i},{\mathcal{W}}(\nu)\partial_{x^j}\bigr)\, .$$
\[rem:complgeod\] The fibres $F_x$ of $M$ are totally geodesic for the pullback metric $g^{\varepsilon}$. In order to see this, we show that the second fundamental form of the fibres $\operatorname{II}^F|_x:{\mathsf{H}}_x M \to {\mathcal{T}}^0_2(F_x)$ vanishes identically. Since the latter is a symmetric tensor, it is sufficient to show that the diagonal elements $$\operatorname{II}^F(\partial_{x^i}^{{\mathsf{H}}M})(\partial_\alpha,\partial_\alpha) = g^{\varepsilon}(\nabla^M_{\partial_\alpha} \partial_\alpha , \partial_{x^i}^{{\mathsf{H}}M})$$ are zero. Using Koszul’s formula, four out of the six appearing terms obviously vanish and we are left with $$\begin{aligned}
\operatorname{II}^F(\partial_{x^i}^{{\mathsf{H}}M})(\partial_\alpha,\partial_\alpha) &
\stackrel{\hphantom{(21)}}{=} g_F\bigl([\partial_{x^i}^{{\mathsf{H}}M},\partial_\alpha],\partial_\alpha\bigr) \\
&\stackrel{\eqref{eq:partialHM}}{=} g_F\Bigl(\underbrace{[\partial_i,\partial_\alpha]}_{=0} - \bigl[g_{i,d+\beta}(x,n)\partial_\beta,\partial_\alpha\bigr],\partial_\alpha\Bigr) \\
&\stackrel{\hphantom{(21)}}{=} \frac{\partial g_{i,d+\beta}(x,n)}{\partial n^\alpha} \underbrace{g_F (\partial_\beta,\partial_\alpha)}_{\updelta_{\beta\alpha}} \\
&\stackrel{\hphantom{(21)}}{=} g_B^\bot(\omega^{\mathsf{N}}(\partial_{x^i})e_\alpha,e_\alpha)\,.\end{aligned}$$ But now, the last expression equals zero since $\omega^{\mathsf{N}}(\partial_{x^i})$ is ${\mathfrak{s}}{\mathfrak{o}}(k)$-valued. Consequently, the mean curvature vector $\eta_F$ defined by $$\label{eq:etaF}
\operatorname{tr}_{{\mathsf{T}}F} \operatorname{II}^F(\partial_{x^i}^{{\mathsf{H}}M}) = \pi^*_M g_B (\partial_{x^i}^{{\mathsf{H}}M},\eta_F)$$ vanishes identically. Finally, note that the same considerations also hold for the submersion metric $g_\mathrm{s}^{\varepsilon}$ due to $g^{\varepsilon}|_{{\mathsf{V}}M}=g_F = g^{\varepsilon}_\mathrm{s}|_{{\mathsf{V}}M}$.
The Horizontal Laplacian {#sect:Laplace}
------------------------
Now that we have a detailed description of the metric, we can explicitly express the horizontal Laplacian by the vector fields $(\{\partial_{x^i}^{{\mathsf{H}}M}\}_{i=1}^d, \{\partial_{d+\alpha}\}_{\alpha=1}^k)$. Let $(\{\pi^*_M {\mathrm{d}}x^i\}_{i=1}^d,\{\delta n^\alpha\}_{\alpha=1}^k)$ be the dual basis (note that in general $\delta n^\alpha \neq {\mathrm{d}}n^\alpha$ since ${\mathrm{d}}n^\alpha(\partial_{x^i}^{{\mathsf{H}}M})\neq 0$). Then by definition $$\begin{aligned}
\delta n^\alpha({\operatorname{P}^{{\mathsf{H}}M}}\operatorname{grad}_{g_\mathrm{s}}\psi)&=0\, , \\
\pi^*_M{\mathrm{d}}x^i({\operatorname{P}^{{\mathsf{H}}M}}\operatorname{grad}_{g_\mathrm{s}}\psi) &= g_\mathrm{s}^{jk}(\partial_{x^j}^{{\mathsf{H}}M}\psi) {\mathrm{d}}x^i(\partial_{x^k})=g_B^{ij}\partial_{x^j}^{{\mathsf{H}}M}\psi \end{aligned}$$ and thus $$\operatorname{grad}_{g_\mathrm{s}}\psi= g_B^{ij}(\partial_{x^i}^{{\mathsf{H}}M}\psi) \partial_{x^j}^{{\mathsf{H}}M}\, .$$ When acting on a horizontal vector field $Y$, the divergence takes the coordinate form $$\begin{aligned}
\operatorname{div}_{g_\mathrm{s}}Y &= \frac{1}{\sqrt{{\left \lvert g_\mathrm{s} \right \rvert}}}\partial_{x^i}^{{\mathsf{H}}M}\sqrt{{\left \lvert g_\mathrm{s} \right \rvert}}(\pi^*_M{\mathrm{d}}x^i(Y)) \nonumber \\
&=\frac{1}{\sqrt{{\left \lvert g_B \right \rvert}}}\partial_{x^i}^{{\mathsf{H}}M}\sqrt{{\left \lvert g_B \right \rvert}}\bigl(\pi^*_M{\mathrm{d}}x^i(Y)\bigr) - \pi_M^* g_B(\eta_F, Y)\, , \label{eq:divgs}\end{aligned}$$ since $\sqrt{{\left \lvert g_F \right \rvert}}^{-1} \big(\partial_{x^i}^{{\mathsf{H}}M} \sqrt{{\left \lvert g_F \right \rvert}}\big)=
-g_\mathrm{s}(\eta_F,\partial_{x^i}^{{\mathsf{H}}M})$. For a horizontal lift $X^{{\mathsf{H}}M}$ we have the simple formula $$\operatorname{div}_{g_\mathrm{s}} X^{{\mathsf{H}}M}=\pi^*_M \bigl(\operatorname{div}_{g_B}X - g_B(\pi_{M*}\eta_F, X)\bigr)\, .$$ Now for a massive waveguide $\eta_F=0$ and the horizontal Laplacian takes the familiar form $$\Delta_{{\mathsf{H}}} = \frac{1}{\sqrt{{\left \lvert g_B \right \rvert}}}\partial_{x^i}^{{\mathsf{H}}M}\sqrt{{\left \lvert g_B \right \rvert}}g_B^{ij} \partial_{x^j}^{{\mathsf{H}}M}\, ,$$ which is just $\Delta_{g_B}$ with $\partial_{x^i}$ replaced by $\partial_{x^i}^{{\mathsf{H}}M}$.
The Bending Potential
---------------------
In the introduction (see equation \eqref{eq:Vb intro}) we saw that the leading order of ${V_\mathrm{bend}}$ is attractive (negative) and proportional to the square of the curve’s curvature $\kappa={\left \lvert c'' \right \rvert}$. Here we give a detailed derivation of ${V_\mathrm{bend}}$ for generalised massive waveguides and then discuss the sign of its leading part. To this end, let $\{\tau_i\}_{i=1}^d$ be a local orthonormal frame of ${\mathsf{T}}B$ with respect to $g_B$ and let $\{n^\alpha\}_{\alpha=1}^k$ be coordinates on ${\mathsf{N}}B$ as in equation \eqref{eq:xn}. Then $$T_i:=\tau_i^{{\mathsf{H}}M}\, ,\quad N_\alpha:=\tfrac{\partial}{\partial n^\alpha}$$ for $i\in\{1,\dots,d\}$ and $\alpha\in\{1,\dots,k\}$ form a local frame of ${\mathsf{T}}M$. In this frame the scaled metrics have the form (see also \eqref{eq:horblock}) $$g^{\varepsilon}= \begin{pmatrix}[c|c]
{\varepsilon}^{-2}(\operatorname{id}_{d\times d} -{\varepsilon}{\mathcal{W}}(\nu))^2 & 0\\ \hline
0 & \operatorname{id}_{k\times k}
\end{pmatrix}
\,,\quad
g^{\varepsilon}_\mathrm{s} = \begin{pmatrix}[c|c]
{\varepsilon}^{-2}\operatorname{id}_{d\times d} & 0\\ \hline
0 & \operatorname{id}_{k\times k}
\end{pmatrix}.$$ From that and equation \eqref{eq:rhoepsgen} we easily conclude that $$\rho_{\varepsilon}= \sqrt{\frac{\det(g^{\varepsilon})}{\det(g^{\varepsilon}_\mathrm{s})}} = \det\bigl(\operatorname{id}_{d\times d} - {\varepsilon}{\mathcal{W}}(\nu)\bigr) = \exp\Bigl(\operatorname{tr}\log\bigl(\operatorname{id}_{d\times d} - {\varepsilon}{\mathcal{W}}(\nu)\bigr)\Bigr) .$$ Using Taylor’s expansion for ${\varepsilon}$ small enough, $$-\log\bigl(\operatorname{id}_{d\times d} - {\varepsilon}{\mathcal{W}}(\nu)\bigr) = \underbrace{{\varepsilon}{\mathcal{W}}(\nu) + \tfrac{{\varepsilon}^2}{2} {\mathcal{W}}(\nu)^2 + \tfrac{{\varepsilon}^3}{3} {\mathcal{W}}(\nu)^3}_{=:{\mathcal{Z}}({\varepsilon})} + {\mathcal{O}}({\varepsilon}^4)\, ,$$ we have $
\log(\rho_{\varepsilon}) = - \operatorname{tr}{\mathcal{Z}}({\varepsilon}) + {\mathcal{O}}({\varepsilon}^4)$. Next, we calculate the terms appearing in ${V_\mathrm{bend}}$ separately: $$\Delta_{\mathsf{H}}\log \rho_{\varepsilon}= - {\varepsilon}\Delta_{\mathsf{H}}\operatorname{tr}{\mathcal{W}}(\nu) + {\mathcal{O}}({\varepsilon}^2)\, ,$$ $$\begin{aligned}
{\varepsilon}^{-2}\Delta_{\mathsf{V}}\log \rho_{\varepsilon}&= - {\varepsilon}^{-2}\sum_{\alpha=1}^k \partial_{n^\alpha}^2 \operatorname{tr}{\mathcal{Z}}({\varepsilon}) + {\mathcal{O}}({\varepsilon}^2) \\
&= -\sum_{\alpha=1}^k \Bigl[\operatorname{tr}\bigl({\mathcal{W}}(e_\alpha)^2\bigr) + 2 {\varepsilon}\operatorname{tr}\bigl({\mathcal{W}}(e_\alpha)^2{\mathcal{W}}(\nu)\bigr)\Bigr] + {\mathcal{O}}({\varepsilon}^2)\, ,\end{aligned}$$ $$\begin{aligned}
{\mathrm{d}}\log \rho_{\varepsilon}&= {\operatorname{P}^{{\mathsf{H}}M}}{\mathrm{d}}\log\rho_{\varepsilon}- \operatorname{tr}\bigl(\partial_{n^\alpha} {\mathcal{Z}}({\varepsilon})\bigr)\, {\mathrm{d}}n^\alpha + {\mathcal{O}}({\varepsilon}^4) \\
&= {\operatorname{P}^{{\mathsf{H}}M}}{\mathrm{d}}\log \rho_{\varepsilon}- \operatorname{tr}\Bigl( {\varepsilon}{\mathcal{W}}(e_\alpha)\bigl(\operatorname{id}_{d\times d}+{\varepsilon}{\mathcal{W}}(\nu) + {\varepsilon}^2 {\mathcal{W}}(\nu)^2\bigr)\Bigr) \, {\mathrm{d}}n^\alpha + {\mathcal{O}}({\varepsilon}^4)\, , \end{aligned}$$ denoting by ${\operatorname{P}^{{\mathsf{H}}M}}$ the adjoint of the original ${\operatorname{P}^{{\mathsf{H}}M}}$ with respect to the pairing of ${\mathsf{T}}^*M$ and ${\mathsf{T}}M$, and hence $$\begin{aligned}
& {\varepsilon}^{-2}g_F({\mathrm{d}}\log\rho_{\varepsilon},{\mathrm{d}}\log\rho_{\varepsilon}) \\
&= \sum_{\alpha=1}^k
\Bigl[\bigl(\operatorname{tr}{\mathcal{W}}(e_\alpha)\bigr)^2 + 2 {\varepsilon}\operatorname{tr}\bigl({\mathcal{W}}(e_\alpha)\bigr) \operatorname{tr}\bigl({\mathcal{W}}(e_\alpha){\mathcal{W}}(\nu)\bigr)\Bigr] + {\mathcal{O}}({\varepsilon}^2)\, .\end{aligned}$$ Putting all this together, we obtain the following expression for the bending potential in the case of massive quantum waveguides: $$\begin{aligned}
{V_\mathrm{bend}}&= \frac{1}{4} \sum_{\alpha=1}^k \Bigl[\bigl(\operatorname{tr}{\mathcal{W}}(e_\alpha)\bigr)^2 - 2 \operatorname{tr}\bigl({\mathcal{W}}(e_\alpha)^2\bigr)\Bigr] \label{eq:Vbend_0}\\
&\hphantom{=}\ + \frac{{\varepsilon}}{2} \sum_{\alpha=1}^k \Bigl[\operatorname{tr}{\mathcal{W}}(e_\alpha) \operatorname{tr}\bigl({\mathcal{W}}(e_\alpha){\mathcal{W}}(\nu)\bigr) - 2\operatorname{tr}\bigl({\mathcal{W}}(e_\alpha)^2{\mathcal{W}}(\nu)\bigr)\Bigr] - \frac{{\varepsilon}}{2}\,\Delta_{\mathsf{H}}\operatorname{tr}{\mathcal{W}}(\nu) \label{eq:Vrhoeps3} \\
&\hphantom{=}\ + {\mathcal{O}}({\varepsilon}^2)\, . \nonumber \end{aligned}$$ The leading term of this expression (${V_\mathrm{bend}}^0:=\eqref{eq:Vbend_0}$) has been widely stressed in the literature concerning one-dimensional quantum waveguides (see e.g. [@DE95; @Kre07]), where it has a purely attractive effect. Its higher dimensional versions were discussed by Tolar [@Tolar1988] but are generally less known, so we will discuss their possible effects for the rest of this section.
Since ${\mathcal{W}}(e_\alpha)$ is self-adjoint, we may choose for each $\alpha\in\{1,\dots,k\}$ the orthonormal frame $\{\tau_i\}_{i=1}^d$ such that it consists of the eigenvectors of ${\mathcal{W}}(e_\alpha)$ with eigenvalues (principal curvatures) $\{\kappa_i^\alpha\}_{i=1}^d$. In order to get an impression of ${V_\mathrm{bend}}^0$’s sign, we divide ${\mathcal{W}}(e_\alpha)$ into a traceless part ${\mathcal{W}}_0(e_\alpha)$ and a multiple of the identity: $${\mathcal{W}}(e_\alpha) = {\mathcal{W}}_0(e_\alpha) + \frac{H_\alpha}{d} \operatorname{id}_{d\times d}\, .$$ Note that the prefactors $H_\alpha$ equal the components of the mean curvature vector of $B$ in direction $e_\alpha$. With the notation $${\left \lVert M \right \rVert}^2 := \operatorname{tr}\bigl(M^\mathrm{t} M\bigr)\geq 0$$ for any $M\in{\mathbb{R}}^{d\times d}$, we get the relation $${\left \lVert {\mathcal{W}}(e_\alpha) \right \rVert}^2 = {\left \lVert {\mathcal{W}}_0(e_\alpha) \right \rVert}^2 + \frac{H_\alpha^2}{d}$$ for all $\alpha\in\{1,\dots,k\}$ since ${\mathcal{W}}_0(\cdot)$ is traceless. This yields for the potential : $$\begin{aligned}
{V_\mathrm{bend}}^0 &= \frac{1}{4} \sum_{\alpha=1}^k\bigl[H_\alpha^2 - 2 {\left \lVert {\mathcal{W}}(e_\alpha) \right \rVert}^2\bigr] \\
&= \frac{1}{4} \sum_{\alpha=1}^k \left[H_\alpha^2 - 2 \left({\left \lVert {\mathcal{W}}_0(e_\alpha) \right \rVert}^2 + \frac{H_\alpha^2}{d}\right)\right] \\
&= \frac{1}{4} \sum_{\alpha=1}^k \left[\left(1-\frac{2}{d}\right) H_\alpha^2 - 2{\left \lVert {\mathcal{W}}_0(e_\alpha) \right \rVert}^2\right].\end{aligned}$$ The latter relation shows that for $d\in\{1,2\}$ the leading order of the bending potential is non-positive. Thus, the effect of bending has an attractive character (${V_\mathrm{bend}}^0<0$) for ${\varepsilon}$ small enough, or is of lower order (${V_\mathrm{bend}}^0=0$), independently of the codimension $k$. For $d\geq 3$, the first term is non-negative and may overcompensate the second term leading to a positive contribution to ${V_\mathrm{bend}}$. Consequently, a repulsive bending effect is possible.
We may rewrite expression in terms of principal curvatures as $$\label{eq:Vbendkappa}
{V_\mathrm{bend}}^0 = \frac{1}{4} \sum_{\alpha=1}^k \left[ \left(\sum_{i=1}^d \kappa_i^\alpha\right)^2 - 2 \sum_{i=1}^d (\kappa_i^\alpha)^2\right].$$
1. For a waveguide modelled around a curve $c$, $d=1$, one immediately sees that ${V_\mathrm{bend}}^0=-\tfrac14 \kappa^2=- \tfrac14 {\left \lvert c'' \right \rvert}^2$.
2. We consider the case where $B\subset {\mathbb{R}}^{d+1}$ is the $d$-dimensional standard sphere of radius $R$. The principal curvatures in the direction of the outer-pointing normal are given by $\kappa_i = 1/R$ for all $i\in\{1,\dots,d\}$, hence the bending potential reads $$\begin{aligned}
{V_\mathrm{bend}}^0 &= \frac{1}{4} \left[\left(\sum_{i=1}^d \frac{1}{R}\right)^2 - 2 \sum_{i=1}^d \left(\frac{1}{R}\right)^2\right] = \left(1-\frac{2}{d}\right)\frac{d^2}{4 R^2}\, .\end{aligned}$$ It follows that ${V_\mathrm{bend}}^0 < 0$ for $d=1$, ${V_\mathrm{bend}}^0 = 0$ for $d=2$ and ${V_\mathrm{bend}}^0>0$ for $d\geq 3$, respectively. Thus, depending on the dimension $d$ of the sphere, the effect of bending can be either attractive or repulsive.
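The sign pattern of this example is easy to check numerically. The following Python sketch (an illustration added here, not part of the original derivation; the function name `v_bend0` is ours) evaluates ${V_\mathrm{bend}}^0$ directly from a list of principal curvatures and reproduces the attractive, neutral and repulsive behaviour of the sphere for $d=1$, $d=2$ and $d\geq 3$.

```python
import numpy as np

def v_bend0(principal_curvatures):
    """Leading-order bending potential
       V_bend^0 = 1/4 * sum_alpha [ (sum_i kappa_i^alpha)^2 - 2 * sum_i (kappa_i^alpha)^2 ],
    with one array of principal curvatures per normal direction alpha."""
    total = 0.0
    for kappa in principal_curvatures:
        kappa = np.asarray(kappa, dtype=float)
        total += kappa.sum()**2 - 2.0 * (kappa**2).sum()
    return 0.25 * total

# d-dimensional sphere of radius R in R^{d+1} (codimension k = 1): kappa_i = 1/R.
R = 2.0
for d in (1, 2, 3, 4):
    numeric = v_bend0([np.full(d, 1.0 / R)])
    closed_form = (1.0 - 2.0 / d) * d**2 / (4.0 * R**2)
    print(d, numeric, closed_form)   # negative for d=1, zero for d=2, positive for d>=3
```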
The Adiabatic Hamiltonian
-------------------------
We are now ready to calculate the geometric terms in the adiabatic operator. Here we explicitly calculate all the terms that are relevant on the energy scale given by Theorem \[thm:low spectrum\]. First we take care of the contribution of $H_1$, then we turn to the potential $V_\mathrm{a}$ and explain its connection to “twisting” of the quantum waveguide.
### The Operator PHP
The contribution of the bending potential, that was calculated in the previous section, is given by its adiabatic approximation $${V_\mathrm{bend}}^\mathrm{a}:=P_0 {V_\mathrm{bend}}P_0= \int_{F_x} {V_\mathrm{bend}}(\nu) {\left \lvert \phi_0(\nu) \right \rvert}^2\ {\mathrm{d}}\nu \, .$$ Since the leading part ${V_\mathrm{bend}}^0$ is independent of the fibre coordinate $\nu$, it is unchanged by this projection. The next term in the expansion of ${V_\mathrm{bend}}$ is given by . The Weingarten map is linear in $\nu$ and since $$\partial_{x^i}^{{\mathsf{H}}M}n^\alpha \stackrel{\eqref{eq:partialHM}}{=}- g_B^\bot\bigl(\omega^{\mathsf{N}}(\partial_{x^i})\nu, e_\beta\bigr) \partial_{n^\beta}n^\alpha = - g_B^\bot\bigl(\omega^{\mathsf{N}}(\partial_{x^i})\nu, e_\alpha\bigr)$$ is again linear in $\nu$, $\Delta_{\mathsf{H}}\operatorname{tr}\mathcal{W}(\nu)$ is also linear in $\nu$. Consequently, the contribution of to ${V_\mathrm{bend}}^\mathrm{a}$ is proportional to $${\left \langle \phi_0 , \nu \phi_0 \right \rangle}_{F_x} = \int_{F_x} \nu {\left \lvert \phi_0(\nu) \right \rvert}^2\ {\mathrm{d}}\nu\, .$$ Hence, this contribution vanishes if the centre of mass of the ground state $\phi_0$ lies exactly on the submanifold $B$. This is a reasonable assumption to make and represents a “correct” choice of parametrisation of the waveguide. Under this assumption we have $${V_\mathrm{bend}}^\mathrm{a}= {V_\mathrm{bend}}^0 + \mathcal{O}({\varepsilon}^2)\, .$$ From the expression for the horizontal block of the metric $g^{\varepsilon}$ one obtains its expansion on horizontal one-forms by locally inverting the matrix $(g^{\varepsilon})_{ij}$ (see [@Wit]). The result is $$\label{eq:dual metric}
g^{\varepsilon}(\pi_M^*{\mathrm{d}}x^i, \pi_M^*{\mathrm{d}}x^j)={\varepsilon}^2 \big(g_B^{ij} + 2{\varepsilon}\operatorname{II}(\nu)^{ij} + \mathcal{O}({\varepsilon}^2)\big)\,,$$ where $\operatorname{II}$ denotes the second fundamental form of $B$, defined on ${\mathsf{T}}^*B$ by $\operatorname{II}^{ij}:=\operatorname{II}_{kl}g_B^{ik}g_B^{jl}$. Moreover, we extend the latter to ${\mathsf{T}}^*M$, understanding $\operatorname{II}(\nu)$ as its lift to the horizontal part ${\mathsf{H}}^*M$ and extending to ${\mathsf{T}}^*M$ by zero. The vertical components of $g^{\varepsilon}$ and $g_\mathrm{s}^{\varepsilon}$ coincide, hence as an operator on $L^2(B)$ we have the expression $$\label{eq:H1 massive nocent}
(P_0 S^{\varepsilon}P_0)\psi=2 \int_{F_x} \phi_0 \operatorname{div}_{g_\mathrm{s}} \Bigl(\operatorname{II}(\nu)\bigl({\mathrm{d}}(\phi_0\psi), \cdot\bigr)\Bigr)\ {\mathrm{d}}\nu + \mathcal{O}({\varepsilon})$$ with an error of order ${\varepsilon}$ on $W^2(B)$. Using the Leibniz rule we can rewrite this as $$\begin{aligned}
&2 \int_{F_x} 2 \phi_0 \operatorname{II}(\nu)\big({\mathrm{d}}\phi_0, {\mathrm{d}}\psi\big) + {\left \lvert \phi_0 \right \rvert}^2 \operatorname{div}_{g_\mathrm{s}}\bigl(\operatorname{II}(\nu)({\mathrm{d}}\psi, \cdot)\bigr) + \phi_0 \psi \operatorname{div}_{g_\mathrm{s}}\bigl(\operatorname{II}(\nu)({\mathrm{d}}\phi_0, \cdot)\bigr)\ {\mathrm{d}}\nu\nonumber \\
&\ =2\int_{F_x} \operatorname{div}_{g_\mathrm{s}} \bigl({\left \lvert \phi_0 \right \rvert}^2 \operatorname{II}(\nu)({\mathrm{d}}\psi, \cdot)\bigr) +
\phi_0 \psi \operatorname{div}_{g_\mathrm{s}}\bigl(\operatorname{II}(\nu)({\mathrm{d}}\phi_0, \cdot)\bigr)\ {\mathrm{d}}\nu\, . \label{eq:H_1 allg}\end{aligned}$$ Now $\phi_0$ vanishes on the boundary and $\operatorname{II}({\mathrm{d}}\psi, \cdot)$ is a horizontal vector field, so by we have $$\label{eq:diff H_1 massive}
\int_{F_x} \operatorname{div}_{g_\mathrm{s}} \bigl({\left \lvert \phi_0 \right \rvert}^2 \operatorname{II}(\nu)({\mathrm{d}}\psi, \cdot)\bigr)\ {\mathrm{d}}\nu
=\operatorname{div}_{g_B}\int_{F_x} {\left \lvert \phi_0 \right \rvert}^2 \operatorname{II}(\nu)({\mathrm{d}}\psi, \cdot)\ {\mathrm{d}}\nu \, .$$ If we assume again that $\phi_0$ is centred on $B$, this term vanishes and we are left with the potential $$\label{eq:H_1 massive}
{\varepsilon}P_0 H_1 P_0= {\varepsilon}^2 {V_\mathrm{bend}}^0 - 2{\varepsilon}^3 \int_{F_x} \phi_0 \operatorname{div}_{g_\mathrm{s}}\bigl(\operatorname{II}(\nu)({\mathrm{d}}\phi_0, \cdot)\bigr)\ {\mathrm{d}}\nu +\mathcal{O}({\varepsilon}^4)$$ with an error bound in $\mathcal{L}\big(W^2(B), L^2(B)\big)$.
### The Adiabatic Potential and “Twisted” Waveguides {#sect:twist}
Since the fibres $F_x$ are completely geodesic with respect to $g_F$ for massive quantum waveguides ([cf. ]{}Remark \[rem:complgeod\]), we have $\eta_F=0$ and the adiabatic potential defined in reduces to $$\label{eq:Va massive}
V_\mathrm{a} = \int_{F_x} \pi_M^* g_B(\operatorname{grad}_{g_\mathrm{s}}\phi_0, \operatorname{grad}_{g_\mathrm{s}}\phi_0)\ {\mathrm{d}}\nu\, .$$ This is called the Born-Huang potential in the context of the Born-Oppenheimer approximation. This potential is always non-negative. It essentially measures the rate of change of $\phi_0$ in the horizontal directions.
In the literature, the adiabatic potential has been studied mainly for “twisted” quantum waveguides. These have two-dimensional fibres $F_x$ which are isometric but not invariant under rotations and twist as one moves along the one-dimensional base curve $B$ [@Kre07]. The operators $\Delta_{F_x}$, $x\in B$, are isospectral and their non-trivial dependence on $x$ is captured by $V_\mathrm{a}$.
We now generalise this concept to massive waveguides of arbitrary dimension and codimension and calculate the adiabatic potential for this class of examples. In this context, a massive quantum waveguide $F\to M\xrightarrow{\pi_M} B$ is said to be only twisted at $x_0\in B$, if there exist a geodesic ball $U\subset B$ around $x_0$ and a local orthonormal frame $\{f_\alpha\}_{\alpha=1}^k$ of ${\mathsf{N}}B|_U$ such that $$\pi^{-1}_M(U) = \bigl\{n^\alpha f_\alpha(x):\ (n^1,\dots,n^k)\in F,~ x\in U\bigr\}\, .$$ This exactly describes the situation that the cross-sections $(F_x, g_{F_x})$ are isometric to $F\subset{\mathbb{R}}^k$, but may vary from fibre to fibre by an $\mathrm{SO}(k)$-transformation. Moreover, it follows that $\lambda_0$ is constant on $U$ and the associated eigenfunction $\phi_0$ is of the form $\phi_0(\nu(x)=n^\alpha f_\alpha(x))=\Phi_0(n^1,\dots,n^k)$, where $\Phi_0$ is the solution of $$-\Delta_n \Phi_0(n) = \lambda_0 \Phi_0(n)\, ,\quad \text{$\Phi_0(n)=0$ on $\partial F$}\, .$$ As for the calculation of $V_\mathrm{a}$ at $x_0$, we firstly compute for $\partial_{x^i}^{{\mathsf{H}}M}\in\Gamma({\mathsf{H}}M)$: $$\begin{aligned}
\pi_M^* g_B(\operatorname{grad}_{g_\mathrm{s}} \phi_0, \partial_{x^i}^{{\mathsf{H}}M})|_{\nu(x_0)} &\stackrel{\hphantom{\text{\eqref{eq:partialHM}}}}{=} g_\mathrm{s}(\operatorname{grad}_{g_\mathrm{s}} \phi_0, \partial_{x^i}^{{\mathsf{H}}M})|_{\nu(x_0)} \nonumber \\
&\stackrel{\hphantom{\text{\eqref{eq:partialHM}}}}{=} \partial_{x^i}^{{\mathsf{H}}M} \phi_0\bigr|_{\nu(x_0)} \nonumber \\
&\stackrel{\text{\eqref{eq:partialHM}}}{=} \Bigl[\partial_i - g_B^\bot\bigl(\omega^{\mathsf{N}}(\partial_{x^i}) \nu, f_\beta\bigr)\bigr|_{x_0} \partial_{n^\beta}\Bigr] \Phi_0(n) \nonumber \\
&\stackrel{\hphantom{\text{\eqref{eq:partialHM}}}}{=} - n^\alpha g_B^\bot \bigl(\nabla^{\mathsf{N}}_{\partial_{x^i}} f_\alpha, f_\beta\bigr)\bigr|_{x_0} \frac{\partial\Phi_0(n)}{\partial n^\beta}\, . \label{eq:twist1}\end{aligned}$$ In order to get a better understanding of $g_B^\bot(\nabla^{\mathsf{N}}_{\partial_{x^i}} f_\alpha, f_\beta)|_{x_0}$, we introduce on $U$ a locally untwisted orthonormal frame $\{e_\alpha\}_{\alpha=1}^k$ of ${\mathsf{N}}B|_U$. It is obtained by taking the vectors $f_\alpha(x_0)\in {\mathsf{N}}_{x_0} B$ and parallel transporting them along radial geodesics with respect to the normal connection $\nabla^{{\mathsf{N}}}$. Thus, twisting is always to be understood relative to the locally parallel frame $\{e_\alpha\}_{\alpha=1}^k$. The induced map that transfers the reference frame $\{e_\alpha\}_{\alpha=1}^k$ into the twisting frame $\{f_\alpha\}_{\alpha=1}^k$ is denoted by $R:U\to\mathrm{SO}(k)$. It is defined by the relation $f_\alpha(x) = e_\gamma(x) R^\gamma_{\hphantom{\gamma}\alpha}(x)$ for $x\in U$ and obeys $R(x_0)=\operatorname{id}_{k\times k}$ due to the initial data of $\{e_\alpha\}_{\alpha=1}^k$. Consequently, using the differential equation of the parallel transport, we have $$\nabla^{\mathsf{N}}_{\partial_{x^i}} f_\alpha(x_0) = \nabla^{\mathsf{N}}_{\partial_{x^i}}(e_\gamma R^\gamma_{\hphantom{\gamma}\alpha})(x_0)= \underbrace{\big(\nabla^{\mathsf{N}}_{\partial_{x^i}} e_\gamma\big) (x_0)}_{=0} \updelta^\gamma_{\hphantom{\gamma}\alpha} + e_\gamma(x)\, \partial_{x^i} R^\gamma_{\hphantom{\gamma}\alpha}(x_0)$$ and hence $$\label{eq:twist2}
g_B^\bot \bigl(\nabla^{\mathsf{N}}_{\partial_{x^i}} f_\alpha, f_\beta\bigr)\bigr|_{x_0} = g_B^\bot(e_\gamma \,\partial_{x^i} R^\gamma_{\hphantom{\gamma}\alpha}, e_\beta)|_{x_0} = \partial_{x^i} R_{\beta\alpha}(x_0)\, .$$ For $1\leq\alpha<\beta\leq k$, let $T_{\alpha\beta}\in{\mathbb{R}}^{k\times k}$ defined by $$(T_{\alpha\beta})_{\gamma\zeta} := \updelta_{\alpha\zeta}\updelta_{\beta\gamma} - \updelta_{\alpha\gamma}\updelta_{\beta\zeta}$$ be a set of generators of the Lie Algebra ${\mathfrak{s}}{\mathfrak{o}}(k)$. This induces generalised angle functions $\{\omega^{\alpha\beta}\in C^\infty(U)\}_{\alpha<\beta}$ by the relation $$R(x) = \exp\Big(\sum_{\alpha<\beta} \omega^{\alpha\beta}(x) T_{\alpha\beta}\Big)$$ for $x\in U$. Then a short calculation shows that $$\label{eq:twist3}
\partial_{x^i} R(x_0) = \bigl((\partial_{x^i}\omega^{\alpha\beta}T_{\alpha\beta}) R \bigr)(x_0) = {\mathrm{d}}\omega^{\alpha\beta}(\partial_{x^i})|_{x_0} T_{\alpha\beta}$$ for $\alpha<\beta$. Combining , and , we obtain $$\pi_M^* g_B(\operatorname{grad}_{g_\mathrm{s}} \phi_0, \partial_{x^i}^{{\mathsf{H}}M})|_{\nu(x_0)=n^\alpha f_\alpha(x_0)} = - {\mathrm{d}}\omega^{\alpha\beta}(\partial_{x^i})|_{x_0} (L_{\alpha\beta} \Phi_0)(n)\, ,\quad \alpha<\beta\, ,$$ where $$L_{\alpha\beta}: \Phi_0(n)\mapsto (L_{\alpha\beta}\Phi_0)(n) := {\left \langle (\nabla_n \Phi_0)(n) , T_{\alpha\beta} n \right \rangle}_{{\mathbb{R}}^k} = (n^\alpha \partial_{n^\beta} - n^\beta \partial_{n^\alpha}) \Phi_0(n)$$ defines the action of the $(\alpha,\beta)$-component of the angular momentum operator in $k$ dimensions. From here, it is easy to see that the adiabatic potential at $x_0$ is given by $$\begin{aligned}
V_\mathrm{a}(x_0) &= \int_{F_{x_0}} \pi_M^* g_B(\operatorname{grad}_{g_\mathrm{s}} \phi_0, \operatorname{grad}_{g_\mathrm{s}} \phi_0)\ {\mathrm{d}}\nu \nonumber \\
&= \int_F g_B \bigl( - {\mathrm{d}}\omega^{\alpha\beta} (L_{\alpha\beta}\Phi_0)(n),- {\mathrm{d}}\omega^{\gamma\zeta} (L_{\gamma\zeta}\Phi_0)(n)\bigr)\bigr|_{x_0}\ {\mathrm{d}}n \nonumber \\
&= \underbrace{g_B ({\mathrm{d}}\omega^{\alpha\beta},{\mathrm{d}}\omega^{\gamma\zeta})|_{x_0}}_{=:\mathbf{ R}^{(\alpha\beta),(\gamma\zeta)}(x_0)} \underbrace{{\left \langle L_{\alpha\beta}\Phi_0 , L_{\gamma\zeta} \Phi_0 \right \rangle}_{L^2(F)}}_{=: \mathbf{L }_{(\alpha\beta),(\gamma\zeta)}}\, ,\quad \text{$\alpha<\beta$ and $\gamma<\zeta$} \label{eq:Vtwist} \\
&= \operatorname{tr}_{{\mathbb{R}}^{k(k-1)/2}} \bigl(\mathbf{ R}(x_0)^\mathrm{t} \mathbf{L }\bigr)\, . \nonumber\end{aligned}$$ The first matrix $\mathbf{ R}(x_0)$ encodes the rate at which the frame $\{f_\alpha\}_{\alpha=1}^k$ twists relative to the parallel frame $\{e_\alpha\}_{\alpha=1}^k$ at $x_0$. The second matrix $\mathbf{L }$ measures the deviation of the eigenfunction $\Phi_0$ from being rotationally invariant. It determines to what extent the twisting of the waveguide affects the states in the range of $P_0$ and it depends only on the set $F\subset{\mathbb{R}}^k$ (and not on the point $x_0$ of the submanifold $B$). Finally, for the case of a twisted quantum waveguide with $(B,g_B)\cong ({\mathbb{R}},\updelta^1)$ and $k=2$, there exists only one angle function $\omega\in C^\infty({\mathbb{R}})$ and one angular momentum operator $L=n^1 \partial_{n^2} - n^2 \partial_{n^1}$. Then formula yields the well-known result [@KS12] $$V_\mathrm{a} = (\omega')^2 {\left \lVert L\Phi_0 \right \rVert}^2_{L^2(F)}\,,$$ which clearly vanishes if $F$ is invariant under rotations.
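For a concrete, non-rotationally-invariant cross-section the factor ${\left \lVert L\Phi_0 \right \rVert}^2_{L^2(F)}$ can be evaluated numerically. The following sketch is our own illustration with hypothetical parameters: it takes a centred rectangle $F=(-a/2,a/2)\times(-b/2,b/2)$, whose Dirichlet ground state is $\Phi_0(n^1,n^2)=\tfrac{2}{\sqrt{ab}}\cos(\uppi n^1/a)\cos(\uppi n^2/b)$, and integrates $|L\Phi_0|^2$ on a grid; the prefactor is nonzero even for the square $a=b$, which is only invariant under discrete rotations.

```python
import numpy as np

def L_phi0_norm_sq(a, b, N=401):
    """||L Phi_0||^2_{L^2(F)} for the rectangle F = (-a/2, a/2) x (-b/2, b/2),
    where L = n^1 d/dn^2 - n^2 d/dn^1 and Phi_0 is the Dirichlet ground state."""
    n1 = np.linspace(-a / 2, a / 2, N)
    n2 = np.linspace(-b / 2, b / 2, N)
    X, Y = np.meshgrid(n1, n2, indexing="ij")
    c = 2.0 / np.sqrt(a * b)
    # partial derivatives of Phi_0 = c * cos(pi n1/a) * cos(pi n2/b)
    d1 = -c * (np.pi / a) * np.sin(np.pi * X / a) * np.cos(np.pi * Y / b)
    d2 = -c * (np.pi / b) * np.cos(np.pi * X / a) * np.sin(np.pi * Y / b)
    Lphi = X * d2 - Y * d1
    return np.trapz(np.trapz(Lphi**2, n2, axis=1), n1)

# V_a = (omega')^2 * ||L Phi_0||^2 for k = 2; the prefactor depends only on F.
for a, b in [(1.0, 1.0), (1.0, 2.0), (1.0, 4.0)]:
    print(a, b, L_phi0_norm_sq(a, b))
```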
### Conclusion {#sect:H_a alpha}
Now that we have calculated all the relevant quantities, we can give an explicit expansion of $H_\mathrm{a}$. The correct norm for error bounds of course depends on the energy scale under consideration. For a constant eigenvalue $\lambda_0$ and $\alpha=2$ the graph-norm of ${\varepsilon}^{-2} H_\mathrm{a}$ is clearly equivalent (with constants independent of ${\varepsilon}$) to the usual norm of $W^2(B,g_B)$. In this situation the best approximation by $H_\mathrm{a}$ given by Theorem \[thm:low spectrum\] has errors of order ${\varepsilon}^4$, so the estimates just derived give $$\label{eq:X3}
\tag{$\alpha=2$}
H_\mathrm{a} = -{\varepsilon}^2\Delta_{g_B} + \lambda_0 + {\varepsilon}^2 V_\mathrm{a} + {\varepsilon}^2{V_\mathrm{bend}}^0 - 2 {\varepsilon}^3 \int_{F_x} \phi_0 \operatorname{div}_{g_\mathrm{s}}\bigl(\operatorname{II}(\nu)({\mathrm{d}}\phi_0, \cdot)\bigr)\ {\mathrm{d}}\nu + \mathcal{O}({\varepsilon}^4)$$ if $\phi_0$ is centred. If this is not the case, the expansion can be read off from equations , and .
If $\lambda_0$ has a non-degenerate minimum and $\alpha=1$, the errors of our best approximation are of order ${\varepsilon}^3$. Thus, the potentials of order ${\varepsilon}^3$ can be disregarded in this case. Note however that on the domain of ${\varepsilon}^{-1}H_{\mathrm{a}}$ we have ${\varepsilon}\partial_{x^i}^{{\mathsf{H}}M}=\mathcal{O}(\sqrt{\varepsilon})$, so the differential operator will be relevant. The error terms of equation , containing second order differential operators, are of order ${\varepsilon}^3$ with respect to ${\varepsilon}^{-1}H_\mathrm{a}$, so they are still negligible. Thus, for $\psi \in W^2(B)$ with ${\left \lVert \psi \right \rVert}^2 + {\left \lVert (-{\varepsilon}\Delta_{g_B} + {\varepsilon}^{-1}\lambda_0)\psi \right \rVert}^2=\mathcal{O}(1)$ we have $$\label{eq:X4}
\tag{$\alpha=1$}
H_\mathrm{a}\psi= \bigl(-{\varepsilon}^2\Delta_{g_B} + \lambda_0 + {\varepsilon}^2 V_\mathrm{a} + {\varepsilon}^2{V_\mathrm{bend}}^0\bigr)\psi
- 2{\varepsilon}^3 \operatorname{div}_{g_B}\int_{F_x} {\left \lvert \phi_0(\nu) \right \rvert}^2 \operatorname{II}(\nu)({\mathrm{d}}\psi, \cdot)\ {\mathrm{d}}\nu
+ \mathcal{O}({\varepsilon}^3)\, ,$$ where the last term is of order ${\varepsilon}^2$ in general and vanishes for centred $\phi_0$.
Hollow Quantum Waveguides {#chap:hollowQWG}
=========================
In this section we consider hollow quantum waveguides $F\to(M,g^{\varepsilon})\xrightarrow{\pi_M} (B,g_B)$, which by Definition \[def:MassiveHollow\] are the boundaries of massive waveguides. The underlying massive waveguide is denoted by $\mathring F\to (\mathring{M},\mathring{g}^{\varepsilon})\xrightarrow{\pi_{\mathring{M}}} (B,g_B)$ in the following. The bundle structure is inherited from the massive waveguide as well, [i.e. ]{}$F = \partial \mathring{F}$ and the diagram
$$\begin{array}{ccc}
M & \hookrightarrow & \mathring{M}\\
{\scriptstyle\pi_M}\downarrow\;\; & & \;\;\downarrow{\scriptstyle\pi_{\mathring{M}}}\\
B & \xrightarrow{\operatorname{id}_B} & B
\end{array}$$
commutes.
Hollow quantum waveguides have, to our knowledge, not been studied before. In fact, already the derivations of $g^{\varepsilon}$ and ${V_\mathrm{bend}}$ constitute novel results. A slight generalisation of these calculations to objects that are not necessarily boundaries can be found in [@Lampart2013 Chapter 3].
In order to determine the adiabatic operator $H_\mathrm{a}$ for hollow quantum waveguides, we follow the same procedure as laid out in the introduction and in the previous section.
The Pullback Metric {#the-pullback-metric}
-------------------
Note that $g^{\varepsilon}=\mathring{g}^{\varepsilon}|_{{\mathsf{T}}M}$ and that we computed the unscaled pullback metric $\mathring{g}^{{\varepsilon}=1}$ for the massive waveguide $\mathring{M}$ already in Lemma \[lem:pullback\]. The latter reads $$\mathring{g} := \mathring{g}^{{\varepsilon}=1} = \underbrace{\pi_{\mathring{M}}^* g_B + \mathring{h}^{{\varepsilon}=1}}_{=:{\mathring{g}}^\mathrm{hor}} + g_{\mathring{F}}\,,$$ where the “horizontal correction” $\mathring{h}^{{\varepsilon}=1}$ vanishes on vertical vector fields and essentially depends on the extrinsic geometry of the embedding $B\hookrightarrow {\mathbb{R}}^{d+k}$.
If we restrict $\mathring{M}$’s tangent bundle to $M$, one has the orthogonal decomposition $$\label{eq:HVN}
{\mathsf{T}}\mathring{M}|_M = {\mathsf{T}}M \oplus {\mathsf{N}}M \stackrel{\eqref{eq:THVM}}= {\mathsf{H}}M \oplus {\mathsf{V}}M \oplus {\mathsf{N}}M$$ with respect to $\mathring{g}$. Due to the commutativity of the above diagram, it follows that ${\mathsf{V}}M \subset {\mathsf{V}}\mathring{M}|_M$. This suggests to introduce the notation ${\mathsf{V}}M^\bot$ for the orthogonal complement of ${\mathsf{V}}M$ in ${\mathsf{V}}\mathring{M}|_M$ with respect to $g_{\mathring{F}}$, [i.e. ]{}$$\label{eq:decompgF}
{\mathsf{V}}\mathring{M}|_M = {\mathsf{V}}M \oplus {\mathsf{V}}M^\bot\, .$$ For any $X\in\Gamma({\mathsf{T}}B)$, let $X^{{\mathsf{H}}{\mathring{M}}}\in\Gamma({\mathsf{H}}\mathring{M})$ and $X^{{\mathsf{H}}M}\in\Gamma({\mathsf{H}}M)$ be the respective unique horizontal lifts. It then holds that $$\pi_{\mathring{M}*} \bigl(X^{{\mathsf{H}}M} - X^{{\mathsf{H}}{\mathring{M}}}|_M\bigr) = \pi_{M*} X^{{\mathsf{H}}M} - \pi_{\mathring{M}*} X^{{\mathsf{H}}{\mathring{M}}}|_M = X - X = 0\, .$$ Thus, the difference between $X^{{\mathsf{H}}M}$ and $X^{{\mathsf{H}}{\mathring{M}}}|_M$ is a vertical field: $$\label{eq:VX}
X^{{\mathsf{H}}M} = X^{{\mathsf{H}}{\mathring{M}}}|_M + V_X$$ with $V_X\in \Gamma({\mathsf{V}}\mathring{M}|_M)$. Moreover, $V_X\in \Gamma({\mathsf{V}}M^\bot)$ since for arbitrary $W\in \Gamma({\mathsf{V}}M)\subset \Gamma({\mathsf{V}}\mathring{M}|_M)$ $$0 = g(X^{{\mathsf{H}}M},W) = \underbrace{\mathring{g}\bigl(X^{{\mathsf{H}}{\mathring{M}}}|_M, W\bigr)}_{=0} + g_{\mathring{F}}\bigl(V_X,W)$$ implies $g_{\mathring{F}}(V_X,W)=0$.
![Left: Sketch of the fibre ${\mathsf{N}}_x B$ for any $x\in B$. Note that for any $\nu\in \mathring{M}_x\subset{\mathsf{N}}_xB$ we have the canonical identification of ${\mathsf{N}}_xB$ and ${\mathsf{V}}_\nu\mathring{M}$ via the isomorphism . Right: Relationship between the horizontal lifts $X^{{\mathsf{H}}M}$ and $X^{{\mathsf{H}}\mathring{M}}|_M$. They are connected by the vertical field $V_X$.[]{data-label="fig:hollowQWG"}](hollowQWG "fig:"){width="95.00000%"}
Obviously, the relation $\pi_{M*} X^{{\mathsf{H}}M}= X$ does not shed light on the vertical part $V_X$. The latter will be determined by the requirement that $X^{{\mathsf{H}}M}$ be a tangent vector field on $M$, or equivalently by the condition $\mathring{g}(X^{{\mathsf{H}}M},n)=0$, where $n\in\Gamma({\mathsf{N}}M)$ denotes a unit normal field of $M$ in $\mathring{M}$. In order to determine $V_X$ from this condition, we first need to show that the vertical component of $n$ is non-zero everywhere.
\[lem:v\] Let $n\in\Gamma({\mathsf{N}}M)$ be a unit normal field of the hollow quantum waveguide $M$. Then $v_n:={\operatorname{P}^{{\mathsf{V}}{\mathring{M}}}}n\in \Gamma({\mathsf{V}}M^\bot)$ is a non-vanishing vector field.
Decompose $n = v_n + h_n$ with $h_n:={\operatorname{P}^{{\mathsf{H}}{\mathring{M}}}}n\in\Gamma({\mathsf{H}}\mathring{M}|_M)$. It then holds for any vector field $W\in\Gamma({\mathsf{V}}M)$: $$g_{\mathring{F}}(W,v_n) = \mathring{g}(W,v_n) = \mathring{g}(W,v_n + h_n) = \mathring{g}(W,n) = 0\,,$$ where we used for the second and fourth equality. This clearly implies $v_n\in\Gamma({\mathsf{V}}M^\bot)$ by . Now suppose there exists $\nu\in M$ with $v_n(\nu)=0$. Consider the space $$U_\nu := {\mathsf{H}}_\nu M \oplus \operatorname{span}\{n(\nu)\} \subset {\mathsf{T}}_\nu \mathring{M}\, .$$ Since $n(\nu)\in {\mathsf{N}}_\nu M$ is orthogonal to ${\mathsf{H}}_\nu M\subset {\mathsf{T}}_\nu M$, one has $\dim(U_\nu) = d+1$. We will show that the kernel of $\pi_{\mathring{M}*}|_{U_\nu}:U_\nu\to \operatorname{im}(\pi_{\mathring{M}*}|_{U_\nu})\subset {\mathsf{T}}_{\pi_{\mathring{M}}(\nu)}B$ is trivial. Hence, $$d+1 = \dim(U_\nu) = \operatorname{rank}(\pi_{\mathring{M}*}|_{U_\nu})\leq \dim\bigl({\mathsf{T}}_{\pi_{\mathring{M}}(\nu)}B\bigr)$$ clearly contradicts the fact that $\dim(B)=d$ and finally the assumption that $v_n(\nu)=0$. Therefore, let $w\in\ker(\pi_{\mathring{M}*}|_{U_\nu})\subset {\mathsf{V}}_\nu \mathring{M}\cap U_\nu$. On the one hand, since $$n(\nu)=\underbrace{v_n(\nu)}_{=0} + h_n(\nu)\in {\mathsf{H}}_\nu \mathring{M} = \ker\bigl(\pi_{\mathring{M}*}|_\nu\bigr)^\bot\, ,$$ $w$ is an element of ${\mathsf{H}}_\nu M$. But on the other hand, $\pi_{\mathring{M}*}|_{{\mathsf{H}}_\nu M}:{\mathsf{H}}_\nu M\to {\mathsf{T}}_{\pi_{\mathring{M}}(\nu)} B$ possesses a trivial kernel. Together, this yields $w=0$, [i.e. ]{}$\ker(\pi_{\mathring{M}*}|_{U_\nu})=\{0\}$.
In view of equation , Lemma \[lem:v\] suggests defining a function $\gimel(X)\in C^\infty(M)$ such that $V_X = \gimel(X)v_n$. Thus, the requirement $X^{{\mathsf{H}}M}\in\Gamma({\mathsf{T}}M)$ yields $$\begin{aligned}
0 &= \mathring{g}(X^{{\mathsf{H}}M},n) = \mathring{g}\bigl(X^{{\mathsf{H}}{\mathring{M}}}|_M,n\bigr) + \mathring{g}(v_n,n) \gimel(X) \\
&= \mathring{g}^\mathrm{hor}\bigl(X^{{\mathsf{H}}{\mathring{M}}}|_M,h_n\bigr) + g_{\mathring{F}}(v_n,v_n) \gimel(X)\, ,\end{aligned}$$ consequently $$\label{eq:gimel}
\gimel(X) = -\frac{\mathring{g}^\mathrm{hor}({X^{{\mathsf{H}}{\mathring{M}}}|}_M,h_n)}{g_{\mathring{F}}(v_n,v_n)}\, .$$ Note that $\gimel(X)$ is well-defined since $g_{\mathring{F}}(v_n,v_n)>0$ by Lemma \[lem:v\]. Moreover, the latter equation shows that $\gimel\in{\mathcal{T}}^0_1(B)\otimes C^\infty(M)$ is actually a tensor.
In summary, we just showed that the unscaled pullback metric on $M$ may be written as $$g= g^\mathrm{hor} + g_F\, ,\quad g_F:={g_{\mathring{F}}|}_{{\mathsf{V}}M}$$ with “horizontal block” $$g^\mathrm{hor}(X^{{\mathsf{H}}M},Y^{{\mathsf{H}}M}) := \mathring{g}^\mathrm{hor}\bigl(X^{{\mathsf{H}}{\mathring{M}}}|_M,Y^{{\mathsf{H}}{\mathring{M}}}|_M\bigr) + g_{\mathring{F}}(v_n,v_n) \gimel(X)\gimel(Y)$$ for $X,Y\in\Gamma({\mathsf{T}}B)$. Going over to the scaled pullback metric $g^{\varepsilon}$, we first show that the horizontal lift remains unchanged.
\[lem:hor\] Let $(M,g^{\varepsilon})\to(B,g_B)$ be a hollow quantum waveguide for ${\varepsilon}>0$. Then the horizontal subbundle ${\mathsf{H}}M$ is independent of ${\varepsilon}$.
It is sufficient to show that for any vector field $X\in\Gamma({\mathsf{T}}B)$ its unique horizontal lift $X^{{\mathsf{H}}M}$ is given by the ${\varepsilon}$-independent expression $$X^{{\mathsf{H}}M} = X^{{\mathsf{H}}{\mathring{M}}}|_M + \gimel(X) v_n$$ with $\gimel(X)\in C^\infty(M)$ and $v_n\in\Gamma({\mathsf{V}}M^\bot)$ as before. We already know that $X^{{\mathsf{H}}M}$ is tangent to $M$ and satisfies $\pi_{M*} X^{{\mathsf{H}}M}=X$. Thus, the requirement that $X^{{\mathsf{H}}M}$ is orthogonal to any $W\in\Gamma({\mathsf{V}}M)$ with respect to $g^{\varepsilon}$ is the only possible way for any ${\varepsilon}$-dependence to come into play. Therefore, we calculate $$\begin{aligned}
g^{\varepsilon}(X^{{\mathsf{H}}M},W) &= \mathring{g}^{\varepsilon}\bigl( X^{{\mathsf{H}}{\mathring{M}}}|_M + \gimel(X) v_n, W\bigr) \\
&= \underbrace{{\varepsilon}^{-2}\Bigl[\pi_{\mathring{M}}^* g_B \bigl(X^{{\mathsf{H}}{\mathring{M}}}|_M, W\bigr) + {\varepsilon}\mathring{h}^{\varepsilon}\bigl(X^{{\mathsf{H}}{\mathring{M}}}|_M,W\bigr)\Bigr]}_{\text{$=0$, since $W\in\Gamma({\mathsf{V}}M)\subset\Gamma({\mathsf{V}}\mathring{M}|_M)$}} + \gimel(X) \underbrace{g_{\mathring{F}}(v_n,W)}_{\text{$=0$ by~\eqref{eq:decompgF}}} \\
&=0\, .\end{aligned}$$
In summary, if the scaled pullback metric of the massive waveguide $\mathring{M}$ has the form $\mathring{g}^{\varepsilon}= {\varepsilon}^{-2}(\pi^*_{\mathring{M}} g_B + {\varepsilon}\mathring{h}^{\varepsilon}) + g_{\mathring{F}}$, the scaled pullback metric $g^{\varepsilon}$ of the associated hollow waveguide $M$ reads $$\label{eq:g^eps hollow}
g^{\varepsilon}= {\varepsilon}^{-2}(\pi^*_M g_B + {\varepsilon}h^{\varepsilon}) + g_F$$ with $$h^{\varepsilon}(X^{{\mathsf{H}}M},Y^{{\mathsf{H}}M}) := \mathring{h}^{\varepsilon}\bigl(X^{{\mathsf{H}}{\mathring{M}}}|_M,Y^{{\mathsf{H}}{\mathring{M}}}|_M\bigr) + {\varepsilon}g_{\mathring{F}}(v_n,v_n) \gimel(X)\gimel(Y)$$ for $X,Y\in\Gamma({\mathsf{T}}B)$. This shows that the scaled pullback metric $g^{\varepsilon}$ is again of the form .
Let us consider a simple example of a hollow quantum waveguide with $d=1$, $k=2$. Take $B=\{(x,0,0)\in{\mathbb{R}}^3:\ x\in{\mathbb{R}}\}\subset {\mathbb{R}}^3$ as submanifold and parametrise the corresponding massive quantum waveguide via $$\mathring{M} := \Bigl\{(x,0,0) + \varrho\, r(x,\varphi) e_r:\ (x,\varphi,\varrho)\in{\mathbb{R}}\times[0,2\uppi)\times[0,1]\Bigr\} ,$$ where $r:{\mathbb{R}}\times[0,2\uppi)\to [r_-,r_+]$ with $0<r_-<r_+<\infty$ is a smooth function obeying the periodicity condition $r(\cdot,\varphi+2\uppi) = r(\cdot,\varphi)$ and $e_r=(0,\cos\varphi,\sin\varphi)\in{\mathsf{V}}_{(x,\varphi,\varrho)} \mathring{M}$ stands for the “radial unit vector”. In view of Example \[ex:conv\] with $\kappa\equiv 0$, the unscaled pullback metric on $\mathring{M}$ is given by $$\mathring{g} = \mathring{g}^\mathrm{hor} + g_{\mathring{F}} = {\mathrm{d}}x^2 + (\varrho^2\, {\mathrm{d}}\varphi^2 + {\mathrm{d}}\varrho^2)\, .$$ Furthermore, we immediately observe that ${\mathsf{T}}_x B=\operatorname{span}\{\partial_x\}$ with trivial horizontal lift $\partial_x^{{\mathsf{H}}{\mathring{M}}}=(1,0,0)=:e_x\in{\mathsf{H}}_{(x,\varphi,\varrho)}\mathring{M}$. The hollow quantum waveguide associated to $\mathring{M}$ is obviously given by $$M := \Bigl\{(x,0,0) + r(x,\varphi) e_r:\ (x,\varphi)\in{\mathbb{R}}\times[0,2\uppi)\Bigr\} = \mathring{M}|_{\varrho=1}\, .$$ Consequently, ${\mathsf{T}}_{(x,\varphi)}M$ is given by $\operatorname{span}\{\tau_x,\tau_\varphi\}$, where $$\begin{aligned}
\tau_x(x,\varphi) &= \tfrac{\partial M}{\partial x}(x,\varphi) = e_x + \tfrac{\partial r}{\partial x} e_r\, , \\
\tau_\varphi(x,\varphi) &= \tfrac{\partial M}{\partial\varphi}(x,\varphi) = \tfrac{\partial r}{\partial \varphi} e_r + r e_\varphi\end{aligned}$$ with $e_\varphi=(0,-\sin\varphi,\cos\varphi)\in{\mathsf{V}}_{(x,\varphi,\varrho)}\mathring{M}$. One easily agrees that $\tau_x$ and $\tau_\varphi$ are orthogonal to $$\tilde{n} = - \tfrac{\partial r}{\partial\varphi} e_\varphi + r e_r - r \tfrac{\partial r}{\partial x} e_x$$ with respect to $\mathring{g}$. Hence, $$n(x,\varphi) := \frac{\tilde{n}}{{\left \lVert \tilde{n} \right \rVert}_{\mathring{g}}} = \underbrace{\frac{- \frac{\partial r}{\partial\varphi} e_\varphi + r e_r}{\sqrt{(\frac{\partial r}{\partial\varphi})^2 + r^2\left[1+(\frac{\partial r}{\partial x})^2\right]}}}_{=:v_n\in{\mathsf{V}}_{(x,\varphi)}M^\bot} + \underbrace{\frac{- r\frac{\partial r}{\partial x} e_x}{\sqrt{(\frac{\partial r}{\partial\varphi})^2 + r^2\left[1+(\frac{\partial r}{\partial x})^2\right]}}}_{=:h_n\in{\mathsf{H}}_{(x,\varphi)}\mathring{M}}$$ is a unit normal vector of $M$ at $(x,\varphi)$ for ${\varepsilon}=1$. Noting that $$g_{\mathring{F}}(v_n,v_n) = \frac{(\frac{\partial r}{\partial\varphi})^2+ r^2}{(\frac{\partial r}{\partial\varphi})^2 + r^2\left[1+(\frac{\partial r}{\partial x})^2\right]}\, ,$$ equation gives $$\begin{aligned}
\gimel(\partial_x) &= -\frac{\mathring{g}^\mathrm{hor}(\partial_x^{{\mathsf{H}}{\mathring{M}}}|_M,h_n)}{g_{\mathring{F}}(v_n,v_n)} \\
&= - \frac{- r\frac{\partial r}{\partial \varphi}}{\sqrt{(\frac{\partial r}{\partial\varphi})^2 + r^2\left[1+(\frac{\partial r}{\partial x})^2\right]}} \left(\frac{(\frac{\partial r}{\partial\varphi})^2+ r^2}{(\frac{\partial r}{\partial\varphi})^2 + r^2\left[1+(\frac{\partial r}{\partial x})^2\right]}\right)^{-1} \\
&= \frac{r \frac{\partial r}{\partial\varphi}\sqrt{(\frac{\partial r}{\partial\varphi})^2 + r^2\left[1+(\frac{\partial r}{\partial x})^2\right]}}{(\frac{\partial r}{\partial \varphi})^2 + r^2}\, .\end{aligned}$$ This yields the following expression for the “horizontal block” of the scaled pullback metric $g^{\varepsilon}$: $$\begin{aligned}
g^{{\varepsilon},\mathrm{hor}}(\partial_x^{{\mathsf{H}}M},\partial_x^{{\mathsf{H}}M}) &= {\varepsilon}^{-2}\, {\mathrm{d}}x^2\bigl(\partial_x^{{\mathsf{H}}{\mathring{M}}}|_M,\partial_x^{{\mathsf{H}}{\mathring{M}}}|_M\bigr) + g_{\mathring{F}}(v_n,v_n) \gimel(\partial_x)\gimel(\partial_x) \\
&= {\varepsilon}^{-2} + \frac{r^2(\frac{\partial r}{\partial x})^2}{(\frac{\partial r}{\partial\varphi})^2 + r^2} \\
&= {\varepsilon}^{-2}\bigl(1 + {\varepsilon}h^{\varepsilon}(\partial_x^{{\mathsf{H}}M},\partial_x^{{\mathsf{H}}M}) \bigr)\end{aligned}$$ with $$h^{\varepsilon}(\partial_x^{{\mathsf{H}}M},\partial_x^{{\mathsf{H}}M}) = {\varepsilon}\frac{r^2(\frac{\partial r}{\partial x})^2}{(\frac{\partial r}{\partial\varphi})^2 + r^2}\, .$$
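The algebra entering this example is elementary but somewhat lengthy. As a consistency check of our own (not part of the original text), the following `sympy` sketch reproduces the horizontal correction $r^2(\tfrac{\partial r}{\partial x})^2/\bigl((\tfrac{\partial r}{\partial\varphi})^2+r^2\bigr)$ for a generic radius function $r(x,\varphi)$.

```python
import sympy as sp

x, phi = sp.symbols('x varphi', real=True)
r = sp.Function('r', positive=True)(x, phi)
r_x, r_phi = sp.diff(r, x), sp.diff(r, phi)

# ambient orthonormal frame along the waveguide
e_x  = sp.Matrix([1, 0, 0])
e_r  = sp.Matrix([0, sp.cos(phi), sp.sin(phi)])
e_ph = sp.Matrix([0, -sp.sin(phi), sp.cos(phi)])

# tangent vectors of M and the unnormalised normal of M inside the massive waveguide
tau_x   = e_x + r_x * e_r
tau_phi = r_phi * e_r + r * e_ph
n_tilde = -r_phi * e_ph + r * e_r - r * r_x * e_x
assert sp.simplify(n_tilde.dot(tau_x)) == 0
assert sp.simplify(n_tilde.dot(tau_phi)) == 0

vn_t = -r_phi * e_ph + r * e_r                       # |n_tilde| * v_n (vertical part)
hn_t = -r * r_x * e_x                                # |n_tilde| * h_n (horizontal part)
N2 = sp.trigsimp(sp.expand(n_tilde.dot(n_tilde)))    # = r_phi**2 + r**2*(1 + r_x**2)

gF_vv = sp.trigsimp(sp.expand(vn_t.dot(vn_t))) / N2  # g_F(v_n, v_n)
gimel = -(e_x.dot(hn_t) / sp.sqrt(N2)) / gF_vv       # gimel(d/dx), since g^hor = dx^2 here

# horizontal correction of g^eps: should equal r^2 r_x^2 / (r_phi^2 + r^2)
print(sp.simplify(gF_vv * gimel**2 - r**2 * r_x**2 / (r_phi**2 + r**2)))   # 0
```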
The Adiabatic Hamiltonian {#sect:H_a hollow}
-------------------------
We now calculate the adiabatic operator for hollow waveguides. Since in this case the fibre is a manifold without boundary, the ground state of $H_F$ is explicitly known: $$\phi_0=\sqrt{\frac{\rho_{\varepsilon}}{{\left \lVert \rho_{\varepsilon}\right \rVert}_1}}=\pi_M^*\mathrm{Vol}(F_x)^{-1/2} + \mathcal{O}({\varepsilon})\, ,$$ where ${\left \lVert \rho_{\varepsilon}\right \rVert}_1(x)$ is the $L^1$-norm of $\rho_{\varepsilon}$ on the fibre $F_x$. Because of this we can express many of the terms appearing in $H_\mathrm{a}$, given in equation , in terms of $\rho_{\varepsilon}$.
Let us begin with the sum of the modified bending potential $\tilde{V}_\mathrm{bend}$ appearing in $H_1$ and the adiabatic potential $V_\mathrm{a}$. First we obtain an expression for the one-form $\bar\eta$ by observing that for any vector field $X$ on $B$ $$0=X \int_{F_x} {\left \lvert \phi_0 \right \rvert}^2 \ {\mathrm{d}}g_F = \int_{F_x} X^{{\mathsf{H}}M} {\left \lvert \phi_0 \right \rvert}^2 \ {\mathrm{d}}g_F - \underbrace{\int_{F_x} {\left \lvert \phi_0 \right \rvert}^2 g_B(X,\pi_{M*}\eta_F)\ {\mathrm{d}}g_F}_{=\bar\eta(X)}\, .$$ So we see that $$\label{eq:bareta}
\bar\eta = \int_{F_x} {\operatorname{P}^{{\mathsf{H}}M}}\bigl({\mathrm{d}}{\lvert \phi_0 \vert}^2\bigr)\ {\mathrm{d}}g_F\, .$$ To start with, the first term of the adiabatic potential can be calculated as in $$\begin{aligned}
\operatorname{tr}_{g_B} (\nabla^B \bar\eta)&\stackrel{\hphantom{(25)}}{=}\operatorname{div}_{g_B} g_B(\bar \eta, \cdot)\\
&\stackrel{\eqref{eq:divgs}}{=} \int_{F_x} \operatorname{div}_{g_\mathrm{s}}\bigl({\operatorname{P}^{{\mathsf{H}}M}}\operatorname{grad}_{g_\mathrm{s}}{\left \lvert \phi_0 \right \rvert}^2\bigr)\ {\mathrm{d}}g_F\\
&\stackrel{\,\,\eqref{eq:LaplaceH}\,\,}{=}\int_{F_x}\Delta_{\mathsf{H}}{\left \lvert \phi_0 \right \rvert}^2 \ {\mathrm{d}}g_F\, .\end{aligned}$$ For the modified bending potential one has, using the shorthand ${\left \lvert F_x \right \rvert}=\mathrm{Vol}(F_x)$, $$\begin{aligned}
{\varepsilon}^{2}\tilde{V}_\mathrm{bend}^\mathrm{a} &:=P_0 \big({V_\mathrm{bend}}- \tfrac{1}{2} \Delta_{\mathsf{V}}(\log \rho_{\varepsilon}) - \tfrac{1}{4} g_F({\mathrm{d}}\log\rho_{\varepsilon},{\mathrm{d}}\log\rho_{\varepsilon})\big)P_0\\
&\hphantom{:}=\frac{{\varepsilon}^2}{2} \int_{F_x} {\left \lvert \phi_0 \right \rvert}^2 \Delta_{\mathsf{H}}(\log\rho_{\varepsilon}) \ {\mathrm{d}}g_F+ \mathcal{O}({\varepsilon}^4)\\
&\hphantom{:}=\frac{{\varepsilon}^2}{2} \int_{F_x} \vert F_x \vert^{-1} (\Delta_{\mathsf{H}}\rho_{\varepsilon}) \ {\mathrm{d}}g_F + \mathcal{O}({\varepsilon}^4)\, .\end{aligned}$$ Note that this expression is of order ${\varepsilon}^3$ since $\rho_{\varepsilon}=1+\mathcal{O}({\varepsilon})$. Hence, bending does not contribute to the leading order of $H_\mathrm{a}$. Now inserting the explicit form of $\phi_0$ an elementary calculation yields $$\label{eq:V_a hollow}
V_\mathrm{a} + \tilde V_{\mathrm{bend}}^\mathrm{a} =
\begin{aligned}[t]
\int_{F_x}& - \tfrac{1}{2}\rho_{\varepsilon}\Delta_{\mathsf{H}}{\left \lVert \rho_{\varepsilon}\right \rVert}_1^{-1}
- \tfrac{1}{2} g_B\bigl(\operatorname{grad}{\left \lvert F_x \right \rvert}^{-1}, \pi_{M*} \operatorname{grad}\rho_{\varepsilon}\bigr) \\
&+ \tfrac{1}{4} \rho_{\varepsilon}{\left \lVert \rho_{\varepsilon}\right \rVert}_1 g_B\bigl(\pi_{M*} \operatorname{grad}{\left \lVert \rho_{\varepsilon}\right \rVert}_1^{-1}, \pi_{M*} \operatorname{grad}{\left \lVert \rho_{\varepsilon}\right \rVert}_1^{-1}\bigr)\ {\mathrm{d}}g_F + \mathcal{O}({\varepsilon}^2)\, .
\end{aligned}$$ With $\rho_{\varepsilon}=1+\mathcal{O}({\varepsilon})$ and $\Vert\rho_{\varepsilon}\Vert_1= \vert F_x \vert + \mathcal{O}({\varepsilon})$ one easily checks that up to order ${\varepsilon}$ this expression equals $$\tfrac{1}{4} g_B\bigl({\mathrm{d}}\log {\left \lvert F_x \right \rvert} , {\mathrm{d}}\log {\left \lvert F_x \right \rvert} \bigr) +
\tfrac{1}{2} \Delta_{g_B}\log {\left \lvert F_x \right \rvert} \, .$$ As far as the remaining terms of $H_1$ are concerned, note that the scaled pullback metric $g^{\varepsilon}$ of the hollow waveguide has the same expansion on horizontal one-forms up to errors of order ${\varepsilon}^4$ as in the case of the massive waveguide , [i.e. ]{}$$g^{\varepsilon}(\pi_M^*{\mathrm{d}}x^i, \pi_M^*{\mathrm{d}}x^j)={\varepsilon}^2 \bigl(g_B^{ij} + 2{\varepsilon}\operatorname{II}(\nu)^{ij} + \mathcal{O}({\varepsilon}^2)\bigr)\, .$$ Hence, we can calculate these terms starting from expression . Since the latter all carry a prefactor ${\varepsilon}^3$, we may replace any $\phi_0$ by ${\left \lvert F_x \right \rvert}^{-1/2}$, obtaining for $\psi\in L^2(B)$ $$\label{eq:diff H_1 hollow}
\int_{F_x} \operatorname{div}_{g_\mathrm{s}} \bigl({\left \lvert \phi_0 \right \rvert}^2 \operatorname{II}(\nu)({\mathrm{d}}\psi, \cdot)\bigr)\ {\mathrm{d}}g_F =
\int_{F_x} \operatorname{div}_{g_\mathrm{s}} \bigl({\left \lvert F_x \right \rvert}^{-1} \operatorname{II}(\nu)({\mathrm{d}}\psi, \cdot)\bigr)\ {\mathrm{d}}g_F + \mathcal{O}({\varepsilon})$$ and $$\begin{aligned}
&\int_{F_x} \phi_0 \operatorname{div}_{g_\mathrm{s}} \bigl(\operatorname{II}(\nu)(\pi_{M*}\operatorname{grad}_{g_\mathrm{s}} \phi_0, \cdot)\bigr)\ {\mathrm{d}}g_F\nonumber \\
&\ =\int_{F_x} {\left \lvert F_x \right \rvert}^{-1/2} \operatorname{div}_{g_\mathrm{s}} \bigl(\operatorname{II}(\nu)\bigl({\mathrm{d}}{\left \lvert F_x \right \rvert}^{-1/2}, \cdot\bigr)\bigr)\ {\mathrm{d}}g_F + \mathcal{O}({\varepsilon})\, . \label{eq:pot H_1 hollow}\end{aligned}$$ As in equation we have $$\int_{F_x} \operatorname{div}_{g_\mathrm{s}} \bigl({\left \lvert F_x \right \rvert}^{-1} \operatorname{II}(\nu)({\mathrm{d}}\psi, \cdot)\bigr)\ {\mathrm{d}}g_F
= \operatorname{div}_{g_B}\int_{F_x} {\left \lvert F_x \right \rvert}^{-1} \operatorname{II}(\nu)({\mathrm{d}}\psi, \cdot)\ {\mathrm{d}}g_F\,.$$ Again, this term vanishes if the barycentre of the fibres $F_x=\partial \mathring{F}_x \subset {\mathsf{N}}_xB$ is zero, that is $$\int_{F_x} \nu \ {\mathrm{d}}g_F=0\,.$$ Since $\lambda_0\equiv 0$, the adiabatic operator is of the form $$\begin{aligned}
H_\mathrm{a}&=-{\varepsilon}^2 \Delta_{g_B} + {\varepsilon}^2 V_\mathrm{a} + {\varepsilon}P_0 H_1 P_0 \nonumber \\
&= -{\varepsilon}^2 \Delta_{g_B} + {\varepsilon}^2\Bigl( \tfrac{1}{4} g_B\bigl({\mathrm{d}}\log {\left \lvert F_x \right \rvert} , {\mathrm{d}}\log {\left \lvert F_x \right \rvert} \bigr) +
\tfrac{1}{2} \Delta_{g_B}\log {\left \lvert F_x \right \rvert}\Bigr) + \mathcal{O}({\varepsilon}^3)\, , \label{eq:Vhollow}\end{aligned}$$ with an error in $\mathcal{L}(W^2(B), L^2(B))$. Thus, the adiabatic operator at leading order is just the Laplacian on the base $B$ plus an effective potential depending solely on the relative change of the volume of the fibres. Going one order further in the approximation, we have $${\varepsilon}^2 V_\mathrm{a} + {\varepsilon}P_0 H_1 P_0 ={\varepsilon}^2\eqref{eq:V_a hollow} + 2{\varepsilon}^3 \eqref{eq:pot H_1 hollow} + 2{\varepsilon}^3\eqref{eq:diff H_1 hollow} + \mathcal{O}({\varepsilon}^4)$$ in the same norm. Hence, $H_\mathrm{a}$ also contains the second order differential operator if the barycentre of $F_x$ is different from zero. Let us also remark that the leading order of the adiabatic potential can also be calculated by applying a unitary transformation $L^2(F, {\mathrm{d}}g_F)\to L^2(F, {\left \lvert F_x \right \rvert}^{-1} {\mathrm{d}}g_F)$ that rescales fibre volume to one, in the spirit of $\mathcal{M}_\rho$ ([cf. ]{}equation ). In this way a similar potential was derived by Kleine [@Kle], in a slightly different context, for a special case with one-dimensional base and without bending.
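To get a feeling for the leading-order effective potential $\tfrac{1}{4} g_B\bigl({\mathrm{d}}\log {\left \lvert F_x \right \rvert} , {\mathrm{d}}\log {\left \lvert F_x \right \rvert} \bigr) + \tfrac{1}{2} \Delta_{g_B}\log {\left \lvert F_x \right \rvert}$, the following short Python sketch (a hypothetical illustration of ours, not taken from the text) evaluates it for a one-dimensional base $B={\mathbb{R}}$ with a prescribed fibre-volume profile ${\left \lvert F_x \right \rvert}$.

```python
import numpy as np

# Hypothetical fibre-volume profile |F_x| along a one-dimensional base B = R:
# the cross-sectional boundary length bulges around x = 0.
x = np.linspace(-10.0, 10.0, 2001)
vol_F = 2.0 * np.pi * (1.0 + 0.3 * np.exp(-x**2))

log_vol = np.log(vol_F)
dlog = np.gradient(log_vol, x)     # (log|F_x|)'
d2log = np.gradient(dlog, x)       # (log|F_x|)'' = Delta_{g_B} log|F_x| for g_B = dx^2

V_eff = 0.25 * dlog**2 + 0.5 * d2log   # leading-order effective potential on the base
print(V_eff.min(), V_eff.max())        # well/barrier structure induced purely by the volume profile
```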
[DNPW84]{}
G. Bouchitt[é]{}, M. L. Mascarenhas and L. Trabucho, *On the curvature and torsion effects in one dimensional waveguides*, ESAIM: Control, Optimisation and Calculus of Variations **13** (2007), no. 4, pp. 793–808.
R. L. Bishop, *There is more than one way to frame a curve*, The American Mathematical Monthly **82** (1975), no. 3, pp. 246–251.
G. Carron, P. Exner and D. Krej[č]{}i[ř]{}[í]{}k, *Topologically nontrivial quantum layers*, Journal of Mathematical Physics **45** (2004), no. 2, pp. 774–784.
R. Da Costa, *Constraints in quantum mechanics*, Physical Review A **25** (1982), no. 6, pp. 2893–2900.
P. Duclos and P. Exner, *Curvature-induced bound states in quantum waveguides in two and three dimensions*, Reviews in Mathematical Physics **7** (1995), no. 1, pp. 73–102.
R. Froese and I. Herbst, *Realizing holonomic constraints in classical and quantum mechanics*, Communications in Mathematical Physics **220** (2001), no. 3, pp. 489–535.
H. Jensen and H. Koppe, *Quantum mechanics with constraints*, Annals of Physics **63** (1971), no. 2, pp. 586–591.
J. Jost, *[R]{}iemannian geometry and geometric analysis*, Universitext, Springer (2005).
R. Kleine, *Discreteness conditions for the Laplacian on complete, non-compact Riemannian manifolds*, Mathematische Zeitschrift **198** (1988), no. 1, pp. 127-141.
D. Krejčiř[í]{}k, *Twisting versus bending in quantum waveguides*, Proceedings of the Symposium on Pure Mathematics **77**, pp. 617-636, American Mathematical Society (2008).
D. Krejčiř[í]{}k and Z. Lu, *Location of the essential spectrum in curved quantum layers*, arXiv preprint arXiv:1211.2541 (2012).
D. Krejčiř[í]{}k and N. Raymond, *Magnetic effects in curved quantum waveguides*, arXiv preprint arXiv:1303.6844 (2013).
D. Krejčiř[í]{}k, N. Raymond and M. Tušek, *The magnetic Laplacian in shrinking tubular neighbourhoods of hypersurfaces.* arXiv preprint arXiv:1303.4753 (2013).
D. Krejčiř[í]{}k and H. Šediv[á]{}kov[á]{}, *The effective Hamiltonian in curved quantum waveguides under mild regularity assumptions*, Reviews in Mathematical Physics **24** (2012), no. 7, 1250018 (39 pages).
J. Lampart, *The adiabatic limit of Schrödinger operators on fibre bundles*, PhD thesis (2013), Eberhard Karls Universit[ä]{}t T[ü]{}bingen.
J. Lampart and S. Teufel, *The adiabatic limit of Schrödinger operators on fibre bundles*, arXiv preprint arXiv:1402.0382 (2014).
C. Lin and Z. Lu, *On the discrete spectrum of generalized quantum tubes*, **31** (2006), no. 10, pp. 1529–1546.
P. Maraner, *A complete perturbative expansion for quantum mechanics with constraints*, Journal of Physics A: Mathematical and General **28** (1995), no. 10, pp. 2939–2951.
K. A. Mitchell, *Gauge fields and extrapotentials in constrained quantum systems*, Physical review A **63** (2001), no. 4, 042112 (20 pages).
G. de Oliveira, *Quantum dynamics of a particle constrained to lie on a surface*, arXiv preprint arXiv:1310.6651 (2013).
C. R. de Oliveira and A. A. Verri, *On the spectrum and weakly effective operator for Dirichlet Laplacian in thin deformed tubes*, Journal of Mathematical Analysis and Applications **381** (2011), no. 1, pp. 454–468.
J. Stockhofe and P. Schmelcher, *Curved quantum waveguides: Nonadiabatic couplings and gauge theoretical structure*, arXiv preprint arXiv:1311.6925 (2013).
S. Teufel, *Adiabatic pertubation theory in quantum dynamics*, Lecture Notes in Mathematics 1821, Springer (2003).
J. Tolar, *On a quantum mechanical d’Alembert principle*, Lecture Notes in Physics 313, Springer (1988).
S. Teufel and J. Wachsmuth, *Effective Hamiltonians for constrained quantum systems*, Memoirs of the American Mathematical Society **230** (2013), no. 1083 (93 pages).
O. Wittich, *A homogenization result for Laplacians on tubular neighbourhoods of closed Riemannian submanifolds*, Habilitation Treatise (2007), Eberhard Karls Universit[ä]{}t T[ü]{}bingen.
|
---
abstract: |
Standard parton distribution function sets do not have rigorously quantified uncertainties. In recent years it has become apparent that these uncertainties play an important role in the interpretation of hadron collider data. In this paper, using the framework of statistical inference, we illustrate a technique that can be used to efficiently propagate the uncertainties to new observables, assess the compatibility of new data with an initial fit, and, in case the compatibility is good, include the new data in the fit.
---
Implications of Hadron Collider Observables on Parton Distribution Function Uncertainties
Walter T. Giele and Stephane Keller\
[*Fermilab, MS 106\
Batavia, IL 60510, USA*]{}
Introduction
============
Current standard sets of Parton Distribution Functions (PDF’s) do not include uncertainties [@r1]. In practice, as long as the PDF’s are used to calculate observables that themselves have large experimental uncertainties, this shortcoming is obviously not a problem. In the past the precision of the hadron collider data was such that there was no ostensible need for PDF uncertainties, as testified by the good agreement between theory and measurements. However, the need for PDF uncertainties became apparent with the measurement of the one jet inclusive transverse energy at the Tevatron [@r2]. At large transverse jet energies the data were significantly above the theoretical prediction, a possible signal for new physics. The deviation was ultimately “fixed” by changing the PDF’s in such a manner that they remained consistent with the observables used to determine them [@r3]. This is a reflection of the significant PDF uncertainties for this observable. Knowing the uncertainties on the PDF’s would have clarified the situation immediately. Note that once data are used in the PDF fit, they cannot be used for other purposes, in particular for setting limits on possible physics beyond the Standard Model; in that case, one should fit the PDF’s and the new physics simultaneously. The technique presented in this paper is well suited for this sort of problem.
The spread between different sets of PDF’s is often associated with PDF uncertainties. Currently, this is what is used for the determination of the PDF uncertainty on the $W$-boson mass at the Tevatron. It is not possible to argue that this spread is an accurate representation of all experimental and theoretical PDF uncertainties. For the next planned high luminosity run at Fermilab, assuming an integrated luminosity of $2\ fb^{-1}$, the expected 40 MeV uncertainty on the $W$-boson mass is dominated by a 30 MeV production model uncertainty. The latter uncertainty itself is dominated by the PDF uncertainty, estimated to be 25 MeV [@r4]. This determination of the PDF uncertainty is currently nothing more than an educated guess. It is made by ruling out existing PDF’s using the lepton charge asymmetry in $W$-boson decay events. The spread of the remaining PDF’s determines the uncertainty on the extracted $W$-boson mass. Because the PDF uncertainty seems to be the dominant source of uncertainty in the determination of the $W$-boson mass, such a procedure must be replaced by a more rigorous quantitative approach. The method described in this paper is well suited for this purpose.
In this paper, using the framework of statistical inference [@r5a; @r5b], we illustrate a method that can be used for many purposes. First of all, it is easy to propagate the PDF uncertainties to a new observable without the need to calculate the derivative of the observable with respect to the different PDF parameters. Secondly, it is straightforward to assess the compatibility of new data with the current fit and determine whether the new data should be included in the fit. Finally, the new data can be included in the fit without redoing the whole fit.
This method is significantly different from more traditional approaches to fit the PDF’s to the data. It is very flexible and, besides solving the problems already mentioned, offers additional advantages. First, the experimental uncertainties and the probability density distributions for the fitted parameters do not have to be Gaussian distributed. However, such a generalization would require a significant increase in computer resources. Second, once a fit has been made to all the data sets, a specific data set can be easily excluded from the fit. Such an option is important in order to be able to investigate the effect of the different data sets. This is particularly useful in the case of incompatible new data. In that case one can easily investigate the origin of the incompatibility. Finally, because it is not necessary to redo a global fit in order to include a new data set, experimenters can include their own new data into the PDF’s during the analysis phase.
The outline for the rest of the paper is as follows. In Sec. \[sect:method\], we describe the inference method. The flexibility and simplicity of the method is illustrated in Sec. \[sect:expanding\], by applying it to the CDF one jet inclusive transverse jet energy distribution [@r2] and the CDF lepton charge asymmetry data [@r7]. In Sec. \[sect:conclusions\] we draw our conclusions and outline future improvements and extensions to our method.
The Method of Inference {#sect:method}
=======================
Statistical inference requires an initial probability density distribution for the PDF parameters. This initial distribution can be rather arbitrary, in particular it can be solely based on theoretical considerations. Once enough experimental data are used to constrain the probability density distribution of the parameters the initial choices become irrelevant [^1]. Obviously, the initial choice does play a role at intermediate stages. The initial distribution can also be the result of a former fit to other data. The data that we will use later in this paper do not constrain the PDF’s enough by themselves to consider using an initial distribution based only on theory. The final answer would depend too much on our initial guess. We therefore decided to use the results of Ref. [@r6]. In this work the probability density distribution was assumed to be Gaussian distributed and was constrained using Deep Inelastic Scattering (DIS) data. All the experimental uncertainties, including correlations, were included in the fit, but no theoretical uncertainties were considered. The fact that no Tevatron data were used allows us to illustrate the method with Tevatron data [^2]. We briefly summarize Ref. [@r6] in the appendix.
In Sec. 2.1 we explain the propagation of the uncertainty to new observables. Sec. 2.2 shows how the compatibility of new data with the PDF can be estimated. Finally, in Sec. 2.3 we demonstrate how the effect of new data can be included in the PDF’s by updating the probability density distribution of the PDF parameters.
Propagation of the uncertainty
------------------------------
We now assume that the PDF’s are parametrized at an initial factorization scale $Q_0$, with $N_{par}$ parameters, $\{\lambda\}\equiv\lambda_1, \lambda_2, \ldots, \lambda_{N_{par}}$ and that the probability density distribution is given by $P_{init}(\lambda)$. Note that $P_{init}(\lambda)$ does not have to be a Gaussian distribution.
By definition $P_{init}(\lambda)$ is normalized to unity, $$\int_V P_{init}(\lambda) d\lambda =1\ ,$$ where the integration is performed over the full multi-dimensional parameter space and $d\lambda\equiv \prod_{i=1}^{N_{par}} d\lambda_i$. To calculate the parameter space integrals we use a Monte-Carlo (MC) integration approach with importance sampling. We generate $N_{pdf}$ random sets of parameters $\{\lam\}$ distributed according to $P_{init}(\lam)$. This choice should minimize the MC uncertainty for most of the integrals we are interested in. For reference we also generate one set at the central values of the $\{\lam\}$, the $\mu_{\{\lam\}}$. The number of parameter sets to be used depends on the quality of the data. The smaller the experimental uncertainty is compared to the PDF uncertainty, the more PDF’s we need. We must ensure that a sufficient fraction of the PDF’s spans the region of interest (i.e. close to the data). For the purposes of this paper, we found that $N_{pdf}=100$ is adequate. Clearly, to each of the $N_{pdf}$ sets of parameters $\{\lam\}$ corresponds a unique set of PDF’s. Each of these PDF sets has to be evolved using the Altarelli-Parisi evolution equations. We used the CTEQ package to do this evolution [@CTEQevol].
We now can evaluate any integral $I$ over the parameter space as a finite sum [@r5b] $$\begin{aligned}
\label{eq:I}
I &=& \int_V f(\lam) P_{init}(\lam) d\lam \nonumber \\
&\approx& \frac{1}{N_{pdf}} \sum_{j=1}^{N_{pdf}} f(\lam^j) \\
&\equiv& \langle f\rangle\ ,\nonumber\end{aligned}$$ where $\lam^j$ is the $j$-th random set of $\{\lam\}$. The function $f$ represents an integrable function of the PDF parameters. The uncertainty on the integral $I$ due to the MC integration is given by $$\label{eq:deltaI}
\delta I= \sqrt{\frac { \langle f^2\rangle - \langle f\rangle^2}{N_{pdf}}}\ .$$
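In practice the finite sum of Eq. \[eq:I\] and the MC uncertainty of Eq. \[eq:deltaI\] require nothing beyond drawing the $N_{pdf}$ parameter sets and averaging. The following Python sketch illustrates the bookkeeping for a hypothetical Gaussian $P_{init}$ with placeholder mean `mu`, covariance `cov` and placeholder function `f`; it is meant only as an illustration, not as an actual PDF parametrization.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Gaussian P_init(lambda) with N_par parameters (placeholder numbers).
N_par, N_pdf = 5, 100
mu = np.zeros(N_par)                          # central values of the parameters
cov = np.diag(np.full(N_par, 0.1**2))         # covariance matrix from the initial fit
lam = rng.multivariate_normal(mu, cov, size=N_pdf)   # the N_pdf random parameter sets

def f(l):
    """Placeholder for any integrable function of the PDF parameters,
    e.g. an observable computed with the PDF set defined by l."""
    return 1.0 + l[0] + 0.5 * l[1]**2

values = np.array([f(l) for l in lam])
I = values.mean()                                         # <f> over the ensemble
delta_I = np.sqrt((np.mean(values**2) - I**2) / N_pdf)    # MC uncertainty on I
print(I, delta_I)
```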
For any quantity, $x(\lambda)$, that depends on the PDF parameters $\{\lambda\}$ (for example an observable, one of the flavor PDF’s or for that matter one of the parameter itself), the theory prediction is given by its average value, $\mu_x$, and its uncertainty, $\sigma_x $ [^3]: $$\begin{aligned}
\label{eq:musig}
\mu_x & = & \int_V x(\lambda) P_{init}(\lam) d\lam
\approx\frac{1}{N_{pdf}}\sum_{j=1}^{N_{pdf}}
x\left(\lambda^j\right)
\nonumber \\
\sigma_x^2 & = & \int_V (x(\lam)-\mu_x)^2 P_{init}(\lam) d\lam
\approx\frac{1}{N_{pdf}}\sum_{j=1}^{N_{pdf}}
\left(x\left(\lambda^j\right)-\mu_x\right)^2\ .\end{aligned}$$ Note that $\mu_x$ is not necessarily equal to the value of $x(\lam)$ evaluated at the central value of the $\{\lam\}$. However, this is how observables are evaluated if one has only access to PDF’s without uncertainties.
Given $y(\lambda)$, another quantity calculable from the $\{\lam\}$, the covariance of $x(\lambda)$ and $y(\lambda)$ is given by the usual expression: $$\begin{aligned}
\label{corunc}
{\rm C}_{xy}&=&\int
\left(x(\lam)-\mu_x\right)\left(y(\lam)-\mu_y\right) P_{init}(\lam) d\lam
\nonumber \\
&\approx&\frac{1}{N_{pdf}}\sum_{j=1}^{N_{pdf}}
\left(x\left(\lambda^j\right)-\mu_x\right)
\left(y\left(\lambda^j\right)-\mu_y\right)\ .\end{aligned}$$ The correlation between $x(\lambda)$ and $y(\lambda)$ is given by ${\rm cor}_{xy} = {\rm C}_{xy}/ (\sigma_x\sigma_y)$. For example, this can be used to calculate the correlation between two experimental observables, between an observable and one of the PDF parameters, or between an observable and a specific flavor PDF at a fixed Bjorken-$x$.
Using Eq. \[eq:deltaI\], the MC uncertainty on the average and (co)variance is given by $$\begin{aligned}
\delta \mu_x &=& \frac{\sigma_x}{\sqrt{N_{pdf}}} \nonumber \\
\delta \sigma_x^2 &=& \sigma_x^2 \sqrt{ \frac{2}{N_{pdf}} } \\
\delta {\rm C}_{xy} &=& {\rm C}_{xy} \sqrt{ \frac{2}{N_{pdf}} }\ .
\nonumber\end{aligned}$$
The MC technique presented in this sub-section gives a simple way to propagate uncertainties to a new observable, without the need to calculate the derivatives of the observable with respect to the parameters.
Compatibility of New Data
-------------------------
We will assume that one or several new experiments, not used in the determination of the initial probability density distribution, have measured a set of $N_{obs}$ observables $\{x^e\}=x^e_1,x^e_2,\ldots,x^e_{N_{obs}}$. The experimental uncertainties, including the systematic uncertainties, are summarized by the $N_{obs}\times N_{obs}$ experimental covariance matrix $C^{exp}$. Note that the correlations between experiments are easily incorporated. Here however, we have to assume that the new experiments are not correlated with any of the experiments used in the determination of $P_{init}$. The probability density distribution of $\{x^e\}$ is given by $$\begin{aligned}
\label{eq:pxe}
P(x^e) &=& \int_V P(x^e|\lam) P_{init}(\lam) d\lam \\
&\approx& \frac{1}{N_{pdf}}\sum_{j=1}^{N_{pdf}} P(x^e|\lam^j)\ ,
\nonumber\end{aligned}$$ where $P(x^e|\lam)$ is the conditional probability density distribution (often referred to as likelihood function). This distribution quantifies the probability of measuring the specific set of experimental values {$x^e$} given the set of PDF parameters $\{\lam\}$. In PDF sets without uncertainties, $P_{init}(\lam)$ is a delta function and $P(x^e)=P(x^e|\lam)$.
Instead of dealing with the probability density distribution of Eq. \[eq:pxe\], one often quotes the confidence level to determine the agreement between the data and the model. The confidence level is defined as the probability that a repeat of the given experiment(s) would observe a worse agreement with the model. The confidence level of {$x^e$} is given by $$\begin{aligned}
{\rm CL}(x^e) &=& \int_V {\rm CL}(x^e|\lam) P_{init} (\lam)d\lam \\
&\approx& \frac{1}{N_{pdf}}\sum_{j=1}^{N_{pdf}}
{\rm CL}(x^e|\lam)\ ,
\nonumber\end{aligned}$$ where ${\rm CL}(x^e|\lam)$ is the confidence level of {$x^e$} given $\{\lam\}$. If ${\rm CL}(x^e)$ is larger than an agreed value, the data are considered consistent with the PDF and can be included in the fit. If it is smaller, the data are inconsistent and we have to determine the source of discrepancy.
For non-Gaussian uncertainties the calculation of the confidence level might be ambiguous. In this paper we assume that the uncertainties are Gaussian. The conditional probability density distribution and confidence level are then given by $$\begin{aligned}
P(x^e|\lam)=P(\chi_{new}^2)
&=& \frac{e^{-\frac{1}{2}\chi^2_{new}(\lambda)}}
{\sqrt{(2\pi)^{N_{obs}}|C^{tot}|}} \\
{\rm CL}(x^e|\lam)={\rm CL}(\chi_{new}^2)
&=& \int_{\chi_{new}^2}^\infty P(\chi^2)\,d\chi^2\ ,\end{aligned}$$ where $$\label{chisqnew}
\chi^2_{new}(\lambda) = \sum_{k,l}^{N_{obs}}
\left(x^e_k-x_k^{t}(\lambda)\right)
M^{tot}_{kl}\left(x^e_l-x_l^{t}(\lambda)\right)\ ,$$ is the chi-squared of the new data. The theory prediction for the $k$-th experimental observable, $x_k^t(\lam)$, is calculated using the PDF set given by the parameters $\{\lam\}$. The matrix $M^{tot}$ is the inverse of the total covariance matrix, $C^{tot}$, which in turn is given by the sum of the experimental, $C^{exp}$, and theoretical, $C^{theor}$, covariance matrix. We assume that there is no correlation between the experimental and theoretical uncertainties. We will use a minimal value of 0.27% on the confidence level, corresponding to a three sigma deviation, as a measure of compatibility of the data with the theory. If the new data are consistent with the theory prediction then the maximum of the distribution of the $\chi^2_{new}$ should be close to $N_{obs}$ (within the expected $\sqrt{2 N_{obs}}$ uncertainty). The standard deviation of $\chi^2_{new}$, $\sigma_{\chi^2_{new}}$, tells us something about the relative size of the PDF uncertainty compared to the size of the data uncertainty. The larger the value of $\sigma_{\chi^2_{new}}$ is compared to $\sqrt{2 N_{obs}}$, the more the data will be useful in constraining the PDF’s.
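Under the Gaussian assumption, $\chi^2_{new}$ and the confidence level can be evaluated as in the sketch below; the inputs (measured values, theory predictions for one PDF set, and the two covariance matrices) are assumed to be available as arrays, and `scipy.stats.chi2.sf` supplies the upper-tail integral of the $\chi^2$ distribution with $N_{obs}$ degrees of freedom.

```python
import numpy as np
from scipy.stats import chi2

def chi2_new(x_exp, x_theory, cov_exp, cov_theor):
    """Chi-squared of the new data for a single PDF set (Eq. chisqnew)."""
    c_tot = np.asarray(cov_exp, float) + np.asarray(cov_theor, float)  # C^tot
    m_tot = np.linalg.inv(c_tot)                                       # M^tot
    r = np.asarray(x_exp, float) - np.asarray(x_theory, float)
    return float(r @ m_tot @ r)

def confidence_level(chi2_value, n_obs):
    """CL(x^e|lambda): probability of a worse agreement (Gaussian case)."""
    return chi2.sf(chi2_value, df=n_obs)
```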
Note that if there are several uncorrelated experiments, the total $\chi^2_{new}$ is equal to the sum of the $\chi^2_{new}$ of the individual experiments and the conditional probability is equal to the product of the individual conditional probabilities.
Effect of new data on the PDF’s
-------------------------------
Once we have decided that the new data are compatible with the initial PDF’s, we can constrain the PDF’s further. We do this within the formalism of statistical inference, using Bayes theorem. The idea is to update the probability density distribution taking into account the new data. This new probability density distribution is in fact the conditional probability density distribution for the $\{\lam\}$ considering the new data {$x^e$} and is given directly by Bayes theorem $$\label{eq:pnew}
P_{new}(\lam)=P(\lam|x^e)=\frac{P(x^e|\lam)\ P_{init}(\lam)}{P(x^e)}\ ,$$ where $P(x^e)$, defined in Eq. \[eq:pxe\], acts as a normalization factor such that $P(\lam|x^e)$ is normalized to one. Because $P_{new}(\lam)$ is normalized to unity, we can replace $P(x^e|\lam)$ in Eq. \[eq:pnew\] simply by $e^{-\frac{\chi^2_{new}(\lam)}{2}}$. This factor acts as a new weight on each of the PDF’s.
We can now replace $P_{init}(\lam)$ by $P_{new}(\lam)$ in the expression for the average, standard deviation and covariance given in Sec. 2.1 and obtain predictions that include the effect of the new data. With the MC integration technique described before, these quantities can be estimated by weighted sums over the $N_{pdf}$ PDF sets $$\begin{aligned}
\label{eq:newmusig}
\mu_x &\approx&\sum_{k=1}^{N_{pdf}} w_k
x\left(\lambda^{(k)}\right) \nonumber \\
\sigma_x^2 &\approx&\sum_{k=1}^{N_{pdf}} w_k
\left(x(\lambda^{(k)})-\mu_x\right)^2 \\
C_{xy}&\approx&\sum_{k=1}^{N_{pdf}} w_k \left(x(\lambda^{(k)})-\mu_x\right)
\left(y(\lambda^{(k)})-\mu_y\right)\ , \nonumber\end{aligned}$$ where the weights are given by $$\label{wgt}
w_k=\frac{ e^{- \frac{1}{2} \chi_{new}^2(\lambda^{k} )}}
{\sum_{l=1}^{N_{pdf}}
e^{-\frac{1}{2} \chi_{new}^2(\lambda^{l}) }}\ .$$ Note that for the calculation of the Monte-Carlo uncertainty of the weighted sums, the correlation between the numerator and denominator in Eq. \[eq:newmusig\] has to be taken into account properly.
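A compact sketch of this reweighting step, assuming `chi2_values[k]` holds $\chi^2_{new}(\lambda^k)$ and `x_vals[k]` holds $x(\lambda^k)$ for the $N_{pdf}$ sets, is given below; subtracting the minimum $\chi^2$ before exponentiating is only a numerical-stability device and cancels in the normalised weights.

```python
import numpy as np

def reweight(chi2_values, x_vals):
    """Weights of Eq. wgt and the updated average and uncertainty (Eq. eq:newmusig)."""
    c2 = np.asarray(chi2_values, dtype=float)
    x = np.asarray(x_vals, dtype=float)
    w = np.exp(-0.5 * (c2 - c2.min()))   # the shift cancels after normalisation
    w /= w.sum()                         # Eq. (wgt)
    mu_x = np.sum(w * x)
    sig_x = np.sqrt(np.sum(w * (x - mu_x)**2))
    return w, mu_x, sig_x
```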
Our strategy is very flexible. Once the theory predictions $x^t_l(\lam)$ using the $N_{pdf}$ PDF sets are known for each of the experiments, it is trivial to include or exclude the effect of one of the experiments on the probability density distribution. If the different experiments are uncorrelated then all that is needed is the $\chi^2_{new}$ of each individual experiment for all the PDF sets. In that case, each experiment is compressed into $N_{pdf}$ $\chi^2_{new}$ values.
One other advantage is that all the needed $x^t_l (\lam)$ can be calculated beforehand in a systematic manner, whereas standard chi-squared or maximum likelihood fits require many evaluations of $x_k^{t}(\lambda)$ during the fit as the parameters are changed in order to find the extremum. These methods are not very flexible, as a new fit is required each time an experiment is added or removed.
The new probability density distribution of the PDF parameters is Gaussian if the following three conditions are met. First, the initial probability density distribution, $P_{init}(\lam)$, must be Gaussian. Second, all the uncertainties on the data points must be Gaussian distributed (that includes systematic and theoretical uncertainties). Finally, the theory predictions, $x_l^{t}(\lam)$, must be linear in $\{\lam\}$ in the region of interest. This last requirement is fulfilled once the PDF uncertainties are small enough. For the studies in this paper all three requirements are fulfilled. The new probability density distribution can therefore be characterized by the average value of the parameters and their covariance matrix, which can be calculated, together with their MC integration uncertainty, using Eq. \[eq:newmusig\]. Once the new values of the average and the covariance matrix have been calculated, a new set of PDF parameters can be generated according to the new distribution and used to make further predictions instead of using the initial set of PDF’s with the weights.
An alternative way to generate a PDF set distributed according to $P_{new}(\lam)$ is to unweight the now weighted initial PDF set. The simplest way to unweight the PDF sets is to use a rejection algorithm. That is, define $w_{max}$ as the largest of the $N_{pdf}$ weights given in Eq. \[wgt\]. Next generate for each PDF set a uniform stochastic number, $r_k$, between zero and one. If the weight $w_k$ is larger than or equal to $r_k\times w_{max}$ we keep PDF set $k$, otherwise it is discarded. The surviving PDF’s are now distributed according to $P_{new}(\lam)$. The number of surviving PDF’s is on average given by $N_{pdf}^{new}=1/w_{max}$. We can now apply all the techniques of the previous sub-sections, using the new unweighted PDF set. The MC integration uncertainties are easily estimated using the expected number of surviving PDF’s. In the extreme case that $w_{max}$ is close to one and only a few PDF’s survive the unweighting procedure, the number of initial PDF’s must be increased. The other extreme occurs when all the weights are approximately equal, i.e. $w_k\sim
1/N_{pdf}$. In that case the new data put hardly any additional constraints on the PDF’s.
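The rejection step described above can be sketched as follows (the random-number seed is arbitrary); the returned indices select the PDF sets that are then distributed according to $P_{new}(\lam)$.

```python
import numpy as np

def unweight(weights, seed=0):
    """Keep PDF set k if w_k >= r_k * w_max; returns the surviving indices.
    For normalised weights the expected number of survivors is 1/w_max."""
    w = np.asarray(weights, dtype=float)
    rng = np.random.default_rng(seed)
    r = rng.uniform(size=w.size)
    keep = w >= r * w.max()
    return np.flatnonzero(keep)
```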
The $\chi^2_{new}$ is only used to calculate the weight of a particular PDF, so that the new probability density distribution of the PDF parameters can be determined. We do not perform a chi-squared fit. However, if the new probability density distribution of the parameters is Gaussian distributed then our method is equivalent to a chi-squared fit. In that case the average value of the parameters corresponds to the maximum of the probability density distribution. The minimum chi-squared can be estimated (with MC uncertainties) from the average $\chi^2_{new}$ calculated with the new probability density distribution. Indeed, by definition this average must be equal to the minimum chi-squared, $\chi^2_{min}$, plus the known number of parameters. Note that the variance of the $\chi^2_{new}$ must itself be equal to twice the number of parameters. To obtain the overall minimum chi-squared, the value of the minimum chi-squared of the initial fit must be added to $\chi^2_{min}$. As long as the confidence level of the new data that were included in the fit is sufficiently high, the overall minimum chi-squared obtained is guaranteed to be in accordance with expectations [^4].
Expanding the PDF sets {#sect:expanding}
======================
The viability of the method described in Sec. 2 is studied using two CDF measurements. In Sec. 3.1 the one jet inclusive transverse energy distribution is considered, while the lepton charge asymmetry in $W$-boson decay is examined in Sec. 3.2. The statistical, systematic, and theoretical uncertainties on the observables will be taken into account.
The one jet inclusive measurement
---------------------------------
The CDF results on the one jet inclusive transverse energy distribution [@r2] demonstrated the weakness of the current standard PDF sets due to the absence of uncertainties on the PDF parameters.
The observables are the inclusive jet cross section at different transverse energies [^5], $E_T^i$ $$x_i=\frac{d\,\sigma}{d\, E_T}(E_T^i)\ .$$ We first have to construct the experimental covariance matrix, $C^{exp}_{ij}$, using the information contained in Ref. [@r2]. The paper lists the statistical uncertainty at the different experimental points, $\Delta_0 (E_T^i)$, together with eight independent sources of systematic uncertainties, $\Delta_k (E_T^i)$. Hence, the experimental measurements, $x_i^e$ are given by $$\label{eq:xie}
x_i^e = x_i^t (\lam)
+ \eta_0^i \Delta_0 (E_T^i)
+ \sum_{k=1}^8 \eta_k \Delta_k (E_T^i)\ ,$$ where as before, $x_i^t (\lam)$ is the theoretical prediction for the observable calculated with the set of parameters $\{\lam\}$. The $\eta_0^i$ and $\eta_k$ are independent random variables normally distributed with zero average and unit standard deviation. Note that some of the systematic uncertainties given in Ref. [@r2] are asymmetric. In those cases we symmetrized the uncertainty using the average deviation from zero. From Eq. \[eq:xie\] we can construct the experimental covariance matrix $$C^{exp}_{ij}=\left(\Delta_0 (E_T^i)\right)^2 \delta_{ij}
+\sum_{k=1}^8\Delta_k (E_T^i) \Delta_k (E_T^j)\ .$$
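Schematically, this covariance matrix can be assembled from the published uncertainties as in the sketch below, where `delta_stat[i]` stands for $\Delta_0(E_T^i)$ and `delta_sys[k][i]` for $\Delta_k(E_T^i)$ (both hypothetical input arrays); the theoretical covariance matrix introduced in the next paragraph can be built in the same way.

```python
import numpy as np

def experimental_covariance(delta_stat, delta_sys):
    """C^exp_ij = Delta_0(E_T^i)^2 delta_ij + sum_k Delta_k(E_T^i) Delta_k(E_T^j)."""
    d0 = np.asarray(delta_stat, dtype=float)                 # shape (n_bins,)
    dk = np.atleast_2d(np.asarray(delta_sys, dtype=float))   # shape (n_sources, n_bins)
    cov = np.diag(d0**2)                                     # uncorrelated statistical part
    for source in dk:
        cov += np.outer(source, source)                      # fully correlated across bins
    return cov
```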
We also need to estimate the theoretical uncertainty. In Eq. \[eq:xie\] no theoretical uncertainties were taken into account. We consider two types of uncertainties: the uncertainty due to the numerical Monte Carlo integration over the final state particle phase space, $\Delta_{MC}(E_T^i)$, and the renormalization/factorization scale, $\mu$, uncertainty, $\Delta_{\mu} (E_T^i)$. The theoretical prediction in Eq. \[eq:xie\] must then be replaced by $$x_i^t (\lam) \rightarrow \frac{d\,\sigma^{NLO}}{d\,E_T}
(E_T^i,\lam,\mu)
+ \eta_{MC}^i \Delta_{MC}(E_T^i)
+ \eta_{\mu} \Delta_{\mu} (E_T^i)\ ,$$ from which we can derive the theoretical covariance matrix $$C^{theor}_{ij}=\left(\Delta_{MC}(E_T^i)\right)^2 \delta_{ij}
+\Delta_{\mu} (E_T^i) \Delta_{\mu} (E_T^j)\ .$$ Here we assume that there is no bin to bin correlation in the MC uncertainty. On the other hand, we take the correlation of the scale uncertainty fully into account. Both $\Delta_{MC}$ and $\Delta_{\mu}$ are evaluated at the central values of the PDF parameters, assuming that the variation is small.
We evaluate the scale uncertainty in a very straightforward manner. As the central prediction the renormalization and factorization scales are taken to be equal to half the transverse energy of the leading jet in the event, $\mu=\frac{1}{2}E_T^{max}$. To estimate the uncertainty we make another theoretical prediction, now choosing the scale $\mu=E_T^{max}$. The “one-sigma” uncertainty is defined as $$\Delta_{\mu}(E_T) =
\frac{d\,\sigma^{NLO}}{d\,E_T}(E_T,\mu_{\lam},\mu=\frac{1}{2}E_T^{max})
- \frac{d\,\sigma^{NLO}}{d\,E_T}(E_T,\mu_{\lam},\mu=E_T^{max})\ .$$ As we will see later in this section the theoretical uncertainties are small compared to the other uncertainties. Therefore this crude estimate suffices for the purposes of this paper. In the future a more detailed study of the theoretical uncertainty is required. The scale uncertainty is often associated with the theoretical uncertainty due to the truncation of the perturbative series. However, it is important to realize this is only a part of the full theoretical uncertainty.
In Fig. \[fig3\]$^a$ we present results for the single inclusive jet cross section as a function of the jet transverse energy. Both data and theoretical predictions are divided by the average prediction of the initial PDF’s. The NLO predictions are calculated using the JETRAD program [@jetrad]. The inner (outer) error bars on the experimental points represent the diagonal part of the experimental (total) covariance matrix. The dotted lines represent the initial one-sigma PDF uncertainties. The solid lines are the theory predictions calculated with the new PDF’s (i.e., the new probability density distribution). The plot is somewhat misleading because of the large point-to-point correlation of the uncertainties. The confidence level of 50% is very high, indicating a good agreement between the prediction and the data.
This leads us to the conclusion that the one jet inclusive transverse energy distribution is statistically in agreement with the NLO theoretical expectation based on the initial probability density distribution of the PDF parameters. No indication of new physics is present. Note that the prediction using the initial PDF differs quite a bit from the more traditional fits such as MRSD0, see the dashed line in Fig. \[fig3\]$^a$. Having no uncertainties on the traditional fits it is hard to draw any quantitative conclusion from this observation. The larger value of the jet cross section calculated using the initial PDF set at high transverse energies compared to MRSD0 was anticipated in Ref. [@r6] and can probably be traced back to the larger $d$ and $u$ quark distribution at the reference scale $Q_0$ and moderate $x\sim 0.2$. This difference in turn was partially attributed to the different way of treating target mass and Fermi motion corrections.
Given the confidence level of 50% the one jet inclusive data can be included in the fit. Using Eq. \[chisqnew\] we calculate for each PDF set $k$ the corresponding $\chi^2_{new}(\lambda^k)$. This gives us the 100 weights $w_k$ (conditional probabilities) defined in Eq. \[wgt\]. Using Eq. \[eq:newmusig\], we can calculate the effects of including the CDF data into the fit. The results are shown in Figs. \[fig3\]$^a$ and \[fig3\]$^b$. As can be seen in Fig. \[fig3\]$^a$ the effect is that the central value is pulled closer to the data and the PDF uncertainty is reduced substantially. Two of the fourteen PDF parameters are affected the most. As expected these are the strong coupling constant $\alpha_S(M_Z)$ and the gluon PDF coefficient $\beta$, which controls the high $x$ behavior (the gluon PDF is proportional to $x^\alpha (1-x)^\beta$ at the initial scale). In Fig. \[fig3\]$^b$ we show the correlation between these two parameters before and after the inclusion of the CDF data. As can be seen the impact on $\beta$ is very significant. Similarly, the uncertainty on $\alpha_s$ is reduced substantially and the correlation between the two parameters is also changed. This indicates that the one jet inclusive transverse energy distribution in itself has a major impact on the uncertainty of $\alpha_s$ and the determination of the gluon PDF. Note that we do not address the issue of the parametrization uncertainty. Other choices of how to parameterize the initial PDF’s will change the results. To obtain a value and uncertainty of $\alpha_S(M_Z)$ which is on the same footing as the one obtained from $e^+e^-$-colliders, one needs to address this issue.
The lepton charge asymmetry measurement
---------------------------------------
\[RyW\]
Our second example is the lepton charge asymmetry in $W$-boson decay at the Tevatron. As already explained, this observable is important for the reduction of the PDF uncertainties in the $W$-boson mass extraction at hadron colliders. The asymmetry is given by $$\label{asym}
A(\eta_e)=\frac{\left(N^+ (\eta_e)-N^- (\eta_e)\right)}
{\left(N^+ (\eta_e)+N^- (\eta_e)\right)}\ ,$$ where $N^+$ and $N^-$ are respectively the number of positrons and electrons at the pseudo-rapidity $\eta_e$.
In Fig. \[fig:asym\]$^a$, we show the preliminary CDF data of run 1$^b$ (solid points) for the asymmetry, along with the NLO predictions (dotted lines) including the PDF uncertainties, relative to the theory average prediction using the initial PDF’s. For the NLO calculations the DYRAD program [@jetrad] was used. The inner error bars on the experimental points are the statistical uncertainties; the systematic uncertainties are small and we can safely neglect them. The outer error bars are the diagonal of the total covariance matrix. In this case, the theoretical uncertainty is dominated by the phase space Monte Carlo integration uncertainty; we took its bin-to-bin correlation into account. Similar to the one jet inclusive transverse energy case, the scale uncertainty is defined by the difference between the theoretical predictions calculated using two scales, $\mu=M_W$ and $\mu=2\times M_W$.
As is clear from Fig. \[fig:asym\]$^a$, there is a good agreement between the data and the NLO prediction, except for the last experimental point at the highest pseudo-rapidity. The confidence level including the last point is well below our threshold of 0.27%. In order to be able to include the data into the PDF fit we decided to simply exclude this data point from our analysis. Without the highest pseudo-rapidity point we obtain a reasonable confidence level of 4%. It is not as good as in the single inclusive jet case even though the plots appear to indicate otherwise. The reason for this is the absence of significant point-to-point correlation for the charge asymmetry uncertainties.
We can now include the lepton charge asymmetry data into the fit by updating the probability density distribution with Bayes theorem, as described in the previous section. In Fig. \[fig:asym\]$^a$ the predictions obtained with the new probability density distribution are shown by the solid lines. As expected, the data are pulling the theory down and reducing the PDF uncertainties.
It is difficult to correlate the change in the asymmetry to a change in a particular PDF parameter. On the other hand, it is well known that the lepton asymmetry can be approximately related to the following asymmetry of the ratio of up quark ($u$) and down quark ($d$) distribution function $$R(y_W)= \frac{\frac{u(x_1)}{d(x_1)}-\frac{u(x_2)}{d(x_2)}}
{\frac{u(x_1)}{d(x_1)}+\frac{u(x_2)}{d(x_2)}}\ .$$ The Bjorken-$x$ are given by $$x_{1,2}=\frac{M_W}{\sqrt{s}} e^{\pm y_W}\ ,$$ where $M_W$ is the mass of the $W$-boson, ${\sqrt{s}}$ the center-of-mass energy of the collider, and $y_W$ the $W$-boson rapidity. The PDF’s were evaluated with the factorization scale equal to $M_W$. The ratio $R(y_W)$ is approximately the $W$-boson asymmetry and obviously is sensitive to the slope of the $u$/$d$ ratio.
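For illustration, the ratio $R(y_W)$ can be evaluated as in the sketch below, assuming a callable `u_over_d(x)` that returns $u(x)/d(x)$ at the factorization scale $M_W$ (a hypothetical stand-in for a PDF interpolation routine); the numerical values of $M_W$ and $\sqrt{s}$ are indicative only.

```python
import numpy as np

M_W = 80.4       # GeV, W-boson mass (approximate)
SQRT_S = 1800.0  # GeV, Tevatron run 1 centre-of-mass energy (approximate)

def r_asymmetry(y_w, u_over_d):
    """Approximate W-boson asymmetry R(y_W) built from the u/d ratio at x_1 and x_2."""
    x1 = M_W / SQRT_S * np.exp(+y_w)
    x2 = M_W / SQRT_S * np.exp(-y_w)
    r1, r2 = u_over_d(x1), u_over_d(x2)
    return (r1 - r2) / (r1 + r2)
```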
In Fig. \[RyW\]$^b$ we show the ratio $R(y_W)$ calculated with both the initial and the new probability density distributions. As can be seen, the change is very similar to the change experienced by the lepton charge asymmetry itself. The change in $R(y_W)$ can be traced to a simultaneous increase in the anti-up quark distribution and decrease in the anti-down quark distribution at low $x$.
Conclusions and Outlook {#sect:conclusions}
=======================
Current standard sets of PDF do not include uncertainties. It is clear that we can not continue to discount them. Already current measurements at the Tevatron have highlighted the importance of these uncertainties for the search of physics beyond the Standard Model. Furthermore, the potential of future hadron colliders to measure $\alpha_s(M_Z)$ and the $W$-boson mass is impressive, but can not be disentangled from PDF uncertainties. The physics at the LHC will also undoubtedly require a good understanding of the PDF uncertainties. On a more general level, if we want to quantitatively test the framework of perturbative QCD over a very large range of parton collision energies the issue of PDF uncertainties can not be sidestepped.
In this paper we have illustrated a method, based on statistical inference, that can be used to easily propagate uncertainties to new observables, assess the compatibility of new data, and, if the agreement is satisfactory, include the effect of the new data on the PDF’s without having to redo the whole fit. The method is versatile and modular: an experiment can be included in or excluded from the new PDF fit without any additional work. The statistical and systematic uncertainties with the full point-to-point correlation matrix can be included as well as the theoretical uncertainties. None of the uncertainties are required to be Gaussian distributed.
One remaining problem is the uncertainty associated with the choice of parametrization of the input PDF. This is a difficult problem that does not have a clear answer yet and will require a compromise between the number of parameters and the smoothness of the PDF. We plan to address this question in another paper. The next phase would then be to obtain a large number of initial PDF sets based on theoretical considerations only, in the spirit of the inference method and Bayes theorem. The DIS and Tevatron data could then be used to constrain the range of these PDF’s, resulting in a set of PDF’s which would include both experimental and theoretical uncertainties.
Appendix: Input PDF {#sect:input}
===================
For our initial PDF parameter probability density distribution we use the results of Ref. [@r6]. There a chi-squared fit was performed to DIS data from BCDMS [@bcdms], NMC [@nmc], H1 [@h1] and ZEUS [@zeus]. Both statistical uncertainties and experimental systematic uncertainties with point-to-point correlations were included in the fit, assuming Gaussian distributions. However, [*no*]{} theoretical uncertainties were considered. It is important to include the correlation of the systematic uncertainties because the latter usually dominate in DIS data. Simply adding them in quadrature to the statistical uncertainty would result in an overestimation of the uncertainty.
A standard parametrization at $Q_0^2=9 \ {\rm GeV}^2$ is used with 14 (=$N_{par}$) parameters: $x d_v$, $x g$, $x \bar{d}$, $x \bar{u}$, and $x s$ are parametrized using the functional form $x^{\lambda_i} (1-x)^{\lambda_j}$ whereas $x u_v$ is parametrized as $x^{\lambda_i}
(1-x)^{\lambda_j} (1+\lambda_k x)$. Here $x$ is the Bjorken-$x$. Parton number and momentum conservation constraints are imposed. The full covariance matrix of the parameters, $C^{init}$, is extracted at the same time as the value of the parameters that minimize the chi-squared. The uncertainties on the parameters were assumed to be Gaussian, such that the fitted values also correspond to the average values of the parameters, $\mu_{\lam_i}$. The probability density distribution is then given by $$\label{eq:chi2init}
P_{init}(\lam) = \frac{e^{-\frac{\chi^2_{init}(\lam)} {2} }}
{\sqrt{(2\pi)^{N_{par}}|C^{init}|}}\ ,$$ where $$\label{chiinit}
\chi^2_{init}(\lam)=\sum_{ij}^{N_{par}}(\lambda_i-\mu_{\lambda_i})
M^{init}_{ij}(\lambda_j-\mu_{\lambda_j})\ ,$$ is the difference between the total chi-squared of the experimental data used in the fit and the minimum chi-squared (1256 for 1061 data points) with the PDF’s fixed by the set of parameters $\{\lam\}$. The matrix $M^{init}$ is the inverse of the covariance matrix $C^{init}$. The $|C^{init}|$ is the determinant of the covariance matrix. All the calculations were done in the $\overline{MS}$-scheme.
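Since $P_{init}(\lam)$ is a multivariate Gaussian characterised by $\mu_{\lam_i}$ and $C^{init}$, the $N_{pdf}$ parameter sets used throughout this paper can be drawn as in the following sketch; the inputs are hypothetical placeholders for the fitted central values and covariance matrix of Ref. [@r6].

```python
import numpy as np

def sample_initial_sets(mu_lambda, cov_init, n_pdf=100, seed=1):
    """Draw n_pdf parameter sets from the Gaussian P_init of Eq. eq:chi2init."""
    rng = np.random.default_rng(seed)
    return rng.multivariate_normal(np.asarray(mu_lambda, dtype=float),
                                   np.asarray(cov_init, dtype=float),
                                   size=n_pdf)   # shape (n_pdf, N_par)
```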
Comparison with MRS and CTEQ sets showed a good overall agreement with a few exceptions. One example is the difference in the gluon distribution function at large values of $x$. The CTEQ and MRS distributions are somewhat above the result of Ref. [@r6]. This difference was attributed to the fact that prompt photon data were included in the CTEQ and MRS fits. Note that the direct photon data have a large scale uncertainty, and it might be misleading to include them in a fit without taking the theoretical uncertainty into account. Also, it is important to keep in mind that it is misleading to compare specific PDF’s, as the correlations between different PDF parameters are large.
[99]{} H. L. Lai et al., Phys. Rev. D55 (1997) 1280;\
A. D. Martin et al., DTP-96-102, Dec 1996, hep-ph/9612449. The CDF collaboration, F. Abe et al., Phys. Rev. Lett. 77 (1996) 438. J. Huston et al., Phys. Rev. Lett. 77 (1996) 444. The CDF II collaboration, F. Abe et al., the CDF II detector technical design report, FERMILAB-Pub-96/390-E. R. M. Barnett et al., Phys. Rev. D54, 1 (1996). Numerical Recipes, W. H. Press et al., Cambridge University Press. The CDF collaboration, Phys. Rev. Lett. 74 (1995) 850;\
R. G. Wagner, for the CDF Collaboration, Published Proceedings 5th International Conference on Physics Beyond the Standard Model, Balholm, Norway, April 29-May 4, 1997. S. Alekhin, IFVE-96-79 (Sep 1996), hep-ph/9611213. The program was obtained from the CTEQ collaboration WWW-site “http://www.phys.psu.edu/$\sim$cteq/”. W. T. Giele, E. W. N. Glover and D. A. Kosower, Nucl. Phys. [**B403**]{} (1993) 633. The BCDMS collaboration, A. C. Benvenuti et al., Phys. Lett. 223B (1989) 485; Phys. Lett 237B (1990) 592. The NM collaboration, M. Arneodo et al., Nucl. Phys. B483 (1997) 3. The H1 collaboration, S. Aid et al., Nucl. Phys. 470B (1996) 3. The ZEUS collaboration, M. Derrick et al., Zeit. Phys. C72 (1996) 399.
[^1]: The standard PDF sets of Ref. [@r1] basically assume that the initial probability density distribution for the parameters is uniform.
[^2]: Recent PDF sets have also included the Tevatron data that we will use, but none of these sets included uncertainties.
[^3]: If the uncertainty distribution is not Gaussian the average and the standard deviation might not properly quantify the distribution.
[^4]: We are assuming that the initial $\chi^2_{min}$ was within expectations.
[^5]: To be more precise, the inclusive jet cross section in different bins of transverse energy. In the numerical results presented here we take the finite binning effects into account.
|
---
abstract: 'We show that by nullifying the short-wave response to the long-wave excitation (local-field-effects), the adiabatic time-dependent density-functional theory (TDDFT) of optics of semiconductors and insulators can be brought into excellent agreement with experiment. This indicates that the wing elements \[($\Gv,\0v)$ and $(\0v,\Gv)$, $\Gv\ne\0v$\] of both the Kohn-Sham (KS) density-response function $\chi^s$ and the exchange-correlation kernel $f^{xc}$ are greatly overestimated by the existing approximations to the static DFT and TDDFT, respectively, to the extent that zero is a better approximation for them than the corresponding values provided by current theories. The head element of $f^{xc}$ is thereby fixed by the static macroscopic dielectric constant $\epsilon_M$. Our method yields accurate optical spectra including both the weakly and strongly bound excitons, while its computational cost is extremely low, since only the head element of the KS response matrix and the static dielectric constant are needed.'
author:
- 'V. U. Nazarov'
- 'S. Kais'
title: 'Optics of semiconductors and insulators: Role of local-field effects revised'
---
It has been known since the works of Adler [@Adler-62] and Wiser [@Wiser-63] that in order to obtain the macroscopic dielectric function $\epsilon_M(\qv,\omega)$ of a crystal, one must invert the microscopic dielectric matrix $\epsilon_{\Gv\Gv'}(\qv,\omega)$ indexed with the reciprocal lattice vectors. Then $$\epsilon_M(\qv,\omega)=\frac{1}{\epsilon^{-1}_{\0v\0v}(\qv,\omega)}.
\label{AW}$$ As a result, in general, $\epsilon_M(\qv,\omega)\ne\epsilon_{\0v\0v}(\qv,\omega)$, which fact is due to the short-wave response to the long-wave perturbation and is usually referred to as the local-field effects (l.f.e.) (see, e.g., Ref. and references therein).
The time-dependent density-functional theory (TDDFT) [@Gross-85], which has become a preferential approach in the studies of dynamic quantum-mechanical processes in general, and in optics, in particular [@Kim-02; @Kim-02-2; @Sottile-03; @Botti-04; @Sharma-11; @Nazarov-11; @Yang-12; @Bates-12; @Yang-13; @Trevisanutto-13], takes full care of l.f.e., representing crystals’ response functions with matrices indexed with reciprocal lattice vectors. The key quantity of TDDFT is the exchange-correlation kernel $f^{xc}$, which, together with the Kohn-Sham (KS) single-particle density-response function $\chi^s$, determine the interacting-particles density-response function $\chi$ through the equality [@Gross-85] $$\chi^{-1}_{\Gv\Gv'}(\qv,\omega)=(\chi^s)^{-1}_{\Gv\Gv'}(\qv,\omega)
-\frac{4\pi}{|\Gv+\qv|^2} \delta_{\Gv\Gv'}- f^{xc}_{\Gv\Gv'}(\qv,\omega).
\label{8}$$ While $\chi^s$ is constructed using the single-particle states obtained with a given approximation to the static exchange-correlation potential $v_{xc}(\rv)$ [@Kohn-65], $f^{xc}$ is a true many-body quantity containing, in principle exactly, all the dynamic exchange-correlation effects in a real interacting system.
A great amount of effort has been invested in the development of approximations to $f^{xc}$ of crystalline semiconductors and insulators [@Kim-02; @Kim-02-2; @Sottile-03; @Botti-04; @Sharma-11; @Nazarov-11; @Yang-12; @Bates-12; @Yang-13; @Trevisanutto-13]. They range from the computationally demanding ones, which provide little gain in efficiency compared with the solution of the Bethe-Salpeter equation [@Albrecht-98], although solidly grounded theoretically [@Kim-02; @Sottile-03], to very practicable [*ad hoc*]{} schemes [@Sharma-11]. In this Letter, which belongs to the latter category, we come up with a simple [*ansatz*]{} which leads to an approximation by far simpler and computationally more efficient than any of the existing approaches. At the same time our method provides very accurate optical spectra of semiconductors and insulators including, in particular, the weakly and strongly bound excitons. Specifically, we nullify the l.f.e., in other words, the contributions from the wing elements of both $\chi^s$ and $f^{xc}$ are set to zero $$\begin{aligned}
&\lim_{\qv\rightarrow 0} \frac{\chi^s_{\Gv\ne\0v,\0v}(\qv,\omega)}{G q} = 0,
\label{chisnull}\\
&\lim_{\qv\rightarrow 0} G q f^{xc}_{\Gv\ne\0v,\0v}(\qv,\omega) = 0.
\label{fxcnull}\end{aligned}$$ From the expression for the microscopic dielectric matrix $$\epsilon^{-1}_{\Gv\Gv'}(\qv,\omega)=\delta_{\Gv\Gv'}+\frac{4\pi}{|\Gv+\qv| |\Gv'+\qv|} \chi_{\Gv\Gv'}(\qv,\omega),
\label{epsMM}$$ equation (\[8\]), and the mathematical fact that the inverse of a matrix with zero wings is a matrix with zero wings, we see that Eqs. (\[chisnull\]) and (\[fxcnull\]) lead to $$\epsilon_{\Gv\ne\0v,\0v}(\qv= \0v,\omega)=0.
\label{epsMM0w}$$ Obviously, any two of the Eqs. (\[chisnull\]), (\[fxcnull\]), and (\[epsMM0w\]) entail the third one. Equations (\[AW\]) - (\[fxcnull\]) also yield $$\lim_{\qv\rightarrow \0v} q^2 f^{xc}_{\0v\0v}(\qv,0)=
\lim_{\qv\rightarrow \0v}
\frac{q^2} {\chi^s_{\0v\0v}(\qv,0)}-\frac{4\pi}{1-\epsilon_M},
\label{fxchead2}$$ where $\epsilon_M$ is the macroscopic static dielectric function [^1]. Making use of the [*adiabatic*]{} TDDFT, we extend Eq. (\[fxchead2\]) to finite frequencies $$\lim_{\qv\rightarrow \0v} q^2 f^{xc}_{\0v\0v}(\qv,\omega)=
\lim_{\qv\rightarrow \0v}
\frac{q^2} {\chi^s_{\0v\0v}(\qv,0)}-\frac{4\pi}{1-\epsilon_M}.
\label{fxchead2w}$$ Since all the matrices involved have zero wings, no body elements are now relevant to the calculation of the macroscopic dielectric function $\epsilon_M(\omega)$, and Eqs. (\[chisnull\]), (\[fxcnull\]), and (\[fxchead2w\]) constitute a closed-form solution as soon as the static $\epsilon_M$ is known. The latter can be found by an independent calculation or taken from experiment.
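Because the wings are set to zero, the whole calculation of $\epsilon_M(\omega)$ reduces to scalar algebra on the head elements. A schematic sketch (not the production implementation), assuming one has tabulated the $\qv\rightarrow \0v$ limit of $\chi^s_{\0v\0v}(\qv,\omega)/q^2$ on a frequency grid whose first point is $\omega=0$, might read:

```python
import numpy as np

def epsilon_macroscopic(chi_s_head_over_q2, eps_m_static):
    """eps_M(omega) in the zero-wing approximation.
    chi_s_head_over_q2 : complex array, q->0 limit of chi^s_00(q, omega)/q^2,
                         with the first entry evaluated at omega = 0.
    eps_m_static       : static macroscopic dielectric constant fixing the head of f^xc."""
    chi_s = np.asarray(chi_s_head_over_q2, dtype=complex)
    # head of the adiabatic kernel, Eq. (fxchead2w): lim q^2 f^xc_00
    fxc_head = 1.0 / chi_s[0].real - 4.0 * np.pi / (1.0 - eps_m_static)
    # scalar Dyson equation, Eq. (8), for the head element (wings set to zero)
    chi_head = 1.0 / (1.0 / chi_s - 4.0 * np.pi - fxc_head)
    eps_inv_head = 1.0 + 4.0 * np.pi * chi_head
    return 1.0 / eps_inv_head     # Adler-Wiser relation, Eq. (AW)
```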
In Fig. \[fig\_chch0\], we present results for the optical absorption of several semiconductors and insulators obtained with the use of Eqs. (\[chisnull\]), (\[fxcnull\]), and (\[fxchead2w\]). Calculations were carried out with the full-potential linear augmented plane-waves (LAPW) code Elk [@Elk]. We use Tran and Blaha’s meta generalized gradient approximation (meta-GGA) (TB09) for the exchange potential, which provides realistic band-gaps [@Tran-09]. For correlations, the local-density approximation (LDA) potential [@Perdew-92] is used. Convergence was achieved with the shifted $32\times 32\times 32$ $\kv$-points grid and the reciprocal vector cut-off $G=12$ bohr$^{-1}$. Clearly, the overall agreement between the theory and experiment is very good: The positions of the excitonic features in the spectra are correct for all the considered materials and their intensity compared with other peaks is mostly accurate too. Figure \[fig\_chch0r\] shows the real part of the dielectric function of the same crystals.
It must be noted that the idea to relate the head element of $f^{xc}$ to the static macroscopic dielectric function dates back to Ref. . Our results, however, show that this idea is quantitatively successful only if the l.f.e. are discarded, otherwise there is no good agreement between theory and experiment [@Botti-04]. Besides, Eq. (\[fxchead2\]) holds only if there are no wing elements, providing a basis for the adiabatic TDDFT within this approach.
In conclusion, we suggest the simplest of the existing approximations in the time-dependent density-functional theory of optics of semiconductors and insulators, i.e., the approximation of zero wing elements of the response matrices. This proves to be remarkably successful in reproducing the experimental optical spectra, including both weakly and strongly bound excitons. The simplicity of the implementation combined with the high accuracy has the potential of making this method a useful theoretical tool in optics and possibly beyond.
V.U.N. acknowledges partial support from National Science Council, Taiwan, Grant No. 100-2112-M-001-025-MY3 and he is grateful for the hospitality of Qatar Energy and Environment Institute, Qatar Foundation, Qatar.
[22]{}ifxundefined \[1\][ ifx[\#1]{} ]{}ifnum \[1\][ \#1firstoftwo secondoftwo ]{}ifx \[1\][ \#1firstoftwo secondoftwo ]{}““\#1””@noop \[0\][secondoftwo]{}sanitize@url \[0\][‘\
12‘\$12 ‘&12‘\#12‘12‘\_12‘%12]{}@startlink\[1\]@endlink\[0\]@bib@innerbibempty @noop [****, ()]{} @noop [****, ()]{} @noop [****, ()]{} @noop [****, ()]{} @noop [****, ()]{} [****, ()](\doibase 10.1103/PhysRevLett.89.096402) [****, ()](\doibase 10.1103/PhysRevLett.91.056402) @noop [****, ()]{} [****, ()](\doibase 10.1103/PhysRevLett.107.186401) [****, ()](\doibase 10.1103/PhysRevLett.107.216402) [****, ()](\doibase 10.1063/1.4730031) [****, ()](\doibase 10.1063/1.4759080) [****, ()](\doibase 10.1103/PhysRevB.87.195204) [****, ()](\doibase 10.1103/PhysRevB.87.205143) @noop [****, ()]{} @noop [****, ()]{} @noop [****, ()](\doibase 10.1103/PhysRevLett.102.226401) [****, ()](\doibase 10.1103/PhysRevB.45.13244) , ed., @noop [**]{} (, , ) [****, ()](\doibase 10.1063/1.362975)
[^1]: In this context, ’static’ means the so called high-frequency dielectric constant $\epsilon_M\equiv \epsilon_\infty=\lim_{\omega\rightarrow 0} \epsilon_M(\qv=\0v,\omega)$.
|
---
author:
- Carlo Abate
- 'Onno R. Pols'
- 'Richard J. Stancliffe'
date: 'Received ...; accepted ...'
title: 'Understanding the orbital periods of CEMP-s stars'
---
Introduction {#intro}
============
A significant proportion of the low-metallicity stars observed in the Galactic Halo are found to have abundances of carbon relative to iron more than ten times larger than in the Sun, that is[^1] $[{\mathrm{C}}/{\mathrm{Fe}}]>1.0$. These so-called carbon-enhanced metal-poor (CEMP) stars are a significant fraction of the metal-poor population of the Halo and their proportion increases with decreasing metallicity, making up more than $20\%$ of all metal-poor stars at $[{\mathrm{Fe}}/{\mathrm{H}}]<-3$ [e.g. @Cohen2005; @Frebel2006; @Lucatello2006; @Carollo2012; @Lee2013; @Yong2013III; @Placco2014]. At $[{\mathrm{Fe}}/{\mathrm{H}}]>-3.5$ the majority of these carbon-rich stars also have strong enhancements in the element barium, which is predominantly produced by the *slow* neutron-capture process. These objects are therefore classified as CEMP-$s$ stars. The main site of nucleosynthesis of $s$-elements is the intershell region of thermally-pulsing asymptotic giant branch (TP-AGB) stars [e.g. @Gallino1998; @Busso1999; @Herwig2005; @Romano2010; @Prantzos2018]. However, the luminosities and surface gravities of observed CEMP-$s$ stars prove that most of these objects, if not all, have not yet reached the AGB phase. On the other hand, the majority of CEMP-$s$ stars are found in binary systems [@Aoki2007; @HansenTT2016-2], suggesting that the origin of the chemical enrichment is mass transfer from a binary companion that was once a TP-AGB star.
Insight into the nature of the mass-transfer mechanism can be gained from the study of the orbital properties of these systems. However, information on orbital periods is hard to come by, particularly for long-period systems for which an accurate orbital solution cannot be achieved without observations spanning many years. [@Lucatello2005-2] were the first to suggest that all CEMP-$s$ stars are members of binary systems, based on a sample of 19 stars with radial-velocity variations. They were able to calculate orbital solutions for ten of these systems. With the exception of HE $0024$–$2523$, which has a very short period of just 3.41 days, all the systems have periods between a few hundred and a few thousand days. With this same data, and additional radial-velocity data from many sources, [@Starkenburg2014] performed a maximum-likelihood analysis and concluded that the binary fraction of the CEMP-$s$ stars is consistent with unity. They also placed a maximum period of around $10^4$ days on such systems, with an average period of around $500$ days. In addition, they showed that CH stars (the higher metallicity analogues of the CEMP-$s$ stars) have a similar, if not tighter, period distribution.
In their analysis of $13$ low-metallicity carbon stars, [@Jorissen2016] provided orbital solutions for an additional four CEMP-$s$ stars, finding periods of $400$–$3,\!000$ days in line with those determined for previous systems. They also noted the similarity of the period range occupied by the CH and CEMP-$s$ stars, and in addition pointed out that the two groups have a similar distribution in period-eccentricity space. Similar orbital properties and a binary fraction consistent with $100\%$ have been determined for barium stars, a class of G and K barium-rich giants at solar metallicity, which are believed to form by the same mass-transfer mechanism as CH and CEMP-$s$ stars [e.g. @BoffinJorissen1988; @McClure1990; @Jorissen1998; @VanderSwaelmen2017].
[@HansenTT2016-2] built a sample of 22 CEMP-$s$ stars, selected only based on their enhanced abundances of carbon and barium, and they monitored the radial velocities of these systems monthly over a period of about $3,\!000$ days. They determined $17$ orbits independently of other work, adding twelve CEMP-$s$ stars to the set of those with known orbital parameters. Because this sample was chosen regardless of any previous detection of radial-velocity variations, it is not expected to be biased towards any period range and it should be representative of the orbital properties of the overall CEMP-$s$ population. [@HansenTT2016-2] found periods between $20$ and $10,\!000$ days for $17$ out of $22$ systems (shown as blue-filled circles in Fig. \[fig:fig5hansen\]). Four stars are apparently single, while one further system exhibits clear radial-velocity variations but it was impossible to determine its orbit (the point at negative eccentricity in Fig. \[fig:fig5hansen\]), indicating that the orbital period is probably very long.
Model predictions from binary population synthesis [@Abate2015-3] show that the observed fraction of CEMP stars among the very metal-poor stars of the SDSS/SEGUE survey can be reproduced at $[{\mathrm{Fe}}/{\mathrm{H}}]\lesssim-2.0$, but only with the contribution of binaries in a wide range of initial periods, up to a few times $10^5$ days. This also yields a wide distribution of current orbits, mostly in the range $10^3$ up to almost $10^6$ days [@Izzard2009; @Abate2013; @Abate2015-3]. In particular, [@Abate2015-3] demonstrated that the orbital-period distribution of the simulated binaries is shifted towards periods longer by a factor of ten (on average) compared to the observed distribution. Because the data available at the time was an inhomogeneous collection of orbital periods from the literature, the authors could not draw any definite conclusions from this comparison. Either the models should produce more CEMP stars in binary systems below a few thousand days, or, alternatively, the observed sample was biased towards short periods and most observed CEMP stars without an orbit determination should have periods longer than $10^4$ days. The comparison with the unbiased sample of [@HansenTT2016-2], which has periods in a range consistent with previous results, has the potential to provide tighter constraints on the simulations, in particular on the modelling of the mass-transfer process.
The nature of mass transfer in binary systems containing AGB stars is not well understood. Because mass transfer by Roche-lobe overflow (RLOF) from stars with deep convective envelopes is in many cases unstable, it is likely an inefficient mechanism of mass transfer. Hence, systems with AGB donors have to be wide to avoid RLOF and the secondary stars accrete material from the AGB winds. Unfortunately, our understanding of wind mass transfer is rather uncertain. In situations where the wind speed is much faster than the orbital speed of the binary, the accretion can be described by the model of Bondi, Hoyle and Lyttelton [@Hoyle1939; @BoHo], which results in low accretion efficiencies. However, AGB stars typically have slow winds of not more than around $15\,{\mathrm{km}}\,{\mathrm{s}}^{-1}$ [@VW93], making the situation much more complex. Much effort has gone into modelling this type of mass transfer [e.g. @Shazrene2007; @deValBorro2009; @ChenZ2017; @ChenZ2018; @deValBorro2017; @Liu2017; @Saladino2018-1], which has led to the discovery of a new mode of mass transfer, dubbed wind Roche-lobe overflow [WRLOF, @Shazrene2007]. This mode relies on the fact that AGB winds require the formation of dust to efficiently accelerate their winds. If the dust formation radius lies sufficiently close to the Roche lobe, then the AGB wind moves very slowly and material can be efficiently transferred to the companion through the inner Lagrangian point, with perhaps up to half the ejected material being accreted [@Shazrene2007; @Abate2013].
![Period-eccentricity diagram of the binary systems detected by @HansenTT2016-2 [blue-filled circles] and of all other known binary CEMP-$s$ stars in the literature [@Suda2008; @Suda2011; @Jorissen2016 black crosses]. []{data-label="fig:fig5hansen"}](Fig-1.pdf){width="49.00000%"}
Coupled with the issue of mass transfer and mass loss from binary systems is the issue of angular-momentum loss and transfer. While the latter is most important for the subsequent evolution of the secondary, angular-momentum loss from the system as a whole will alter the binary orbit. In the Jeans approximation of a spherically symmetric wind, the ejected material has the same specific angular momentum as the orbit of the wind-losing star and consequently the system widens in response to mass loss. However, in the case of a dense AGB outflow with velocity comparable to or lower than the orbital velocity of the binary the situation is certainly more complicated and both observations and hydrodynamical simulations show that the matter is not ejected isotropically (see e.g. [@Karovska2005] for the observations of Mira AB, the prototypical detached binary system with an AGB donor star, and [@deValBorro2009], [@Shazrene2010], [@Liu2017] for the simulations). [@Jahanara2005] and, more recently, [@ChenZ2018] and [@Saladino2018-1] have performed hydrodynamical simulations to determine the amount of angular momentum carried by the ejected material for different binary separations and mass ratios and a variety of assumptions about the input physics, such as the wind acceleration mechanism, chemical composition of the outflowing gas and its cooling efficiency. However, because of the complexity of the problem, a reliable model of how the amount of angular momentum lost by the binary system depends on all these parameters has yet to be developed.
In this work we use our binary population-synthesis model to investigate how different assumptions about the accretion efficiency, angular-momentum loss, stability of Roche-lobe overflow and the initial binary-parameter distributions modify the period distributions of synthetic CEMP-$s$ stars. In particular, we assume that the sample of CEMP-$s$ stars observed by [@HansenTT2016-2] is representative of the overall population and we use the period distribution determined from this sample as a case study to address the following questions:
1. What mechanism is responsible for mass transfer in binary systems and how efficient is it?
2. How much angular momentum is carried away by the material that leaves the binary system?
3. Is there a set of assumptions with which our binary-population-synthesis model reproduces the orbital-period distribution observed in CEMP-$s$ stars?
Models {#model}
======
In this study we use the binary population synthesis code $\texttt{binary\_c/nucsyn}$ [^2] developed by [@Izzard2004; @Izzard2006; @Izzard2009; @Izzard2018]. The starting point of our analysis is the default model set A of [@Abate2015-3], of which we describe the basic properties, assumptions, and selection criteria in Sect. \[sub:popsyn\]. We introduce several modifications in the input physics of this model set and we investigate their consequences for the orbital-period distribution of the synthetic CEMP-$s$ population. These modifications are discussed in Sections \[sub:RLOF\]–\[sub:initial\]. Table \[tab:all\_models\] presents a list of all adopted model sets, together with the CEMP fractions they produce and a few numbers characterising their period distributions (which will be discussed in Sect. \[results\]).
Population-synthesis models {#sub:popsyn}
---------------------------
Following [@Abate2015-3], in our simulated populations of binary stars the primary masses and separations (${M_{1,\mathrm{i}}}$ and ${a_{\mathrm{i}}}$) are logarithmically spaced over the intervals $[0.5, 8.0]\,{M_{\odot}}$ and $[5, 5\times10^6]\,{R_{\odot}}$, respectively, while the secondary masses, ${M_{2,\mathrm{i}}}$, are linearly distributed in $[0.1, 0.9]\,{M_{\odot}}$. Our grid resolution is $N=N_{\mathrm{M}1} \times N_{\mathrm{M}2}\times N_{a}$, with $N_{\mathrm{M}1}=100$, $N_{\mathrm{M}2}=32$, $N_{a}=80$, giving a total grid of $256,\!000$ systems. In most of our simulations we consider circular orbits, although ten observed CEMP-$s$ stars have eccentricities greater than zero. The results of modelling eccentric systems are discussed in Sect. \[sub:ecc\]. The mass of the partial mixing zone, a parameter that determines the abundances of neutron-capture elements synthesised in the AGB phase [@Karakas2010; @Lugaro2012; @Abate2015-1], is set to be equal to ${M_{\mathrm{PMZ}}}=2\times10^{-3}{M_{\odot}}$, which is the default of [@Karakas2010]. This assumption has negligible effects on the total fraction of CEMP stars or the period distribution of synthetic CEMP stars. This is because the populations of simulated CEMP stars are dominated by binary systems with initial primary masses smaller than about $2{M_{\odot}}$ [cf. Fig. 1 of @Abate2015-3], and the amount of $s$-elements produced at these low masses does not depend much on the mass of the partial-mixing zone [@Lugaro2012; @Abate2015-3]. The wind velocity and mass-loss rate on the AGB are computed according to the empirical relations determined by [@VW93]. Wind velocities vary between $5$ and $15~{\mathrm{km\,s}^{-1}}$, as in the paper of [@Abate2013], except in our model set M7 in which we adopt a maximum of $7.5\,{\mathrm{km\,s}^{-1}}$.
The primary masses in our populations are initially distributed according to the solar-neighbourhood initial mass function (IMF) proposed by [@Kroupa1993]. In most of our models the distribution of initial mass ratios $q_{\mathrm{i}}={M_{2,\mathrm{i}}}/{M_{1,\mathrm{i}}}$ is flat in the interval $[0,\,1]$, and the separation distribution is flat in $\log a_i$. The total binary fraction, ${f_{\mathrm{bin}}}$, is assumed to be unity in the considered range of ${a_{\mathrm{i}}}$, $[5, 5\times10^6]\,{R_{\odot}}$. Following the approach of [@Moe2017], these assumptions translate into a constant binary fraction per decade of orbital period, d$f/$d$\log P\equiv {f_{\log P}}=0.11$, because our separation distribution spans a range of $10^9$ days in orbital periods and the sum over all $\log\!P$-bins adds up to the total binary fraction, $\sum_P {f_{\log P}}={f_{\mathrm{bin}}}$. Alternative separation distributions are explored in Sect. \[sub:initial\].
As initial composition we adopt the solar abundance distribution of [@Asplund2009] scaled down to metallicity $10^{-4}$ (that is $[{\mathrm{Fe}}/{\mathrm{H}}]\approx-2.2$). We assume that the transferred material is mixed throughout the accreting star (hereinafter, the *accretor*) to mimic the effect of non-convective mixing processes, such as thermohaline mixing, which is expected to be efficient in low-mass stars [@Stancliffe2007; @Stancliffe2008]. Our assumption will overestimate the dilution effect of these non-convective processes [see e.g. @Matrozis2017-1] but it has a small impact on the final properties of the CEMP population, partly because most of our synthetic CEMP stars have undergone first dredge-up, which efficiently mixes the accreted material anyway [@Abate2015-3].
We evolve our binary systems with these initial conditions and we select the stars that after ten billion years have not yet become white dwarfs. We determine which of these stars are visible with a criterion based on their luminosity following the method described by @Abate2015-3 [Sect. 2.3] with $V$-magnitude limits at $6$ and $16.5$. According to the selection criteria of [@HansenTT2016-2], we flag a star as CEMP-$s$ when its carbon and barium surface abundances are $[{\mathrm{C}}/{\mathrm{Fe}}]>1$ and $[{\mathrm{Ba}}/{\mathrm{Fe}}]>0.5$, respectively.
----------- ----------------------- ----------------------------------- ----------------------------------------------------- ------------------------------------------------------------------------------ ------ ------ ------ ------- -----------
Model set ${Q_{\mathrm{crit}}}$ Wind accretion Angular-momentum ${a_{\mathrm{i}}}$ distribution CEMP K-S
mode loss (%) 2.5% 50% 97.5% $p$-value
M1 H02 WRLOF isotropic wind ${f_{\log P}}=0.11$, ${a_{\mathrm{i}}}/{R_{\odot}}\in[5,5\times10^6]$ 5.4 2.83 4.07 5.13 0.001
M2 CH08 WRLOF isotropic wind ${f_{\log P}}=0.11$, ${a_{\mathrm{i}}}/{R_{\odot}}\in[5,5\times10^6]$ 6.0 2.61 3.93 5.11 0.012
M3 $10^6$ WRLOF isotropic wind ${f_{\log P}}=0.11$, ${a_{\mathrm{i}}}/{R_{\odot}}\in[5,5\times10^6]$ 6.7 2.66 3.81 5.09 0.054
M4 H02 BHL isotropic wind ${f_{\log P}}=0.11$, ${a_{\mathrm{i}}}/{R_{\odot}}\in[5,5\times10^6]$ 4.0 2.84 3.96 4.93 0.002
M5 CH08 BHL isotropic wind ${f_{\log P}}=0.11$, ${a_{\mathrm{i}}}/{R_{\odot}}\in[5,5\times10^6]$ 4.6 2.57 3.84 4.87 0.026
M6 CH08 WRLOF hydro ${f_{\log P}}=0.11$, ${a_{\mathrm{i}}}/{R_{\odot}}\in[5,5\times10^6]$ 5.8 2.20 3.91 5.09 0.032
M7 CH08 WRLOF hydro (${v_{\mathrm{w}}}=7.5{\mathrm{km}}\,s^{-1}$) ${f_{\log P}}=0.11$, ${a_{\mathrm{i}}}/{R_{\odot}}\in[5,5\times10^6]$ 6.2 1.92 3.87 5.39 0.044
M8 CH08 WRLOF BT93 ${f_{\log P}}=0.11$, ${a_{\mathrm{i}}}/{R_{\odot}}\in[5,5\times10^6]$ 5.5 1.65 3.41 4.88 0.773
M9 CH08 WRLOF $\Delta J/J=2~(\Delta M/M)$ ${f_{\log P}}=0.11$, ${a_{\mathrm{i}}}/{R_{\odot}}\in[5,5\times10^6]$ 6.8 2.96 3.92 4.97 0.018
M10 CH08 BHL $\Delta J/J=2~(\Delta M/M)$ ${f_{\log P}}=0.11$, ${a_{\mathrm{i}}}/{R_{\odot}}\in[5,5\times10^6]$ 5.1 2.00 3.63 4.55 0.294
M11 CH08 WRLOF $\Delta J/J=3~(\Delta M/M)$ ${f_{\log P}}=0.11$, ${a_{\mathrm{i}}}/{R_{\odot}}\in[5,5\times10^6]$ 7.7 1.92 3.87 4.92 0.033
M12 CH08 WRLOF $\Delta J/J=6~(\Delta M/M)$ ${f_{\log P}}=0.11$, ${a_{\mathrm{i}}}/{R_{\odot}}\in[5,5\times10^6]$ 10.1 1.75 3.64 4.81 0.276
M13 CH08 WRLOF isotropic wind [@Moe2017] 5.0 2.62 3.99 5.18 0.006
M14 CH08 WRLOF isotropic wind ${f_{\log P}}=0.15$, ${P_{\mathrm{i}}}/{\mathrm{days}}\in[10,2\times10^4]$ 6.5 2.54 3.67 4.44 0.108
M15 CH08 BHL, ${\alpha_{\mathrm{BHL}}}=10$ isotropic wind ${f_{\log P}}=0.11$, ${a_{\mathrm{i}}}/{R_{\odot}}\in[5,5\times10^6]$ 7.2 2.25 4.26 5.54 0.001
M16 CH08 BHL, ${\alpha_{\mathrm{BHL}}}=10$ $\Delta J/J=2~(\Delta M/M)$ ${f_{\log P}}=0.11$, ${a_{\mathrm{i}}}/{R_{\odot}}\in[5,5\times10^6]$ 7.5 2.01 4.06 5.36 0.005
----------- ----------------------- ----------------------------------- ----------------------------------------------------- ------------------------------------------------------------------------------ ------ ------ ------ ------- -----------
Stability of Roche-lobe overflow {#sub:RLOF}
--------------------------------
Roche-lobe overflow from AGB donors is believed to be generally unstable, except in some cases when the donor is less massive than its companion or the mass of the convective envelope is small compared to that of the core [@Hjellming1987]. This is because AGB stars, owing to their large convective envelopes, expand in response to mass loss [@Paczynski1965], whereas the Roche-lobe radius shrinks in response to mass transfer when the donor is more massive than its companion [@Paczynski1965; @Paczynski1971]. Favourable conditions for stable RLOF are rarely met in the formation process of CEMP-$s$ stars. Low-metallicity AGB stars of initial masses above $0.9\,{M_{\odot}}$ efficiently dredge up nuclear-processed material to the surface [@Stancliffe2009; @Karakas2010; @Lugaro2012]. By contrast, their binary companions need to be low in mass, ${M_{2,\mathrm{i}}}\leq 0.85\,{M_{\odot}}$ [@Abate2015-3], otherwise after accreting a few hundredths of a solar mass of material they rapidly evolve and become white dwarfs before $10$–$12$ Gyr, which is approximately the age of the Galactic-halo population. In that case they would not be observable today as CEMP-$s$ stars. Consequently, the mass ratio in most potential progenitors of CEMP-$s$ stars is $M_2/M_1<1$ during the whole evolution and therefore the RLOF is in most cases unstable. The binary system is then believed to evolve into a common-envelope phase [@Paczynski1976] during which there is no significant accretion of material on to the companion star [@RickerTaam08].
Our binary population-synthesis model determines whether RLOF is stable by comparing the mass ratio between the donor and the accretor, $Q={M_{\mathrm{don}}}/{M_{\mathrm{acc}}}$, with a critical value, ${Q_{\mathrm{crit}}}$: if $Q>{Q_{\mathrm{crit}}}$ during RLOF, the system undergoes common-envelope evolution. When the donor star is a giant, ${Q_{\mathrm{crit}}}$ is calculated with Eq. 57 of @Hurley2002 [Sect. 2.6.1] and scales with the fifth power of the ratio between the core and total masses of the donor. When the RLOF process is unstable, we model the common-envelope evolution according to Eqs. 69–78 of @Hurley2002 [Sect. 2.7], in which we assume ${\alpha_{\mathrm{CE}}}=1.0$, ${\lambda_{\mathrm{ion}}}=0.0$, and ${\lambda_{\mathrm{CE}}}$ is computed as in Eq. A.1 of @Claeys2014 [Appendix A].
In recent years, the response of giant stars to mass loss has been investigated by several authors who have argued that stars with convective envelopes may expand less, and less rapidly, than previously derived with simplified models [e.g. @Chen2008; @Ge2010; @Woods2011; @Passy2012-1; @Passy2012-2; @Ge2015; @Pavlovskii2015]. @Chen2008 [tables 1 and 2] provide critical mass ratios for stable RLOF for different primary masses, stellar radii, and mass accretion efficiencies. These values are generally higher than the ${Q_{\mathrm{crit}}}$ implemented in our code, which implies that RLOF from AGB donors may be more stable than, for example, in the simulations of [@Abate2015-3]. Therefore, we use the results of [@Chen2008] to construct a table (R.G. Izzard, priv. comm.) which can be interpolated by our population-synthesis code to determine the stability of RLOF in binary systems with an alternative criterion to that of [@Hurley2002]. Hereinafter we will refer to the former as the “CH08 criterion” of RLOF stability and to the latter as the “H02 criterion”. The CH08 criterion is adopted in most of our model sets (see Table \[tab:all\_models\]).
As we discuss in Sect. \[res:RLOF\], models with more stable RLOF from AGB donors predict a larger number of CEMP-$s$ stars with orbital periods between a few hundred and a few thousand days. To test the maximum possible effect of increased RLOF stability on the period distribution of synthetic CEMP-$s$ stars, in our model set M3 we impose that RLOF from AGB stars is always stable by setting an arbitrarily high ${Q_{\mathrm{crit}}}$ (namely, ${Q_{\mathrm{crit}}}=10^6$).
Accretion efficiency of wind mass transfer {#sub:accretion}
------------------------------------------
Because of the above-mentioned constraints on the stability of the RLOF process, mass transfer from AGB donors and the consequent formation of CEMP-$s$ stars is generally considered to occur by accretion of stellar winds. The efficiency of this process as a function of the masses of the two stars and of the orbital separation is not well understood. Population-synthesis studies often use the Bondi-Hoyle-Lyttleton model [@Hoyle1939; @BoHo; @Bondi1952; @Edgar2004] to determine the accretion efficiency of wind mass transfer. This prescription is adopted in our model sets labelled with BHL in Table \[tab:all\_models\], which compute the wind accretion rate using equation 6 of [@BoffinJorissen1988]: $$\beta_{\mathrm{BHL}} = \frac{{\alpha_{\mathrm{BHL}}}}{2\sqrt{1-e^2}} \cdot
\left(\frac{GM_\mathrm{acc}}{a~v_\mathrm{w}^2}\right)^2~\left[1 + \left(\frac{v_{\mathrm{orb}}}{v_{\mathrm{w}}}\right)^2\right]^{-\frac{3}{2}}
~,~\label{eq:BHL}$$ where $G$ is the gravitational constant, $v_\mathrm{w}$ and $v_\mathrm{orb}$ are the wind and orbital velocities, respectively, $e$ is the eccentricity, and ${\alpha_{\mathrm{BHL}}}$ is a numerical constant between $1$ and $2$ (by default it is equal to $1.5$ in our models).
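For illustration, the short sketch below evaluates Eq. (\[eq:BHL\]) for a circular orbit; the function name, the unit conventions and the example numbers are ours and are not taken from the population-synthesis code.

```python
import numpy as np

G_KM = 1.327e11  # gravitational constant in km^3 s^-2 Msun^-1

def beta_bhl(m_acc, a, v_w, v_orb, e=0.0, alpha_bhl=1.5):
    """BHL wind-accretion efficiency of Boffin & Jorissen (1988, Eq. 6).

    m_acc : accretor mass in Msun
    a     : orbital separation in km
    v_w   : wind velocity in km/s
    v_orb : orbital velocity in km/s
    """
    prefactor = alpha_bhl / (2.0 * np.sqrt(1.0 - e**2))
    focusing = (G_KM * m_acc / (a * v_w**2)) ** 2
    velocity_term = (1.0 + (v_orb / v_w) ** 2) ** (-1.5)
    return prefactor * focusing * velocity_term

# Illustrative numbers: a 0.8 Msun accretor at 5 au from a donor with a
# 15 km/s wind; about 9 per cent of the ejected mass is accreted here.
print(beta_bhl(m_acc=0.8, a=5.0 * 1.496e8, v_w=15.0, v_orb=17.0))
```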
The BHL model is appropriate under the assumption that the orbital velocity of the accretor is much smaller than the wind velocity. AGB winds do not usually fulfil this condition. Their detected velocities vary between approximately $3$ and $30\,{\mathrm{km}}\,{\mathrm{s}}^{-1}$ [e.g. @VW93; @vanLoon2005; @Goldman2017], which are comparable to the orbital velocities of binary stars of total mass in the range $1-3\,{M_{\odot}}$ and periods up to about $30,\!000$ days. In wider systems AGB winds are typically faster than the orbital velocities of the donor stars.
[@Abate2013] use the results of detailed hydrodynamical calculations [@Shazrene2007; @Shazrene2012; @Shazrene2010] to develop a simplified model of wind Roche-lobe overflow (WRLOF), a mode of mass transfer in which it is the slow and dense wind of the donor star, rather than the star itself, that fills the Roche lobe and is transferred on to the binary companion. We refer to [@Abate2013] for a complete description of their WRLOF model and here we summarise the basics. AGB winds are attributed to a combination of stellar pulsations, which create the conditions for dust condensation at some distance ${R_{\mathrm{d}}}$ from the surface of the AGB star, and radiation pressure on dust grains. These are accelerated beyond the escape velocity and, by dynamical collisions with the gas, transfer a net outward momentum to the wind particles [e.g. @Freytag2008; @Nowotny2010; @Bladh2012; @Bladh2015; @Hofner2015]. If ${R_{\mathrm{d}}}$ is greater than or comparable to the Roche-lobe radius ${R_{\mathrm{RL}}}$ of the donor, the AGB wind is slow inside the Roche lobe and is gravitationally focused through the inner Lagrangian point $L_1$ and transferred with high efficiency to the secondary star. The dust-formation radius, ${R_{\mathrm{d}}}$, is a function of the effective temperature of the star and of the condensation temperature of the dust [@Hofner2007]. The latter depends on the chemical composition of the dust and is assumed to be $1500$ K and $1000$ K for carbon- and oxygen-rich dust, respectively [@Hofner2009]. Because the dust composition of low-metallicity AGB stars is complex and often very uncertain [e.g. @Boyer2015III; @Boyer2015II], in our model the condensation temperature, ${T_{\mathrm{cond}}}$, is treated as a free parameter.
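As an illustration of how ${R_{\mathrm{d}}}$ enters the model, the sketch below assumes the grey, power-law circumstellar temperature profile of @Hofner2007, $T(r)=T_{\mathrm{eff}}\left[R_{\star}/(2r)\right]^{2/(4+p)}$, and solves $T({R_{\mathrm{d}}})={T_{\mathrm{cond}}}$ for ${R_{\mathrm{d}}}$; the dust-opacity exponent $p$ and the numbers in the example are assumptions made here for illustration only.

```python
def dust_formation_radius(r_star, t_eff, t_cond=1500.0, p=1.0):
    """Dust-formation radius R_d from T(r) = T_eff * (R_*/(2 r))**(2/(4+p)).

    r_star : stellar radius (R_d is returned in the same unit)
    t_eff  : effective temperature in K
    t_cond : dust condensation temperature in K
    p      : dust-opacity exponent (an assumed, tunable parameter here)
    """
    return 0.5 * r_star * (t_cond / t_eff) ** (-(4.0 + p) / 2.0)

# A 3000 K AGB star with carbon-rich dust (T_cond = 1500 K): dust forms
# at roughly 2.8 stellar radii, to be compared with the Roche-lobe radius.
print(dust_formation_radius(r_star=1.0, t_eff=3000.0))
```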
In their hydrodynamical calculations of WRLOF, [@Shazrene2007] originally adopt ${T_{\mathrm{cond}}}=1000\,$K, which is also the value used by [@Abate2015-3] to maximise the fraction of CEMP stars in their synthetic populations of metal-poor stars. Indeed, a low dust-condensation temperature implies that the range in which WRLOF takes place is shifted towards longer separations compared to the case of a higher ${T_{\mathrm{cond}}}$, because dust forms further away from the star and hence ${R_{\mathrm{d}}}$ is larger. Consequently, the number of binary systems that undergo efficient wind mass transfer at long separations increases and so does the CEMP fraction [^3]. However, a large proportion of these systems are formed at much longer periods than observed [@Abate2015-3]. Following [@Abate2013], here we choose to assume ${T_{\mathrm{cond}}}=1500$ K because after about five thermal pulses our model AGB stars have ${\mathrm{C}}/{\mathrm{O}}>1$ at the surface and hence the dust formed in their outflow is also likely carbon rich.
Angular-momentum-loss model {#sub:AM}
---------------------------
The variation in orbital angular momentum caused by mass loss in a binary system can be parameterised as $$\dot{J} = \eta \left( \dot{M}_{\mathrm{don}}-\dot{M}_{\mathrm{acc}} \right) a^2 {\Omega_{\mathrm{orb}}}~,~\label{eq:jdot}$$ where ${\Omega_{\mathrm{orb}}}$ and $a$ are the orbital angular velocity and separation of the binary, respectively, $\dot{M}_{\mathrm{don}}$ and $\dot{M}_{\mathrm{acc}}$ are the mass-loss and mass-accretion rates of the donor and the accretor, respectively, hence their difference is the total mass lost by the system per unit time, and $\eta$ is a parameter identifying the specific angular momentum carried away by the expelled material per unit mass.
In our model sets M1–M5, M13 and M14, the variations of orbital angular momentum because of wind mass loss are computed according to the Jeans approximation of an isotropic, spherically-symmetric wind, as e.g. in Eq. (4) of [@Abate2013]. This is a valid approximation in the case of fast winds, with velocities much larger than the orbital velocity of the binary. In this approach the specific angular momentum of the ejected material is $${\eta_{\mathrm{iso}}}= \frac{1}{\left(1+Q\right)^2}
~.~\label{eq:jeans}$$ This mode of mass loss always results in expansion of the orbit.
In contrast with the isotropic-wind approximation, a variety of observations [e.g. @Karovska1997; @Castro-Carrizo2002; @Karovska2005; @Karovska2010] and hydrodynamical simulations [e.g. @Theuns1993; @Nagae2004; @Shazrene2007; @ChenZ2017; @deValBorro2017; @Liu2017] show that the geometry of the wind lost in binary systems with AGB donor stars is in many cases not spherical, but focussed into the orbital plane. As a consequence, the angular momentum carried away by the ejected material may be larger than predicted by the Jeans mode. This in turn can cause the binary orbit to shrink rather than expand [@ChenZ2018; @Saladino2018-1]. Taking into account such enhanced angular-momentum loss may therefore help to explain the short orbital periods of CEMP-$s$ binaries and related systems. For example, [@Izzard2010] find that the observed period-eccentricity distribution of barium stars, which are often considered the solar-metallicity analogs of CEMP-$s$ stars, can be reproduced if the ejected material carries away at least two times the average specific orbital angular momentum of the binary system. [@Abate2015-1] reach a similar conclusion in their effort to simultaneously model the chemical composition and the orbital period of $15$ observed CEMP-$s$ stars.
By including the formalism of @Izzard2010 [Eq. 2] into our Eq. (\[eq:jdot\]), we obtain the following $$\dot{J} = \gamma \times \frac{Q}{(1+Q)^2} \left( \dot{M}_{\mathrm{don}}-\dot{M}_{\mathrm{acc}} \right) a^2 {\Omega_{\mathrm{orb}}}~~,
\label{eq:gamma}$$ from which follows the relation $\eta = \gamma \, Q \, (1+Q)^{-2}$. [@Izzard2010] and [@Abate2015-1] adopt $\gamma=2$. In this work, we test different values of the constant $\gamma$, namely $\gamma=2,3,$ and $6$, to qualitatively investigate how strong the angular-momentum loss has to be in order to reproduce the period distribution of CEMP-$s$ stars [^4] . The choice of a constant $\gamma$ in Eq. (\[eq:gamma\]) implies that the specific angular momentum of the ejected material does not depend on the orbital period of the system. Hence, a binary system so wide that the gravitational influence of the companion star on the wind of the donor is negligible loses the same amount of angular momentum as a close binary, if the total ejected mass in the two cases is the same. A step towards a more physical description of the process is to include a dependence of $\eta$ on the orbital properties of the binary system. For this purpose we use the results of the hydrodynamical simulations of [@Jahanara2005], [@ChenZ2018] and [@Saladino2018-1], in which the angular-momentum loss rates from binaries interacting via stellar winds are computed explicitly. [@ChenZ2018] and [@Saladino2018-1] present simulations of low-mass binaries interacting via the winds of their AGB donor stars. Despite the different assumptions made in these studies, the specific orbital angular momentum of matter lost from the system in both studies is very similar when it is expressed as a function of the ratio of the terminal wind velocity and the orbital velocity, ${v_{\mathrm{w}}}/{v_{\mathrm{orb}}}$ [see @Saladino2018-1]. [@Jahanara2005] present more generic simulations of wind mass transfer, of which those labelled as ‘radiatively driven’ are the most applicable to AGB winds. They also find that the specific angular momentum of the ejected matter depends on the ratio of the wind velocity and the orbital velocity.
[@Saladino2018-1] find that the results of all three sets of simulations can be represented fairly well by a simple relation, $${\eta_{\mathrm{hydro}}}= {\eta_{\mathrm{iso}}}+ \frac{1.2 - {\eta_{\mathrm{iso}}}}{1 + (2.2 {v_{\mathrm{w}}}/{v_{\mathrm{orb}}})^3}~~,
\label{eq:hydro}$$ where ${v_{\mathrm{w}}}$ and ${v_{\mathrm{orb}}}$ are the wind and orbital velocities, respectively, and ${\eta_{\mathrm{iso}}}$ is the specific angular momentum for isotropic mass loss given by Eq. (\[eq:jeans\]). The second term in Eq. (\[eq:hydro\]) gives ${\eta_{\mathrm{hydro}}}$ a dependence on the separation through ${v_{\mathrm{orb}}}$. In binary systems with very long orbital periods the ratio ${v_{\mathrm{w}}}/{v_{\mathrm{orb}}}$ is large, consequently the first term of Eq. (\[eq:hydro\]) dominates and the angular momentum lost by the system is the same as in the isotropic-wind model. By contrast, for shorter orbital separations the ratio ${v_{\mathrm{w}}}/{v_{\mathrm{orb}}}$ decreases (because ${v_{\mathrm{w}}}$ is constant while ${v_{\mathrm{orb}}}$ increases) and consequently the contribution of the second term in Eq. (\[eq:hydro\]) becomes stronger.
![Specific angular momentum, $\eta$ (in units of $a^2\Omega_{\rm{orb}}$), as a function of the orbital period of binary systems with fixed primary mass, ${M_{\mathrm{don}}}=0.9{M_{\odot}}$, and mass ratios $Q=1.05, 2$ (left and right panels, respectively). The solid, dashed, dot-dashed and dotted lines show, respectively, the profiles of $\eta$ as computed with Eqs. (\[eq:jeans\]), (\[eq:gamma\]) with $\gamma=2$, (\[eq:hydro\]) and (\[eq:BT93\]).[]{data-label="fig:eta-vs-P"}]({Fig-2}.pdf){width="50.00000%"}
Alternatively, we can use the results of [@Brookshaw1993] who studied the angular-momentum loss from binary systems in which the wind is modelled by ballistic calculations of test particles. These calculations ignore the effects caused by gas pressure and radiative acceleration. This is permissible for fast winds, but it is a poor representation of slow and dense AGB winds. Because both these phenomena will tend to make the outflow more isotropic, the results of this ballistic study can be taken to give an upper limit to the amount of angular momentum lost by a stellar wind. We fit the results of [@Brookshaw1993] as a function of the mass ratio and ${v_{\mathrm{w}}}/{v_{\mathrm{orb}}}$ (see Appendix \[appendixA\]), similarly to Eq. (\[eq:hydro\]): $${\eta_{\mathrm{BT93}}}= \mathrm{max}\left\{
~{\eta_{\mathrm{iso}}}~, ~
\cfrac{1.7}{1 + [(0.6 + 0.02 Q) \cdot {v_{\mathrm{w}}}/{v_{\mathrm{orb}}}]^6}~
\right\}~~.
\label{eq:BT93}$$ With Eq. (\[eq:BT93\]) wide binary systems evolve as for an isotropic outflow, whereas in closer binaries the angular momentum lost is significantly higher. The transition between these two regimes is considerably steeper than in Eq. (\[eq:hydro\]), as shown by Fig. \[fig:eta-vs-P\], in which we plot the specific angular momentum of the ejected material for different models and mass ratios ($Q=1.05$ and $2$ in the left and right panels, respectively) as a function of the orbital period in binary systems with primary mass ${M_{\mathrm{don}}}=0.9{M_{\odot}}$. Figure \[fig:eta-vs-P\] also shows that in the models with constant $\gamma$ and with an isotropic wind, $\eta$ does not depend on the orbital period and consequently the variations of angular momentum are only determined by the total mass that is lost by the system.
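To make the comparison in Fig. \[fig:eta-vs-P\] concrete, the minimal sketch below evaluates the four prescriptions for $\eta$ described in this section as a function of the mass ratio $Q$ and of the velocity ratio ${v_{\mathrm{w}}}/{v_{\mathrm{orb}}}$; the function names are ours and the final numbers are purely illustrative.

```python
def eta_iso(q):
    """Isotropic (Jeans) wind: eta = 1/(1+Q)^2."""
    return 1.0 / (1.0 + q) ** 2

def eta_gamma(q, gamma=2.0):
    """Constant-gamma prescription: eta = gamma * Q/(1+Q)^2."""
    return gamma * q / (1.0 + q) ** 2

def eta_hydro(q, vw_over_vorb):
    """Fit to the hydrodynamical simulations (Saladino et al. 2018)."""
    e0 = eta_iso(q)
    return e0 + (1.2 - e0) / (1.0 + (2.2 * vw_over_vorb) ** 3)

def eta_bt93(q, vw_over_vorb):
    """Fit to the ballistic results of Brookshaw & Tavani (1993)."""
    return max(eta_iso(q),
               1.7 / (1.0 + ((0.6 + 0.02 * q) * vw_over_vorb) ** 6))

# Example: Q = 2 (as in the right panel of Fig. 2) and a wind twice as
# fast as the orbital motion.
q, x = 2.0, 2.0
print(eta_iso(q), eta_gamma(q), eta_hydro(q, x), eta_bt93(q, x))
```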
The angular-momentum loss rates given by Eqs. (\[eq:jeans\])–(\[eq:BT93\]) represent only the angular-momentum loss from the *orbit*, and do not include a possible contribution from the loss of rotational angular momentum in the wind of the AGB donor star. The latter is accounted for separately in our code. If the spin of the mass-losing star is tidally locked to the orbit, this rotational angular-momentum loss is effectively also taken out of the orbit. This is important in binaries that are close enough for tidal friction to occur on a timescale shorter than the mass-loss timescale, and can result in additional orbital shrinkage. In our binary population-synthesis code, tidal friction and angular-momentum transfer between the stars and the orbit are calculated explicitly [@Hurley2002] and the effect of spin angular-momentum loss on the evolution of the orbit is thus taken into account as well.
Initial distribution of orbital periods and separations {#sub:initial}
-------------------------------------------------------
![Initial period distributions adopted in our models. The vertical axis represents the binary fraction in each period bin, i.e. ${f_{\log P}}$. The black-solid line represents our default initial distribution of separations, which is flat in $\log_{10} {a_{\mathrm{i}}}$ between $5{R_{\odot}}$ and $5\times10^6{R_{\odot}}$, with ${f_{\log P}}=0.11$ over this interval. The brown-dashed line shows the prescription of [@Moe2017], adopted in set M13, for a $1.0{M_{\odot}}$ primary and mass ratio ${M_{2,\mathrm{i}}}/{M_{1,\mathrm{i}}}>0.1$. The shape and maximum of the distribution of [@Moe2017] depend on the primary mass and mass ratio. The green-dotted line shows a period distribution flat in $\log_{10} {P_{\mathrm{i}}}$ and with ${f_{\log P}}=0.15$ in the period range $[10,2\times10^4]$ days, which we adopt in our model set M14. The integral of each curve represents the total binary fraction over the period interval. []{data-label="fig:Pini"}]({Fig-3}.pdf){width="50.00000%"}
In our model sets it is assumed by default that the initial distribution of separations is flat in $\log {a_{\mathrm{i}}}$ over the range $[5,\,5\times10^6]\,{R_{\odot}}$, with a constant binary fraction per decade of orbital period, ${f_{\log P}}$, of approximately $0.11$ in this interval. This choice has the advantage of being easy to implement and to compare with previous results of population synthesis studies. Furthermore, it is broadly consistent with the observed orbital separations of binary systems in the young stellar association Scorpius OB2 [@Kouwenhoven2007] and with the data of @Moe2017 [Fig. 37] for the orbits of A/late-B-type binaries with primary masses in the range $2$–$5{M_{\odot}}$. However, solar-type stars with masses between $0.8$ and $1.2{M_{\odot}}$, which are the most frequent primary masses of the progenitor systems of our synthetic CEMP-$s$ stars, have a rather different orbital-period distribution and a lower overall binary frequency [@Raghavan2010; @Moe2017]. In addition, by default we assume in our models that the initial distributions of period and mass ratios are independent, hence the joint probability of forming a binary system with initial period $P$ and mass ratio $Q$, $p(P, Q)$, is the product of the individual probabilities $p(P)$ and $p(Q)$. By contrast, [@Moe2017] find that the period and mass-ratio distributions of observed binary stars are not separable, but closely interconnected. [@Moe2017] determine a set of equations to calculate the joint probability function of forming a binary system with primary mass $M_1$, mass ratio $Q$, and orbital period $P$. We implement this set of equations in our model set M13.
Anticipating the results of Sect. \[results\], our simulations in general form CEMP-$s$ stars at much longer orbital periods than observed, unless angular momentum is removed from the system with extremely high efficiency. For the sake of comparison, in our model set M14 we assume that all binary systems are formed with orbital periods in the range $[10,2\times10^4]$ days, which is approximately the range in which the CEMP-$s$ stars are observed by [@HansenTT2016-2], and that the total binary fraction is $50\%$ over this interval. Although we have little information about the initial binary and orbital properties of very metal-poor Halo stars, in particular for relatively wide binaries, the resulting value of ${f_{\log P}}=0.15$ is in approximate agreement with the results of [@Gao2014] for metal-poor F/G/K stars in binary systems with $P \la 1,\!000$ d. Figure \[fig:Pini\] shows the three different initial period distributions adopted in our simulations.
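As a sketch of the default assumption described above, the snippet below draws initial separations that are flat in $\log_{10}{a_{\mathrm{i}}}$ between $5$ and $5\times10^6\,{R_{\odot}}$ and converts them to orbital periods with Kepler's third law; the component masses and the sample size are arbitrary illustrative choices and not values used in our model sets.

```python
import numpy as np

def sample_initial_periods(n, m1, m2, a_min=5.0, a_max=5.0e6, seed=0):
    """Draw separations flat in log10(a_i/Rsun) and convert to periods in days.

    Kepler's third law in convenient units: (P/yr)^2 = (a/au)^3 / (M_tot/Msun),
    with 1 au ~ 215 Rsun.
    """
    rng = np.random.default_rng(seed)
    log_a = rng.uniform(np.log10(a_min), np.log10(a_max), n)
    a_au = 10.0 ** log_a / 215.032
    p_yr = np.sqrt(a_au ** 3 / (m1 + m2))
    return p_yr * 365.25

# Example: 1.0 + 0.6 Msun binaries; the resulting distribution is flat in log P.
periods = sample_initial_periods(100_000, m1=1.0, m2=0.6)
print(np.percentile(np.log10(periods), [2.5, 50.0, 97.5]))
```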
Detection probability of the orbits. {#sub:detect}
------------------------------------
In order to compare our simulations to the observed distribution of systems, we need to take into account the likelihood that a given synthetic binary would be detected by the observing campaign of [@HansenTT2016-2], whose strategy consisted of taking observations roughly every $30$ days for around $3,\!000$ days. We compute this likelihood using a Monte Carlo method. For a given set of system parameters (primary and secondary masses, orbital period and eccentricity), we randomly select an angle between the orbit’s major axis and the line of nodes, a value for the cosine of the inclination of the orbital plane of the binary to the plane of the sky [^5], and a starting point in the orbit. We compute the line-of-sight velocity at this starting point and then every $30$ days for $3,\!000$ days, recording the maximum and minimum velocity. The difference between these is compared to the threshold radial-velocity amplitude of the observations, and if it is above this threshold, the system is deemed to have been detected. In their study, [@HansenTT2016-2] achieve a $0.1{\mathrm{km\,s}^{-1}}$ precision in their radial-velocity measurements, and we consider this value to be our detection threshold, ${K_{\mathrm{min}}}$.
![(a) Period distributions of synthetic CEMP-$s$ stars computed with our model set M2 and with different adopted detection thresholds for radial-velocity variations. The black-dashed line shows the period distribution of all the CEMP-$s$ stars in our simulation. The blue-dot-dashed and orange-dotted lines show the period distributions of the simulated CEMP-$s$ stars with detection thresholds of $K_{\mathrm{min}}=0.1$ and $0.5\,{\mathrm{km}}\,{\mathrm{s}}^{-1}$, respectively. (b) Same as in panel (a) for synthetic CEMP-$s$ stars that would be detected as singles with the above thresholds. (c) The cumulative orbital-period distributions corresponding to the models in panel (a) are compared with the observed distribution calculated with the data of @HansenTT2016-2 [grey-solid line]. []{data-label="fig:detectability"}]({Fig-4}.pdf){width="50.00000%"}
We repeat this for $10^5$ choices of the orientation of the major axis, the inclination, and starting point in the orbit. The detection probability is then given by the number of systems that exceed the detection threshold, divided by the total number of iterations. We compute detection probabilities for a grid of potential binaries. To limit the number of systems we need to compute, we first determine the relevant parameter range of systems, as explained below. We then interpolate in this grid to find the detection probability of any CEMP-$s$ system returned by the population-synthesis calculations.
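A minimal sketch of this Monte Carlo procedure for circular orbits is given below: it draws random inclinations (uniform in $\cos i$) and orbital phases, samples the radial-velocity curve of the primary at the assumed $30$-day cadence, and returns the fraction of realisations whose peak-to-peak amplitude exceeds ${K_{\mathrm{min}}}$. All names and the example system are ours, and the sampling is deliberately simplified with respect to the procedure described above.

```python
import numpy as np

G_KM = 1.327e11  # gravitational constant in km^3 s^-2 Msun^-1

def detection_probability(m1, m2, p_days, k_min=0.1, n_mc=10_000,
                          baseline_days=3000.0, cadence_days=30.0, seed=1):
    """Fraction of random orientations and phases for which the peak-to-peak
    radial velocity, sampled every `cadence_days` over `baseline_days`,
    exceeds `k_min` (km/s).  Circular orbits only; the text uses 10^5 draws,
    a smaller default keeps this sketch light."""
    rng = np.random.default_rng(seed)
    p_sec = p_days * 86400.0
    a_km = (G_KM * (m1 + m2) * p_sec**2 / (4.0 * np.pi**2)) ** (1.0 / 3.0)
    k_edge_on = 2.0 * np.pi * a_km * m2 / (m1 + m2) / p_sec  # semi-amplitude at i = 90 deg

    epochs = np.arange(0.0, baseline_days + 1.0, cadence_days)    # days
    sin_i = np.sqrt(1.0 - rng.uniform(0.0, 1.0, n_mc) ** 2)       # isotropic in cos i
    phase0 = rng.uniform(0.0, 2.0 * np.pi, n_mc)

    v_r = k_edge_on * sin_i[:, None] * np.cos(
        2.0 * np.pi * epochs[None, :] / p_days + phase0[:, None])
    amplitude = v_r.max(axis=1) - v_r.min(axis=1)
    return float(np.mean(amplitude > k_min))

# Example: a 0.8 Msun CEMP-s star with a 0.6 Msun white-dwarf companion
# in a 20,000-day circular orbit.
print(detection_probability(0.8, 0.6, 2.0e4))
```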
In the binary systems of our simulated CEMP-$s$ population, the primary is the carbon-rich star which would be observed. Its mass should be higher than $0.5{M_{\odot}}$, because otherwise its luminosity would be so low ($L_{\star}\lesssim 0.08{L_{\odot}}$) that the $V$-magnitude would typically not satisfy our selection criteria (see Sect. \[sub:popsyn\]), and lower than about $0.95{M_{\odot}}$, because otherwise the star would have become a white dwarf before ten billion years [see e.g. Fig. 7 of @Abate2015-3]. The secondary star is a white dwarf whose mass depends on the initial progenitor mass and in our simulations varies mostly in the range $[0.5,0.8]{M_{\odot}}$. CEMP-$s$ systems at periods longer than $10^6$ days are not found in our simulations. For binary systems with periods shorter than $3,\!000$ days the full radial-velocity curve would be sampled by the observing campaign of [@HansenTT2016-2]. Such a binary would still go undetected if observed close to face-on ($i\lesssim 1^{\circ}$ for a detection limit of ${K_{\mathrm{min}}}= 0.1~{\mathrm{km\,s}^{-1}}$), but the probability of such an unfavourable inclination is less than about $1\%$. We therefore assume a detection probability of $100\%$ for $P < 3,\!000$ d. In conclusion, our grid of detection probabilities covers total system masses in the range $[0.7,2.0]{M_{\odot}}$, secondary masses in the range $[0.5,1.0]{M_{\odot}}$, and orbital periods in the range $[10^3,10^6]\,{\mathrm{days}}$. Because all our synthetic systems are circular, we do not account for the detectability of eccentric orbits (but see the discussion in Sects. \[sub:obs\] and \[sub:ecc\]).
Figure \[fig:detectability\] illustrates the effect of different radial-velocity detection thresholds on the period distribution of the synthetic CEMP-$s$ systems computed with our default model set M2. In the top panel of Fig. \[fig:detectability\], the black-dashed line shows the differential period distribution of all our synthetic CEMP-$s$ stars. The blue-dot-dashed and orange-dotted lines show the CEMP-$s$ stars that would be detected as binary systems with threshold radial-velocity amplitudes of $0.1$ and $0.5\,{\mathrm{km}}\,{\mathrm{s}}^{-1}$, respectively. Figure \[fig:detectability\]b shows the period distributions of CEMP-$s$ systems that would be detected as single stars with the observation strategy and thresholds described above. In Fig. \[fig:detectability\]c the same three models as in panel (a) are compared to the observed cumulative period distribution (grey-solid line). Only $18$ out of the $22$ CEMP-$s$ stars observed by [@HansenTT2016-2] ($\approx 82\%$ of the sample) are confirmed binaries, $17$ of which have a determined period while the binary with an as yet undetermined period is tentatively plotted at $P = 15,\!000$d in Fig. \[fig:detectability\]. We make the assumption that the other four stars also belong to binary systems but have periods too long to be detected (indicatively ${P_{\mathrm{orb}}}>15,\!000$ days). As expected, the proportion of CEMP-$s$ stars detected as binary systems in our simulations decreases with increasing radial-velocity threshold ${K_{\mathrm{min}}}$. A binary fraction among simulated CEMP-$s$ stars consistent with the observations is found adopting ${K_{\mathrm{min}}}=0.5\,{\mathrm{km}}\,{\mathrm{s}}^{-1}$ in our model set M2. Nonetheless, to be consistent with the precision achieved in the work of [@HansenTT2016-2] we adopt ${K_{\mathrm{min}}}=0.1{\mathrm{km}}\,{\mathrm{s}}^{-1}$.
Results
=======
We evolve a population of very metal-poor binary stars for each set of initial assumptions described in the previous sections. We select the CEMP-$s$ stars and we calculate the orbital-period distribution for these systems, which we subsequently compare with the observations of [@HansenTT2016-2]. Columns 7–9 of Table \[tab:all\_models\] characterise the resulting period distribution of each model set by providing the logarithmic orbital periods at $2.5$, $50$ and $97.5$ percentiles of the synthetic CEMP-$s$ population (i.e. the orbital period at which the cumulative period distribution is equal to $0.025$, $0.5$ and $0.975$, respectively).
For each model set, we perform a Kolmogorov-Smirnov (K-S) test to evaluate the likelihood that the observed period distribution is drawn from the corresponding synthetic distribution. Column 10 in Table \[tab:all\_models\] shows the resulting *p*-values [^6], which give an indication of the relative goodness-of-fit of each model. We note that eleven of our model sets have *p*-values less than $0.05$, which is the threshold often used as a criterion to reject a model with statistical significance. While this result suggests that most of our models are incompatible with the observed period distribution, in the following we will discuss in detail how different model assumptions modify the theoretical orbital-period distributions and why many of these fail to reproduce the data. All figures in this section consist of two panels, as in Fig. \[fig:detectability\]. In the top panel we show the differential period distributions predicted with our models, before accounting for the detection probability of the orbit. The results are normalised such that the integral of each curve is equal to the total CEMP fraction computed with that model, ${f_{\mathrm{C}}}$, which is reported in the top-right corner of the plot and in Table \[tab:all\_models\]. The bottom panels show the corresponding cumulative period distributions, after applying a radial-velocity detection threshold of ${K_{\mathrm{min}}}=0.1{\mathrm{km}}\,{\mathrm{s}}^{-1}$. These are compared to the observed cumulative period distribution, shown as a thick, solid grey line. Each cumulative distribution is normalised to the total CEMP-$s$ population, either observed or modelled, such that the value at $P = 10^6$ days corresponds to the detected (or detectable) binary fraction in the population. Our default model set M2 is always shown as a reference with a black-solid line.
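As an illustration of such a test, the sketch below applies the two-sample K-S statistic from `scipy.stats.ks_2samp` to placeholder arrays of $\log_{10}$ periods; this is only a sketch of the statistic itself, not necessarily the exact procedure used to compute the values in Table \[tab:all\_models\].

```python
import numpy as np
from scipy import stats

# Placeholder samples of log10(P/days): a small set standing in for the
# observed CEMP-s periods and a large set standing in for a model population.
rng = np.random.default_rng(42)
log_p_observed = rng.normal(loc=3.2, scale=0.5, size=17)
log_p_model = rng.normal(loc=3.9, scale=0.6, size=20_000)

# Two-sample K-S test: `pvalue` is the probability of a maximum distance
# between the two cumulative distributions at least as large as `statistic`
# if both samples were drawn from the same parent distribution.
result = stats.ks_2samp(log_p_observed, log_p_model)
print(f"D = {result.statistic:.3f}, p = {result.pvalue:.3f}")
```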
Changes in the stability criterion of Roche-lobe overflow {#res:RLOF}
---------------------------------------------------------
Figure \[fig:qcrit\] shows the period distributions of the model sets M1, M2 and M3, with different stability criteria for RLOF from AGB donors. As expected, if the value of ${Q_{\mathrm{crit}}}$ increases, meaning that RLOF is stable for a larger range of binary systems, the number of CEMP stars with periods between approximately a few hundred and a few thousand days is increased. This is roughly the interval at which the primary stars avoid filling their Roche lobes on the first giant branch so that the RLOF phase can occur when the donors have reached the AGB. In this period range we find systems in which the secondary star accretes enough material to become carbon-enriched through stable RLOF. This is best seen in the top panel of Fig. \[fig:qcrit\], where the distributions of sets M2 and M3 have a peak at periods between about $300$ and $1,\!000$ days, as a result of the increased stability of RLOF.
The proportion of CEMP stars with periods less than $2,\!500$ days is $15\%$ and $24\%$ in model sets M1 and M2, respectively, whereas it is $32\%$ in set M3 in which RLOF is always stable. This implies that at least two thirds of our synthetic CEMP stars, most realistically more, are formed by accretion of stellar winds. Were RLOF the only efficient mechanism to transfer material in low-mass binary stars, the fraction of CEMP stars in our metal-poor population would therefore be at most $2\%$, even if RLOF is always stable, that is, at least a factor of three lower than the fraction determined from the observed SDSS/SEGUE sample ($\approx~\!6.1\%$ for stars with $[{\mathrm{Fe}}/{\mathrm{H}}]\approx-2.0$, [@Lee2013]).
In addition, Fig. \[fig:qcrit\] shows that at any value of the cumulative fraction the synthetic distributions overestimate the observed periods approximately by a factor between $2$ and $10$. The mismatch is also reflected in the small $p$-values in Table \[tab:all\_models\]. While set M3 has a marginally acceptable $p$-value of $0.05$, this model lacks physical realism. Furthermore, a large proportion of simulated CEMP stars have periods above $15,\!000$ days, most of which should be detectable with the observing strategy of [@HansenTT2016-2]. In their sample, only one CEMP-$s$ star out of 22 (about 5%) is detected as a binary with an as yet undetermined period that is presumably at least $15,\!000$ days, while four stars (about 18% of the sample) have apparently constant radial velocity. By contrast, the proportion of detectable synthetic CEMP binaries with periods above $15,\!000$ days is about $26\%$ in model set M3, and it is $29\%$ and $32\%$ in the more realistic sets M2 and M1, respectively. The expected fraction of undetected binaries, with radial-velocity amplitude less than $0.1\,{\mathrm{km\,s}^{-1}}$, is only about $7$% for all three models. This confirms that, regardless of the assumptions about RLOF stability, a significant fraction of systems form at very long separations. We therefore conclude that, while a better understanding of the RLOF process is necessary to reproduce the proportion of observed CEMP-$s$ systems with periods up to a few thousand days, it is not sufficient to solve the discrepancy between models and observations. To reproduce at the same time the fraction of observed CEMP-$s$ stars and their period distribution it is necessary to correctly account for wind mass transfer.
![As Fig. \[fig:detectability\] for models with different RLOF stability criteria. Model set M1 adopting the H02 criterion is shown by the green-dashed line. Model set M2 (thin, black-solid line) uses the CH08 criterion. Model set M3 with ${Q_{\mathrm{crit}}}=10^6$ (i.e. RLOF from AGB stars is always stable) is indicated by the magenta-dotted line. The top panel shows the entire simulated populations, while in the bottom panel only CEMP-$s$ stars detectable as binary systems are shown (${K_{\mathrm{min}}}=0.1{\mathrm{km}}\,{\mathrm{s}}^{-1}$ is adopted). []{data-label="fig:qcrit"}]({Fig-5}.pdf){width="50.00000%"}
Varying the accretion efficiency of wind mass transfer {#wind-accretion-efficiency-model}
------------------------------------------------------
![As Fig. \[fig:qcrit\] for model sets M2 and M5 (dashed-blue line) in which the wind accretion efficiency is computed with the WRLOF and BHL prescriptions, respectively. []{data-label="fig:wind1"}]({Fig-6}.pdf){width="50.00000%"}
Figure \[fig:wind1\] compares the period distributions of model sets M2 and M5 (dashed-blue line), in which the wind accretion efficiency is computed with the WRLOF and BHL models, respectively. The WRLOF model predicts higher accretion efficiencies than the BHL prescription over a large range of separations, also in wide systems [@Abate2013]. Consequently set M2 produces CEMP stars at longer periods than set M5. However, the BHL model set M5 is only marginally closer to reproducing the observed period distribution than the WRLOF model. Furthermore, this comes at the expense of a predicted CEMP fraction of $4.6\%$, which underestimates the results of the SDSS/SEGUE survey [$\approx 6.1\%$ @Lee2013], and a predicted fraction of undetected binaries of only about $4\%$, even lower than in model set M2.
Changes in the adopted angular-momentum loss {#amloss}
--------------------------------------------
![As Fig. \[fig:qcrit\] for different angular-momentum-loss prescriptions. Model set M2 assumes isotropic wind mass loss. Model sets M9, M11, and M12 (red-dotted, orange-dot-dashed and violet-dashed lines) are computed with Eq. (\[eq:gamma\]) and $\gamma=2$, $3$, and $6$, respectively. []{data-label="fig:amloss1"}]({Fig-7}.pdf){width="50.00000%"}
Figures \[fig:amloss1\] and \[fig:amloss2\] show the period distributions obtained with model sets adopting different assumptions about the angular-momentum loss. In all models shown, the WRLOF prescription of wind accretion efficiency and the CH08 criterion of RLOF stability are used. Model sets M9, M11 and M12 in Fig. \[fig:amloss1\] are computed using $\gamma=2,3,6$ in Eq. (\[eq:gamma\]), whereas set M2 assumes the wind is expelled isotropically by the binary system. The CEMP fraction increases with increasing $\gamma$ because the enhanced angular-momentum loss causes the binary systems to shrink more and therefore the range of separations at which the stars can interact is larger. Also the number of systems that produce a CEMP star after experiencing a common envelope increases with $\gamma$. Many of these had a relatively large initial separation, and hence the accretor had the time to accrete material and become carbon-rich before the onset of unstable RLOF. These systems appear at $P \lesssim 400$days in Fig. \[fig:amloss1\].
For $\gamma=2$ and $3$, the increased angular-momentum loss compared to the isotropic wind model does not correspond to a significant shift of the period distributions towards shorter periods. To understand this result it is convenient to subdivide the entire range of orbital periods of the synthetic CEMP populations into smaller intervals and subsequently compare the initial periods of the progenitor binary systems which, with different model sets, end up in the same interval. This exercise shows, for example, that model sets M2, M9 and M11 form the same proportion (approximately $36$–$38\%$) of CEMP stars with orbital periods between $10^3$ and $10^4$ days, but the progenitor binary systems in the three model sets had different initial-period ranges. In the isotropic-wind assumption (set M2) CEMP stars come from systems that had initial periods in the interval $1000$–$8000\,{\mathrm{days}}$. Using Eq. (\[eq:gamma\]) with $\gamma=2$ (set M9) the progenitor binary systems had initial periods mostly between $2000$ and $20,\!000$ days, with a tail up to about $10^5$ days. With model set M11 ($\gamma=3$), the initial periods of these CEMP stars span between about $2000$ and $50,\!000$ days, with a tail up to a few hundred thousand days.
With model set M12 the cumulative period distribution of CEMP stars is roughly consistent with the observations (*p*=0.28, see also Fig. \[fig:amloss1\]) because of the combined effect of a strong angular momentum loss by stellar winds and the increased number of systems undergoing a common envelope. It should be remembered, however, that the assumption of a constant $\gamma$ is not supported by physical arguments and it is unrealistic. In fact, it implies that the specific angular momentum expelled by the binary system does not depend on the masses of the stars and their distance, which is at odds with the results of hydrodynamical simulations [e.g. @Jahanara2005; @ChenZ2018; @Saladino2018-1]. Considering for example a $1{M_{\odot}}$ primary star and a $0.6{M_{\odot}}$ companion in a $10^5$-day orbit, with $\gamma=6$ the ejected material has a specific angular momentum ten times higher than in the isotropic-wind approximation, despite the fact that at wide separations the outflow from the donor star is expected to be essentially spherical [e.g. @Shazrene2011; @Shazrene2012].
![As Fig. \[fig:amloss1\] with model sets M6 (dashed line) and M7 (dot-dashed line), in which the angular-momentum loss is computed with Eq. (\[eq:hydro\]) and ${v_{\mathrm{w}}}=15$ and $7.5{\mathrm{km}}\,{\mathrm{s}}^{-1}$, respectively, and set M8 (dotted line), in which Eq. (\[eq:BT93\]) is adopted. []{data-label="fig:amloss2"}]({Fig-8}.pdf){width="50.00000%"}
The distributions computed with the orbit-dependent angular-momentum loss prescriptions of model sets M6, M7 and M8 are shown in Fig. \[fig:amloss2\] with dashed, dot-dashed, and dotted lines, respectively. Model M6 uses Eq. (\[eq:hydro\]) based on hydrodynamical simulations from the literature and predicts an increased proportion of CEMP-$s$ stars at periods shorter than about $2,\!500$ days compared to default set M2, at the expense of systems with periods between about $2,\!500$ and $30,\!000$ days (see top panel of Fig. \[fig:amloss2\]). Binaries with $P \lesssim 30,\!000$d lose more angular momentum than in the isotropic-wind model and thus evolve into closer orbits. This results in a larger fraction of CEMP-$s$ stars formed through stable RLOF (with $300\,\mathrm{d} \lesssim P \lesssim 2,\!500$ days) as well as systems experiencing a common envelope after accreting enough material to become CEMP-$s$ stars (ending up with $P \lesssim 300$d). With model sets M2 and M6 the proportion of CEMP-$s$ stars with periods up to $2,\!500$ days is $24\%$ and $31\%$, respectively. At periods longer than about $10,\!000$ days the cumulative distributions of M2 and M6 are essentially identical, and the mismatch with the observations discussed in Sect. \[res:RLOF\] therefore persists ($p<$0.05, see Table \[tab:all\_models\]).
A critical parameter in Eq. (\[eq:hydro\]) is the terminal velocity of AGB winds, ${v_{\mathrm{w}}}$, because a lower wind velocity implies that, for the same orbital period, a larger amount of specific angular momentum is carried away by the ejected material. This parameter is uncertain, as observed wind velocities range between a few and a few tens of ${\mathrm{km}}\,{\mathrm{s}}^{-1}$ [e.g. @VW93; @Danilovich2015; @Goldman2017]. For the sake of comparison, in model set M7 we assume ${v_{\mathrm{w}}}=7.5{\mathrm{km\,s}^{-1}}$, which is half of our default value and consistent with the lowest velocity detected by [@Goldman2017] among high-luminosity, high-mass-loss stars in the Large Magellanic Cloud. Consequently, in set M7 the proportion of CEMP-$s$ stars with periods below $2,\!500$ days increases even more, to about $34\%$. A low ${v_{\mathrm{w}}}$ also affects the number of CEMP-$s$ stars in very wide orbits ($P \gtrsim 50,\!000\,{\mathrm{days}}$). These systems do not enter the WRLOF regime and consequently the mass-transfer efficiency is calculated according to the BHL prescription [Eq. 6 of @BoffinJorissen1988], which is proportional to ${v_{\mathrm{w}}}^{-4}$ (for ${v_{\mathrm{orb}}}\le{v_{\mathrm{w}}}$). With a low ${v_{\mathrm{w}}}$, even very wide systems accrete enough material to form a carbon-rich star. As a consequence, about $15\%$ of the detectable synthetic CEMP-$s$ stars have periods longer than $50,\!000\,{\mathrm{days}}$ ($4\%$ have $P>10^5{\mathrm{days}}$), whereas the corresponding fraction is only about $10\%$ in model set M6 ($\approx\!2\%$ at $P>10^5{\mathrm{days}}$). Thus, while the correspondence with observations improves somewhat at the short-period end of the distribution, it becomes worse at the long-period end.
The results of model set M8, based on the ballistic simulations of [@Brookshaw1993], are roughly consistent with the observed cumulative distribution up to about $10^4$ days, although the fraction of undetectable binaries is much smaller than the observed fraction of apparently single CEMP-s stars. This is not surprising because Eq. (\[eq:BT93\]) predicts large angular-momentum loss for systems with ${v_{\mathrm{w}}}\le{v_{\mathrm{orb}}}$, much larger than both the isotropic-wind model and the hydrodynamics-based prescription up to periods of about $50,\!000$ days. However, as we mentioned in Sect. \[sub:AM\], it should be kept in mind that this model overestimates the amount of angular momentum carried by the ejected winds and consequently the effect on the period distribution of CEMP stars, because it ignores the effects caused by gas pressure and radiative acceleration. Nevertheless, model set M8 is a useful test case to estimate how much angular momentum binary systems would need to lose in order to reconcile the results of our simulations with the observed CEMP-$s$ population.
We note that in model sets M6 and M8 the total fraction of CEMP stars ($5.8\%$ and $5.5\%$, respectively) is somewhat lower than in set M2 ($6.0\%$). This is because some binary systems that are just wide enough to avoid unstable RLOF and form CEMP stars in the isotropic-wind case, which widens their orbits, instead become tighter with the larger angular-momentum loss of model sets M6 and M8. Many of these undergo common-envelope evolution before sufficient chemical pollution of the accretor has taken place. Unlike in the models with constant $\gamma$, this is not compensated by very wide systems evolving into close enough orbits to become CEMP stars.
In conclusion, Eq. (\[eq:hydro\]) used in sets M6 and M7, which derives from the results of hydrodynamical simulations, is at present the only prescription for orbital angular-momentum loss with a physical basis. The evidence that, despite the uncertainty on ${v_{\mathrm{w}}}$, these model sets do not reproduce the observed period distribution very well suggests that other physical aspects in our models may have to be reconsidered.
Changes in the range of initial periods {#initial_distribution}
---------------------------------------
\[vary-ai\]
![As Fig. \[fig:qcrit\] for different distributions of initial separations and periods. By default ${a_{\mathrm{i}}}/{R_{\odot}}$ is in $[5,5\times10^6]$ (black solid line). The dashed line is computed using the initial $\log P$–distribution of @Moe2017 [Eq. 23]. In model set M14 ${P_{\mathrm{i}}}$ is log-flat between $10$ and $20,\!000$ days (dotted line). []{data-label="fig:ai"}]({Fig-9}.pdf){width="50.00000%"}
Our model sets M13 and M14 adopt the same assumptions as the default set M2 except for the initial distribution of orbital periods and mass ratios. In model M13 we implement the set of fitting equations to observed binary stars proposed by [@Moe2017]. These equations result in a quasi-flat initial-period distribution for low-mass primary stars, with a very wide peak between approximately $10^4$ and $10^6$ days, and in a combined distribution of periods and mass ratios, $q=Q^{-1}=M_2/M_1$, which favours small mass ratios, especially in wide orbits. The efficiency of WRLOF decreases with $q$ in the prescription of [@Abate2013], while it is higher for relatively long-period systems around $\approx\!10^4$ days. As a result, these two effects compensate one another and the period distribution of CEMP-$s$ stars with model set M13 is very similar to that of our default set M2, although with a slightly smaller proportion of systems with $P<10^4\,{\mathrm{days}}$ and a decreased overall CEMP fraction.
Our model set M14 has a flat $\log P$–distribution with ${f_{\log P}}=~0.15$ between $10$ and $20,\!000$ days. These assumptions result in a period distribution which resembles the observations more closely, in particular at the long-period end. By construction, only a small fraction of synthetic CEMP-$s$ stars have periods in excess of $20,\!000$ days, as in the sample of [@HansenTT2016-2]. However, this model also predicts that essentially all CEMP-$s$ stars should have detectable radial-velocity variations when monitored with the same strategy and sensitivity as the study of [@HansenTT2016-2], which appears at odds with their finding that four out of $22$ observed CEMP-$s$ stars are apparently single. The *p*-value of $0.11$ nevertheless suggests that we cannot reject this model on statistical grounds. We note, however, that the K-S test is relatively insensitive to differences occurring far from the median of the distribution, as is the case here.
Discussion
==========
On the observed binary fraction and orbital periods. {#sub:obs}
----------------------------------------------------
We have assumed that all CEMP-$s$ stars are formed in binary systems, a hypothesis borne out by several theoretical and observational studies [@Lucatello2005-2; @Aoki2007; @Bisterzo2012; @Lugaro2012; @Starkenburg2014]. However, four stars in the sample of [@HansenTT2016-2] do not exhibit radial-velocity variations consistent with orbital motion. Based on the sensitivity of their study, the authors exclude as highly unlikely that all four CEMP-$s$ stars are in binary systems observed face-on, and therefore they conclude that these are single stars. However, they did not consider the possibility that these may be binaries with periods much longer than $10^4$ days.
In our simulations we use a Monte Carlo method to account for the likelihood of our synthetic binary systems to be detected with the observing strategy of [@HansenTT2016-2]. In their study, the precision achieved in the velocity measurements is about $0.1{\mathrm{km\,s}^{-1}}$. Accordingly, we have assumed a sensitivity to radial-velocity variations of ${K_{\mathrm{min}}}=0.1{\mathrm{km}}\,{\mathrm{s}}^{-1}$, and we find a fraction of undetectable binaries which ranges between $2$ and $10\%$ (in model sets M14 and M7, respectively; it is $\approx\!7\%$ in our default set M2). This means that, in a sample of 22 CEMP-$s$ stars, we expect between 0.4 and 2.2 undetected binaries, rather than the four that are observed.
We note that the time span of observations for stars in the Hansen sample is often less than the 3000 days we have assumed for computing the detection probability. For two of the four constant-radial-velocity stars it is indeed much shorter: about $1000$ days for HE $0206$–$1916$ and $800$ days for HE $1045$+$0226$. It cannot be excluded that evidence for orbital motion would have been detected in these stars if the radial-velocity measurements had lasted for the nominal $3000$ days. In addition, for star HE $1045$+$0226$ both the errors and the spread in the observed radial-velocity values are substantially larger than $0.1\,{\mathrm{km\,s}^{-1}}$, and an overall downward trend in radial velocities with an amplitude several times $0.1\,{\mathrm{km\,s}^{-1}}$ is compatible with the data (Fig. 1 of Hansen et al.). Several other stars in the sample also have radial-velocity measurement errors in excess of $0.1\,{\mathrm{km\,s}^{-1}}$. This suggests that both our assumed time span of $3000$ days and detection threshold of $0.1\,{\mathrm{km\,s}^{-1}}$ are too conservative for the sample as a whole. In fact, if we assume a time-span of radial-velocity monitoring of $1,\!000$ days in our simulations, the predicted fractions of undetected binaries approximately double. In our default set M2 we expect about $16\%$ of undetected CEMP-$s$ systems, in rough agreement with the observed sample (see Fig. \[fig:baseline\] in Appendix \[appendixB\]). In addition, as we show in Sect. \[sub:detect\], if we adopt ${K_{\mathrm{min}}}=0.5~{\mathrm{km\,s}^{-1}}$ in our default set M2 about $20\%$ of the CEMP-$s$ stars would not be detected as binaries, approximately as in the observed sample. The same is true for most of our simulations, with the exception of sets M8, M12 and M14, which would require an even higher detection threshold in order to produce approximately $18\%$ of undetected binaries (${K_{\mathrm{min}}}=1.0~{\mathrm{km\,s}^{-1}}$ in set M12, ${K_{\mathrm{min}}}=1.5~{\mathrm{km\,s}^{-1}}$ in sets M8 and M14).
Perhaps harder to reconcile with our models than the non-detection of radial-velocity variations in four stars is the paucity of *confirmed* binaries with orbital periods between $10^4$ and $10^5$ days. The Hansen sample probably contains one such very wide binary system, HE $0959$–$1424$, for which it was not possible to determine the orbital solution but which exhibits velocity variations of about $2~{\mathrm{km\,s}^{-1}}$. Its orbital period must be longer than $10,\!000$ days, but is likely less than $10^5$ days, and some of the apparently constant-velocity stars discussed above may also turn out to have periods in this range. We also note, however, that the smallest velocity amplitude measured in a confirmed CEMP-$s$ binary star is $K=1.57~{\mathrm{km\,s}^{-1}}$, which is significantly higher than our adopted threshold, even assuming a generous value of ${K_{\mathrm{min}}}=0.5~{\mathrm{km\,s}^{-1}}$. This is hard to reconcile with a continuous distribution of radial-velocity amplitudes down to the detection threshold, as would be expected from our simulations, and suggests that the paucity of CEMP-$s$ binaries with $P > 10,\!000$ days is real. An approximate upper limit to the period distribution of the order of $10^4$ days is also suggested by the study of [@Starkenburg2014]. All of our model sets produce a substantial fraction of detectable binaries with periods larger than $10^4$ days. In a sample of 22 CEMP-$s$ stars, between $5$ and $9$ such binaries are predicted (with ${K_{\mathrm{min}}}=0.1~{\mathrm{km\,s}^{-1}}$). Our only model set that comes close to reproducing the lack of systems in this period range is M8 if we make the additional, extreme assumption that ${K_{\mathrm{min}}}=1.5~{\mathrm{km\,s}^{-1}}$, in which case we predict about $2$ detectable CEMP-$s$ binaries with period $P>10^4$ days. In all other model sets we cannot reproduce at the same time the proportion of undetectable binaries and the number of long-period detectable binaries, even if we assume a ${K_{\mathrm{min}}}\gg 0.1~{\mathrm{km\,s}^{-1}}$.
In the discussion above we have only considered circular orbits. With the Monte Carlo method described in Sect. \[sub:detect\], we find that eccentric binaries with orbital periods in the range $5\times 10^3\lesssim P \lesssim 3\times 10^5~{\mathrm{days}}$ are less likely to be detected, because the radial-velocity variations are smaller than in the circular case during most of the orbit, except when the system is close to periastron. Assuming ${K_{\mathrm{min}}}=0.1~{\mathrm{km\,s}^{-1}}$, the decrease in detection probability is about $5\%$ or less for small eccentricities, $e\le 0.3$, while it can be as high as about $25\%$ for $e=0.7$, which corresponds to the highest eccentricity determined in the observed CEMP-$s$ sample. The smaller detection probability of eccentric systems helps to reduce the difference between the observed and predicted numbers of detectable CEMP-$s$ binaries with $P>10^4$ days, but this effect is too small to remove the discrepancy.
One further aspect that needs to be addressed is whether the observed sample of [@HansenTT2016-2], which we use to constrain our models, is representative of the overall CEMP-$s$ population. The members of this sample were initially selected for their chemical properties, namely their observed abundances of carbon and barium relative to iron, and subsequently monitored to ascertain whether they belong to binary systems and, eventually, to determine their orbital periods [@HansenTT2016-2]. Consequently, in principle there is no obvious observational bias in favour of relatively short orbital periods.
Nevertheless, the fact that most stars in the sample have relatively large carbon enhancements ($[{\mathrm{C}}/{\mathrm{Fe}}]>1.5$ in $18$ out of $22$ stars) may introduce a potential bias. We note that all sample stars are giants that have already undergone first dredge-up, which has mixed and diluted the transferred material throughout the accretor (with the possible exception of HE$1046-1352$ which has ${\log_{10} g}=3.5$ and is also the star with the highest carbon abundance, $[{\mathrm{C}}/{\mathrm{Fe}}]>3.3$). This suggests that they must have accreted substantial amounts of material from their AGB companions and, therefore, that the adopted observed sample may be biased towards systems that experienced highly efficient mass transfer. Consequently wide-period systems, which transferred only a few percent of the mass ejected by the donor, may be underrepresented. A similar selection effect has been pointed out for barium stars, in which higher $s$-process enrichments are associated with shorter orbital periods [e.g. @Boffin1994-1; @Boffin2015]. However, if among the CEMP-$s$ stars simulated in model set M2 we select those that have $[{\mathrm{C}}/{\mathrm{Fe}}]>1.5$ and ${\log_{10} g}\le 3.0$, only very wide systems at ${P_{\mathrm{orb}}}>10^5$ days are significantly affected. These transfer less than a few $0.01\,{M_{\odot}}$, that is just enough to enhance the surface carbon abundance of the accretor up to $[{\mathrm{C}}/{\mathrm{Fe}}]\approx 1$. Because these wide systems make up just a few per cent of the total CEMP population in set M2, the final period distributions change only marginally if we exclude them. We conclude that even if there is a bias towards large accreted masses, this does not significantly affect the resulting period distribution.
Accretion efficiency and angular-momentum loss during wind mass transfer. {#sub:etabeta}
-------------------------------------------------------------------------
In the wind mass-transfer process, the amounts of mass and angular momentum that are accreted by the companion or lost by the binary system are intricately related. The more strongly the wind of the donor interacts with the binary, the higher the accretion rate and angular-momentum loss are expected to be. Many authors have computed hydrodynamical simulations of wind mass transfer in low-mass binary systems with different codes and algorithms, but only a few of these studies have addressed the loss of angular momentum. These simulations produce more or less consistent results concerning the angular momentum lost when similar input physics is adopted [cf. Fig. 11 of @Saladino2018-1]. However, the accretion efficiencies found in hydrodynamical simulations differ significantly, by as much as a factor of ten, depending on physical assumptions such as the acceleration mechanism of the wind, the equation of state used to describe the gas particles, the adopted cooling mechanism, and possibly also the algorithms used to compute the accretion rates [@Theuns1996; @Nagae2004; @deValBorro2009; @Shazrene2010; @ChenZ2017; @Liu2017; @Saladino2018-1]. Because of these discrepancies, and with the aim of investigating multiple combinations of model assumptions, in our work we chose to treat mass accretion and angular-momentum loss as if they were independent processes. To improve on this study it will be necessary to compute these processes self-consistently with a model based on a reliable set of hydrodynamical simulations covering a large parameter space.
We have investigated different models of angular-momentum loss. As demonstrated in Sect. \[amloss\], most of these models yield periods of CEMP-$s$ binaries that are significantly longer than observed, with the median of the synthetic period distribution typically exceeding the observed value by a factor of about three. With our model sets M6 and M7, which are based on detailed hydrodynamical calculations, the proportion of synthetic CEMP-$s$ stars with periods between $100$ and $2,\!500$ days increases compared to the isotropic-wind model, reducing the discrepancy with the observations in this range. However, at longer periods the ejected material only interacts weakly with the binary and the results are the same as with the isotropic-wind model, hence too many wide-orbit CEMP-$s$ systems are formed.
In order to obtain a period distribution of CEMP-$s$ stars that is consistent with the observed population, at least for periods up to about $10^4$ days, it is necessary to assume that much larger amounts of angular momentum are lost with the ejected wind material. This is the case for model sets M8, based on the simulations of [@Brookshaw1993], and M12, with $\gamma=6$ in Eq. (\[eq:gamma\]), in which binary systems shrink significantly in response to mass loss. However, we emphasize that sets M8 and M12 are not based on realistic physical assumptions. Set M8 is based on ballistic simulations that do not take into account gas pressure and radiative acceleration, which would make the ejected outflow more isotropic, and hence the angular momentum dispersed by wind material is overestimated. Assuming a constant $\gamma$ in set M12 implies that the specific angular momentum of the ejected material is independent of the orbital period of the systems. In addition, we note that with model sets M8 and M12 the $20\%$ widest CEMP-$s$ systems have periods mostly in the range $15,\!000$–$40,\!000$ days, of which the majority should be detectable as binaries (however, see the discussion in Sect. \[sub:obs\]). In their study of the periods and eccentricities of barium stars, [@Izzard2010] show that they reproduce the bulk of the observed distribution by adopting Eq. (\[eq:gamma\]) with $\gamma=2$. With the same assumptions, in our model set M10 we find a cumulative distribution approximately consistent with the observations for periods up to about $10^4$ days. However, at longer periods set M10 predicts only $2\%$ undetected CEMP-$s$ binary systems, and an overall CEMP fraction of about $5\%$, which is lower than the observed one (see Fig. \[fig:amloss3\]).
![As Fig. \[fig:qcrit\] for different accretion-efficiency and angular-momentum-loss prescriptions. Model set M2 adopts the WRLOF model of wind accretion and assumes isotropic wind mass loss. Model set M10 (magenta-dotted line) is computed with Eq. (\[eq:gamma\]) and $\gamma=2$, and adopts the BHL model with $\alpha=1.5$. Model sets M15 and M16 (blue-dot-dashed and green-dashed lines, respectively) compute the wind-accretion efficiency with $\alpha=10$ within the BHL model. Set M15 assumes isotropic wind mass loss, whereas set M16 uses Eq. (\[eq:gamma\]) with $\gamma=2$. []{data-label="fig:amloss3"}]({Fig-10}.pdf){width="50.00000%"}
[@Abate2015-1] found that high accretion efficiencies are required to reproduce the surface abundances of their sample of observed CEMP binary stars. A similar conclusion was reached by [@Abate2015-2], who found that approximately $40\%$ of the analysed CEMP-$s$ stars accreted more than $0.1\,{M_{\odot}}$ by wind mass transfer from an AGB companion. In their best-fitting model set B, [@Abate2015-1] adopted the equation for the BHL wind accretion efficiency proposed by @BoffinJorissen1988 [Eq. 6] but they arbitrarily replaced the numerical constant ${\alpha_{\mathrm{BHL}}}=1.5$ with ${\alpha_{\mathrm{BHL}}}=10$. In their model, the high accretion efficiency was combined with strong angular-momentum loss ($\gamma=2$ as in our set M9), which was found to be necessary to reproduce the observed orbital periods. As a test, we computed two simulations adopting the BHL prescription for wind accretion and ${\alpha_{\mathrm{BHL}}}=10$, while for the angular-momentum loss either an isotropic wind (set M15) or Eq. (\[eq:gamma\]) with $\gamma=2$ (set M16) is assumed. The results are shown in Fig. \[fig:amloss3\]. The increase in accretion efficiency generated by ${\alpha_{\mathrm{BHL}}}=10$ causes a shift in the period distribution of CEMP-$s$ stars towards longer periods than in model sets M4 and M5, which use the BHL model with ${\alpha_{\mathrm{BHL}}}=1.5$. This is because in these two sets most systems with periods longer than about $60,\!000$ days do not transfer enough mass to generate CEMP stars, whereas they do if we adopt ${\alpha_{\mathrm{BHL}}}=10$. As a result, about $35\%$ of the whole synthetic CEMP-$s$ population comes from very wide systems with periods between $60,\!000$ days and a few times $10^5$ days. In model set M16, which adopts the same assumptions as set B of [@Abate2015-1], the cumulative distribution we determine is shifted towards periods a factor of two to ten longer than in our set M2. Overall, these results indicate that a simple increase in the wind-accretion efficiency and in the specific angular-momentum loss of the ejected material, applied to all systems regardless of their separations, aggravates the discrepancy between the synthetic and observed period distributions. In our simulations, we implicitly ignore that the material transferred to the secondary star carries angular momentum, which will spin up the accretor. If the angular-momentum content is too great, it can prevent the accretion of material [@Packet1981]. [@Matrozis2017-1] showed that the transferred material has to dissipate most of its angular momentum for the secondary star to accrete more than a few $0.01{M_{\odot}}$. Investigating this issue and the constraints it puts on the mass-accretion process is beyond the scope of this paper.
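To make the role of ${\alpha_{\mathrm{BHL}}}$ more concrete, the sketch below evaluates an orbit-averaged BHL-type wind-accretion efficiency of the form commonly used in binary population synthesis [e.g. @Hurley2002]. This is an illustrative sketch only: the cap on the efficiency, the stellar parameters and the wind speed are our assumptions, and the exact expression implemented in $\texttt{binary\_c}$ may differ in detail.

```python
import numpy as np

G = 6.674e-8          # gravitational constant [cgs]
MSUN = 1.989e33       # solar mass [g]
AU = 1.496e13         # astronomical unit [cm]

def bhl_accretion_efficiency(m_donor, m_accretor, a_au, v_wind_kms,
                             ecc=0.0, alpha_bhl=1.5, cap=0.5):
    """Orbit-averaged BHL-type wind-accretion efficiency
    beta = Mdot_accretor / |Mdot_donor| (masses in Msun, separation in AU,
    wind speed in km/s).  The functional form follows the standard
    population-synthesis prescription; the cap is illustrative."""
    m1, m2 = m_donor * MSUN, m_accretor * MSUN
    a = a_au * AU
    v_wind = v_wind_kms * 1e5
    v_orb = np.sqrt(G * (m1 + m2) / a)               # relative orbital velocity
    beta = (1.0 / np.sqrt(1.0 - ecc**2)) \
           * (G * m2 / v_wind**2)**2 \
           * alpha_bhl / (2.0 * a**2) \
           / (1.0 + (v_orb / v_wind)**2)**1.5
    return min(beta, cap)

# A wide low-mass binary with a slow AGB wind: the efficiency scales
# linearly with alpha_BHL until the cap is reached.
for alpha in (1.5, 10.0):
    beta = bhl_accretion_efficiency(1.0, 0.8, a_au=50.0, v_wind_kms=15.0,
                                    alpha_bhl=alpha)
    print(f"alpha_BHL = {alpha:4.1f}  ->  beta = {beta:.3f}")
```

Because the efficiency simply rescales with ${\alpha_{\mathrm{BHL}}}$ until any cap is reached, very wide systems that would otherwise transfer too little mass can become CEMP-$s$ stars once the constant is raised, which is the effect discussed above.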
On the initial orbital-period distribution. {#sub:Pi}
-------------------------------------------
The distribution of initial orbital periods (or separations) and the initial binary fraction are among the largest uncertainties in our simulations. [@Moe2017] have combined and integrated the formidable efforts made by many authors to characterise the orbital-period and mass-ratio distributions of young main-sequence binary systems in the Galactic disk. Adopting their fitting equations to these distributions instead of our default $\log a$-flat initial distribution in our simulations does not significantly change the period distribution of our synthetic CEMP-$s$ stars (see our set M13, Fig. \[fig:ai\]).
The study of [@Moe2017] does not include a dependence on the metallicity. In particular, little is known about the initial binary properties of the low-metallicity Halo population [see e.g. @Duchene2013 and references therein]. For example, [@Rastegaev2010] and [@Hettinger2015] argue that the total binary fraction increases with metallicity. In contrast, [@Gao2014; @Gao2017] and [@Badenes2018] find an anticorrelation between these two quantities. In addition, [@Rastegaev2010] has determined the orbital-period distribution of a sample of about $60$ Population-II subdwarf binaries. The result is an asymmetric distribution with a broad peak between periods of $10$ and $10^4$ days and a tail up to about $10^{10}$ days [see Figs. 8 and 10 of @Rastegaev2010]. This distribution is not dissimilar to what we assume in our model M14, although the ${f_{\log P}}$ within the peak of their distribution is smaller, about $0.10$. The result of set M14 is a cumulative period distribution of CEMP-$s$ stars that resembles the observed distribution for periods up to about $15,\!000$ days. In addition, with ${f_{\log P}}=0.15$ we find a CEMP fraction of $6.5\%$, similar to that determined from SDSS data [$\approx\!6\%$, @Lee2013]. We emphasize that the initial-period distribution in set M14 is adopted for comparison purposes and it is not claimed to be realistic. However, this choice is broadly in agreement with the study of [@Gao2014] for F/G/K stars at $[{\mathrm{Fe}}/{\mathrm{H}}]<-1.1$ in binary systems up to one thousand days, and it is also roughly consistent with the results obtained by [@Moe2017] for relatively massive stars ($M_1\gtrsim 5{M_{\odot}}$), although these are a marginal fraction of all CEMP-$s$ progenitors in our simulations. Another source of uncertainty is the contribution to the observed period distribution from binary systems with a white dwarf as a companion. The previous evolution of these systems has probably modified their orbit, and consequently their current periods differ from the initial periods at their formation. While in their equations [@Moe2017] take into account this effect, it is not discussed by [@Rastegaev2010].
These results are instructive for understanding by how much it is necessary to modify the initial period distribution to reproduce the observed CEMP-$s$ period distribution, but we stress that the assumptions adopted in model set M14 are completely arbitrary. In principle, the validity of these assumptions can be tested on a number of different astrophysical phenomena. For example, if the binary fraction per decade of periods, ${f_{\log P}}$, increases at low metallicity for orbits up to about $10^4$ days, the rate of Type Ia supernovae is expected to increase, because the progenitors of these supernovae are formed at periods shorter than about $5,\!000$ days [e.g. @Claeys2014]. A similar argument applies to close symbiotic binaries and blue stragglers, which are expected to be more numerous among metal-poor stars if ${f_{\log P}}$ is weighted towards short orbits.
In conclusion, it is clear that the initial binary population in the Galactic halo needs to be much better characterised in order to put firmer constraints on the physical processes playing a role in the formation of CEMP-$s$ binaries. The final data release of the Gaia mission, in combination with radial-velocity monitoring surveys, will hopefully provide insight into the properties of the binary population in the halo. In particular, binary systems in which both components are low-mass main-sequence stars have likely not evolved much over the ten billion years since their formation, and thus the initial distribution of orbital periods may be inferred from the study of their current orbital properties.
On the eccentricities of CEMP-$s$ stars. {#sub:ecc}
----------------------------------------
In our simulations the binary systems have circular orbits, although about half of the observed CEMP-$s$ stars have non-zero eccentricities up to $e=0.67$ [@HansenTT2016-2]. We performed a number of simulations with the model sets described in Sect. \[model\] and eight values of the initial eccentricity, $e_\mathrm{i}$, uniformly distributed in the range $[0,0.7]$. We find that the period distributions computed in Sect. \[results\] do not vary significantly when non-zero eccentricities are considered. CEMP stars with periods longer than about $2,\!500$ days are formed at every eccentricity in our range, hence reproducing the spread of the observations. The effect of tidal friction is increasingly important at shorter periods, because when the primary star ascends the giant branch it fills a significant portion of its Roche lobe. Below $P\approx 1000$ days the binary systems experience RLOF, during which the orbit is fully circularised. As a consequence, if tides are as efficient as assumed [@Hurley2002] and in the absence of a physical mechanism that re-enhances the eccentricity, all these short-period synthetic CEMP-$s$ systems are in circular orbits, in contrast with at least four observed CEMP-$s$ stars [two of which are in the sample of @HansenTT2016-2 see Fig. 1].
A mechanism that counteracts the tidal forces in binary systems with a giant component close to filling its Roche lobe has been invoked to model the eccentricities in a variety of contexts, including post-AGB binary systems and barium stars. The nature of such a mechanism that can enhance the eccentricity is currently unclear. [@VanWinckel1995] and [@Soker2000] suggested that enhanced mass transfer at periastron may counteract the circularisation of the orbit. [@Axel2008] successfully applied a tidally-enhanced model of mass loss from AGB stars to reproduce the eccentricities observed in moderately wide binary systems with a white dwarf and a less evolved companion, such as Sirius. Alternatively, it has been suggested that a fraction of the material expelled by the donor star may escape through the outer Lagrangian points and form a circumbinary disk. The interaction between this disk and the binary may lead to an increase of the system’s eccentricity which depends on properties such as the orbital separation, the mass of the disk, its lifetime, viscosity and density distribution [e.g. @Artymowicz1994; @Waelkens1996; @Dermine2013; @Vos2015]. Regardless of its nature, a mechanism that enhances the eccentricity of close systems will necessarily affect the final orbital periods. Consequently, any model promising to reconcile the synthetic period distribution of CEMP-$s$ stars with the observations will have to take into account a mechanism to counteract circularisation in short-period binaries.
In our study we focused on binary systems and we did not consider triples. The presence of a third star orbiting a binary system may in some circumstances increase its eccentricity, as described by the Kozai-Lidov mechanism [@Kozai1962; @Lidov1962], opposing the effect of circularisation [see e.g. @Perets2012]. The observed proportion of triple-to-binary systems varies between approximately $10\%$ and $20\%$ depending on the sample and on the detection techniques [e.g. @Eggleton2008-2; @Rastegaev2010; @Tokovinin2014-2; @Borkovits2016]. Consequently, we could expect that in the CEMP-$s$ sample of [@HansenTT2016-2] between two and four objects are, or were, in fact triple systems. This might help to explain the presence of some of the eccentric CEMP-$s$ stars with periods shorter than $10^3$ days. For example, in the eccentric system CS22964–161AB [$P=252$ days and $e=0.66$, from @Thompson2008], which is not included in the survey of [@HansenTT2016-2], both components are CEMP-$s$ stars in the early phases of their evolution, and hence have likely been polluted in the past by a third object [see e.g. @Thompson2008; @Abate2015-2].
On the origin of single CEMP-$s$ stars {#sub:single}
--------------------------------------
If the four CEMP-$s$ stars that do not exhibit radial-velocity variations are actually single stars, their formation history has to be understood. Their observed effective temperatures and surface gravities are relatively high [see Tab. 6 of @HansenTT2016-2 and references therein], which excludes the possibility that they are self-polluted AGB stars. Also, it is very unlikely that they were formed in binary systems that merged after mass transfer, as this would require fine-tuning of the initial parameters of the system. The initial separation has to be just large enough that the primary reaches the AGB phase, during which some material is transferred onto the companion, before a common-envelope phase shrinks the orbit so much that the system merges within a Hubble time. Furthermore, the merger product would itself end up on the AGB, and the same counter-argument as given above against self-pollution applies. Hence, the most likely hypothesis is that these stars were born with enhanced abundances of carbon and $s$-elements from an already enriched interstellar medium.
@Choplin2017-2 present a model in which the winds of rapidly-rotating massive stars (also called “spinstars”) are the cause of this pollution. They were able to reproduce the abundances of most detected elements in the four apparently-single CEMP-$s$ stars with a model of a $25{M_{\odot}}$ star rotating at $70\%$ of the break-up velocity at $[{\mathrm{Fe}}/{\mathrm{H}}]\approx-1.8$ (their Fig. 3). The comparison between the best fits of [@Choplin2017-2] and those of [@Abate2015-2], who in their model set A used essentially the same input physics as in our model set M2, indicates that for two of these stars (HE $0206$–$1916$ and CS $30301$–$015$) the binary mass-transfer model gives a better fit to the observed abundances. In addition, in one case (star HE $1045$+$0225$) both models fail to reproduce at the same time the abundances of light elements (carbon, sodium, magnesium) and light $s$-elements (strontium and yttrium), with the spinstar model correctly fitting the $s$-elements and the binary model better reproducing the light elements.
Star CS $30301$–$015$ is particularly interesting as it has the largest set of measured abundances, including heavy $s$-elements up to lead, which can put tighter constraints on the nucleosynthesis models. While the analysis of [@HansenTT2016-2] shows no radial-velocity variations within $0.3~{\mathrm{km\,s}^{-1}}$ over a $2,\!300$-day time span, the combination of relatively low abundances of light $s$-elements ($[{\mathrm{ls}}/{\mathrm{Fe}}]\approx 0.4$), mild enrichment in heavy $s$-elements ($[{\mathrm{hs}}/{\mathrm{Fe}}]\approx 1$, though with some scatter) and strong lead enhancement ($[{\mathrm{Pb}}/{\mathrm{Fe}}]=2$, [@Aoki2002-5]) cannot be reconciled with the predictions of the spinstar model and is much more consistent with AGB nucleosynthesis. Also, the binary model of [@Abate2015-2] for this star predicts an orbital period that could be as long as $10^6$ days, which would likely go undetected.
In conclusion, although the spinstar model naturally explains why no orbital motion is detected, binary-star models [@Abate2015-2 and our model set M2] generally predict wide orbits for these systems ($P>15,\!000$ days), which would be difficult to detect. The abundances of more elements are necessary to discriminate between a spinstar and an AGB origin of the chemical enrichment in these stars. In particular, rotating massive stars generally yield more oxygen and sodium compared to AGB stars for a given amount of carbon. Also, they produce more light $s$-elements (e.g. strontium) than heavy $s$-elements (e.g. barium) and not much lead [@Frischknecht2012; @Choplin2017-2]. By contrast, in AGB stars heavy $s$-elements and lead are usually more strongly enhanced than light $s$-elements. Once a set of observed abundances of all these elements becomes available for these stars, it will be possible to put stronger constraints on their origin.
Summary and conclusions {#concl}
=======================
The motivation for this work is that in population-synthesis models of metal-poor binary systems the majority of CEMP-$s$ stars have periods several times longer than in the observed sample of CEMP-$s$ stars compiled by [@HansenTT2016-2]. This sample is taken as a reference because arguably it is not biased towards a particular period range. In previous studies it has normally been assumed that Roche-lobe overflow is unstable when the donor is an AGB star (except in rare circumstances), that wind ejection occurs in spherical symmetry, and that the initial distribution of orbital separations is flat in the logarithm. This results in about $45\%$ of the synthetic CEMP-$s$ stars having periods exceeding $10,\!000$ days, which is currently the longest period measured for an observed system. If these wide systems are excluded from the synthetic population, the models underestimate the observed CEMP fraction by almost a factor of two.
In this study we consider several modifications of these standard assumptions and investigate their effect on the CEMP period distribution. We show that the stability criterion of Roche-lobe overflow plays a role in determining the proportion of CEMP stars that are formed at periods between a few hundred and a few thousand days. However, even if we assume that Roche-lobe overflow from AGB donors is always stable, the consequences for the period distribution of the synthetic CEMP population are small, because only a relatively small fraction of all our simulated CEMP stars experience a phase of Roche-lobe overflow, between about $15\%$ and $30\%$ in the most extreme case, while the remainder form by wind accretion. Hence, to reproduce the observed fraction of CEMP stars, wind mass transfer from AGB donors in binary systems has to be efficient.
A large uncertainty in our study is the original binary fraction per decade of orbital period in the very metal-poor stellar population of the Halo. Assuming that it was similar to that observed at higher metallicity in the solar neighbourhood, the constraint placed by the observed period distribution of CEMP-$s$ stars requires that the wind ejected by binary systems has to carry away a large amount of angular momentum, up to about ten times higher than in the simplistic case of isotropic wind ejection. At present, state-of-the-art hydrodynamical simulations do not predict the high angular-momentum loss necessary to reconcile the results of our population-synthesis models with the observations. However, another possibility is that at very low metallicity binary systems formed at periods distributed differently from today. Our simulations show that if binary systems are initially formed in a significantly narrower range of periods, up to around ten thousand days, then the period distribution of observed CEMP-$s$ stars can be reproduced.
The CEMP-$s$ star sample of [@HansenTT2016-2] contains four stars that appear to be single, that is, no evidence for binary-induced radial-velocity variations was found. Our simulations show that at least some, and perhaps all, of these could be binaries at periods (much) longer than $10,\!000$ days that are observed at unfavourable orbital phases and/or inclinations. However, it is hard to reconcile at the same time both the substantial number of apparently single stars and the fact that only one detected binary has a (still unmeasured) period exceeding $10,\!000$ days with any of our simulations.
In conclusion, a combination of significant wind accretion efficiency, higher than the predictions of the canonical Bondi-Hoyle-Lyttleton approximation, strong angular-momentum loss carried away by the wind material that escapes the binary system, and possibly an initial distribution of orbital separations significantly different from that observed in solar-vicinity stars, is required to reproduce the orbital periods of the observed population of CEMP-$s$ stars.
We thank the anonymous referee for valuable comments that have helped improve our paper. CA is the recipient of an Alexander von Humboldt Fellowship. RJS is the recipient of a Sofja Kovalevskaja Award from the Alexander von Humboldt Foundation.
, C., [Pols]{}, O. R., [Izzard]{}, R. G., [Mohamed]{}, S. S., & [de Mink]{}, S. E. 2013, , 552, A26
, C., [Pols]{}, O. R., [Karakas]{}, A. I., & [Izzard]{}, R. G. 2015, , 576, A118
, C., [Pols]{}, O. R., [Karakas]{}, A. I., & [Izzard]{}, R. G. 2015, , 581, A22
, C., [Pols]{}, O. R., [Stancliffe]{}, R. J., [et al.]{} 2015, , 581, A62
, W., [Beers]{}, T. C., [Christlieb]{}, N., [et al.]{} 2007, , 655, 492
, W., [Ryan]{}, S. G., [Norris]{}, J. E., [et al.]{} 2002, , 580, 1149
, P. & [Lubow]{}, S. H. 1994, , 421, 651
, M., [Grevesse]{}, N., [Sauval]{}, A. J., & [Scott]{}, P. 2009, , 47, 481
, C., [Mazzola]{}, C., [Thompson]{}, T. A., [et al.]{} 2018, , 854, 147
, S., [Gallino]{}, R., [Straniero]{}, O., [Cristallo]{}, S., & [K[ä]{}ppeler]{}, F. 2012, , 422, 849
, S. & [H[ö]{}fner]{}, S. 2012, , 546, A76
, S., [H[ö]{}fner]{}, S., [Aringer]{}, B., & [Eriksson]{}, K. 2015, , 575, A105
, H. M. J. 2015, [Mass Transfer by Stellar Wind]{}, ed. H. M. J. [Boffin]{}, G. [Carraro]{}, & G. [Beccari]{} (Springer), 153
, H. M. J. & [Jorissen]{}, A. 1988, , 205, 155
, H. M. J. & [Zacs]{}, L. 1994, , 291, 811
, A. A., [Glebbeek]{}, E., & [Pols]{}, O. R. 2008, , 480, 797
, H. 1952, , 112, 195
, H. & [Hoyle]{}, F. 1944, , 104, 273
, T., [Hajdu]{}, T., [Sztakovics]{}, J., [et al.]{} 2016, , 455, 4136
, M. L., [McDonald]{}, I., [Srinivasan]{}, S., [et al.]{} 2015, , 810, 116
, M. L., [McQuinn]{}, K. B. W., [Barmby]{}, P., [et al.]{} 2015, , 800, 51
, L. & [Tavani]{}, M. 1993, , 410, 719
, M., [Gallino]{}, R., & [Wasserburg]{}, G. J. 1999, , 37, 239
, D., [Beers]{}, T. C., [Bovy]{}, J., [et al.]{} 2012, , 744, 195
, A., [Bujarrabal]{}, V., [S[á]{}nchez Contreras]{}, C., [Alcolea]{}, J., & [Neri]{}, R. 2002, , 386, 633
, X. & [Han]{}, Z. 2008, , 387, 1416
, Z., [Blackman]{}, E. G., [Nordhaus]{}, J., [Frank]{}, A., & [Carroll-Nellenback]{}, J. 2018, , 473, 747
, Z., [Frank]{}, A., [Blackman]{}, E. G., [Nordhaus]{}, J., & [Carroll-Nellenback]{}, J. 2017, , 468, 4465
, A., [Hirschi]{}, R., [Meynet]{}, G., & [Ekstr[ö]{}m]{}, S. 2017, , 607, L3
, J. S. W., [Pols]{}, O. R., [Izzard]{}, R. G., [Vink]{}, J., & [Verbunt]{}, F. W. M. 2014, , 563, A83
, J. G., [Shectman]{}, S., [Thompson]{}, I., [et al.]{} 2005, , 633, L109
, T., [Teyssier]{}, D., [Justtanont]{}, K., [et al.]{} 2015, , 581, A60
, M., [Karovska]{}, M., & [Sasselov]{}, D. 2009, , 700, 1148
, M., [Karovska]{}, M., [Sasselov]{}, D. D., & [Stone]{}, J. M. 2017, , 468, 3408
, T., [Izzard]{}, R. G., [Jorissen]{}, A., & [Van Winckel]{}, H. 2013, , 551, A50
, G. & [Kraus]{}, A. 2013, , 51, 269
, R. 2004, , 48, 843
, P. P. & [Tokovinin]{}, A. A. 2008, , 389, 869
, A., [Christlieb]{}, N., [Norris]{}, J. E., [et al.]{} 2006, , 652, 1585
, B. & [H[ö]{}fner]{}, S. 2008, , 483, 571
, U., [Hirschi]{}, R., & [Thielemann]{}, F.-K. 2012, , 538, L2
, R., [Arlandini]{}, C., [Busso]{}, M., [et al.]{} 1998, , 497, 388
, S., [Liu]{}, C., [Zhang]{}, X., [et al.]{} 2014, , 788, L37
, S., [Zhao]{}, H., [Yang]{}, H., & [Gao]{}, R. 2017, , 469, L68
, H., [Hjellming]{}, M. S., [Webbink]{}, R. F., [Chen]{}, X., & [Han]{}, Z. 2010, , 717, 724
, H., [Webbink]{}, R. F., [Chen]{}, X., & [Han]{}, Z. 2015, , 812, 40
, S. R., [van Loon]{}, J. T., [Zijlstra]{}, A. A., [et al.]{} 2017, , 465, 403
, T. T., [Andersen]{}, J., [Nordstr[ö]{}m]{}, B., [et al.]{} 2016, , 588, A3
, F. 2005, , 43, 435
, T., [Badenes]{}, C., [Strader]{}, J., [Bickerton]{}, S. J., & [Beers]{}, T. C. 2015, , 806, L2
, M. S. & [Webbink]{}, R. F. 1987, , 318, 794
, S. 2007, in Astronomical Society of the Pacific Conference Series, Vol. 378, Why Galaxies Care About AGB Stars: Their Importance as Actors and Probes, ed. F. [Kerschbaum]{}, C. [Charbonnel]{}, & R. F. [Wing]{}, 145
, S. 2009, in ASP Conf. Ser., Vol. 414, Astronomical Society of the Pacific Conference Series, ed. [T. Henning, E. Gr[ü]{}n, & J. Steinacker]{}, 3
, S. 2015, in Astronomical Society of the Pacific Conference Series, Vol. 497, Why Galaxies Care about AGB Stars III: A Closer Look in Space and Time, ed. F. [Kerschbaum]{}, R. F. [Wing]{}, & J. [Hron]{}, 333
, F. & [Lyttleton]{}, R. A. 1939, Proceedings of the Cambridge Philosophical Society, 35, 405
, J. R., [Tout]{}, C. A., & [Pols]{}, O. R. 2002, , 329, 897
, R. G., [Dermine]{}, T., & [Church]{}, R. P. 2010, , 523, A10+
, R. G., [Dray]{}, L. M., [Karakas]{}, A. I., [Lugaro]{}, M., & [Tout]{}, C. A. 2006, , 460, 565
, R. G., [Glebbeek]{}, E., [Stancliffe]{}, R. J., & [Pols]{}, O. R. 2009, , 508, 1359
, R. G., [Preece]{}, H., [Jofre]{}, P., [et al.]{} 2018, , 473, 2984
, R. G., [Tout]{}, C. A., [Karakas]{}, A. I., & [Pols]{}, O. R. 2004, , 350, 407
, B., [Mitsumoto]{}, M., [Oka]{}, K., [et al.]{} 2005, , 441, 589
, A., [Van Eck]{}, S., [Mayor]{}, M., & [Udry]{}, S. 1998, , 332, 877
, A., [Van Eck]{}, S., [Van Winckel]{}, H., [et al.]{} 2016, , 586, A158
, A. I. 2010, , 403, 1413
, M., [Gaetz]{}, T. J., [Carilli]{}, C. L., [et al.]{} 2010, , 710, L132
, M., [Hack]{}, W., [Raymond]{}, J., & [Guinan]{}, E. 1997, , 482, L175
, M., [Schlegel]{}, E., [Hack]{}, W., [Raymond]{}, J. C., & [Wood]{}, B. E. 2005, , 623, L137
, M. B. N., [Brown]{}, A. G. A., [Portegies Zwart]{}, S. F., & [Kaper]{}, L. 2007, , 474, 77
, Y. 1962, , 67, 591
, P., [Tout]{}, C. A., & [Gilmore]{}, G. 1993, , 262, 545
, Y. S., [Beers]{}, T. C., [Masseron]{}, T., [et al.]{} 2013, , 146, 132
, M. L. 1962, , 9, 719
, Z.-W., [Stancliffe]{}, R. J., [Abate]{}, C., & [Matrozis]{}, E. 2017, , 846, 117
, S., [Beers]{}, T. C., [Christlieb]{}, N., [et al.]{} 2006, , 652, L37
, S., [Tsangarides]{}, S., [Beers]{}, T. C., [et al.]{} 2005, , 625, 825
, M., [Karakas]{}, A. I., [Stancliffe]{}, R. J., & [Rijs]{}, C. 2012, , 47, 1998
, E., [Abate]{}, C., & [Stancliffe]{}, R. J. 2017, , 606, A137
, R. D. & [Woodsworth]{}, A. W. 1990, , 352, 709
, M. & [Di Stefano]{}, R. 2017, , 230, 15
, S. 2010, PhD thesis, St Edmund Hall, University of Oxford
, S. & [Podsiadlowski]{}, P. 2007, in ASP Conf. Ser., Vol. 372, 15th European Workshop on White Dwarfs, ed. [R. Napiwotzki & M. R. Burleigh]{}, 397
, S. & [Podsiadlowski]{}, P. 2011, in Astronomical Society of the Pacific Conference Series, Vol. 445, Why Galaxies Care about AGB Stars II: Shining Examples and Common Inhabitants, ed. F. [Kerschbaum]{}, T. [Lebzelter]{}, & R. F. [Wing]{}, 355
, S. & [Podsiadlowski]{}, P. 2012, Baltic Astronomy, 21, 88
, T., [Oka]{}, K., [Matsuda]{}, T., [et al.]{} 2004, , 419, 335
, W., [H[ö]{}fner]{}, S., & [Aringer]{}, B. 2010, , 514, A35
, W. 1981, , 102, 17
, B. 1965, , 15, 89
, B. 1971, , 9, 183
, B. 1976, in IAU Symposium, Vol. 73, Structure and Evolution of Close Binary Systems, ed. P. [Eggleton]{}, S. [Mitton]{}, & J. [Whelan]{}, 75
, J.-C., [De Marco]{}, O., [Fryer]{}, C. L., [et al.]{} 2012, , 744, 52
, J.-C., [Herwig]{}, F., & [Paxton]{}, B. 2012, , 760, 90
, K. & [Ivanova]{}, N. 2015, , 449, 4415
, H. B. & [Kratter]{}, K. M. 2012, , 760, 99
, V. M., [Frebel]{}, A., [Beers]{}, T. C., & [Stancliffe]{}, R. J. 2014, , 797, 21
, N., [Abia]{}, C., [Limongi]{}, M., [Chieffi]{}, A., & [Cristallo]{}, S. 2018,
, W. H., [Flannery]{}, B. P., [Teukolsky]{}, S. A., & [Vetterling]{}, W. T. 1989, [Numerical recipes in C. The art of scientific computing]{} (Cambridge: University Press, 1989)
, D., [McAlister]{}, H. A., [Henry]{}, T. J., [et al.]{} 2010, , 190, 1
, D. A. 2010, , 140, 2013
, P. M. & [Taam]{}, R. E. 2008, , 672, L41
, D., [Karakas]{}, A. I., [Tosi]{}, M., & [Matteucci]{}, F. 2010, , 522, A32
, M. I., [Pols]{}, O. R., [van der Helm]{}, E., [Pelupessy]{}, I., & [Portegies Zwart]{}, S. 2018, ArXiv e-prints
, N. 2000, , 357, 557
, R. J. 2009, , 394, 1051
, R. J. & [Glebbeek]{}, E. 2008, , 389, 1828
, R. J., [Glebbeek]{}, E., [Izzard]{}, R. G., & [Pols]{}, O. R. 2007, , 464, L57
, E., [Shetrone]{}, M. D., [McConnachie]{}, A. W., & [Venn]{}, K. A. 2014, , 441, 1217
, T., [Katsuta]{}, Y., [Yamada]{}, S., [et al.]{} 2008, , 60, 1159
, T., [Yamada]{}, S., [Katsuta]{}, Y., [et al.]{} 2011, , 412, 843
, T., [Boffin]{}, H. M. J., & [Jorissen]{}, A. 1996, , 280, 1264
, T. & [Jorissen]{}, A. 1993, , 265, 946
, I. B., [Ivans]{}, I. I., [Bisterzo]{}, S., [et al.]{} 2008, , 677, 556
, A. 2014, , 147, 87
, M., [Boffin]{}, H. M. J., [Jorissen]{}, A., & [Van Eck]{}, S. 2017, , 597, A68
, J. T., [Cioni]{}, M.-R. L., [Zijlstra]{}, A. A., & [Loup]{}, C. 2005, , 438, 273
, H., [Waelkens]{}, C., & [Waters]{}, L. B. F. M. 1995, , 293
, E. & [Wood]{}, P. R. 1993, , 413, 641
, J., [[Ø]{}stensen]{}, R. H., [Marchant]{}, P., & [Van Winckel]{}, H. 2015, , 579, A49
, C., [Van Winckel]{}, H., [Waters]{}, L. B. F. M., & [Bakker]{}, E. J. 1996, , 314, L17
, T. E. & [Ivanova]{}, N. 2011, , 739, L48
, D., [Norris]{}, J. E., [Bessell]{}, M. S., [et al.]{} 2013, , 762, 27
Fit to ballistic calculations {#appendixA}
=============================
[@Brookshaw1993] compute the average specific angular momentum $\langle j_w \rangle$ of test particles ejected from the surface of a star and subsequently lost from a binary system, after following their ballistic trajectories in the binary potential. They present their results in terms of a quantity $$\label{eq:app1}
h_\mathrm{cm} = (1 + Q) \frac{\langle j_w \rangle}{a^2 {\Omega_{\mathrm{orb}}}}~,$$ which is related to our parameter $\eta$ as $h_\mathrm{cm} = \eta \, (1+Q)$. However, in their calculations the mass-losing star co-rotates with the orbit, so that $h_\mathrm{cm}$ includes both orbital and spin angular-momentum loss. Since the $\texttt{binary\_c}$ code already accounts for spin angular-momentum loss and spin-orbit coupling explicitly, we should avoid including this effect twice. Thus $\eta$ should include only the orbital angular-momentum loss. We expect the spin angular momentum loss to contribute a term equal to $\langle j_\mathrm{rot} \rangle = \frac{2}{3} R^2 {\Omega_{\mathrm{orb}}}$ for a co-rotating star of radius $R$, which implies that $$\label{eq:app2}
\eta = \frac{h_\mathrm{cm}}{1+Q} - \frac{2}{3} \frac{R^2}{a^2}~.$$ Table 13 of [@Brookshaw1993] presents values of $h_\mathrm{cm}$ for various mass ratios and Roche-lobe filling factors, $\Psi = R/R_\mathrm{L}$, of the mass-losing star. Because in these calculations the particles are ejected at very high velocity, we expect the Jeans approximation to be valid, i.e. we expect $\eta$ to be equal to ${\eta_{\mathrm{iso}}}= (1+Q)^{-2}$ (Eq. \[eq:jeans\]). We verified that this is indeed the case when applying Eq. (\[eq:app2\]) to all the values in Table 13.
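As a quick consistency check of Eq. (\[eq:app2\]), the sketch below evaluates $\eta$ for an $h_\mathrm{cm}$ value constructed to correspond exactly to the Jeans mode and recovers ${\eta_{\mathrm{iso}}}=(1+Q)^{-2}$; the numbers used for $Q$ and $R/a$ are illustrative only, not values taken from the tables of [@Brookshaw1993].

```python
def eta_from_hcm(h_cm, Q, r_over_a):
    """Orbital specific angular momentum eta of the ejected wind (the equation
    above): the tabulated h_cm of a co-rotating, mass-losing star of radius R,
    divided by (1+Q), minus the spin contribution (2/3)(R/a)^2.
    Q is the donor-to-accretor mass ratio."""
    return h_cm / (1.0 + Q) - (2.0 / 3.0) * r_over_a**2

def eta_iso(Q):
    """Jeans (isotropic, fast-wind) limit, eta_iso = (1+Q)^-2."""
    return (1.0 + Q)**-2

# Illustrative check: build the h_cm that corresponds exactly to the Jeans
# mode and verify that the relation above returns eta_iso.
Q = 2.0
r_over_a = 0.2                      # hypothetical R/a of the donor
h_cm_jeans = (1.0 + Q) * (eta_iso(Q) + (2.0 / 3.0) * r_over_a**2)
print(eta_from_hcm(h_cm_jeans, Q, r_over_a), eta_iso(Q))   # both 1/9
```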
We thus proceed to use Eq. (\[eq:app2\]) to study the dependence of $\eta$ on the particle injection velocity $v_\mathrm{in}$. We use the results of Table 1, in which isotropic mass loss from the surface of a Roche-lobe filling star is assumed. Applying Eq. (\[eq:app2\]) in this case is not quite correct because it neglects the non-spherical shape of a Roche-lobe filling star, but it is accurate enough for our purposes. Table 1 tabulates $h_\mathrm{cm}$ as a function of the mass ratio $Q$ and the parameter $V = v_\mathrm{in}/v_\mathrm{orb,d}$, where $v_\mathrm{orb,d} = {v_{\mathrm{orb}}}/(1+Q)$ is the orbital velocity of the donor star around the centre of mass and ${v_{\mathrm{orb}}}$ is the relative orbital velocity of the binary. Fig. \[fig:BT93\] shows the corresponding $\eta$ as a function of $v_\mathrm{in}/v_\mathrm{orb}$ for several values of $Q$, together with the fitting formula of Eq. (\[eq:BT93\]). In using the latter we have identified ${v_{\mathrm{w}}}$ with $v_\mathrm{in}$. For $v_\mathrm{in} \ga 3{v_{\mathrm{orb}}}$ the results of the ballistic calculations reproduce the Jeans mode, ${\eta_{\mathrm{iso}}}$ (horizontal parts of the curves) while for $v_\mathrm{in} \la {v_{\mathrm{orb}}}$ the results converge to a constant value of $\eta \approx 1.7$, independent of mass ratio.
![Results from Table 1 of [@Brookshaw1993] for several mass ratios $Q$ (coloured squares) as a function of $v_\mathrm{in}/v_\mathrm{orb}$. Solid lines show the fitting formula Eq. (\[eq:BT93\]) for the corresponding mass ratios.[]{data-label="fig:BT93"}]({Fig-11}.png){width="50.00000%"}
Reduced time of radial-velocity monitoring {#appendixB}
==========================================
Figure \[fig:baseline\] illustrates how the assumption of two different radial-velocity monitoring time spans, namely $t_{\mathrm{obs}}=3,\!000$ and $t_{\mathrm{obs}}=1,\!000$ days (blue-dashed and red-dotted lines, respectively), modifies the period distribution of the synthetic CEMP-$s$ systems. The period distributions are computed with our default model set M2 and a detection threshold of ${K_{\mathrm{min}}}=0.1\,{\mathrm{km\,s}^{-1}}$. The period distribution of all our synthetic CEMP-$s$ stars is also shown for comparison (solid black line).
![As Fig. \[fig:qcrit\] for models with different time-span of the radial-velocity monitoring. The solid-black line shows the period distribution of all the CEMP-$s$ stars in our simulation. The blue-dashed and red-dotted lines are computed with a detection threshold ${K_{\mathrm{min}}}=0.1\,{\mathrm{km\,s}^{-1}}$ and a time-span of $3,\!000$ and $1,\!000$ days, respectively. []{data-label="fig:baseline"}]({Fig-12}.pdf){width="50.00000%"}
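For orientation, the sketch below computes the radial-velocity semi-amplitude induced on the CEMP star by a white-dwarf companion and applies a deliberately simplified detectability cut. It is not the Monte Carlo detection model used for the figures, which also samples orbital phases and inclinations; the companion mass, the orbit-coverage condition and the example periods are assumptions chosen only to illustrate why very wide orbits escape detection within a limited monitoring time span.

```python
import numpy as np

G = 6.674e-11        # gravitational constant [SI]
MSUN = 1.989e30      # solar mass [kg]
DAY = 86400.0        # seconds per day

def rv_semi_amplitude(m_wd, m_cemp, period_days, ecc=0.0, sin_i=1.0):
    """Radial-velocity semi-amplitude (km/s) of the CEMP star induced by a
    companion of mass m_wd; masses in Msun."""
    P = period_days * DAY
    K = (2.0 * np.pi * G / P)**(1.0 / 3.0) * m_wd * MSUN * sin_i \
        / ((m_wd + m_cemp) * MSUN)**(2.0 / 3.0) / np.sqrt(1.0 - ecc**2)
    return K / 1e3

def detectable(period_days, m_wd=0.6, m_cemp=0.8, k_min=0.1, t_obs=3000.0):
    """Toy criterion: the semi-amplitude must exceed k_min (km/s) and the
    monitoring time span must cover a non-negligible fraction of the orbit."""
    K = rv_semi_amplitude(m_wd, m_cemp, period_days)
    return bool(K >= k_min and period_days <= 4.0 * t_obs)

for P in (1.0e3, 1.0e4, 1.0e5):
    print(f"P = {P:8.0f} d   K = {rv_semi_amplitude(0.6, 0.8, P):5.2f} km/s"
          f"   detectable: {detectable(P)}")
```

Even at $P\sim10^5$ days the semi-amplitude of such a system is still well above ${K_{\mathrm{min}}}=0.1\,{\mathrm{km\,s}^{-1}}$, so in this toy picture it is mainly the limited time span, together with unfavourable phases or inclinations, that hides the widest binaries.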
[^1]: Given the elements X and Y and their number densities, $N_{{\mathrm{X}},{\mathrm{Y}}}$, $[{\mathrm{X}}/{\mathrm{Y}}]=\log_{10}\left(N_{{\mathrm{X}}}/N_{{\mathrm{Y}}}\right)_{\star} - \log_{10}\left(N_{{\mathrm{X}}}/N_{{\mathrm{Y}}}\right)_{\odot}$, where $\star$ and $\odot$ indicate the abundances detected in the star and in the Sun, respectively.
[^2]: SVN revision r5045.
[^3]: We refer to Sect. 5 of [@Abate2013] for a discussion about the consequences of varying ${T_{\mathrm{cond}}}$.
[^4]: For comparison, in this formalism $\gamma=Q^{-1}$ for an isotropic wind.
[^5]: For random orientations of the orbit, the cosine of the inclination angle is uniformly distributed.
[^6]: These were calculated with a version, adapted for `Python`, of the procedure `ksone` presented in “Numerical Recipes” [@NumericalRecipes].
|
---
abstract: |
The problem of converting noisy quantum correlations between two parties into noiseless classical ones using a limited amount of one-way classical communication is addressed. A single-letter formula for the optimal trade-off between the extracted common randomness and classical communication rate is obtained for the special case of classical-quantum correlations.
The resulting curve is intimately related to the quantum compression with classical side information trade-off curve $Q^*(R)$ of Hayden, Jozsa and Winter.
For a general initial state we obtain a similar result, with a single-letter formula, when we impose a tensor product restriction on the measurements performed by the sender; without this restriction the trade-off is given by the regularization of this function.
Of particular interest is a quantity we call “distillable common randomness” of a state: the maximum overhead of the common randomness over the one-way classical communication if the latter is unbounded. It is an operational measure of (total) correlation in a quantum state. For classical-quantum correlations it is given by the Holevo mutual information of its associated ensemble, for pure states it is the entropy of entanglement. In general, it is given by an optimization problem over measurements and regularization; for the case of separable states we show that this can be single-letterized.
author:
- |
I. Devetak[^1]\
[IBM T.J. Watson Research Center, Yorktown Heights, NY 10598, USA]{}\
\
A. Winter[^2]\
[Department of Computer Science, University of Bristol, Bristol BS8 1UB, U.K.]{}
title: Distilling common randomness from bipartite quantum states
---
Introduction
============
Quantum, and hence also classical, information theory can be viewed as a theory of inter-conversion between various resources. These resources can be classical or quantum, static or dynamic, noisy or noiseless. Based on the number of spatially separated parties sharing a resource, it can be bipartite or multipartite; local (monopartite) resources are typically taken for granted. In what follows, we shall mainly be concerned with bipartite resources. Let us introduce a notation in which $c$ and $q$ stand for classical and quantum, respectively, curly and square brackets stand for noisy and noiseless, respectively, and arrows ($\rightarrow$) will distinguish dynamic resources from static ones. The possible combinations are tabulated below. Noisy dynamic resources are the four types of noisy channels, classified by the classical/quantum nature of the input/output. Beside the familiar classical $\{ c \rightarrow c \}$ and quantum $\{ q \rightarrow q \}$ channels, this category also includes preparation of quantum states from a given set (labeled by classical indices) $\{ c \rightarrow q \}$ and measurement of quantum states yielding classical outcomes $\{ q \rightarrow c \}$. Dynamic “unit” resources by definition require the input and output to be of the same nature, and they comprise of the noiseless bit $[c \rightarrow c ]$ and qubit $[q \rightarrow q]$ channel, but we additionally introduce symbols for general (higher dimensional) perfect quantum and classical channels: $(q \rightarrow q)$ and $(c \rightarrow c)$, respectively.
Noisy static resources, not having a directionality, can be one of three types: classical $\{ c \, c \}$, quantum $\{ q \, q \}$ and mixed classical-quantum $\{ c \, q \}$. The first of these is embodied in a pair of correlated random variables $XY$, associated with the product set $\CX \times \CY$ and a probability distribution $p(x,y) = \pr\{ X = x, Y = y\}$ defined on $\CX \times \CY $. The $\{ q \, q \}$ analogue is a bipartite quantum system $\CA \CB$, associated with a product Hilbert space $\CH_\CA \otimes \CH_\CB$ and a density operator $\rho^{\CA \CB}$, the “quantum state” of the system $\CA \CB$, defined on $\CH_\CA \otimes \CH_\CB$. A $\{ c \, q \}$ resource is a hybrid classical-quantum system $X \CQ$, the state of which is now described by an *ensemble* $\{ \rho_x, p(x) \}$, with $p(x)$ defined on $\CX$ and the $\rho_x$ being density operators on the Hilbert space $\CH_\CQ$ of $\CQ$. The state of the quantum system $\CQ$ is thus correlated with the classical index $X$. A useful representation of $\{ c \, q \}$ resources, which we refer to as the “enlarged Hilbert space” (EHS) representation, is obtained by embedding the random variable $X$ in some quantum system $\CA$. Then our ensemble $\{ \rho_x, p(x) \}$ corresponds to the density operator $$\rho^{\CA \CQ} = \sum_x p(x) \ket{x}\bra{x}^{\CA} \otimes \rho_x^{\CQ}, \label{cq}$$ where $\{\ket{x}: x \in \CX \}$ is an orthonormal basis for the Hilbert space $\CH_\CA$ of $\CA$. Thus $\{ c \, q \}$ resources may be viewed as a special case of $\{ q \, q \}$ ones. Finally, we have noiseless static resources, which can be classical $(c \, c)$ or quantum $(q \, q)$. The classical resource is a pair of *perfectly* correlated random variables, which is to say that $\CX = \CY$ and $p(x,y) = p(x) \delta(x,y)$ (without loss of generality). We reserve the $[c \, c]$ notation for a *unit* of common randomness (1 rbit), a perfectly correlated pair of binary random variables with a full bit of entropy. The quantum resource is a quantum system $\CA \CB$ in a pure entangled state $\ket{\psi}_{\CA \CB}$. Again, the $[ q \, q ]$ notation denotes a unit of entanglement (1 ebit), a maximally entangled qubit pair $\frac{1}{\sqrt{2}} (\ket{0}_\CA \ket{0}_\CB + \ket{1}_\CA \ket{1}_\CB )$. Since $(c \, c)$ and $[c \, c]$, and $(q \, q)$ and $[q \, q]$ may be inter-converted in an asymptotically lossless way and with an asymptotically vanishing rate of extra resources, for most purposes it suffices to consider the unit resources only. Note the clear hierarchy amongst unit resources: $$[ q \rightarrow q ] \Longrightarrow ( [ c \rightarrow c ] \,\, {\rm or}
\,\, [ q \, q] )
\,\, \Longrightarrow [ c \, c].$$ Any of the conversions ($\Longrightarrow$) can be performed at a unit rate and no additional cost. On the other hand, $[ c \rightarrow c ]$ and $[ q \, q]$ are strictly “orthogonal”: neither can be produced from the other.
[|c|c|]{}\
$ [ c \rightarrow c ]$ & noiseless bit channel\
$ [ q \rightarrow q ]$ & noiseless qubit channel\
[|c|c|]{}\
$ ( c \rightarrow c )$ & general noiseless channel — w.l.o.g. identity on some set\
$ ( q \rightarrow q )$ & noiseless qubit channel — w.l.o.g. identity on some space\
[|c|c|]{}\
$\{ c \rightarrow c \}$ & noisy classical channel, given by a stochastic matrix $W$\
$\{ c \rightarrow q \}$ & quantum state preparation, given by quantum alphabet $\{ \rho_x \}$\
$\{q \rightarrow c \}$ & generalized measurement, given by a POVM $(E_x)$\
$\{ q \rightarrow q \}$ & noisy quantum channel, given by CPTP map $\CN$\
[|c|c|]{}\
$[ c \, c ]$ & maximally correlated bits (1 rbit)\
$[ q \, q ]$ & maximally entangled qubits (1 ebit)\
[|c|c|]{}\
$( c \, c )$ & perfectly correlated random variables $X Y$ with distribution $p(x,y) = p(x) \delta(x,y)$\
$( q \, q )$ & bipartite quantum system $\CA \CB$ in a pure state $\ket{\psi}_{\CA \CB}$\
[|c|c|]{}\
$\{ c \, c \}$ & correlated random variables $X Y$ with joint distribution $p(x,y)$\
$\{ c \, q \}$ & classical-quantum system $X \CQ$ corresponding to an ensemble $\{\rho_x, p(x) \}$\
$\{ q \, q \}$ & bipartite quantum system $\CA \CB$ in a general quantum state $\rho^{\CA \CB}$\
The generality of this classification is illustrated in the table below, where the resource inter-conversion task is identified for a number of examples from the literature. To interpret these “chemical reaction formulas”, there is but one rule to obey: if non-unit resources appear on the right, then all non-unit (dynamical) resources are meant to be fed from some fixed source. For example, $(c \rightarrow c)$ in the output of a transformation symbolizes the noiseless transmission of an implicit classical information source, and likewise $(q \rightarrow q)$ the noiseless transmission of an implicit quantum information source.
[|l|l|]{}\
$[ c \rightarrow c ] \Longrightarrow ( c \rightarrow c )$ & Shannon compression [@shannon]\
$[ q \rightarrow q ] \Longrightarrow ( q \rightarrow q )$ & Schumacher compression [@nono]\
$( q \, q ) \Longrightarrow [ q \, q ] $ & Entanglement concentration [@bbps]\
$[ q \, q ] + [c \leftrightarrow c] \Longrightarrow ( q \, q ) $ & Entanglement dilution [@bbps; @lo; @koren]\
$[ q \, q ] + [c \leftrightarrow c] \Longrightarrow \{ q \, q \} $ & Entanglement cost, entanglement of\
& purification [@bdsw; @hht; @thld]\
$\{ q \, q \} + [c \leftrightarrow c] \Longrightarrow [ q \, q ] $ & Entanglement distillation [@bdsw; @q]\
$\{ c \, c \} + [c \leftrightarrow c]
\Longrightarrow [ c \, c ] $ & Classical common randomness capacity [@ac1; @ac2]\
$\{ c \, q \} + [c \rightarrow c] \Longrightarrow [ c \, c ] $ & [**present paper**]{}\
$\{ q \, q \} + [c \rightarrow c] \Longrightarrow [ c \, c ] $ & [**present paper**]{}\
$\{ c \rightarrow c \} \Longrightarrow [c \rightarrow c] $ & Shannon’s channel coding theorem [@shannon]\
$\{ c \rightarrow q \} \Longrightarrow [c \rightarrow c] $ & HSW theorem (fixed alphabet) [@hsw]\
$\{ q \rightarrow q \} \Longrightarrow [c \rightarrow c] $ & HSW theorem (fixed channel) [@hsw]\
$\{ q \rightarrow q \} \Longrightarrow [q \rightarrow q] $ & Quantum channel coding theorem [@q]\
$[ c \rightarrow c ] + [ q \, q ] \Longrightarrow [q \rightarrow q] $ & Quantum teleportation [@tele]\
$[ q \rightarrow q ] + [ q \, q ] \Longrightarrow [c \rightarrow c] $ & Quantum super-dense coding [@dense]\
$\{ q \rightarrow q \} + [ q \, q ] \Longrightarrow [c \rightarrow c] $ & Entanglement assisted classical capacity [@eac]\
$\{ q \rightarrow q \} + [ q \, q ] \Longrightarrow [q \rightarrow q] $ & Entanglement assisted quantum capacity [@eac]\
$[ c \rightarrow c ] + [ c \, c ] \Longrightarrow \{ c \rightarrow c \} $ & Classical reverse Shannon theorem [@eac]\
$[ c \rightarrow c ] + [ q \, q ] \Longrightarrow \{ q \rightarrow q \} $ & Quantum reverse Shannon theorem [@qrst]\
$\{ q \rightarrow c \} + [ c \, c ] \Longrightarrow \{ q \rightarrow c \} $ & Winter’s POVM compression theorem [@winter]\
$[ c \rightarrow c ] + [ q \, q ] \Longrightarrow \{ c \rightarrow q \} $ & Remote state preparation [@rsp; @devberger]\
$[ c \rightarrow c ] + [ q \rightarrow q ] \Longrightarrow \{ c \rightarrow q \}$ & Quantum-classical trade-off in quantum\
& data compression [@hjw]\
$\{ c \rightarrow q \} + [ c \rightarrow c ] \Longrightarrow ( c \rightarrow c )$ & Classical compression with quantum\
& side information [@pqsw]\
The present paper addresses the static “distillation” (noisy $\Longrightarrow$ noiseless) task of converting noisy quantum correlations $\{q \, q \}$, i.e. bipartite quantum states, into noiseless classical ones $[c \, c]$, i.e. common randomness (CR). Many information theoretical problems are motivated by simple intuitive questions. For instance, Shannon’s channel coding theorem [@shannon] quantifies the ability of a channel to send information. Similarly, our problem stems from the desire to quantify the classical correlations present in a bipartite quantum state. A recent paper by Henderson and Vedral [@hv] poses this very question, and introduces several plausible measures. However, the ultimate criterion for accepting something as an information measure is whether it appears in the solution to an appropriate asymptotic information processing task; in other words, whether it has an *operational* meaning. It is this operational approach that is pursued here.
The structure of our conversion problem is akin to two other static distillation problems: $\{q \, q \} \Longrightarrow [q \, q]$ and $\{c \, c \} \Longrightarrow [c \, c]$. The former goes under the name of “entanglement distillation”: producing maximally entangled qubit states from a large number of copies of $\rho^{\CA \CB}$ with the help of unlimited one-way or two-way classical communication [@bdsw]. Allowing free classical communication in these problems is legitimate since, as already noted, entanglement and classical communication are orthogonal resources. The $\{c \, c \} \Longrightarrow [c \, c]$ problem is one of creating CR from general correlated random variables, which is known to be impossible without additional classical communication. Now allowing free communication is inappropriate, since it could be used to create unlimited CR. There are at least two scenarios that do make sense, however, and have been studied by Ahlswede and Csiszár in [@ac1] and [@ac2], respectively. In the first, one makes a distinction between the distilled key, which is required to be secret, and the classical communication which is public. The second scenario involves limiting the amount of classical communication to a one-way rate of $R$ bits per input state and asking about the maximal CR generated in this way (see [@ac2] for further generalizations). One can thus think of the classical communication as a quasi-catalyst that enables distillation of a part of the noisy correlations, while itself becoming CR; it is not a genuine catalyst because the original dynamic resource is more valuable than the static one. We find that these classical results generalize rather well to our information processing task. The analogue of the first scenario [@ac1] has been treated in an unpublished paper by Winter and Wilmink [@ww]. In this paper we generalize [@ac2]. As a corollary we give one (of possibly many) operationally motivated answers to the question “How much classical correlation is there in a bipartite quantum state?”.
Alice and Bob share $n$ copies (in classical jargon: an $n$ letter word) of a bipartite quantum state $\rho^{\CA \CB}$ . Alice is allowed $nR$ bits of classical communication to Bob. The question is: how much CR can they generate under these conditions? More precisely, Alice is allowed to perform some measurement on her part of $(\rho^{\CA \CB})^{\otimes n}$, producing the outcome random variable $X^{(n)}$ defined on some set $\CX^{(n)}$. Next, she sends Bob $f(X^{(n)})$, where $f: \CX^{(n)} \rightarrow \{ 1,2, \dots, 2^{nR} \}$. The *rate* $R$ signifies the number of bits per letter needed to convey this information. Conditioned on the value of $f(X^{(n)})$, Bob performs an appropriate measurement with outcome random variable $Y^{(n)}$. We say that a pair of random variables $(K, L)$, both taking values in some set $\CK$, is *permissible* if $$\begin{aligned}
K\! & = & K(X^{(n)}) \nonumber\\
L & = & L(Y^{(n)}, f(X^{(n)})). \nonumber \end{aligned}$$ A permissible pair $(K, L)$ represents *$\epsilon$-common randomness* if $$\pr( K \neq L) \leq \epsilon. \label{komon}$$ In addition we require the technical condition that $K$ and $L$ are in the same set satisfying $$|\CK| \leq 2^{c' n} \label{kardi}$$ for some constant $c'$. Thus, strictly speaking, our CR is of the $(c \, c)$ type, but it can easily be converted to $[c \, c]$ CR via local processing (intuitively, we would like to say “Shannon data compression”, only that the randomness thus obtained is not uniformly distributed but “almost uniformly” in the sense of the AEP [@coverthomas]). A CR-rate pair $(C, R)$ of common randomness $C$ and classical side communication $R$ is called *achievable* if for all $\epsilon, \delta > 0$ and all sufficiently large $n$ there exists a permissible pair $(K,L)$ satisfying (\[komon\]) and (\[kardi\]), such that $$\frac{1}{n}H(K) \geq C - \delta.$$ We define the CR-rate function $C(R)$ to be $$C(R) = \sup \{C: (C, R) \,\, {\rm {is}} \,\, {\rm{ achievable}} \}.$$ One may also formulate the $C(R)$ problem for Alice and Bob sharing some classical-quantum resource $X \CQ$ rather than the fully quantum $\CA \CB$. In this case Alice’s measurement is omitted since she already has the classical random variable $X^{(n)} = X^n$. In the original classical problem [@ac2] Alice and Bob share the classical resource $XY$. There Bob’s measurement is also omitted, since he already has the random variable $Y^{(n)} = Y^n$. Finally, we introduce the *distillable* CR as $$D(R) = C(R) - R,$$ the amount of CR generated in excess of the invested classical communication rate. This suggests $D(\infty)$ as a natural asymmetric measure of the total classical correlation in the state. As we shall see, the above turns out to be equivalent to the asymptotic (“regularized”) version of $C_{\CA} (\rho^{\CA \CB})$, as defined in [@hv].
The paper is organized as follows. First we consider the special case of $\{ c \, q \}$ resources for which evaluating $C(R)$ reduces to a single-letter optimization problem. Then we consider the $\{ q \, q \}$ case which builds on it rather like the fixed channel Holevo-Schumacher-Westmoreland (HSW) theorem builds on the fixed alphabet version.
Classical-quantum correlations
==============================
In this section we shall assume that Alice and Bob share $n$ copies of some $\{c, q \}$ resource $X \CQ$, defined by the ensemble $\CE = \{ \rho_x, p(x) \}$ or, equivalently, equation ($\ref{cq}$). Alice knows the random variable $X$ and Bob possesses the $d$-dimensional quantum system $\CQ$. In what follows we shall make use of the EHS representation to define various information theoretical quantities for classical-quantum systems. The von Neumann entropy of a quantum system $\CA$ with density operator $\rho^\CA$ is defined as $H(\CA) = - \tr \rho^\CA \log \rho^\CA$. For a bipartite quantum system $\CA \CB$ define formally the quantities *conditional von Neumann entropy* $$H(\CB| \CA) = H(\CA \CB) - H(\CA),$$ and *quantum mutual information* (introduced earlier as “correlation entropy” by Stratonovich ) $$I(\CA; \CB) = H(\CA) + H(\CB) - H(\CA \CB) = H(\CB) - H(\CB| \CA).$$ For general states of $\CA \CB$ we introduce these quantities without implying an operational meaning for them. (Though the quantum mutual information appears in the entanglement assisted capacity of a quantum channel [@eac], and the negative of the conditional entropy, known as the coherent information appears in the quantum channel capacity [@bdsw; @q].)
Introducing these quantities in formal analogy has the virtue of allowing us to use the familiar identities and many of the inequalities known for classical entropy. This to us seems better than claiming any particular operational connection (which, by all we know about quantum information today, cannot be unique anyway).
Subadditivity of von Neumann entropy implies $I(\CA; \CB) \geq 0$. For a tripartite quantum system $\CA \CB \CC$ define the quantum conditional mutual information $$I(\CA; \CB| \CC) = H(\CA| \CC) + H(\CB| \CC) - H(\CA \CB| \CC)
= H(\CA\CC) + H(\CB\CC) - H(\CA\CB\CC) - H(\CC).$$ Strong subadditivity of von Neumann entropy implies $I(\CA; \CB| \CC) \geq 0$. A commonly used identity is the chain rule $$I(\CA; \CB \CC) = I(\CA; \CB) + I(\CA; \CC | \CB).$$ Notice that for classical-quantum correlations ($\ref{cq}$) the von Neumann entropy $H(\CA)$ is just the Shannon entropy $H(X)$ of $X$. We define the mutual information of a classical-quantum system $X \CQ$ as $I(X; \CQ) = I(\CA; \CQ)$. Notice that this is no other than the Holevo information of the ensemble $\CE$ $$\chi(\CE) = H\left( \sum_x p(x) \rho_x \right) - \sum_x p(x) H(\rho_x).$$ (Even though Gordon and Levitin have written down this expression much earlier — see [@holevo:coding] for historical references —, we feel that the honour should be with Holevo for his proof of the information bound named duly after him [@holevo].)
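As a small numerical illustration of the identity $I(X;\CQ)=\chi(\CE)$, the following sketch builds the EHS state ($\ref{cq}$) for a two-state qubit ensemble and checks that the quantum mutual information of $\CA\CQ$ reproduces the Holevo information; the particular ensemble is of course an arbitrary choice.

```python
import numpy as np

def von_neumann_entropy(rho):
    """H(rho) in bits."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

def partial_trace(rho, dims, keep):
    """Reduced state of subsystem `keep` (0 = A, 1 = Q) of a bipartite state."""
    dA, dQ = dims
    rho4 = rho.reshape(dA, dQ, dA, dQ)
    return np.trace(rho4, axis1=1, axis2=3) if keep == 0 \
        else np.trace(rho4, axis1=0, axis2=2)

def ehs_state(probs, states):
    """rho^{AQ} = sum_x p(x) |x><x|^A  (tensor)  rho_x^Q."""
    dA, dQ = len(probs), states[0].shape[0]
    rho = np.zeros((dA * dQ, dA * dQ), dtype=complex)
    for x, (p, rho_x) in enumerate(zip(probs, states)):
        proj = np.zeros((dA, dA)); proj[x, x] = 1.0
        rho += p * np.kron(proj, rho_x)
    return rho

def holevo_chi(probs, states):
    """chi(E) = H(sum_x p(x) rho_x) - sum_x p(x) H(rho_x)."""
    avg = sum(p * r for p, r in zip(probs, states))
    return von_neumann_entropy(avg) - sum(
        p * von_neumann_entropy(r) for p, r in zip(probs, states))

# Example ensemble: {|0>, |+>} with equal probabilities.
ket0 = np.array([1.0, 0.0]); ketp = np.array([1.0, 1.0]) / np.sqrt(2.0)
states = [np.outer(ket0, ket0), np.outer(ketp, ketp)]
probs = [0.5, 0.5]

rho_AQ = ehs_state(probs, states)
I_AQ = (von_neumann_entropy(partial_trace(rho_AQ, (2, 2), 0))
        + von_neumann_entropy(partial_trace(rho_AQ, (2, 2), 1))
        - von_neumann_entropy(rho_AQ))
print(I_AQ, holevo_chi(probs, states))   # both ~0.60 bits
```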
Using the EHS representation for some tripartite classical-quantum system $U X \CQ$, strong subadditivity [@lieb] gives inequalities such as $I(U;X|\CQ) \geq 0$ or $I(U; \CQ | X) \geq 0$ and the chain rule implies, e.g., $$I(U;X \CQ) = I(U;\CQ) + I(U;X |\CQ).$$ We shall take such formulae for granted throughout the paper.
An important classical concept is that of a Markov chain of random variables $U \rightarrow X \rightarrow Y$ whose probabilities obey $\pr\{Y = y | X = x, U = u\} = \pr\{Y = y | X = x\}$, which is to say that $Y$ depends on $U$ only through $X$. Analogously we may define a classical-quantum Markov chain $U \rightarrow X \rightarrow \CQ$ associated with an ensemble $\{ \rho_{ux}, p(u,x) \}$ for which $\rho_{ux} = \rho_x$. Such an object typically comes about by augmenting the system $X \CQ$ by the random variable $U$ (classically) correlated with $X$ via a conditional distribution $Q(u|x) = \pr\{U = u | X = x\}$. In the EHS representation this corresponds to the state $$\rho^{\CZ \CA \CQ} = \sum_x p(x) \sum_u Q(u|x) \ket{u}\bra{u}^{\CZ} \otimes \ket{x}\bra{x}^{\CA} \otimes \rho_x^{\CQ}. \label{drei}$$ We are now ready to state our main result.
\[t1\] $$C(R) = {C}^*(R) = R + D^*(R), \label{main0}$$ where $$D^*(R)= \sup_{U|X} \bigl\{ I(U;\CQ) \,\big|\, I(U;X) - I(U;\CQ) \leq R \bigr\}. \label{main}$$ The supremum is to be understood as one over all conditional probability distributions $p(u|x)$ for the random variable $U$ conditioned on $X$, with finite range $\CU$. We may in fact restrict to the case $|\CU| \leq |\CX| + 1$, which in particular implies that the $\sup$ is actually a $\max$.
The proof of the theorem is divided into two parts: showing that $C^*(R)$ is an upper bound for $C(R)$ (commonly called the “converse” part), and then providing a direct coding scheme demonstrating its achievability. We start with a couple of lemmas.
\[t6\] $D^*(R)$, and hence ${C}^*(R)$, is monotonically increasing and concave; the latter meaning that for $R_1,R_2\geq 0$ and $0\leq\lambda\leq 1$, $$\lambda {D}^*(R_1)+(1-\lambda){D}^*(R_2)
\leq {D}^*\bigl( \lambda R_1 +(1-\lambda)R_2 \bigr).$$
$\mathbf{Proof } $ The monotonicity of $D^*(R)$ is obvious from its definition. To prove concavity, choose $U_1$, $U_2$ feasible for $R_1$, $R_2$, respectively: in particular, $$\begin{aligned}
I(U_1;X)- I(U_1;\CQ) & \leq & R_1, \nonumber\\
I(U_2;X)- I(U_2;\CQ) & \leq & R_2. \nonumber\end{aligned}$$ Then, introducing the new random variable $$U = \left\{ \begin{array}{ll}
(1,U_1) & \rm{with \,\, probability \,\,}\lambda, \\
(2,U_2) & \rm{with \,\, probability \,\,}1-\lambda,
\end{array} \right.$$ we have $$\begin{aligned}
\lambda I(U_1;X)+(1-\lambda)I(U_2;X) & = & I(U;X), \nonumber\\
\lambda I(U_1;\CQ)+(1-\lambda)I(U_2;\CQ) & = & I(U;\CQ). \nonumber\end{aligned}$$ Thus $$\lambda I(U_1;\CQ)+(1-\lambda)I(U_2;\CQ) = I(U;\CQ) \leq D^*\bigl( \lambda R_1 +(1-\lambda) R_2 \bigr),$$ the last step from $$I(U;X) - I(U;\CQ) \leq \lambda R_1 +(1-\lambda) R_2,$$ which makes $U$ feasible for the rate $\lambda R_1 +(1-\lambda) R_2$; taking $U_1$, $U_2$ (close to) optimal for $R_1$, $R_2$ yields the claimed concavity.
------------------------------------------------------------------------
Consider the $n$ copy classical-quantum system $X^n \CQ^n = X_1 \CQ_1 X_2 \CQ_2
\dots X_n \CQ_n$, in the state given by the $n$th tensor power of the ensemble $\{ \rho_x, p(x) \}$. Define now $$D_n^*(R)= \sup_{U|X^n} \Bigl\{ \tfrac{1}{n} I(U;\CQ^n) \,\Big|\, \tfrac{1}{n}\bigl( I(U;X^n) - I(U;\CQ^n) \bigr) \leq R \Bigr\}. \label{nkopi}$$ It turns out that this expression may be “single-letterized”: $$D_n^*(R) = D^*(R).$$ We prove slightly more by showing the following lemma, which implies the above equality by iterative application and then using concavity of $D^*$ in $R$ (lemma \[t6\]):
\[t7\] For two ensembles $\CE_1=\{\rho_x,p(x)\}$ ($x\in{\cal X}_1$) and $\CE_2=\{\sigma_{x'},p'(x')\}$ ($x'\in{\cal X}_2$), denote their respective $D^*$ functions $D^*(\CE_1,R)$ and $D^*(\CE_2,R)$. Then $$D^*(\CE_1\otimes\CE_2,R) = \max \bigl\{ D^*(\CE_1,R_1)+D^*(\CE_2,R_2)
\,|\, R_1+R_2=R \bigr\}.$$
[**Proof**]{} Let $\CE_1$ and $\CE_2$ correspond to the classical-quantum systems $X_1 \CQ_1$ and $X_2 \CQ_2$, respectively. As before, we augment the joint system by the random variable $U$ via the conditional distribution $Q(u|x x')$, so that $U X_1 X_2 \CQ_1 \CQ_2$ obeys the Markov property $U \rightarrow X_1X_2 \rightarrow \CQ_1\CQ_2$. In the EHS representation we have $$\rho^{\CZ\CA_1\CA_2\CQ_1\CQ_2} = \sum_{u,x,x'} p(x)p'(x')Q(u|xx')
\ket{u}\bra{u}^{\CZ}\otimes
\ket{x}\bra{x}^{\CA_1}\otimes\ket{x'}\bra{x'}^{\CA_2}\otimes
\rho_x^{\CQ_1}\otimes\sigma_{x'}^{\CQ_2}.$$ By definition, $D^*(\CE_1\otimes\CE_2,R)$ equals $I(U;\CQ_1\CQ_2)$ maximized over all variables $U$ such that $I(U;X_1X_2) - I(U;\CQ_1\CQ_2) \leq R$.
Now the inequality “$\geq$” in the lemma is clear: for we could choose $U_1$ optimal for $\CE_1$ and $R_1$ and $U_2$ optimal for $\CE_2$ and $R_2$, and form $U=U_1U_2$. By elementary operations with the definition of $D^*$ we see that $D^*(\CE_1,R_1)+D^*(\CE_2,R_2)$ is achieved.
For the reverse inequality, let $U$ be any variable such that $I(U;X_1X_2) - I(U;\CQ_1\CQ_2) \leq R$. First note that the Markov property $U \rightarrow X_1X_2 \rightarrow \CQ_1\CQ_2$ implies $I(U;X_1X_2) = I(U; X_1\CQ_1 X_2\CQ_2)$, which can easily be verified in the EHS representation. Intuitively, possessing $\CQ_1\CQ_2$ in addition to knowing $X_1X_2$ conveys no extra information about $U$. Hence, by the chain rule, $$I(U;X_1X_2) - I(U;\CQ_1\CQ_2) = I(U;X_1X_2|\CQ_1\CQ_2).$$ Now, using the chain rule and once more the fact that the content of $\CQ_1$ is a function of $X_1$, we estimate $$\begin{aligned}
R & \geq & I(U;X_1X_2|\CQ_1\CQ_2) \nonumber\\
& = & I(U;X_1|\CQ_1\CQ_2) + I(U;X_2|\CQ_1\CQ_2 X_1) \nonumber\\
& = & I(U;X_1|\CQ_1\CQ_2) + I(U;X_2|\CQ_2 X_1). \nonumber\\
& \geq & I(U;X_1|\CQ_1) + I(U;X_2|\CQ_2 X_1). \nonumber\end{aligned}$$ Here the inequality of the last line is obtained by the following reasoning: $$\begin{aligned}
I(U;X_1|\CQ_1\CQ_2) & = & I(U\CQ_2;X_1|\CQ_1) - I(X_1;\CQ_2|\CQ_1) \nonumber\\
& \geq & I(U;X_1|\CQ_1) - 0, \nonumber\end{aligned}$$ using strong subadditivity and the fact that $X_1\CQ_1-X_2\CQ_2$ is in a product state.
Hence there are $R_1$ and $R_2$ summing to $R$ for which $$\begin{aligned}
I(U;X_1)-I(U;\CQ_1) = I(U;X_1|\CQ_1) & \leq & R_1, \label{R1} \\
I(U;X_2|X_1)-I(U;\CQ_2|X_1) = I(U;X_2|\CQ_2 X_1) & \leq & R_2. \label{R2}\end{aligned}$$ On the other hand, $$\begin{aligned}
I(U;\CQ_1\CQ_2) & = & I(U;\CQ_1) + I(U;\CQ_2|\CQ_1) \nonumber\\
& = & I(U;\CQ_1) + I(U\CQ_1;\CQ_2) - I(\CQ_1;\CQ_2) \nonumber\\
& \leq & I(U;\CQ_1) + I(UX_1;\CQ_2) \nonumber\\
& = & I(U;\CQ_1) + I(X_1;\CQ_2) + I(U;\CQ_2|X_1) \nonumber\\
& = & I(U;\CQ_1) + I(U;\CQ_2|X_1), \label{I1I2}\end{aligned}$$ using the chain rule repeatedly; the inequality comes from the quantum analogue of the familiar *data processing inequality* [@ahlswede:loeber], another consequence of the content of $\CQ_1$ being a function of $X_1$. With (\[R1\]) and by definition of $D^*$, $I(U;\CQ_1) \leq D^*(\CE_1,R_1)$. But also, with (\[R2\]), $I(U;\CQ_2|X_1) \leq D^*(\CE_2,R_2)$, observing that the conditional mutual information in (\[I1I2\]) as well as in (\[R2\]) are probability averages over unconditional mutual informations, and invoking the concavity of $D^*$ (lemma \[t6\]).
Hence, $$I(U;\CQ_1\CQ_2) \leq D^*(\CE_1,R_1)+D^*(\CE_2,R_2),$$ and since $U$ was arbitrary, we are done.
------------------------------------------------------------------------
[**Proof of Theorem 1 (converse)** ]{} For a given blocklength $n$, measurement on Bob’s side will turn the classical-quantum correlations into classical ones, and $\CQ^n$ gets replaced by the measurement outcome random variable $Y^{(n)}$. Now we can apply the classical converse [@ac2] to the classical random variable pair $(X^n, Y^{(n)})$ $$C(R) \leq R + \max_{U|X^{n}} \left\{ \frac{1}{n} I(U;Y^{(n)}) \, | \,
I(U;X^n) - I(U;Y^{(n)}) \leq n R \right\}.$$ By the Holevo inequality [@holevo] $$I(U;Y^{(n)}) \leq I(U;{\CQ^n}),$$ this can be further bounded by $C^*_n(R)$, which is, by lemma \[t7\], equal to $C^*(R)$. To complete the proof, we need to show that the supremum in (\[main\]) can be restricted to a set $\CU$ of cardinality $|\CU| \leq |\CX| + 1$. This is a standard consequence of Caratheodory’s theorem, and the proof runs in exactly the same way as that in, e.g., [@hjw].
------------------------------------------------------------------------
We shall need some auxiliary results before we embark on proving the achievability of $C^*(R)$.
\[t2\] The $(C,R)$ pair $(H(X), H(X|\CQ))$ is achievable when Alice and Bob share the classical-quantum system $X \CQ$.
$\mathbf{Proof } $ This follows from the classical-quantum Slepian-Wolf result [@pqsw] which states that, for any $\epsilon, \delta > 0$ and sufficiently large $n$, the classical communication rate from Alice to Bob sufficient for Bob to reproduce $X^n$ with error probability $\leq \epsilon$ is $H(X|\CQ) + \delta$.
------------------------------------------------------------------------
$\mathbf{Remark } $ Lemma \[t2\] already yields the value of $$D(\infty) = D(H(X|\CQ)) = H(X) - H(X|\CQ) =
I(X; \CQ)$$ for the classical-quantum system $X \CQ$. This justifies our interpretation of $D(\infty)$ as the amount of classical correlation in $X \CQ$.
\[t3\] Let $\sigma$ be a state in a $D$-dimensional Hilbert space. Then $\tr(\sigma B) = 1 - \epsilon$ for some operator $0 \leq B \leq \1$ implies $$H(\sigma) \leq 1 + \epsilon \log D + (1 - \epsilon) \log (\tr B + 1). \label{daga}$$
$\mathbf{Proof } $ Diagonalize $\sigma$ as $\sigma = \sum_{j = 1}^D p_j \ket{j} \bra{j}$ with $p_1 \leq p_2 \leq \dots \leq p_D$ and define $b_j = \bra{j} B \ket{j}$, so that $$\sum_j p_j b_j = 1 - \epsilon \label{nida}$$ and $\tr B = \sum_j b_j$. Further define the random variable $J$ with ${\rm{Pr}}\{J = j\} = p_j$, for which $H(\sigma) = H(J)$. Consider the vector $\tilde{b}^D$ which minimizes $\sum_j b_j$ subject to constraints (\[nida\]) and $0 \leq b_j \leq 1$. This is a trivial linear programming problem, solved at the boundary of the allowed region for the $b_j$. It is easily verified that the solution is given by $$\begin{aligned}
& \tilde{b}_1 = \dots = \tilde{b}_{k-1} = 0 , \nonumber\\
& 0 \leq \tilde{b}_k \leq 1, \nonumber\\
& \tilde{b}_{k+1} = \dots = \tilde{b}_D = 1,\nonumber\end{aligned}$$ for some $1 \leq k \leq D$ for which (\[nida\]) is satisfied. Note that $$D - k \leq \sum_j \tilde{b}_j \leq \tr B$$ and $\sum_{j = 1}^{k - 1} p_j \leq \epsilon$. Define the indicator random variable $I(J)$ $$I(J) = \left\{ \begin{array}{ll}
1 & J \geq k, \\
0 & \rm{otherwise.}
\end{array} \right.$$ We then have $$\begin{aligned}
H(J) & = & H(I) + H(J|I) \nonumber\\
& \leq & 1 + {\rm{Pr}}\{I = 0\} \log D +
{\rm{Pr}}\{I = 1\} \log (D + 1 - k) \nonumber\\
& \leq & 1 + \epsilon \log D + (1 - \epsilon) \log (\tr B + 1),\nonumber\end{aligned}$$ which proves the lemma.
------------------------------------------------------------------------
In order to understand the next two results, some background on typical sets $\CT^n_{U,\delta}$, conditionally typical sets $\CT^n_{X|U,\delta}(u^n)$, typical subspaces $\Pi^n_{\CQ, \delta}$ and conditionally typical subspaces $\Pi^n_{\CQ|U, \delta}(u^n)$ is needed [@ck; @nono; @winter]. This is provided in the Appendix.
\[t4\] For every $\epsilon, \delta > 0$ and set $\CE \subset \CX^n$ with ${\rm{Pr}}\{X^n \in \CE\} \geq \epsilon$, there exists a subset $\CF \subset \CE$ and a sequence $u^n \in \CT^n_{U, \delta}$ such that $$\CF \subset \CT^n_{X|U, \delta}(u^n), \qquad \left| \frac{1}{n} \log |\CF| - H(X|U) \right| \leq \delta, \label{druga}$$ whenever $n \geq n_1(|\CU|,|\CX|,\epsilon, \delta)$. In addition, whenever $n \geq n_2(|\CU|,|\CX|,d,\epsilon, \delta)$, $$\frac{1}{n} H(\CQ^n| X^n \in \CF) \leq H(\CQ|U) + \delta. \label{treca}$$
$\mathbf{Proof } $ Clearly, it suffices to prove the claim for some sufficiently small $\epsilon$. The first claim (\[druga\]) is a purely classical result and corresponds to lemma 3.3.3 of Csiszár and Körner [@ck]. Thus it remains to demonstrate (\[treca\]). We shall need the following facts from the Appendix. For sufficiently large $n \geq n_0(|\CU|,|\CX|,d,\delta',\epsilon)$, for $x^n \in \CT^n_{X|U,\delta'}(u^n)$ and $u^n \in \CT^n_{U, \delta'}$: $$\tr(\rho_{u^n x^n} \Pi^n_{\CQ | U,(|\CX| + 1)\delta'}(u^n)) \geq 1 - \epsilon, \label{babel}$$ and $$\tr \Pi^n_{\CQ | U,(|\CX| + 1)\delta'}(u^n) \leq 2^{n H(\CQ|U) + (2 + |\CX|) c \delta'}. \label{mabel}$$ Since $\rho_{u^n x^n} = \rho_{x^n}$, it follows from the linearity of trace and (\[babel\]) that $$\tr(\rho_{\CF} \Pi^n_{\CQ | U,(|\CX| + 1)\delta'}(u^n)) \geq 1 - \epsilon,$$ where $$\rho_{\CF} = \sum_{x^n} {\rm{Pr}}\{X^n = x^n|X^n \in \CF\} \rho_{x^n}.$$ Finally, combining with (\[mabel\]) and lemma \[t3\]: $$\frac{1}{n} H(\CQ^n| X^n \in \CF) = H(\rho_{\CF}) \leq H(\CQ|U) + \frac{1}{n} +
\epsilon \log d + c \delta'.$$ For sufficiently small $\epsilon \leq \delta'$, and setting $n_2 = \max \{ n_0, n_1, \delta'^{-1} \}$, (\[treca\]) follows with $$\delta' = \frac {\delta}{(2 + |\CX|)c + 1 + \log d}.$$
------------------------------------------------------------------------
\[t5\] For every $\epsilon, \delta > 0$ and $n \geq n_2(|\CU|,|\CX|,d,\delta,\epsilon)$ there exists a function $g: \CX^n \rightarrow \CU^n$ such that $$\frac{1}{n} H(\CQ^n| g(X^n)) \leq H(\CQ|U) + \delta, \label{ceta}$$ $$\left| \frac{1}{n} H(X^n| g(X^n)) - H(X|U) \right| \leq \delta. \label{peta}$$
$\mathbf{Proof } $ Again it suffices to prove the claim for sufficiently small $\epsilon$. By an iterative application of lemma \[t4\] we can find disjoint subsets $\CF_1, \dots, \CF_M$ of $\CX^n$ such that $${\rm{Pr}}\{X^n \notin \bigcup_{\alpha = 1}^M \CF_\alpha \} \leq \epsilon$$ and for some sequences $u^n_\alpha \in \CT^n_{U, \delta}$, $\alpha = 1, \dots, M$ $$\left | \frac{1}{n} \log |\CF_\alpha| - H(X|U) \right | \leq \frac{\delta}{2}$$ and $$\frac{1}{n} H(\CQ^n| X^n \in \CF_\alpha) \leq H(\CQ|U) + \frac{\delta}{2}.$$ Define, choosing some $u^n_0$ different from the $u^n_\alpha$, $$g({x^n}) = \left\{ \begin{array}{ll}
u^n_\alpha & { {x^n} \in \CF_\alpha}\\
u^n_0 & {\rm{otherwise.}}
\end{array} \right.$$ Then $$\frac{1}{n} H(\CQ^n| g(X^n)) \leq H(\CQ|U) + \frac{\delta}{2} + \epsilon H(\CQ)$$ and $$\left | \frac{1}{n} H(X^n| g(X^n)) - H(X|U) \right | \leq
\frac{\delta}{2} + \epsilon H(X).$$ Finally, choose $\epsilon \leq \min \{ \frac{\delta}{2 H(\CQ)}, \frac{\delta}{2 H(X)} \}$.
------------------------------------------------------------------------
We are now in a position to prove the direct coding part of theorem \[t1\].
[**Proof of Theorem 1 (coding)** ]{} We first show that $(C,R) = (I(U;X), I(U;X) - I(U; \CQ))$ is achievable. We follow the classical proof [@ac2] closely. Define $K = g(X^n)$. Then $$\frac{1}{n} H(X^n|K) = H(X) - \frac{1}{n} H(K)$$ and (\[peta\]) imply $$\left| \frac{1}{n} H(K) - I(U;X) \right| \leq \delta. \label{sesta}$$ Also by (\[ceta\]) and (\[sesta\]) we have $$\frac{1}{n} (H(K) - I(K; \CQ^n)) \leq I(U;X) - I(U; \CQ) + 2 \delta.
\label{sedma}$$ Note that lemma \[t2\] applied to the *supersystem* $K \CQ^n$ guarantees the achievability of $(H(K), H(K) - I(K; \CQ^n))$. Hence, for sufficiently large (super)blocklength $k$ there exists a mapping $f(K^k)$ of rate $\frac{1}{n k} \log |f| \leq I(U;X) - I(U; \CQ) + 2 \delta $ (here $|f|$ is the image size of $f$), which allows $K^k$ to be reproduced with $\epsilon$ error. This yields an amount of $\epsilon$-randomness bounded from below by $nk (I(U;X) - \delta)$. However, to prove the claim, we need to show that the rate is bounded from above by exactly $I(U;X) - I(U; \CQ)$. This is accomplished by setting the blocklength to $N = n k (1 + 2 \delta \kappa)$, where $\kappa = \frac{1}{I(U;X) - I(U; \CQ)}$, and ignoring the last $ 2 \delta \kappa n k$ source outputs. Then indeed $$R = \frac{1}{N} \log |f| \leq I(U;X) - I(U; \CQ)$$ while $$C = \frac{1}{N} H(K^k) \geq I(U;X) - \delta(\kappa' + 2 \kappa),$$ with $\kappa' = \frac{1}{I(U;X)}$.
If now the classical communication rate $R'$ is available, we may use the procedure outlined above to achieve a CR rate of $I(U;X)$ while communicating at rate $R=I(U;X)-I(U;\CQ)$, at least if $R\leq R'$. But of course the “surplus” $R'-R$ is then still free to generate common randomness trivially by Alice transmitting locally generated fair coin flips. This shows that at communication rate $R'$, CR at rate $$C' = R'-R + I(U;X) = R' + I(U;\CQ)$$ can be generated.
------------------------------------------------------------------------
$\mathbf{Remark } $ For $R\leq H(X)-I(X;\CQ)=H(X|\CQ)$, the maximization constraint in (\[main\]) may be replaced by an equality, i.e., $$D^*(R) = \tilde{D}(R), \label{cayuga}$$ where $$\tilde{D}(R)= \max_{U|X} \{ I(U;\CQ)\, | \, I(U;X) - I(U;\CQ) = R \}.$$ To see this, note that $$D^*(R) = \max_{0 \leq R' \leq R} \tilde{D}(R'),$$ so it suffices to show that $\tilde{D}(R)$ is monotonically increasing. This, in turn, holds if $\tilde{D}(R)$ is concave and achieves its maximum for $R = H(X|\CQ)$. The concavity proof is virtually identical to the proof of lemma \[t6\]. The second property follows from $$I(U;\CQ) \leq I(UX;\CQ) = I(X;\CQ)$$ and $I(U;X|\CQ) \leq H(X|\CQ)$.
Note that for $R \geq H(X|\CQ)$, the function $D^*(R)$ is simply constant (and equal to $ D(\infty) = I(X;\CQ)$).
Having established (\[cayuga\]), we shall now relate $D^*(R)$ to the quantum compression with classical side information trade-off curve $Q^*(R)$ of Hayden, Jozsa and Winter [@hjw]. For a classical-quantum system $X \CQ$, given by the pure state ensemble $\{ \ket{\varphi_x}, p(x) \}$, and $R\leq H(X)$, $$Q^*(R) = \min_{U|X} \{ H(\CQ|U) \, | \, I(U;X) = R \}
= H(\CQ) - \max_{U|X} \{ I(U;\CQ) \, | \, I(U;X) = R \}.$$ (For rates $R>H(X)$, $Q^*(R)=0$.)
The following relation to our $C^*(R)$ is now easily verified: $$D^*(x) + Q^*\bigl(D^*(x) + x\bigr) = H(\CQ). \label{keuka}$$ Indeed, for $x\leq H(X|\CQ)$, and a maximizing variable $U$, $x=I(U;X)-I(U;\CQ)$ and $D^*(x)=I(U;\CQ)$. Then, $x+D^*(x)=I(U;X)$, so $U$ is feasible for $Q^*(x+D^*(x))$ and indeed optimal, using once more the monotonicity of $\widetilde{D}$.
We should remark, however, that to the best of our knowledge, eq. (\[keuka\]) has no simple operational meaning. Still, it allows us to “import” the numerically calculated trade-off curves from [@hjw] for various ensembles of interest: the curves are then parametrized via $s=x+D^*(x)$ and $x$.
Figure 1 (cf. [@hjw], figure 2) shows the distillable CR-rate trade-off curve $D(R) = D^*(R)$ for the simple two-state ensemble $\CE$ given by the non-orthogonal pair $\{ \ket{0}, \frac{1}{\sqrt{2}}(\ket{0} +\ket{1}) \}$, each occurring with probability $\frac{1}{2}$. This curve is not much better than the linear lower bound obtained by time-sharing between $(0,0)$ and the Slepian-Wolf point $(1 - H(\CE), H(\CE))$, where $H(\CE)$ denotes the entropy of the average density matrix of the ensemble $\CE$.
Figure 2 (cf. [@hjw], figure 4) corresponds to the three state ensemble $\CE_3$ consisting of the states $\ket{\varphi_1} = \ket{0}, \ket{\varphi_2} = \frac{1}{\sqrt{2}}
(\ket{0} +\ket{1})$ and $\ket{\varphi_3} = \ket{2}$ with equal probabilities. Without any communication it is already possible to extract $h_2(\frac{1}{3})$ bits of CR, due to Bob’s ability to perfectly distinguish whether his state is in $\{ \ket{\varphi_1}, \ket{\varphi_2} \}$ or $ \{ \ket{\varphi_3} \}$. The curve then follows a rescaled version of figure 1 to meet the Slepian-Wolf point $(H(\frac{1}{3}, \frac{1}{3}, \frac{1}{3}) - H(\CE_3), H(\CE_3))$.
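The two endpoints just mentioned are easy to evaluate numerically; here is a short sketch (assuming NumPy and base-2 logarithms; the helper names are ours):

```python
import numpy as np

def H(rho):
    """Base-2 von Neumann entropy."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log2(w)))

def h2(p):
    """Binary Shannon entropy."""
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

# ensemble E_3: |0>, (|0>+|1>)/sqrt(2), |2>, each with probability 1/3
v1 = np.array([1, 0, 0.0])
v2 = np.array([1, 1, 0.0]) / np.sqrt(2)
v3 = np.array([0, 0, 1.0])
avg = sum(np.outer(v, v) for v in (v1, v2, v3)) / 3

print(f"CR for free (R = 0): h2(1/3) = {h2(1/3):.3f} bits")
print(f"Slepian-Wolf point:  R = {np.log2(3) - H(avg):.3f}, D = {H(avg):.3f}")
```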
Our third example is the parametrized BB84 ensemble $\CE_{\rm BB}(\theta)$, defined by the states $$\begin{aligned}
\ket{\varphi_1} & = & \ket{0} \nonumber\\
\ket{\varphi_2} & = & \cos \theta
\ket{0} + \sin \theta \ket{1} \nonumber\\
\ket{\varphi_3} & = & \ket{1} \nonumber\\
\ket{\varphi_4} & = & - \sin \theta \ket{0} + \cos \theta \ket{1} ,\nonumber\end{aligned}$$ each chosen with probability $\frac{1}{4}$. The $D(R)$ curve for $\theta = \pi/8$, shown in figure 3 (cf. [@hjw], figure 5), has a special point at which the slope is discontinuous. For $0 < \theta \leq \pi/4$, $\CE_{\rm BB}(\theta)$ has a natural coarse graining to the ensemble consisting of two equiprobable mixed states, $\frac{1}{2}(\ket{\varphi_1}\bra{\varphi_1} + \ket{\varphi_2}\bra{\varphi_2})$ and $\frac{1}{2}(\ket{\varphi_3}\bra{\varphi_3} + \ket{\varphi_4}\bra{\varphi_4})$. The special point is precisely the Slepian-Wolf point for this coarse-grained ensemble, treating $\ket{\varphi_1}$ and $\ket{\varphi_2}$, and $\ket{\varphi_3}$ and $\ket{\varphi_4}$ as indistinguishable.
Finally, figure 4 (cf. [@hjw], figure 5 and [@devberger]) shows $D(R)$ for the uniform qubit ensemble, a uniform distribution of pure states over the Bloch sphere. Strictly speaking, theorem 1 should be extended to include continuous ensembles; we shall not do this here, but merely conjecture it and refer the reader to [@hjw] for an example of such an extension. The curve approaches $D = 1$ only in the $R \rightarrow \infty$ limit. It has an explicit parametrization computed from (\[keuka\]) and [@devberger]: $$\begin{aligned}
R & = & h_2 \left(\frac{1}{\lambda} - \frac{1}{e^\lambda - 1} \right) +
\frac{\lambda}{e^\lambda - 1} - 2 + \log \left(\frac{\lambda e^\lambda}{e^{\lambda} - 1} \right) \nonumber\\
D(R) & = & 1 - h_2 \left(\frac{1}{\lambda} - \frac{1}{e^\lambda - 1} \right) \nonumber\end{aligned}$$ for $\lambda \in (0, \infty)$, where $h_2(p) = - p \log p
- (1 - p) \log (1 - p)$ is the binary Shannon entropy.
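The parametrization is straightforward to trace numerically; a minimal sketch (assuming NumPy, base-2 logarithms, and that all rates are in bits):

```python
import numpy as np

def h2(p):
    """Binary Shannon entropy (base 2)."""
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def curve_point(lam):
    """(R, D) on the trade-off curve for the uniform qubit ensemble."""
    p = 1.0 / lam - 1.0 / np.expm1(lam)
    R = h2(p) + lam / np.expm1(lam) - 2 + np.log2(lam * np.exp(lam) / np.expm1(lam))
    return R, 1 - h2(p)

# small lambda gives (R, D) near (0, 0); D approaches 1 only as R grows without bound
for lam in (0.1, 1.0, 5.0, 20.0):
    R, D = curve_point(lam)
    print(f"lambda={lam:5.1f}   R={R:7.4f}   D={D:6.4f}")
```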
General quantum correlations {#general}
============================
Consider the following double-blocking protocol for the case of $\{ q \, q \}$ resources: given a word of length $n L$, Alice performs the same measurement on each of the $n$ blocks of length $L$. This leaves her with $n$ copies of the resulting $\{ c \, q \}$ resource, to which we apply the $\{ c \, q \}$ protocol described in the previous section. Letting $n \rightarrow \infty$ and then $L \rightarrow \infty$ yields the same results as the most general protocol described in Section 1. Let us assume $L = 1$ for the moment. The measurement $\CM$ on Alice’s subsystem $\CA$, defined by the positive operators $(E_x)_{ x \in \CX }$ with $\sum_x E_x = \1$, may be thought of as a map sending a quantum system $\CA \CB$ in the state $\rho^{\CA \CB}$ to a classical-quantum system $X \CQ$ in the state given by the ensemble $\{ \rho_x, p(x) \}$, where $$\begin{aligned}
p(x) & = & \tr_{\!\CA} \, \left( \rho^\CA E_x \right), \nonumber\\
\rho_x & = & \frac{1}{p(x)}
\tr_{\!\CA} \, \left ((\sqrt{E_x} \otimes \1) \rho^{\CA \CB} (\sqrt{E_x} \otimes \1) \right ). \nonumber\end{aligned}$$ All the relevant information is now encoded in the shared ensemble. Theorem 1 now applies, yielding an expression for the $L = 1$ CR-rate curve: $$C^{(1)}(R) = R + \max_{\CM: \CA\CB \mapsto X \CQ} \,\, \max_{U|X} \bigl\{ I(U;\CQ) \,\big|\, I(U;X) - I(U;\CQ) \leq R \bigr\}. \label{gen1}$$ Similarly we have $$D^{(1)}(\infty) = \max_{\CM: \CA\CB \mapsto X \CQ} I(X;\CQ), \label{D1:infty}$$ which is precisely the classical correlation measure $C_{\CA} (\rho^{\CA \CB})$ proposed in [@hv]. Note that w.l.o.g. we may assume the measurement to be rank-one, and $|{\cal X}|\leq d^2$, $d$ the dimension of the ${\cal A}$-system, because a non-extremal POVM cannot be optimal.
However, in general one must allow for “entangling” measurements performed on an arbitrary number $L$ copies of $\rho^{\CA \CB}$, yielding an expression for $C^{(L)}(R)$ analogous to (\[gen1\]): $$C^{(L)}(R) = R + \max_{\CM: {\CA^L \CB^L} \mapsto X \CQ}
\,\, \frac{1}{L} \max_{U|X} \bigl\{ I(U;\CQ) \, | \, I(U;X) - I(U;\CQ) \leq R \bigr\}.
\label{gen2}$$ Finally, taking the large $L$ limit gives $$C(R) = \lim_{L \rightarrow \infty} C^{(L)}(R).$$ Similarly $$D(\infty) = \lim_{L \rightarrow \infty} D^{(L)}(\infty),$$ which is the “regularized” version of $D^{(1)}(\infty)$ and the more appropriate asymmetric measure of classical correlations present in the bipartite state $\rho^{\CA \CB}$. It is an interesting question whether $L = 1$ suffices to attain $C(R)$, or at least $D(\infty)$. In the remainder of this section we present some partial results concerning this issue.
\[accessible\] Let Alice and Bob switch roles: consider a state $$\rho^{\CA\CB} = \sum_x p(x)\rho_x^{\CA} \otimes \ket{x}\bra{x}^{\CB},$$ i.e. now Alice holds the ensemble states $\rho_x$ while Bob has the classical information $x$, with probability $p(x)$.
According to (\[D1:infty\]), $D^{(1)}(\infty)$ is equal to the *accessible information* of the state ensemble $\CE=\{ \rho_x,p(x) \}$, denoted $I_{\rm acc}(\CE)$ [@holevo2]. On the other hand, we know from [@holevo2] that $I_{\rm acc}(\CE\otimes\CE') = I_{\rm acc}(\CE)+I_{\rm acc}(\CE')$, for a second ensemble $\CE'$, hence $$D(\infty) = D^{(L)}(\infty) = D^{(1)}(\infty) = I_{\rm acc}(\CE).$$
This single-letterization of the accessible correlation can, in fact, be generalized to arbitrary separable states. Indeed, the following holds, in some analogy to the additivity of capacity for entanglement breaking channels [@qcq] (we include the state dependence in our notation of $D^{(1)}$ etc.):
\[dist-add\] Let $\rho^{AB}$ be separable and $\sigma^{A'B'}$ be arbitrary. Then, $$D^{(1)}(\rho\otimes\sigma,\infty) = D^{(1)}(\rho,\infty)+D^{(1)}(\sigma,\infty).$$ From this, by iteration, we get of course $$D(\rho,\infty) = D^{(L)}(\rho,\infty) = D^{(1)}(\rho,\infty).$$
$\mathbf{Proof } $ $D^{(1)}(\rho\otimes\sigma,\infty) \geq D^{(1)}(\rho,\infty)+D^{(1)}(\sigma,\infty)$ is trivial for arbitrary states, for we can always use product measurements. For the opposite inequality, we write $\rho$ as a mixture of product states: $$\rho^{\CA\CB} = \sum_j q_j \hat{\tau}_j^{\CA} \otimes \tau_j^{\CB},$$ which can be regarded as part of a classical-quantum system $J \CA \CB $ with EHS representation $$\rho^{\cal J\!AB} = \sum_j q_j \ket{j}\bra{j}^{\CJ} \otimes\hat{\tau}_j^{\CA}
\otimes \tau_j^{\CB},$$ of which $\rho^{\CA\CB}$ is obviously the partial trace over $\CJ$.
Now we consider a measurement ${\cal M}=(E_x)_{x\in{\cal X}}$ on the combined system ${\cal AA'}$. Then, by definition, the post–measurement states on ${\cal BB'}$ and the probabilities are given by $$\begin{aligned}
p(x)\rho_x &=& \tr_{\!\cal AA'} \left[ \bigl(\rho^{\cal AB}\otimes\sigma^{\cal A'B'}\bigr)
\bigl(E_x^{\cal AA'}\otimes\1\bigr) \right] \nonumber\\
&=& \sum_j q_j \tau_j^{\CB} \otimes
\tr_{\!\cal AA'}\left[ \bigl(\hat{\tau}_j^{\CA}\otimes\sigma^{\cal A'B'}\bigr)
\bigl(E_x^{\CA \CA'}\otimes\1\bigr) \right]
\nonumber\\
&=& \sum_j q_j \tau_j^{\CB} \otimes
\tr_{\!\CA'}\left[ \sigma^{\cal A'B'}\bigl(F_{x|j}\otimes\1\bigr) \right],
\nonumber
\end{aligned}$$ with the POVMs ${\cal N}_j=(F_{x|j})_{x\in{\cal X}}$ on ${\cal A'}$, labeled by the different $j$: $$F_{x|j} = \tr_{\!\CA}\bigl( E_x(\hat{\tau}_j\otimes\1) \bigr).$$ Thus, applying the measurement $ \CM$ on $\CA \CA'$ on $\rho^{\cal J\!AB}\otimes\sigma^{\cal A'B'}$, and storing the result in $X$ leads to the classical-quantum system $XJ \CB \CB'$ defined by the EHS state $$\omega = \sum_{x,j} \ket{x}\bra{x}^{\CC} \otimes q_j \ket{j}\bra{j}^{\CJ} \otimes
\tau_j^{\CB} \otimes
\tr_{\!\CA'}\left[ \sigma^{\cal A'B'}\bigl(F_{x|j}\otimes\1\bigr) \right].$$ With respect to it, $$\begin{aligned}
I(X;\CB \CB') & = & I(X; \CB) + I(X; \CB'| \CB) \nonumber \\
&= & I(X;\CB) + I(X \CB;\CB')-I(\CB;\CB') \nonumber \\
& = & I(X;\CB)+I(X\CB;\CB') \nonumber \\
& \leq & I(X;\CB)+I(XJ;\CB') \nonumber \\
& = & I(X;\CB)+I(XJ;\CB')-I(J;\CB') \nonumber \\
& = & I(X;\CB)+I(X;\CB'|J), \label{eq:thatsit}
\end{aligned}$$ using the chain rule, the fact that $\CB\CB'$ is in a product state, the data processing inequality [@ahlswede:loeber], the fact that $J\CB'$ is in a product state and the chain rule once more.
In (\[eq:thatsit\]) notice that the first mutual information, $I(X;{\cal B})$, relates to applying the POVM ${\cal M}$ to ${\cal A}$, with an ancilla ${\cal A'}$ in the state $\sigma^{\CA'}$ – but this can be described by a POVM ${\cal N}$ on ${\cal A}$ alone. The second, $I(X;{\cal B'}|J)$, is a probability average over mutual informations relating to different POVMs on ${\cal A'}$. Thus $$I(X;{\cal BB'}) \leq D^{(1)}(\rho,\infty)+D^{(1)}(\sigma,\infty),$$ which yields the claim, as ${\cal M}$ was arbitrary.
------------------------------------------------------------------------
\[entangled\] For a pure entangled state $\psi=\ket{\psi}\bra{\psi}$, we can easily see that $$D^{(1)}(\psi,\infty) = D^{(1)}(\psi, 0) = E(\ket{\psi}) = H(\tr_{\!\CB}\psi).$$ Indeed, the right hand side is attained for Alice and Bob both measuring in bases corresponding to a Schmidt decomposition of $\ket{\psi}$. On the other hand, in the definition of $D^{(1)}$, eq. (\[D1:infty\]), the mutual information $I(X;\CQ)$ is upper bounded by $H(\CQ)$, which is the right hand side in the above equation.
Thus, if both $\psi$ and $\varphi$ are pure entangled states, $$D^{(1)}(\psi\otimes\varphi,\infty) = D^{(1)}(\psi,\infty)+D^{(1)}(\varphi,\infty).$$ In particular, $$D(\psi,\infty) = D^{(L)}(\psi,\infty) = D^{(1)}(\psi,\infty).$$
More generally, we have (compare to the additivity of channel capacity if one of the channels is noiseless [@schu-west]):
\[ent-add\] Let $\rho^{\CA\CB}=\ket{\psi}\bra{\psi}$ be pure and $\sigma^{\CA'\CB'}$ arbitrary. Then $$D^{(1)}(\rho\otimes\sigma,\infty) = D^{(1)}(\rho,\infty) +
D^{(1)}(\sigma,\infty).$$
$\mathbf{Proof } $ As usual, only “$\leq$” has to be proved. Given any POVM ${\cal M}=(E_x)_{x \in \CX}$ on $\CA\CA'$, the classical-quantum correlations $X \CB \CB'$ remaining after this measurement is performed are described by $$\omega = \sum_x \ket{x}\bra{x}^{\CC}\otimes
\tr_{\!\cal AA'}\left[ \bigl(\rho^{\cal AB}\otimes
\sigma^{\cal A'B'}\bigr)
\bigl(E_x^{\cal AA'}\otimes\1\bigr)
\right].$$ We shall assume that $\ket{\psi}$ is in Schmidt form: $$\ket{\psi} = \sum_j \sqrt{\lambda_j}\ket{j}^{\CA}\ket{j}^{\CB}.$$ Measuring in the basis $\ket{j}$ on $\CB$ and recording the result in orthogonal states $\ket{j}\bra{j}$ in a register $\CJ$ transforms $\omega$ into the state $$\omega' = \sum_{x,j} \lambda_j\ket{j}\bra{j}^{\CJ} \otimes
\ket{x}\bra{x}^{\CC} \otimes
\tr_{\!\cal AA'}\left[
\bigl(\ket{j}\bra{j}^{\CA}\otimes
\sigma^{\cal A'B'}\bigr)
\bigl(E_x^{\cal AA'}\otimes
\1^{\CB'}\bigr) \right].$$ We claim that $$\label{subadd-ent} I_{\omega}(X;\CB\CB') \leq I_{\omega'}(X;\CB'|J)+ H_{\rho}(\CB),$$ where the subscript indicates the state relative to which the respective information quantity is understood. Clearly, from this the theorem follows: on the right hand side, the entropy is the entropy of entanglement of $\rho$, and the mutual information is an average of mutual informations for measurements $\CM_j$ on $\CA'$, defined as performing $\CM$ with ancillary state $\ket{j}\bra{j}$ on $\CA$.
To prove (\[subadd-ent\]), we first reformulate it such that all entropies refer to the same state. For this, observe that the measurement of $j$ can be done by adjoining the register $\CJ$ in a null state $\ket{0}$, applying a unitary which maps $\ket{j}^{\CB}\ket{0}^{\CJ}$ to $\ket{j}^{\CB}\ket{j}^{\CJ}$, and tracing out $\CB$. Denote by $\Omega$ the state obtained from $\omega$ by adjoining $\CJ$ and applying this unitary (so that $\omega'$ is recovered from $\Omega$ by tracing out $\CB$). Obviously then, (\[subadd-ent\]) is equivalent to $$\label{subadd-alt} I(X;\CB\CJ\CB') \leq I(X;\CB'|\CJ)+H(\CB\CJ),$$ with respect to $\Omega$, because isometries do not alter entropies.
Now, writing out the above quantities as sums and differences of entropies, and using the fact that $\CB\CJ-\CB'$ is in a product state, a number of terms cancel out, and (\[subadd-alt\]) becomes equivalent to $$H(\CB \CJ \CB'|X) \geq H(\CB'|X\CJ).$$ But now rewriting the left hand side, using $H(\CB \CJ|X)\geq 0$ (because it is an average of von Neumann entropies), we estimate: $$\begin{aligned}
H(\CB \CJ \CB'|X) &= & H(\CB'|\CB \CJ X) + H(\CB \CJ|X) \nonumber\\
&\geq& H(\CB'|\CB \CJ X) \nonumber\\
&\geq& H(\CB'|\CJ X), \nonumber\end{aligned}$$ where the last line holds (in fact with equality) because, given $\CJ$ (and $X$), the system $\CB$ is in the pure state $\ket{j}$ and can be discarded without changing the conditional entropy; this completes the proof.
------------------------------------------------------------------------
We do not know if additivity as in the above cases holds universally, but we regard our results as evidence in favor of this conjecture.
Returning to finite side-communication, it is a most interesting question whether a similar single-letterization can be performed. We do not know if an additivity formula, similar to the one in lemma \[t7\] for classical-quantum correlations, holds for the rate function $D^{(1)}(\rho\otimes\sigma,R)$. In fact, this seems unlikely because its definition does not even allow one to see that it is concave in $R$ (which it would have to be if it were to equal the regularized quantity). Of course this can easily be remedied by going to the concave hull $\widetilde{D}^{(1)}$ of $D^{(1)}$: note that both regularize to the same function for $L\rightarrow\infty$. However, we were still unable to prove additivity for $\widetilde{D}^{(1)}$. This would be a most desirable property, as it would allow single-letterization of the rate function just as in the case of classical-quantum correlations. As it stands, $\widetilde{D}^{(1)}(\rho,R)$ is the CR obtainable from $\rho$ in excess over $R$, if (one-way) side communication is limited to $R$ and *if the initial measurement is a tensor product*.
Discussion
==========
We have introduced the task of distilling common randomness from a quantum state by limited classical one-way communication, placing it in the context of general resource conversion problems from classical and quantum information theory. Our exposition can be read as a systematic objective for the field of quantum information theory: to study all the conceivable inter-conversion problems between the resources enumerated in the Introduction.
Our main result is the characterization of the optimal asymptotically distillable common randomness $C$ (as a function of the communication bound $R$); in the case of initial classical-quantum correlations this characterization is a single-letter optimization.
A particularly interesting figure is the total “distillable common randomness”, which is the supremum of $C(R)-R$ as $R\rightarrow\infty$: for the classical-quantum correlations it turns out to be simply the quantum mutual information, and in general it is identical to the regularized version of the measure for classical correlation put forward by Henderson and Vedral [@hv].
It should be noted that this quantity is generally smaller than the quantum mutual information $I(\CA;\CB)$ of the state $\rho^{\CA\CB}$ (which was discussed in [@cerf:adami]), but larger than the quantity proposed by Levitin [@levitin]. Interestingly, while the former work simply examines a quantity defined in formal analogy to classical mutual information for its usefulness to (at least, qualitatively) describe quantum phenomena, the latter motivates the definition by recurring to operational arguments. Of course, all this shows is that there can be several operational approaches to the same intuitive concept: quantities thus defined might coincide for classical systems but differ in the quantum version.
This is what we see even within the realm of our definitions. In the classical theory [@ac2] the total distillable CR equals the mutual information of the initial distribution, regardless of the particulars of the noiseless side communication: whether it is one-way from Alice to Bob or vice versa, or actually bidirectional, the answer is the mutual information. There are simple examples of quantum states where the total distillable common randomness depends on the communication model: the classical-quantum correlation associated with an ensemble $\CE=\{ \rho_x,p(x) \}$ of states at Bob’s side (compare eq. (\[cq\])) leads to $I(\CA;\CQ)=\chi(\CE)$ if one-way communication from Alice to Bob is available. If only one-way communication from Bob to Alice is available, it is only $I_{\rm acc}(\CE)$, the accessible information of the ensemble $\CE$, which usually is strictly smaller than the Holevo information $\chi(\CE)$ [@holevo].
An open problem left in this work is to decide the additivity questions in section \[general\]: is the distillable common randomness $D^{(1)}(\rho,\infty)$ additive in general? Does the rate function $D^{(1)}(\rho,R)$ obey an additivity-formula like the one in lemma \[t7\]? Finally, there is the issue of finding the “ultimate” distillable common randomness involving two-way communication.
[**[Acknowledgments]{}**]{} We thank C. H. Bennett, D. P. DiVincenzo, B. M. Terhal, J. A. Smolin and R. Abbot for useful discussions. ID’s work was supported in part by the NSA under the US Army Research Office (ARO), grant numbers DAAG55-98-C-0041 and DAAD19-01-1-06. AW is supported by the U.K. Engineering and Physical Sciences Research Council.
Appendix
========
We shall list definitions and properties of typical sequences and subspaces [@ck; @nono; @winter]. Consider the classical-quantum system $U X \CQ$ in the state defined by the ensemble $\{ p(u,x), \rho_{ux} \}$. $X$ is defined on the set $\CX$ of cardinality $s_1$ and $U$ on the set $\CU$ of cardinality $s_2$. Denote by $p(x)$ and $P(x|u)$ the distribution of $X$ and conditional distribution of $X|U$ respectively.
For the probability distribution $p$ on the set $\CX$ define the set of *typical sequences* (with $\delta>0$) $$\CT^n_{p,\delta}=\left\{x^n:\forall x\ | N(x|x^n)- n p(x)|\leq
n {\delta} \right\},$$ where $N(x|x^n)$ counts the number of occurrences of $x$ in the word $x^n=x_1\ldots x_n$ of length $n$. When the distribution $p$ is associated with some random variable $X$ we may use the notation $\CT^n_{X,\delta}$.
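In code, membership in $\CT^n_{p,\delta}$ is just a counting check; a small illustrative sketch (the function name is ours):

```python
from collections import Counter

def is_typical(xn, p, delta):
    """Check whether the word xn lies in the typical set T^n_{p,delta}."""
    n = len(xn)
    counts = Counter(xn)
    return all(abs(counts.get(x, 0) - n * px) <= n * delta for x, px in p.items())

print(is_typical("aabab", {"a": 0.6, "b": 0.4}, 0.1))   # True: counts (3, 2) match n*p = (3.0, 2.0)
```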
For the stochastic matrix $P: \CU \rightarrow \CX$ and $u^n \in \CU^n$ define the set of *conditionally typical sequences* (with $\delta>0$) by $$\CT^n_{P,\delta}(u^n) = \left\{x^n:\forall u,x \ | N((u,x)| (u^n,x^n))
- P(x|u) N(u|u^n)| \leq n{\delta} \right\}.$$ When the stochastic matrix $P$ is associated with some conditional random variable $X|U$ we may use the notation $\CT^n_{X|U,\delta}(u^n)$.
For a density operator $\rho$ on a $d$-dimensional Hilbert space $\CH$, with eigen-decomposition $\rho = \sum_{k = 1}^{d} \lambda_k \ket{k}\bra{k}$ define (for $\delta>0$) the *typical projector* as $$\Pi^n_{\rho,\delta}=\sum_{k^n\in{\CT}^n_{\lambda,\delta}}
\ket{k^n}\bra{k^n}.$$ When the density operator $\rho$ is associated with some quantum system $\CQ$ we may use the notation $\Pi^n_{\CQ,\delta}$.
For a collection of states ${\rho}_u$, $u \in \CU$, and $u^n\in \CU^n$ define the *conditionally typical projector* as $$\Pi^n_{\{\rho_u\},\delta}(u^n)=\bigotimes_u
\Pi^{I_u}_{\rho_u,\delta},$$ where $I_u=\{i:u_i=u\}$ and $\Pi^{I_u}_{\rho_u,\delta}$ denotes the typical projector of the density operator ${\rho}_u$ in the positions given by the set $I_u$ in the tensor product of $n$ factors. When the $\{\rho_u \}$ are associated with some conditional classical-quantum system $\CQ|U$ we may use the notation $\Pi^n_{\CQ|U,\delta}(u^n)$. We shall give several known properties of these projectors, some of which are used in the main part of the paper. For any positive $\epsilon, \delta$ and $\delta'$, some constant $c$ depending on the particular ensemble of $UX \CQ$, and for sufficiently large $n \geq n_0(\epsilon, \delta ,\delta')$, the following hold. Concerning the quantum system $\CQ$ alone:
$$\begin{aligned}
\tr \Pi^n_{\CQ,\delta} & \leq & 2^{n (H(\CQ) + c \delta)} \nonumber\\
\tr \rho^{\otimes n} \Pi^n_{\CQ,\delta} & \geq & 1 - \epsilon. \nonumber\end{aligned}$$
Concerning the classical-quantum system $X \CQ$, and for $x^n \in \CT^n_{X,\delta'}$:
$$\begin{aligned}
\tr \Pi^n_{\CQ|X,\delta}(x^n) & \leq & 2^{n (H(\CQ|X)
+ c (\delta + \delta'))} \label{hu} \\
\tr \rho_{x^n} \Pi^n_{\CQ|X,\delta}(x^n) & \geq & 1 - \epsilon \nonumber\\
\tr \rho_{x^n} \Pi^n_{\CQ, \, \delta + |\CX| \delta'} & \geq & 1 - \epsilon. \label{hu2}\end{aligned}$$
These have been proven in [@strong]. Finally, concerning the full classical-quantum system $U X \CQ$, for $x^n \in \CT^n_{X|U,\delta'}(u^n)$, (\[hu2\]) easily extends to $$\tr \rho_{u^n x^n} \Pi^n_{\CQ|U, \delta + |\CX| \delta'}(u^n) \geq 1 - \epsilon. \label{hu3}$$
[99]{}
R. Ahlswede and I. Csiszár, “Common Randomness in Information Theory and Cryptography — Part I: Secret Sharing “, IEEE Trans. Inf. Theory, vol. 39, pp. 1121–1132, 1993. R. Ahlswede and I. Csiszár, “Common Randomness in Information Theory and Cryptography — Part II: CR-capacity”, IEEE Trans. Inf. Theory, vol. 44, pp. 225–240, 1998. R. Ahlswede and P. Löber, “Quantum data processing”, IEEE Trans. Inf. Theory, vol.47, pp. 474–478, 2000. C. H. Bennett, H. J. Bernstein, S. Popescu and B. Schumacher, “Concentrating Partial Entanglement by Local Operations”, Phys. Rev. A, vol. 53, pp. 2046–2052, 1996. C. H. Bennett and G. Brassard, “Quantum Cryptography: Public key distribution and coin tossing”, Proc. IEEE Int. Conf. Computers, Systems and Signal Processing (Bangalore, India), pp. 175–179, 1984. C. H. Bennett, G. Brassard, C. Crépeau, R. Jozsa, A. Peres and W. K. Wootters, “ Teleporting an unknown quantum state via dual classical and EPR channels”, Phys. Rev. Lett., vol. 70, pp. 1895–1898, 1993. C. H. Bennett, I. Devetak, A. Harrow, P. W. Shor and A. Winter, “The Quantum Reverse Shannon Theorem”, in preparation. C. H. Bennett, D. P. DiVincenzo, J. A. Smolin and W. K. Wooters, “Mixed-state entanglement and quantum error correction”, Phys. Rev. A, vol. 54, pp. 3824–3851, 1996. C. H. Bennett, P. W. Shor, J. A. Smolin and A. V. Thapliyal, “Entanglement-assisted capacity of a quantum channel and the reverse Shannon theorem”, IEEE Trans. Inf. Theory, vol. 48, pp. 2637–2655, 2002. C. H. Bennett and S. J. Wiesner, “Communication via one- and two- particle operators on Einstein-Podolsky-Rosen states”, Phys. Rev. Lett., vol. 69, pp. 2881–2884, 1992. T. Berger, [*Rate Distortion Theory*]{}, Prentice Hall, 1971. T. M. Cover and J. A. Thomas, *Elements of information theory*, John Wiley & Sons, New York, 1991. N. J. Cerf and C. Adami, “Negative entropy and information in quantum mechanics”, Phys. Rev. Lett., vol. 79, pp. 5194–5197, 1997. I. Csiszár and J. Körner, *Information Theory: Coding Theorems for Discrete Memoryless Systems*, Academic Press, New York, 1981. I. Devetak and T. Berger, “Low entanglement remote state preparation”, Phys. Rev. Lett., vol. 87, pp. 197901–197904, 2001. I. Devetak and T. Berger, “Quantum rate-distortion theory for memoryless sources”, IEEE Trans. Inf. Theory vol. 48, pp. 1580–1589, 2002. H. Barnum, “Quantum rate-distortion coding”, Phys. Rev. A, vol. 62, pp. 42309–42314, 2000. I. Devetak and A. Winter, “Classical data compression with quantum side information”, [[quant-ph/0209029]{}]{}, 2002. A. Winter, Ph.D. thesis, [[quant-ph/9907077]{}]{}, 1999. P. Hayden, M. Horodecki and B. M. Terhal, “The asymptotic entanglement cost of preparing a quantum state”, J. Phys. A: Math. Gen., vol. 34, pp. 6891–6898, 2001. P. Hayden, R. Jozsa and A. Winter, “Trading quantum for classical resources in quantum data compression”, J. Math. Phys., vol. 43, pp. 4404–4444, 2002. P. Hayden and A. Winter, “On the communication cost of entanglement transformations”, Phys. Rev. A, vol. 67, pp. 012326–012333, 2003. A. W. Harrow and H.-K. Lo, “A tight lower bound on the classical communication cost of entanglement dilution”, quant-ph/0204096, 2002. L. Henderson and V. Vedral, “Classical, quantum and total correlations”, quant-ph/0105028, 2001. A. S. Holevo, “Information theoretical aspects of quantum measurements”, Probl. Inf. Transm., vol. 9, pp. 110-118, 1973. A. S. Holevo, ”Bounds for the quantity of information transmitted by a quantum channel”, Probl. Inf. Transm., vol. 9, pp. 177-183, 1973. A. S. 
Holevo, “The Capacity of the Quantum Channel with General Signal States”, IEEE Trans. Inf. Theory, vol. 44, pp. 269-273, 1998. B. Schumacher and M. D. Westmoreland, “Sending classical information via noisy quantum channels”, Phys. Rev. A, vol. 56, pp. 131-138, 1997. L. B. Levitin, “Quantum Generalization of Conditional Entropy and Information”, in: $1^{\rm st}$ NASA Conf. QCQC, p. 269, 1998. E. H. Lieb and M. B. Ruskai, “Proof of the strong subadditivity of quantum-mechanical entropy”, J. Math. Phys., vol. 14, pp. 1938–1941, 1973. H.-K. Lo and S. Popescu, “The classical communication cost of entanglement manipulation: Is entanglement an inter-convertible resource?” Phys. Rev. Lett., vol. 83, pp. 1459–1462, 1999. A. K. Pati, “Minimum cbits for remote preparation and measurement of a qubit”, Phys. Rev. A, vol. 63, pp. 014320–014326, 2001. H.-K. Lo, “Classical Communication Cost in Distributed Quantum Information Processing - A generalization of Quantum Communication Complexity”, quant-ph/9912009, 1999. C. H. Bennett, D. P. DiVincenzo, J. A. Smolin, P. W. Shor, B. M. Terhal and W. K. Wooters, “Remote State Preparation”, Phys. Rev. Lett., vol. 87, pp. 77902–77905, 2001. D. W. Leung and P. W. Shor, “Oblivious remote state preparation”, quant-ph/0201008, 2002. C. H. Bennett, D. W. Leung, P. Hayden, P. W. Shor and A. Winter, “Remote preparation of quantum states”, in preparation. B. Schumacher,”Quantum coding”, Phys. Rev. A, vol. 51, pp. 2738–2747, 1995. R. Jozsa and B. Schumacher, “A new proof of the quantum noiseless coding theorem”, J. Mod. Opt, vol. 41, pp. 2343–2349, 1994. B. Schumacher, M. D. Westmoreland, “Relative entropy in quantum information theory”, in: *Quantum Computation and Quantum Information: A Millenium Volume*, S. Lomonaco (ed.), American Mathematical Society Contemporary Mathematics series, 2001. C. E. Shannon, “A mathematical theory of communication”, Bell System Tech. Journal, vol. 27, pp. 379–623, 1948. P. W. Shor, “Additivity of the Classical Capacity of Entanglement-Breaking Quantum Channels”, J. Math. Phys., vol. 43, pp. 4334–4340, 2002. P. W. Shor, “The quantum channel capacity and coherent information ”, lecture notes, MSRI Workshop on Quantum Computation, 2002 (Avaliable at http://www.msri.org/publications/ln/msri/2002/quantumcrypto/shor/1/). H. Barnum, E. Knill and M. A. Nielsen, “On Quantum Fidelities and Channel Capacities”, IEEE Trans. Inf. Theory, vol. 46, pp. 1317–1329, 2000. I. Devetak “The private classical information capacity and quantum information capacity of a quantum channel”, quant-ph/0304127. B. M. Terhal, M. Horodecki, D. W. Leung and D. P. DiVincenzo, “The entanglement of purification”, J. Math. Phys., vol. 43, pp. 4286–4298, 2002. A. Winter, "Coding theorem and strong converse for quantum channels”, IEEE Trans. Inf. Theory, vol. 45, pp. 2481-2485, 1999. A. Winter, “[’Extrinsic’]{} and ’intrinsic’ data in quantum measurements: asymptotic convex decomposition of positive operator valued measures”, [[quant-ph/0109050]{}]{}, 2001. A. Winter and R. Wilmink, unpublished.
[^1]: Electronic address: devetak@us.ibm.com
[^2]: Electronic address: winter@cs.bris.ac.uk
---
abstract: 'In the weakened 16th Hilbert’s Problem one asks for a bound of the number of limit cycles which appear after a polynomial perturbation of a planar polynomial Hamiltonian vector field. It is known that this number is finite for an individual vector field. In the multidimensional generalization of this problem one considers polynomial perturbation of a polynomial vector field with invariant plane supporting a Hamiltonian dynamics. We present an explicit example of such perturbation with infinite number of limit cycles which accumulate at some separatrix loop.'
address: 'Institute of Mathematics, Warsaw University, ul. Banacha 2, 02-097 Warsaw, Poland'
author:
- Marcin Bobieński
- Henryk Żołądek
title: 'A counterexample to a multidimensional version of the weakened Hilbert’s 16-th problem'
---
[^1]
The result {#sec:result}
==========
Yu. Il’yashenko [@il16] and J. Ecalle [@ec] proved that an individual planar polynomial vector field can have only a finite number of limit cycles.
On the other hand, multi-dimensional vector fields with chaotic dynamics have an infinite number of periodic trajectories. The Lorenz system [@mimr] and the Duffing system [@guho] provide the best known examples. In chaotic systems the periodic orbits are usually encoded by periodic sequences in a suitable symbolic dynamical system. This encoding is proved using topological methods (like the Lefschetz-Conley index or Smale’s horseshoe). This means that:
1. The periods of the periodic trajectories tend to infinity in a rather irregular way.
2. The 1-cycles represented by different periodic trajectories have different “topology”, i.e. they are mutually linked.
In particular, these cycles do not form a continuous family (a so-called center).
In the Main Theorem below we give an example of a polynomial 4-dimensional differential system with an infinite number of periodic solutions $\gamma_1,\gamma_2,\ldots$ such that
- the periods of $\gamma_j$ grow monotonically with $j$;
- the corresponding 1-cycles have the same “topology”; they are concentric cycles on an embedded invariant 2-dimensional disc of class $C^1$;
- the $\gamma_j$ are isolated (they are limit cycles).
To construct the example we begin with the Hamiltonian planar system $$\label{hamsys}
\dot x=X_H=(H_{x_2},-H_{x_1}), \quad (x_1,x_2)\in\bbR_x^2,\qquad H=x_1^3-3x_1 - x_2^2 + 2$$ and the 2-dimensional linear system $$\label{2dy}
\dot y= a y,\qquad y=y_1+i y_2\in\bbC \equiv \bbR^2_y,$$ where $a=-\rho+i\omega$. Later we put $\rho=\omega=\sqrt3$.
The Hamiltonian function from (\[hamsys\]) is elliptic with the critical points $x=(-1,0)$ (center) and $x=(1,0)$ (saddle). The phase portrait of the field $X_H$ is shown in Figure \[fig:ell\].
We consider the following coupling of the system (\[hamsys\]) and (\[2dy\]) $$\label{system}
\left\{ \begin{aligned}
\dot x &= X_H + {\mathrm{Re}}(\overline{{\kappa}}\,y)\, e_2\\
\dot y &= a\, y + {\varepsilon}H^4(x)\;(1-x_1),
\end{aligned} \right.$$ where ${\varepsilon}>0$ is a small parameter, $e_2=(0,1)$ is a versor in $\bbR^2_x$ and ${\kappa}\in\bbC$.
\[th:main\] Let $\rho=\omega=\sqrt3$ and $$\label{y0}
{\kappa}= 4\sqrt3 i + \frac{(3-3i)\sqrt6}{\sqrt\pi}(1+2i -{\psi}'(\tfrac{1-i}2)),$$ where ${\psi}(z)$, the Euler Psi-function, is the logarithmic derivative of the Euler Gamma-function ${\psi}= \tfrac{\Gamma'}{\Gamma}$.
Then there exists an ${\varepsilon}_0>0$ such that for any $0<{\varepsilon}<{\varepsilon}_0$ the system (\[system\]) has a sequence of limit cycles $\gamma_n$, $n=1,2,\ldots$ which accumulate at the separatrix loop $$\gamma_0=\{(x,y):\quad y=0,\ H(x)=0,\ x_1\leq1 \}$$ of the singular point $(x=(1,0),\,y=0)$ and lie on an invariant surface $y={\varepsilon}G(x,{\varepsilon})$ of class $C^1$.
The approximate numerical value of ${\kappa}$ in formula (\[y0\]) is $${\kappa}\approx -0.56 + 4.57 i.$$
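This constant can be reproduced numerically from (\[y0\]); below is a small sketch in plain Python which evaluates ${\psi}'(\tfrac{1-i}2)$ by the defining series $\psi'(z)=\sum_{n\geq0}(z+n)^{-2}$ with an Euler-Maclaurin tail correction and then assembles the formula.

```python
import math

def trigamma(z, N=2000):
    """psi'(z) = sum_{n>=0} 1/(z+n)^2, with an Euler-Maclaurin tail estimate."""
    s = sum(1.0 / (z + n) ** 2 for n in range(N))
    w = z + N
    return s + 1.0 / w + 1.0 / (2 * w ** 2) + 1.0 / (6 * w ** 3)

sqrt3 = math.sqrt(3)
psi1 = trigamma((1 - 1j) / 2)
kappa = 4 * sqrt3 * 1j + (3 - 3j) * math.sqrt(6 / math.pi) * (1 + 2j - psi1)
print(kappa)   # the text quotes approximately -0.56 + 4.57 i
```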
Systems of the form $$\label{mdgensystem}
\left\{\begin{aligned}
\dot x &= X_H + F(x) y + {\varepsilon}G(x)+\ldots \\
\dot y &= A(x) y + {\varepsilon}b(x) + \ldots,
\end{aligned} \right.$$ $x\in\bbR^2,\ y\in\bbR^\nu$, i.e. like (\[system\]), appear in the so-called multidimensional generalization of the weakened 16-th Hilbert problem (see [@bo; @bozoell; @bozo3d; @lezo]). Before perturbation, i.e. for ${\varepsilon}=0$, we have the invariant plane $y=0$ with the Hamiltonian vector field $X_H(x)$. The ovals $H(x)=h$ form a 1-parameter family of its periodic trajectories. One asks how many of these trajectories survive the perturbation. In the 2-dimensional case ($\nu=0$) the linearization of the problem leads to the problem of real zeroes of an Abelian integral $I(h)=\int_{H(x)=h}\omega$; it is called the weakened 16-th Hilbert problem (see [@aril; @il16]).
If $\nu\geq 1$, then the corresponding Pontryagin-Melnikov integrals (see [@mel; @pont]), denoted $J(h)$, were found in [@lezo] and [@bozoell]. We call them the generalized Abelian integrals.
The Abelian integrals $I(h)$ satisfy ODEs of the Fuchs type and have regular singularities with real spectrum (see [@ya]). Due to this, S. Yakovenko and others have found some effective estimates for the number of zeroes of $I(h)$. However, the generalized Abelian integrals do not satisfy any simple differential equation (see [@bo]) and sometimes have irregular singularities (e.g. at $h=\infty$). Moreover, even if the singularities are regular, their spectra can be non-real.
Namely, the non-reality of the spectrum of $J(h)$ at the singularity $h=0$ is responsible for accumulation of zeroes of $J$. Below we find the asymptotics $$J(h) \sim C\, h^{9/2}\, \sin(\log\sqrt{h}),\qquad h\to 0^+.$$ It turns out that the zeroes $h_n\to 0^+$ of $J$ correspond to limit cycles $\gamma_n$ of the system (\[system\]); the cycle $\gamma_n$ bifurcates form the oval $H^{-1}(h_n)$ (see Figure \[fig:ell\]).
Therefore the system (\[system\]) can be treated as a counterexample to the multi-dimensional weakened Hilbert’s problem.
The remaining parts of the paper are devoted to the proof of Main Theorem. In Section \[sec:genab\] we investigate the generalized Abelian integral and its zeroes. In Section \[sec:estim\] we perform estimates needed for existence of genuine limit cycles.
Proof of the Main Theorem {#sec:proof}
=========================
Generalized Abelian Integral {#sec:genab}
----------------------------
The generalized Abelian integral is defined in two steps. Firstly one solves the so-called *normal variation equation* $$\label{nveq}
X_H(g) = a g + (1-x_1).$$ Its solution $x\mapsto g(x)\in\bbC$ appears in the first (linear in ${\varepsilon}$) approximation of the invariant surface (see the next section for more details) $$y={\varepsilon}\,H^4(x)\; g(x) + O({\varepsilon}^2).$$
We consider (\[nveq\]) only in the basin $D=\{x:\ H(x)\geq 0,\, x_1\leq 1\}$ of the center $x=(-1,0)$, filled by the periodic solutions $\gamma_h (t)=\gamma (t)\subset \{H^{-1}(h)\}$, each of period $$\label{hamper}
T_\gamma (h) = \int_{\gamma_h} -\frac{\dr x_1}{2 x_2} = \int_{\gamma_h} \dr t.$$ We assume that the Hamiltonian time is chosen in such a way that, for $0<h<4$, $x(0)=(x_1^{(1)},0)$, where $x_1^{(1)},x_1^{(2)},x_1^{(3)}$ are the roots of the equation $H(x_1,0)-h=0$ (see Figure \[fig:dreg\]). When restricted to $\gamma_h$, the equation (\[nveq\]) is treated as the ODE $\dot g = a g + (1-x_1)$ with periodic boundary conditions. Its unique solution is given in the integral form $$
g(t,h) = (e^{-aT_\gamma} -1 )^{-1} \int_t^{t+T_\gamma} e^{a(t-s)} (1-x_1)(s,h)\; \dr s.$$
Substituting the invariant surface equation (see Section \[sec:estim\]) $y={\varepsilon}H^4 g + O({\varepsilon}^2)$ into the right hand side of $\dot x$ from (\[system\]) we get the following perturbation of planar Hamiltonian system $$\label{hamres}
\left\{\begin{aligned}
\dot{x}_1 &= -2 x_2,\\
\dot{x}_2 &= 3(1-x_1^2) + {\varepsilon}\,H^4\,{\mathrm{Re}}(\overline{{\kappa}}\,g) + O({\varepsilon}^2).
\end{aligned}\right.$$ The generating function for limit cycles is given by the integral $$\label{point}
J(h) = h^4 \int_{\gamma_h} {\mathrm{Re}}\big(\overline{{\kappa}}\,g(x)\big)\;\dr x_1.$$ Let us denote the “basic” generalized Abelian integral (see [@bozoell]) by $$\begin{gathered}
\label{psia}
{\Psi}_\gamma (h) = \int_{\gamma_h} g(t) (1-x_1)(t) \dr t = \\
=(e^{-aT_\gamma} -1 )^{-1} \int_0^{T_\gamma}\dr t \int_t^{t+T_\gamma}\dr s\; e^{a(t-s)} (1-x_1)(s)(1-x_1)(t).\end{gathered}$$ It is related to the generating function via the following
\[lem:jviapsi\] We have $$J(h) = h^4\, {\mathrm{Re}}\Big[\overline{{\kappa}}\,\Big(a{\Psi}_\gamma + 2\int_{\gamma_h} (1-x_1)\,\dr t \Big)\Big].$$
In this proof a dot denotes the derivative with respect to the Hamiltonian time $t$, $\dot{f}=\tfrac{\dr}{\dr t} f = X_H(f)$. We have $$J(h)= - h^4 \int_{\gamma} {\mathrm{Re}}(\overline{{\kappa}}\,g) \tfrac{\dr}{\dr t}{(1-x_1)}\, \dr t = h^4\, {\mathrm{Re}}\Big(\overline{{\kappa}}\int_{\gamma} \dot{g}(1-x_1)\, \dr t \Big).$$ Next, $\dot{g}=a\,g+(1-x_1)$ gives $$\begin{gathered}
J(h)= h^4\, {\mathrm{Re}}\Big[\overline{{\kappa}}\,\Big( a{\Psi}_\gamma + \int_{\gamma} (1-x_1)^2\, \dr t \Big)\Big] = h^4\, {\mathrm{Re}}\Big[\overline{{\kappa}}\Big( a{\Psi}_\gamma + \int_{\gamma} (2-2 x_1 - \tfrac13 \dot{x}_2)\, \dr t \Big)\Big] =\\
=h^4\, {\mathrm{Re}}\Big[\overline{{\kappa}}\Big(a{\Psi}_\gamma + 2\int_{\gamma_h} (1-x_1)\,\dr t \Big)\Big].\end{gathered}$$
Our next aim is to determine the leading terms in the asymptotic expansion as $h\to 0^+$ of the integrals $\int_{\gamma} (1-x_1)(t)\dr t$, $T_\gamma$ and ${\Psi}_\gamma$. We begin with the Abelian integrals. It is known [@zolbook] that these integrals extend to multivalued holomorphic functions with logarithmic singularities. We shall need the explicit form of the leading terms.
\[lem:abi\] There exists an open neighborhood $0\in U\subset \bbC$ in the complex domain and holomorphic functions $\eta_0,\eta_1,\zeta_0,\zeta_1\in\Omega(U)$ such that $$\begin{aligned}
T_\gamma &= \eta_0(h) + \zeta_0(h)\, \log h = - \tfrac1{2\sqrt{3}} \log h+\tfrac{\sqrt3}2\log12 + O(h\log h),\label{tgex}\\
\int_{\gamma} (1-x_1)\dr t &= \eta_1(h) + \zeta_1(h)\, \log h = 2\sqrt3 + O(h\log h)\label{abex}.\end{aligned}$$
We consider the pair of basis elliptic Abelian integrals $$I_0(h) = T_\gamma = \int_{\gamma}\frac{-\dr x_1}{2 x_2},\qquad I_1(h) = \int_{\gamma}\frac{-x_1 \dr x_1}{2 x_2}.$$ Note that $\int_\gamma(1-x_1)\dr t = I_0-I_1$ and that $I_0=\tfrac{\dr}{\dr h}\Big(\text{area of } \{H>h\}\Big)$.
These functions $(I_0,I_1)$ satisfy the Picard-Fuchs equations $$\label{fuchs}
\begin{split}
6 h(h-4) I_0' &= -(h-2) I_0 - 2 I_1 \\
6 h(h-4) I_1' &= 2 I_0 +(h-2) I_1. \\
\end{split}$$ The other, independent solution to this system is the pair $(K_0,K_1)$, where $$K_0(h) = \int_{\delta_h}\frac{-\dr x_1}{2 x_2},\qquad K_1(h) = \int_{\delta_h}\frac{-x_1 \dr x_1}{2 x_2}$$ are integrals along another cycle $\delta_h$ in the complex curve $E_h=\{H(x)=h\}\subset\bbC^2$. If $h\in(0,4)$ then the polynomial $x_1^3-3 x_1 +2 -h$ has three real roots $x^{(1)}_h<x^{(2)}_h<x^{(3)}_h$ (see Figure \[fig:dreg\]). The the cycle $\gamma_h$ (respectively $\delta_h$) is represented as the lift to the Riemann surface $E_h$ of loops in the complex $x_1$-plane surrounding the roots $x^{(1)}_h$ and $x^{(2)}_h$ (respectively $x^{(2)}_h$ and $x^{(3)}_h$). Note the following integral formulas for $T_{\gamma}(h)$: $$\label{tgi}
T_{\gamma} = \int_{x^{(1)}_h}^{x^{(2)}_h} \frac{\dr x_1}{\sqrt{x_1^3-3 x_1 +2 -h}} = \int_{x^{(3)}_h}^\infty \frac{\dr x_1}{\sqrt{x_1^3-3 x_1 +2 -h}},\qquad h\in(0,4).$$ The second equality corresponds to unobstructed deformation of integration contour $\gamma_h$ to loop surrounding $x^{(3)}_h$ and $\infty$.
The system (\[fuchs\]) has a resonant singular point at $h=0$. Any of its solutions is either analytic near $h=0$ (as is $(K_0,K_1)$) or has the form of $I_0,I_1$: $$\label{i12exp}
\begin{split}
I_0(h) &= (a_0+ a_1 h + \ldots) +\tfrac1{2\pi i} K_0\;\log h,\\
I_1(h) &= (b_0+ b_1 h + \ldots) +\tfrac1{2\pi i} K_1\;\log h.
\end{split}$$ This representation follows from the Picard-Lefschetz formula $$\label{piclef}
\gamma_h \longrightarrow \gamma_h\cdot\delta_h, \qquad\qquad\delta_h \longrightarrow \delta_h,$$ which describes the monodromy transformations of the generators of $\pi_1(E_h,*)$, as $h$ surrounds the critical value $0$; here $*$ denotes a basepoint.
We need to calculate the expansions of $I_0,I_1$. As we shall see, it is enough to calculate $K_0(0)$ and $a_0$; all other coefficients follow from the system (\[fuchs\]) and can be determined recursively. Indeed, to compensate the terms with $\log h$ in (\[fuchs\]) we must have $$\label{auxk1k0}
K_1(0)=K_0(0).$$ Terms with $h^0$ give $$\label{auxb0a0}
b_0=a_0 +\tfrac{12}{2\pi i}K_0(0).$$ It can be continued further.
To determine $K_0(0)$ and $a_0$ simultaneously, we make a coordinate change $u=(x_1-1)/(x^{(3)}_h-1)$ in the integral (\[tgi\]); we denote also ${p}=(x^{(3)}_h-1)$. Since ${p}= \sqrt{h/3} +O(h)$ as $h\to 0^+$, the following integral $$\begin{gathered}
\int_1^\infty \dr u\Big[\frac1{\sqrt{u^2(3+ {p}u) - h/{p}^2}} - \frac1{\sqrt{u^2(3+ {p}u)}} - \frac1{\sqrt{3 u^2-3}} + \frac1{\sqrt{3}\,u} \Big] \xrightarrow{\ h\to 0^+} 0.\end{gathered}$$ We calculate $$\begin{aligned}
&\int_1^\infty \dr u\Big[ \frac1{\sqrt{3 u^2-3}} - \frac1{\sqrt{3}\,u} \Big] = \frac{\log 2}{\sqrt 3}, \\
&\int_1^\infty \dr u \Big[ \frac1{u \sqrt{3+ {p}u}} \Big] = \tfrac2{\sqrt 3}\log\left(\tfrac{2\sqrt3}{\sqrt{{p}}} + o(1)\right) = - \tfrac1{2\sqrt{3}} \log h + \frac{\log(12\sqrt3)}{\sqrt3} + o(h^{1/2}).\end{aligned}$$ Thus $a_0=\tfrac{\sqrt3}{2}\log 12$, $K_0(0)=- \tfrac{2\pi i}{2\sqrt3}$. Substituting these values to the relations (\[auxk1k0\]), (\[auxb0a0\]) and using the expansion (\[i12exp\]) we get the leading terms of the expansions as in formulas (\[tgex\]) and (\[abex\]).\
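As a quick numerical sanity check of (\[tgex\]) (not part of the original argument), one can evaluate the period integral (\[tgi\]) directly and compare it with $-\tfrac1{2\sqrt3}\log h+\tfrac{\sqrt3}2\log 12$. The sketch below assumes standard `numpy`/`scipy` and uses a sine substitution to remove the endpoint singularities.

```python
import numpy as np
from scipy.integrate import quad

SQ3 = np.sqrt(3.0)

def period(h):
    """T_gamma(h) = int_{x1}^{x2} dx / sqrt(x^3 - 3x + 2 - h), for 0 < h < 4."""
    x1, x2, x3 = np.sort(np.roots([1.0, 0.0, -3.0, 2.0 - h]).real)
    m, w = 0.5 * (x1 + x2), 0.5 * (x2 - x1)
    # x = m + w*sin(theta) turns dx/sqrt((x-x1)(x2-x)) into d(theta), removing the singularities
    integrand = lambda th: 1.0 / np.sqrt(x3 - (m + w * np.sin(th)))
    value, _ = quad(integrand, -np.pi / 2, np.pi / 2)
    return value

for h in (1e-2, 1e-4, 1e-6):
    leading = -np.log(h) / (2 * SQ3) + (SQ3 / 2) * np.log(12.0)
    print(f"h={h:.0e}:  T_gamma = {period(h):.6f}   leading terms = {leading:.6f}")
```

The discrepancy between the two columns shrinks with $h$, consistent with the $O(h\log h)$ remainder in (\[tgex\]).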
Let us now pass to the expansion of ${\Psi}_\gamma$.
\[pr:genabi\] Let $-2\sqrt3 <{\mathrm{Re}}(a)<0$. There exists an open neighborhood $0\in U\subset \bbC$ in the complex domain and holomorphic functions $\varphi_1,\varphi_2,\varphi_3$ such that $$\begin{gathered}
\label{genabian}
{\Psi}_\gamma (h) = \varphi_1(h) +\varphi_2(h)\, \log h + \varphi_3(h)\cdot \Big(e^{-a T_\gamma}-1\Big)^{-1} = C_0+C_1 h^{-a/2\sqrt3} + \ldots,\end{gathered}$$ where $$\begin{aligned}
C_1 &= \frac{(\pi a)^2}{\sin^2(\pi a/2\sqrt3)}, \label{gabic1}\\
C_0 &= \frac{3\sqrt2}{\sqrt \pi} \Big(-1+2w+2w^2 {\psi}'(-w)\Big), \qquad w=\tfrac{a}{2\sqrt3}. \label{gabic0}\end{aligned}$$
\[rk:y0viac0\] One can easily observe that the value (\[y0\]) of ${\kappa}$ satisfies the relation $$\label{y0viac0}
{\kappa}= i(4\sqrt3 + a C_0).$$ It is chosen so as to annihilate the leading term ($\sim h^4$) of $J(h)$ and to reveal the term carrying the infinite sequence of zeroes – see Corollary \[cor:zeroes\] and its proof below.
\[cor:zeroes\] Provided the parameters $(\rho,\omega,{\kappa})$ take the values given in the Main Theorem, the integral $J(h)$ (\[point\]) has a sequence $h_n$, $n=1,2,\ldots$, of simple zeroes accumulating at $h=0$.
We calculate the leading term of the expansion of $J(h)$ using Lemma \[lem:jviapsi\], Lemma \[lem:abi\] and Remark \[rk:y0viac0\]: $$\begin{aligned}
J(h) &= h^4{\mathrm{Re}}\Big[\overline{{\kappa}}\,\big(a C_0 + a C_1 h^{1/2-i/2} + 4\sqrt3 +o(h^{3/4})\big)\Big] = \\
&=h^4{\mathrm{Re}}\Big[\overline{{\kappa}}\,\big(a C_0 + 4\sqrt3\big)\Big] + h^4{\mathrm{Re}}\Big[\overline{{\kappa}}\,a\, C_1 h^{1/2-i/2}\Big] + o(h^{4+3/4})=\\
&=R\, h^{4+1/2} \cos(\log \sqrt{h}-\alpha_0) + o(h^{4+3/4}),\end{aligned}$$ where $R=|\overline{{\kappa}}\, a\, C_1|,\quad \alpha_0 = \mathrm{arg}(\overline{{\kappa}}\, a\, C_1)$. Analogously we get $$J'(h) = R_1\, h^{3+1/2} \cos(\log \sqrt{h}-\alpha_1) + o(h^{3+3/4}).$$ Thus, by Implicit Function Theorem the zeroes $(h_n)$ of $J(h)$ approximate the simple zeroes $h_n^{(0)}$ of the function $\cos(\log \sqrt{h}-\alpha_0)$.\
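Concretely, the approximating zeroes can be written down explicitly: $\cos(\log\sqrt{h}-\alpha_0)=0$ for $\log\sqrt{h}=\alpha_0-\tfrac{\pi}2-\pi n$, i.e.
$$h_n^{(0)} = e^{2\alpha_0-\pi}\,e^{-2\pi n},\qquad n\in\bbN \text{ sufficiently large},$$
a geometric sequence accumulating at $h=0$ with ratio $e^{-2\pi}$.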
The remaining part of this section is devoted to the proof of Proposition \[pr:genabi\]. It proceeds in two steps. In the first one we show that the function $${\Psi}_\gamma(h) \longrightarrow C_0$$ as $h\to 0^+$, and so it is bounded.
In the second step we determine the monodromy of the generalized Abelian integral ${\Psi}_\gamma(h)$ as $h$ surrounds $h=0$. We know that then $\gamma$ changes to ${\mathcal{M}\mathrm{on}}_0\gamma=\gamma\cdot\delta$ and ${\mathcal{M}\mathrm{on}}_0\delta=\delta$. We would like to express ${\Psi}_{\gamma\cdot\delta}$ in simple terms, in order to determine the singularity of ${\Psi}_\gamma(h)$ at $h=0$. Rather complicated formulas for ${\Psi}_{\gamma\cdot\delta}$ are given in [@bozoell] and [@bozo3d]. In [@bo] these formulas were simplified using a certain upper triangular representation $\rho$ of the fundamental group $\pi_1(E_h,*)$. We recall this construction below.
We denote $$\label{thtpdef}
\begin{split}
{\Psi}_\gamma (h) & = (e^{-a T_\gamma} -1 )^{-1} \int_0^{T_\gamma}\dr t \int_t^{t+T_\gamma}\dr s\; e^{a(t-s)} (1-x_1)(s)\,(1-x_1)(t),\\
\lambda_\gamma(h) & = e^{-a T_\gamma/2},\\
\phi_\gamma(h) & = \lambda_\gamma \,\int_0^{T_\gamma}\dr t\int_0^t\dr s\; (1-x_1)(t)\, (1-x_1)(s)\cdot e^{a(t-s)}= \\
& =\lambda_\gamma \,\int_{-T_\gamma/2}^{T_\gamma/2}\dr t\int_{-T_\gamma/2}^t\dr s\; (1-x_1)(t+\tfrac{T_\gamma}2)\, (1-x_1)(s+\tfrac{T_\gamma}2)\, e^{a(t-s)},\\
{\theta}_\gamma^+ & = \lambda_\gamma\,\int_0^{T_\gamma}\dr t\; (1-x_1)(t)\, e^{a t} = \int_{-T_\gamma/2}^{T_\gamma/2}\dr t\; (1-x_1)(t+\tfrac{T_\gamma}2)\, e^{a t}, \\
{\theta}_\gamma^- & = \lambda_\gamma^{-1}\,\int_0^{T_\gamma}\dr t\; (1-x_1)(t)\, e^{-a t} = \int_{-T_\gamma/2}^{T_\gamma/2}\dr t\; (1-x_1)(t+\tfrac{T_\gamma}2)\, e^{-a t},
\end{split}$$ Here the subscript $\gamma$ underlines the dependence of the above functions on the loop $\gamma=\gamma_h$.
We introduce the following space of triangular matrices $$\label{gform}
{\mathbb{T}}\colon = \left\{ \left(\begin{smallmatrix}
\lambda& {\theta}^{\scriptscriptstyle-} &\phi\\
0& \lambda^{\scriptscriptstyle-1}& {\theta}^{\scriptscriptstyle+} \\
0& 0& \lambda \\
\end{smallmatrix}\right), \quad \lambda\in\bbC^*,\quad {\theta}^+,{\theta}^-,\phi\in\bbC \right\};$$ it forms a group. For $W\in {\mathbb{T}}$ we denote $$|W| = \det W = \lambda.$$ The existence of a 2-dimensional Jordan cell is measured by the following formula $$\label{psirdef}
\frac{(W-|W|)(W-1/|W|)}{|W|^2-1} = {\psi}(W)\left(\begin{smallmatrix}0&0&1\\ 0&0&0\\ 0&0&0\\ \end{smallmatrix}\right);$$ explicitly we have $$\label{gabirexplicit}
{\psi}(W) = \frac{{\theta}^+{\theta}^-}{\lambda^2-1} + \frac{\phi}{\lambda}.$$
\[th:bo\] The map $\rho:\pi_1(E_h,*)\rightarrow {\mathbb{T}}$, $$\label{thrho}
\rho(\gamma) = \begin{pmatrix}
\lambda_\gamma& {\theta}_\gamma^- & \phi_\gamma\\
0& \lambda^{-1}_\gamma& {\theta}^+_\gamma \\
0& 0& \lambda_\gamma \\
\end{pmatrix},$$ where $\lambda_\gamma,{\theta}_\gamma^\pm,\phi_\gamma$ are defined in (\[thtpdef\]), defines a representation of the fundamental group of $E_h$. Moreover, we have $$\label{thgabi}
{\Psi}_\gamma ={\psi}\circ \rho\; (h).$$
We have ${\Psi}_\gamma=(\lambda_\gamma^2-1)^{-1}\iint e^{a(t-s)} (1-x)(t)\;(1-x)(s)$, where the integration domain is $\Sigma=\{(t,s):\ 0\leq t \leq T_\gamma,\ t\leq s\leq T_\gamma+t\}$ (see (\[thtpdef\])). We divide $\Sigma$ into two “triangles” $\triangle_1=\{0\leq t\leq T_\gamma,\ t\leq s\leq T_\gamma\}$ and $\triangle_2=\{0\leq t\leq T_\gamma,\ T_\gamma\leq s \leq T_\gamma+t\} = \triangle_0 + (0,T_\gamma)$, where $\triangle_0=\{0\leq t\leq T_\gamma,\ 0\leq s \leq t\}$. We have $\iint_{\triangle_1+\triangle_2}(\cdot)= \iint_{\triangle_1+\triangle_0}(\cdot) + \iint_{\triangle_2-\triangle_0}(\cdot)$, where $\iint_{\triangle_1+\triangle_0}(\cdot)= {\theta}^+_\gamma\, {\theta}^-_\gamma$ and $\iint_{\triangle_2-\triangle_0}(\cdot)=(\lambda_\gamma^2-1)\lambda_\gamma^{-1}\phi_\gamma$. Now the formula (\[thgabi\]) follows from (\[gabirexplicit\]).
The property $\rho(\gamma\cdot\delta)=\rho(\gamma)\;\rho(\delta)$, $\gamma,\delta\in\pi_1(E_h,*)$ is proved analogously. We divide the line integrals in ${\theta}^\pm_{\gamma\delta}$ and the surface integral in $\phi_{\gamma\delta}$ into parts where $t$ or $s$ lies in $\gamma$ or in $\delta$. We use also $\lambda_{\gamma\delta}=\lambda_{\gamma}\lambda_{\delta}$.\
\[pr:gabilim\] Let $\xi(t)=(1-x_1)(t)$ with the initial value $\xi(0)=1-x_1^{(1)}$ (see Figure \[fig:dreg\]). We have the following integral formula for the generalized Abelian integral $$\label{prgabi}
{\Psi}_\gamma (h) = (e^{-a T_\gamma} -1)^{-1} \Big(\int_{-T_\gamma/2}^{T_\gamma/2}\xi(t) e^{at}\dr t\Big)^2 + \int_{-T_\gamma/2}^{T_\gamma/2}\dr t \int_{-T_\gamma/2}^{t}\dr s\ \xi(s)\xi(t)e^{a(t-s)}.$$ As $h\to 0^+$ these integrals have finite limits: $$\label{intlims}
\begin{split}
\int_{-T_\gamma/2}^{T_\gamma/2}\xi(t) e^{at}\dr t \longrightarrow \frac{\pi a}{\sin(\pi a/2\sqrt3)}=\sqrt{C_1},\\
\int_{-T_\gamma/2}^{T_\gamma/2}\dr t \int_{-T_\gamma/2}^{t}\dr s\ \xi(s)\xi(t)e^{a(t-s)} \longrightarrow C_0,
\end{split}$$ where $C_0,C_1$ are as defined in Proposition \[pr:genabi\].
The value of the generalized Abelian integral ${\Psi}_\gamma$ does not depend on a “shift” of the parametrization (e.g. $t\mapsto t+\tfrac{T_\gamma}2$), but the values of the integrals $\phi_\gamma$ and ${\theta}^\pm_\gamma$ do. We choose the Hamiltonian time parameter in such a way that $x_1(0)=x_1^{(2)}$, where $x^{(1)}_h<x^{(2)}_h<x^{(3)}_h$ are the real roots of the polynomial $x_1^3-3 x_1 +2-h$ (see Figure \[fig:dreg\]). Thus $$(1-x_1)(t + T_\gamma/2)=\xi(t)$$ and so, using formula (\[gabirexplicit\]) and formulas (\[thtpdef\]), we get formula (\[prgabi\]).
To determine the asymptotic expansion we notice that the singular curve $E_0=\{H(x)=0\}=\{x_2^2=(x_1-1)^2(x_1+2)\}$ is rational. The Hamiltonian parametrization of the limit loop $\gamma_0$ can be explicitly calculated: $$\label{xi}
\xi(t) \longrightarrow \xi_0(t)=\frac3{\cosh^2(\sqrt3 t)},\qquad -\infty<t<\infty$$ as $h\to 0^+$. Recall that $T_\gamma(0)=\infty$. Substituting these values to integrals in (\[prgabi\]) we get the following limits $$\int_{-T_\gamma/2}^{T_\gamma/2}\xi(t) e^{at}\dr t \longrightarrow \int_{-\infty}^{\infty} \frac3{\cosh^2(\sqrt3 t)} e^{it(a/i)}\dr t = \sqrt{2\pi} \mathcal{F}(\xi_0)(a/i)$$ where $\mathcal{F}$ denotes the Fourier transform. Since $\mathcal{F}(\tfrac1{\cosh^2})(k)=\sqrt{\tfrac{\pi}{2}}\tfrac{k}{\sinh(k\pi/2)}$ (see [@gr], Integral 3.982.1 for example) we find the value $$\frac{i \pi a}{\sinh(i\pi a/2\sqrt3)}=\frac{\pi a}{\sin(\pi a/2\sqrt3)}=\sqrt{C_1}.$$
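As a numerical cross-check of this value (again not part of the proof), one can integrate $\xi_0(t)e^{at}$ directly over the real line; the sketch below assumes `numpy`/`scipy` and the value $a=-\sqrt3+i\sqrt3$ used throughout.

```python
import numpy as np
from scipy.integrate import quad

SQ3 = np.sqrt(3.0)
a = -SQ3 + 1j * SQ3                                # the exponent a fixed in the text

xi0 = lambda t: 3.0 / np.cosh(SQ3 * t) ** 2        # limit profile xi_0(t)

# integrate real and imaginary parts separately; the integrand decays like exp(-sqrt(3)|t|)
re, _ = quad(lambda t: (xi0(t) * np.exp(a * t)).real, -40, 40, limit=200)
im, _ = quad(lambda t: (xi0(t) * np.exp(a * t)).imag, -40, 40, limit=200)

closed_form = np.pi * a / np.sin(np.pi * a / (2 * SQ3))   # = sqrt(C_1)
print(re + 1j * im, closed_form)
```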
To determine the limit of the second integral $$\int_{-T_\gamma/2}^{T_\gamma/2}\dr t \int_{-T_\gamma/2}^{t}\dr s\ \xi(t)\xi(s)e^{a(t-s)} \longrightarrow \int_{-\infty}^{\infty}\dr t \int_{-\infty}^{t}\dr s\ \xi_0(t)\xi_0(s)e^{a(t-s)}$$ we substitute $u=t-s$, use the symmetry $\xi_0(-t)=\xi_0(t)$ and the Parseval identity; then we get $$\begin{aligned}
\int_{-\infty}^{\infty}\dr t \int_0^{\infty}\dr u\ \xi_0(t)\xi_0(u-t)e^{au} &= \int_{-\infty}^{\infty}\dr k \Big(\mathcal{F}(\xi_0)(k)\Big)^2 \mathcal{F}\Big(e^{au}\chi_{[0,\infty)} (u)\Big)(k)=\\
&=\frac{3\sqrt{2}i}{\pi^{3/2}}\int_{-\infty}^{\infty} \frac{k^2}{\sinh^2 k}\frac{\dr k}{k-\pi i (a/2\sqrt3)}.\end{aligned}$$ To evaluate the latter integral, which has the form $$\label{fint}
F(w) = \int_{-\infty}^{\infty} \frac{z^2}{\sinh^2 z}\,\frac{\dr z}{z-\pi i w}, \qquad {\mathrm{Re}}w < 0,$$ we must use the logarithmic derivative of the Euler $\Gamma$ function, i.e. the function ${\psi}=(\log\Gamma)'=\tfrac{\Gamma'}{\Gamma}$.
Integrating by parts we obtain $$\begin{aligned}
F(w) &= \lim_{R\to+\infty}\int_{-R}^{-R^{-1}}\hspace{-1.5em}+\int_{R^{-1}}^{R} (\sgn z-\coth z)'\, \frac{z^2}{z-\pi iw}\dr z = \\
&=-2\pi i w +\pi^2 w^2\; \lim_{R\to+\infty}\int_{-R}^{-R^{-1}}\hspace{-1.5em}+\int_{R^{-1}}^{R} \frac{\coth z}{(z-\pi i w)^2}\dr z.\end{aligned}$$ Next, we integrate the function $\tfrac{\coth z}{(z-\pi i w)^2}$ along the contour consisting of the segment $[-R,-R^{-1}]$ followed by the semicircle $R^{-1}e^{i\varphi},\; \varphi\in[\pi,2\pi]$ and segments: $[R^{-1},R]$, $[R,R+i(N+\tfrac12)\pi]$, $[R+i(N+\tfrac12)\pi,-R+i(N+\tfrac12)\pi]$, $[-R,-R+i(N+\tfrac12)\pi]$, where $N\in\bbN$. Using the residue formula and passing to the limit $R,N\to\infty$ we deduce
$$\begin{gathered}
\lim_{R\to+\infty}\int_{-R}^{-R^{-1}}\hspace{-1.5em}+\int_{R^{-1}}^{R} \frac{\coth z}{(z-\pi i w)^2}\dr z + \frac{\pi i}{(\pi iw)^2} = 2\pi i\sum_{n=0}^\infty {\mathrm{Res}}_{i\pi n}\Big(\frac{\coth z}{(z-\pi i w)^2}\Big)=\\
= -\frac{2i}{\pi}\,\sum_{n=0}^\infty \frac1{(n-w)^2} = -\frac{2i}{\pi}\,{\psi}'(-w);
$$
in the identification of the latter sum we used [@gr], formula 8.363.8. Finally we have $$F(w) = \pi i \Big(1-2 w-2w^2{\psi}'(-w)\Big)\qquad \text{for}\quad {\mathrm{Re}}(w)<0,$$ and so the second of the limits (\[intlims\]) follows.\
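The final formula for $F(w)$ can also be checked numerically; in the sketch below (an illustration only, assuming `numpy`/`scipy`) the trigamma value ${\psi}'(-w)$ is obtained directly from its defining series $\sum_{n\geq0}(n-w)^{-2}$.

```python
import numpy as np
from scipy.integrate import quad

w = 0.5 * (-1 + 1j)                     # w = a/(2*sqrt(3)) for a = -sqrt(3) + i*sqrt(3)
pole = np.pi * 1j * w                   # lies off the real axis since Re(w) < 0

def core(z):
    # z^2/sinh(z)^2 has a removable singularity at z = 0
    return 1.0 if z == 0.0 else z * z / np.sinh(z) ** 2

re, _ = quad(lambda z: (core(z) / (z - pole)).real, -30, 30, limit=200)
im, _ = quad(lambda z: (core(z) / (z - pole)).imag, -30, 30, limit=200)

# psi'(-w) from the series used above (truncated; accuracy roughly 1/N)
n = np.arange(1_000_000)
trigamma = np.sum(1.0 / (n - w) ** 2)

print(re + 1j * im, np.pi * 1j * (1 - 2 * w - 2 * w ** 2 * trigamma))
```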
Now we investigate the monodromy properties of the generalized Abelian integral ${\Psi}_\gamma$. We shall need the following
\[lem:algid\] For $W,W'\in{\mathbb{T}}$ we have $${\psi}(W\cdot W') = {\psi}(W)+ {\psi}(W') + \frac{|W|^2\,|W'|^2} {(|W|^2 -1)(|W'|^2 -1)(|W|^2\,|W'|^2 -1)} {\widetilde{\psi}}([W,W']),$$ where $[W,W']=W\,W'\,W^{-1}\,(W')^{-1}$ is the commutator and $${\widetilde{\psi}}(W) = (|W|^2-1)\;{\psi}(W);$$ (for $|W|=1$ we have ${\widetilde{\psi}}(W)={\theta}^+\,{\theta}^-$ in terms of (\[gform\])).
The proof relies on direct calculations.\
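These direct calculations can be reproduced, for instance, by a short numerical experiment on random elements of ${\mathbb{T}}$ (a sketch, not part of the proof); here ${\widetilde{\psi}}$ is expanded as ${\theta}^+{\theta}^-+(\lambda^2-1)\phi/\lambda$ so that it stays well defined at $\lambda=1$, which is the case for the commutator.

```python
import numpy as np

rng = np.random.default_rng(0)

def elem(lam, tm, tp, phi):
    """Element of the group T from (gform)."""
    return np.array([[lam, tm, phi],
                     [0, 1 / lam, tp],
                     [0, 0, lam]], dtype=complex)

def psi(W):
    lam, tm, tp, phi = W[0, 0], W[0, 1], W[1, 2], W[0, 2]
    return tp * tm / (lam**2 - 1) + phi / lam

def psi_tilde(W):
    lam, tm, tp, phi = W[0, 0], W[0, 1], W[1, 2], W[0, 2]
    return tp * tm + (lam**2 - 1) * phi / lam      # = (|W|^2 - 1) psi(W), regular at lam = 1

def random_elem():
    lam, tm, tp, phi = rng.normal(size=4) + 1j * rng.normal(size=4)
    return elem(lam, tm, tp, phi)

W, V = random_elem(), random_elem()
commutator = W @ V @ np.linalg.inv(W) @ np.linalg.inv(V)
dW, dV = W[0, 0], V[0, 0]                          # |W| and |W'|
rhs = psi(W) + psi(V) + dW**2 * dV**2 / ((dW**2 - 1) * (dV**2 - 1) * (dW**2 * dV**2 - 1)) * psi_tilde(commutator)
print(abs(psi(W @ V) - rhs))                       # ~ 1e-14
```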
\[cor:psian\] The function ${\Psi}_\gamma(h)$ near $h=0$ has the following form $$\label{psian}
{\Psi}_\gamma(h) = \varphi_1(h) + \tfrac1{2\pi i}{\Psi}_\delta(h) \log h - \frac{\lambda_\delta^2(h)}{(\lambda_\delta^2(h) - 1)^2}{\widetilde{\Psi}}_{[\gamma,\delta]}(h)\cdot \frac1{\lambda_\gamma^2(h) - 1},$$ where $\delta$ is the second cycle in $\pi_1(E_h,*)$ (see the proof of Lemma \[lem:abi\]) and ${\widetilde{\Psi}}_{[\gamma,\delta]}(h) = {\widetilde{\psi}}(\rho([\gamma,\delta]))$. The functions $\varphi_1$, ${\Psi}_\delta$ and ${\widetilde{\Psi}}_{[\gamma,\delta]}$ are holomorphic near $h=0$.
\[rk:tlgabi\] One can prove that the function ${\widetilde{\Psi}}_{[\gamma,\delta]}$ is constant: $${\widetilde{\Psi}}_{[\gamma,\delta]}(h) = (2\pi a)^2.$$ Indeed, since the contour $[\gamma,\delta]$ is monodromy invariant (see the proof of Corollary \[cor:psian\] below), the function ${\widetilde{\Psi}}_{[\gamma,\delta]}$ is meromorphic on the whole of $\bbC$ with possible poles at $h=0,4$. We know, by Proposition \[pr:gabilim\], that it is bounded as $h\to 0$. Similarly one shows that it is bounded as $h\to 4$; these calculations are analogous to the proof of the first limit in (\[intlims\]). We also check that ${\widetilde{\Psi}}_{[\gamma,\delta]}$ is bounded as $h\to\infty$. Thus this function has to be constant; we calculate its value by passing to the limit $h\to0$ and comparing the respective terms in (\[prgabi\]) and (\[psian\]).
The Picard-Lefschetz formula (\[piclef\]), Theorem \[th:bo\] and Lemma \[lem:algid\] imply that $$\label{mongabi}
{\mathcal{M}\mathrm{on}}_0{\Psi}_\gamma ={\psi}(\rho(\gamma)\cdot\rho(\delta)) = {\Psi}_\gamma+{\Psi}_\delta+\frac{\lambda_\gamma^2\lambda_\delta^2}{(\lambda_\gamma^2 - 1)\,(\lambda_\delta^2 - 1)\,(\lambda_\gamma^2\lambda_\delta^2 - 1)} {\widetilde{\Psi}}_{[\gamma,\delta]}$$ and ${\mathcal{M}\mathrm{on}}_0{\Psi}_\delta={\Psi}_\delta$. Next, the following monodromy relations follow from the Picard-Lefschetz formula (\[piclef\]): $$\begin{gathered}
{\mathcal{M}\mathrm{on}}_0 \lambda_\gamma = \lambda_\gamma\, \lambda_\delta,\qquad {\mathcal{M}\mathrm{on}}_0\lambda_\delta=\lambda_\delta,\\
{\mathcal{M}\mathrm{on}}_0[\gamma,\delta] = (\gamma\delta)\cdot\delta\cdot(\gamma\delta)^{-1}\cdot\delta^{-1}=[\gamma,\delta].\end{gathered}$$ Therefore ${\Psi}_\delta$, ${\widetilde{\Psi}}_{[\gamma,\delta]}$ and $\lambda_\delta$ are locally single-valued functions of $h$. Since they are bounded (see Proposition \[pr:gabilim\]), they must be holomorphic. Now $$\begin{aligned}
{\mathcal{M}\mathrm{on}}_0 \Big(\tfrac1{2\pi i} {\Psi}_\delta\, \log h \Big) &= \tfrac1{2\pi i} {\Psi}_\delta\, \log h + {\Psi}_\delta,\\
{\mathcal{M}\mathrm{on}}_0 \Big(-\frac{\lambda_\delta^2\, {\widetilde{\Psi}}_{[\gamma,\delta]} }{(\lambda_\delta^2 - 1)^2\, (\lambda_\gamma^2 - 1)}\Big) &= -\frac{\lambda_\delta^2\, {\widetilde{\Psi}}_{[\gamma,\delta]} }{(\lambda_\delta^2 - 1)^2\, (\lambda_\gamma^2\lambda_\delta^2 - 1)} =\\
= \Big(-\frac{\lambda_\delta^2\, {\widetilde{\Psi}}_{[\gamma,\delta]} }{(\lambda_\delta^2 - 1)^2\, (\lambda_\gamma^2 - 1)}\Big) & + \frac{\lambda_\gamma^2\lambda_\delta^2\, {\widetilde{\Psi}}_{[\gamma,\delta]}}{(\lambda_\gamma^2 - 1)\,(\lambda_\delta^2 - 1)\,(\lambda_\gamma^2\lambda_\delta^2 - 1)}.\end{aligned}$$ Therefore the function $\varphi_1$ defined by (\[psian\]) is single-valued. Since $(\lambda_\gamma^2 -1)$ and $(\lambda_\delta^2 -1)$ are separated from zero and the function ${\Psi}_\gamma$ is bounded (see Proposition \[pr:gabilim\]), the function $\varphi_1$ is holomorphic.\
Corollary \[cor:psian\] allows us to finish the proof of Proposition \[pr:genabi\]. The holomorphic function $\varphi_1$ is defined in the Corollary, and $\varphi_2$, $\varphi_3$ can be read off from (\[psian\]): $$\begin{aligned}
\varphi_2 &= \tfrac1{2\pi i} {\Psi}_\delta,\\
\varphi_3 &= -\frac{\lambda_\delta^2\, {\widetilde{\Psi}}_{[\gamma,\delta]} }{(\lambda_\delta^2 - 1)^2}=-\Big(\frac{2\pi a \lambda_\delta }{\lambda_\delta^2 - 1}\Big)^2\end{aligned}$$ (the latter equality follows from Remark \[rk:tlgabi\]).
Since $T_\gamma = -\tfrac1{2\sqrt3} \log h + O(1)$, we have the leading term of the expansion $$(\lambda_\gamma^2 - 1)^{-1}=(e^{-a T_\gamma} - 1)^{-1} = h^{-a/2\sqrt3}+ \ldots$$
Estimates {#sec:estim}
---------
In this subsection we show that the zeroes $h_n$ of the generalized Abelian integral (see Corollary \[cor:zeroes\]) generate corresponding limit cycles of the system (\[system\]), provided ${\varepsilon}$ is sufficiently small.
Recall that the problem of limit cycles of (\[system\]) is reduced to the problem of limit cycles of the following planar system $$\label{plsystem}
\dot x = X_H(x) + {\varepsilon}\,{\mathrm{Re}}\Big(\overline{{\kappa}}\,G(x,{\varepsilon})\Big) e_2,$$ where the function $G(x,{\varepsilon})$ is defined via the invariant surface $L_{\varepsilon}= \{y={\varepsilon}G(x,{\varepsilon})\}$, which is a graph of $G(\cdot,{\varepsilon})$.
At the moment we do not even know whether the invariant surface exists. Indeed, the normal hyperbolicity conditions are *not* satisfied: the eigenvalues in the normal direction are $\lambda_{3,4} = -\sqrt3 \pm i \sqrt3$, whereas the eigenvalues at the saddle point $x=(1,0),y=0$ in the $x$-direction are $\pm 2\sqrt3$ (compare [@bozoell; @bozo3d; @hps; @ni]). We should do two things:
1. prove the existence of the invariant surface,
2. estimate the discrepancy $G(x,{\varepsilon})- H^4\, g(x)$, where $g(x)$ is the solution to the normal variation equation given in (\[nveq\]).
In both tasks the crucial role is played by the following Lemma. Let us recall the notation related to the elliptic Hamiltonian $H(x)=x_1^3 -3x_1 - x_2^2+2$. The basin $D\subset\bbR^2$ (see Figure \[fig:dreg\]) is filled with closed orbits of the Hamiltonian vector field $X_H$.
\[lem:c1inv\] Let $U\supset D\times \{0\}$ be an open neighborhood in $\bbR^2\times \bbC$ and $V_{\varepsilon}$ be the following vector field in $U$ $$\label{vve}
V_{\varepsilon}\left\{ \begin{aligned}
\dot x &= X_H + H^k\,{\mathrm{Re}}(\overline{{\kappa}}\,y) v_0 + Q(x,y;{\varepsilon})\\
\dot y &= a y + B(x,y;{\varepsilon}),
\end{aligned} \right.$$ where $k\geq 0$, $a=-\sqrt3+i\sqrt3$, ${\kappa}\in\bbC$, $v_0\in\bbR^2$ and $Q,B$ are functions of class $C^2(U)$ satisfying the following $$\begin{split}
|Q| &\leq {Const}\cdot {\varepsilon}|H(x)|^2,\\
|B| &\leq {Const}\cdot {\varepsilon}|H(x)|^2.
\end{split}$$ Then, for sufficiently small ${\varepsilon}$ there exists a unique invariant surface of $V_{\varepsilon}$: $$L_{\varepsilon}= \{(x,y):\quad x\in D, \ y={\varepsilon}\, G(x,{\varepsilon})\}.$$ The function $G(\cdot,{\varepsilon})$ prolongs by zero to a $C^1$ function on a neighborhood of $D$ in $\bbR_x^2$.
We shall prove that the Poincaré return map associated to the vector field $V_{\varepsilon}$ satisfies the normal hyperbolicity condition.
In a neighborhood of the center critical point $(x=(-1,0),y=0)$ of the unperturbed vector field $V_0$, the system is normally hyperbolic and so the invariant surface exists. In the rest of the proof we shall concentrate on a neighborhood of the separatrix $\gamma_0=\{(x,y):y=0,\quad H(x)=0\}$ (see Figure \[fig:estimaux\]).
The non-degenerate critical point $p_0=((1,0),0)$, located on this separatrix, is preserved after the perturbation. Let us choose a 3-dimensional hypersurface $S$ transversal to $\gamma_0$ and close to the singular point $p_0$. Let $$\begin{aligned}
\Sigma_{\alpha,d} &= S\cap \{(x,y):\quad |y|^2 \leq \alpha^2 (H(x))^2, \quad H(x) < d\}\\
\intertext{be a sector in $S$ with vertex}
\sigma_0 &= \Sigma_{\alpha,d}\cap \gamma_0 = \{(x,0)\in\Sigma_{\alpha,d}:\quad H(x)=0\}.
$$
\[lem:poinc1\] For sufficiently small $\alpha >0,\ d_2>d_1>0$ and ${\varepsilon}\in(-{\varepsilon}_0,{\varepsilon}_0)$, the Poincaré return map $$\label{pvempa}
{\mathcal{P}}_{\varepsilon}: \Sigma_{\alpha,d_1} \rightarrow \Sigma_{\alpha,d_2}$$ is a diffeomorphism onto its image and prolongs to a map of class $C^1$ at the point $\sigma_0$.
Now we finish the proof of Lemma \[lem:c1inv\]. The unperturbed Poincaré map ${\mathcal{P}}_0$ is the identity on the invariant segment $I_0=\Sigma_{\alpha,d_1}\cap\{y=0\}$. Thus ${\mathcal{P}}_0$ is normally hyperbolic on $I_0$, since we have strong contraction in the normal direction. By virtue of the Hirsch–Pugh–Shub Theorem [@hps], for sufficiently small ${\varepsilon}$ there exists a unique invariant embedded interval $I_{\varepsilon}$ close to $I_0$; it is of class $C^1$. Considering the Hamiltonian as the parameter on $I_0$, we get $$I_{\varepsilon}=\{y={\varepsilon}F(h,{\varepsilon}), \ h\in [0,\delta)\},\quad F\in C^1([0,\delta)\times (-{\varepsilon}_0,{\varepsilon}_0)).$$ The surface $S_{\varepsilon}$ spanned by the trajectories of $V_{\varepsilon}$ passing through $I_{\varepsilon}$ is $V_{\varepsilon}$-invariant, due to the invariance of $I_{\varepsilon}$ under the Poincaré map ${\mathcal{P}}_{\varepsilon}$. The form of the invariant interval $I_{\varepsilon}$ and the form of the vector field $V_{\varepsilon}$ imply that the surface $S_{\varepsilon}$ is the graph of a $C^1(D)$ function: $$L_{\varepsilon}=\{(x,y):x\in D, \quad y={\varepsilon}G(x,{\varepsilon})\}.$$ We prove that $G$ can be extended by zero outside $D$.
We check that, provided the assumptions of Lemma \[lem:c1inv\] hold, the set $$\{(x,y):x\in D, |y|^2 \leq R^2 {\varepsilon}^2 |H(x)|^4\},$$ for $R$ big enough, is invariant for $V_{\varepsilon}$. Namely, denoting all constants by $C$, we calculate $$\begin{gathered}
\label{invcal}
V_{\varepsilon}(|y|^2 - R^2 {\varepsilon}^2 |H|^4) |_{|y| =R\, {\varepsilon}\, |H|^2} =\\
= 2 {\mathrm{Re}}\Big[\overline{y}\; (a\, y+B)\Big] -4 R^2 {\varepsilon}^2 H^3\Big(H^3{\mathrm{Re}}(\overline{{\kappa}}\,y\, <\dr H,v_0> + <\dr H,Q>)\Big) \\
\leq 2R^2\,{\varepsilon}^2\, H^4 \Big(-\rho + \tfrac{C}{R} + 2R\,{\varepsilon}\,C\,|H|^{1+k}+ {\varepsilon}\,C\,|H|\Big).\end{gathered}$$ Since the latter expression is $\leq 0$ (for sufficiently large $R$), the considered subset is $V_{\varepsilon}$-invariant.
Thus $$|G(x,{\varepsilon})|\leq {Const}\cdot |H(x)|^2$$ and the function $G(x,{\varepsilon})$ can be prolonged by zero to a function of class $C^1$.\
By calculations analogous to (\[invcal\]) we find that $$V_{\varepsilon}(|y|^2 - \alpha^2 H^2) |_{|y| =\alpha\, |H|} \leq 2\alpha^2\, H^2 \Big(-\rho + \tfrac{{\varepsilon}\,|H|}{\alpha} + \alpha\,C\,|H|^k + C{\varepsilon}\,|H|\Big).$$ Thus for sufficiently small $\alpha$ and ${\varepsilon}$ the subset $$\{(x,y):x\in D,\quad |y|^2\leq \alpha^2 (H(x))^2\}$$ is $V_{\varepsilon}$-invariant. Moreover, the separatrix $\gamma_0$ is also $V_{\varepsilon}$-invariant. This proves that the Poincaré return map defines the diffeomorphism (\[pvempa\]), which is of class $C^1$ outside the boundary.
Thus it remains to show that ${\mathcal{P}}_{\varepsilon}$ can be prolonged to a $C^1$ map at the point $\sigma_0$. We choose an additional, auxiliary, 3-dimensional hypersurface $\widetilde{S}$ transversal to $\gamma_0$, close to $p_0$, which lies “on the other side” with respect to the point $p_0$ (see Figure \[fig:estimaux\]). The return map ${\mathcal{P}}_{\varepsilon}$ is the composition ${\mathcal{P}}_{\varepsilon}= {\mathcal{P}}^s_{\varepsilon}\circ {\mathcal{P}}^r_{\varepsilon}$ of the correspondence maps $${\mathcal{P}}^s_{\varepsilon}: S\rightarrow \widetilde{S}, \qquad \text{and}\qquad {\mathcal{P}}^r_{\varepsilon}: \widetilde{S}\rightarrow S,$$ defined by the trajectories near the singular point $p_0$ and the trajectories near the regular part of $\gamma_0$, respectively. The regular map naturally extends to a $C^1$ map (in fact even $C^2$) as the flow of the non-vanishing $C^2$ vector field $V_{\varepsilon}$. To analyze the singular part ${\mathcal{P}}^s_{\varepsilon}$, we use the following theorem of H. Belitskii.
Let $\Lambda\in \mathrm{End}(\bbR^n)$ be a linear endomorphism whose eigenvalues $(\lambda_1,\ldots,\lambda_n)$ satisfy $$\mathrm{Re}\lambda_i \neq \mathrm{Re}\lambda_j + \mathrm{Re}\lambda_k$$ for all $i$ and $j,k$ such that $\mathrm{Re}\lambda_j \leq 0\leq \mathrm{Re}\lambda_k$. Then any $C^2$ differential system $$\frac{\dr x}{\dr t} = \Lambda x + f(x), \qquad f(0)=0=f'(0)$$ is, in a neighborhood of $0$, $C^1$-equivalent to its linearization.
The eigenvalues of the linearization of our vector field $V_{\varepsilon}$ at $p_0$ are $\pm2\sqrt3,\, -\sqrt3\pm i\sqrt3$, so $V_{\varepsilon}$ satisfies the assumptions of the Belitskii theorem. In suitable coordinates $(u,v)$, associated with the linearization of $V_{\varepsilon}$ in the neighborhood of $p_0$, the correspondence map ${\mathcal{P}}^s_{\varepsilon}$ has the form $${\mathcal{P}}^s_{\varepsilon}(u,v)=(u,C\ u^\beta\, v),\quad u\in\bbR_+,\quad v\in (\bbC,0),\quad \beta= \tfrac12-\tfrac{i}2.$$ The restriction to $\Sigma_{\alpha,d}$ corresponds to the restriction to the set $\{|v|^2\leq \widetilde{\alpha}(u)\, u$, $u\in\bbR_+$, $\widetilde{\alpha}(0)>0\}$. In such a region the map ${\mathcal{P}}^s_{\varepsilon}$ is of class $C^1$ at $\sigma_0$; we have $(u,v)(\sigma_0)=(0,0)$ and $({\mathcal{P}}^s_{\varepsilon})'(0,0)=\left(\begin{smallmatrix}1&0\\ 0&0\\ \end{smallmatrix}\right)$. Thus, the assertion of the lemma follows.\
Now we can prove the existence of the invariant surface and estimate the distance to its linear approximation $H^4g$.
\[pr:discr\] For sufficiently small ${\varepsilon}$, there exists the invariant surface $$\label{invsurf}
L_{\varepsilon}=\{y={\varepsilon}\, G(x,{\varepsilon}),\quad x\in D \}$$ of the system (\[system\]). The function $G$ is of class $C^1$ and the distance to the linearization $(H^4g)(x)=G(x,0)$ is bounded by $$\begin{aligned}
|G - H^4g| &\leq C |{\varepsilon}| |h|^5, \label{Gg} \\
|G'-(H^4g)'| &\leq C |{\varepsilon}| |h|^4, \label{Ggprim}
\end{aligned}$$ where $G'=G,_x$ is the derivative with respect to $x$.
The existence of the invariant surface of class $C^1$ and the form (\[invsurf\]) is a direct consequence of Lemma \[lem:c1inv\].
To show the bounds (\[Gg\]), (\[Ggprim\]), we make a coordinate change $$(x,y) \longmapsto (x,z),\qquad z=\Big(y-{\varepsilon}\, H^4g(x)\Big)/({\varepsilon}\, H^4).$$ The system (\[system\]) takes the form $$\label{systemxz}
\left\{
\begin{aligned}
\dot{x} =& X_H + H^4\,{\mathrm{Re}}(\overline{{\kappa}}\,z)\,{\varepsilon}\,e_2 + {\varepsilon}\,H^4\,{\mathrm{Re}}(\overline{{\kappa}}\,g)\,e_2,\\
\dot{z} =& a\,z + 4{\varepsilon}\,H^3 (z+g) {\mathrm{Re}}(\overline{{\kappa}}\,(z+g))\, 2x_2 - {\varepsilon}\,H^4\tfrac{\partial g}{\partial X_2}\,{\mathrm{Re}}(\overline{{\kappa}}\,(z+g)).
\end{aligned}\right.$$ Using the integral formula (\[gint\]) for the function $g$ we deduce that it is bounded, $|g|\leq C$. Since the function $g$ prolongs to a (multivalued) holomorphic function ramified along the singular curve $\gamma_0$, the following bounds for the derivatives of $g$ hold: $$\label{gbounds}
|g^{(k)}|\leq C_k |H|^{-k}.$$ Using this one can check that the system (\[systemxz\]) satisfies the assumptions of Lemma \[lem:c1inv\]. Thus, the invariant surface has the form $$z=\frac{{\varepsilon}\,G-{\varepsilon}\,H^4g}{{\varepsilon}\,H^4}={\varepsilon}\,U(x,{\varepsilon})$$ and the function $U$ prolongs by zero to a $C^1$ function on a neighborhood of $D$. Thus, the function $U$ satisfies the estimates $$|U|\leq C|H|, \qquad |U'|\leq C,$$ which are equivalent to (\[Gg\]) and (\[Ggprim\]).\
Now we show that the generalized Abelian integral $J(h)$ is a good approximation of the Poincaré return map and so the zeroes of $J(h)$ generate limit cycles for sufficiently small ${\varepsilon}$.
\[pr:jlead\] Let $\Delta H (h,{\varepsilon})$ be the increment of the Hamiltonian after the first return of the system $V_{\varepsilon}$ restricted to the invariant surface $L_{\varepsilon}$. Then, there exists a constant $C$ such that $$\begin{aligned}
|\Delta H -{\varepsilon}J| &\leq C\, {\varepsilon}\, |h|^5, \label{dist}\\
|\partial_h (\Delta H) -{\varepsilon}J'(h)| &\leq C\,{\varepsilon}\, |h|^4\ |\log h|. \label{distprim}\end{aligned}$$
Here we study the phase curves of the 2-dimensional vector field (\[plsystem\]) i.e. $$W_{\varepsilon}\colon = V_{\varepsilon}|_{L_{\varepsilon}} = X_H + {\varepsilon}Re(\overline{{\kappa}}\, G)\, \partial_{x_2}.$$
We fix the segment $I=\{(x_1,0):x_1\in [-2,-1)\}$ transversal to the Hamiltonian flow. We denote by ${\beta}_{\varepsilon}(t,h)$ the integral curves of $W_{\varepsilon}$ which start and finish at $I$. They satisfy $$\label{auxbounds}
\begin{split}
\dot{{\beta}}_{\varepsilon}(t,h) &= X_H + {\varepsilon}\,{\mathrm{Re}}(\overline{{\kappa}}\,G)\; e_2,\\
{\beta}_{\varepsilon}(0,h) &\in I,\qquad {\beta}_{\varepsilon}(T_{\varepsilon},h)\in I \\
H({\beta}_{\varepsilon}(0,h)) &= h.
\end{split}$$ For ${\varepsilon}=0$ the curve ${\beta}_0$ is the oval $\{H=h\}$ and for ${\varepsilon}$ non-zero but small it is a small perturbation of ${\beta}_0$: $${\beta}_{\varepsilon}(t,h) = {\beta}_0 (t,h) + {\varepsilon}{b}(t,h;{\varepsilon}).$$ Above $T_{\varepsilon}=T_{\varepsilon}(h)$ is the time of the first return to the segment $I$.
\[lem:bound\] There exist a constant $C$ and a small positive constant $\nu$ such that the following estimates hold: $$\begin{aligned}
|{b}| &\leq C |h|^4\; e^{(2\sqrt3 +\nu) t}, \label{bndF}\\
|\partial_h {b}| &\leq C |h|^3 \ e^{(2\sqrt3 +\nu) t},\label{bndFprim}\\
|T_{\varepsilon}- T_0| &\leq C\,{\varepsilon}\, |h|^{3}.\label{bndtt}
\end{aligned}$$
We use the scalar product $x\cdot x' = 3 x_1\,x_1' + x_2\,x_2'$ and $|x|=\sqrt{x\cdot x}$. It follows from the equation (\[auxbounds\]) that the function ${b}$ satisfies the following initial value problem $$\label{auxF}
\left\{
\begin{aligned}
\dot{{b}} &= \widetilde{\dr X}_H\, {b}+ {\mathrm{Re}}(\overline{{\kappa}}\,G({\beta}_0+{\varepsilon}\,{b}))\, e_2,\\
{b}(0,h;{\varepsilon}) &= 0,
\end{aligned}\right.$$ where $\widetilde{\dr X}_H\, {b}= \tfrac1{\varepsilon}\big(X_H({\beta}_0+{\varepsilon}\,{b}) - X_H({\beta}_0)\big)$. We have $\widetilde{\dr X}_H = \dr X_H ({\beta}_0+\theta\,{\varepsilon}\,{b})$, for some $\theta\in(0,1)$. Hence $$\widetilde{\dr X}_H = \left(
\begin{smallmatrix}
0& -2\\ -6x_1&0
\end{smallmatrix}\right),\qquad x_1\in [-2,1].$$ Moreover, using estimates (\[Gg\]), (\[Ggprim\]), (\[gbounds\]) we get $|G({\beta}_0+{\varepsilon}\,{b})|\leq C_1\,h^4$. For the solution ${b}$ to the equation (\[auxF\]) we have $$\begin{gathered}
\tfrac{\dr}{\dr t} | {b}|^2 = 2 | {b}| \; \tfrac{\dr}{\dr t} |{b}| = 2 {b}\cdot\widetilde{\dr X}_H\, {b}+ 2 {b}\cdot{\mathrm{Re}}(\overline{{\kappa}}\,G({\beta}_0+{\varepsilon}\,{b}))\, e_2 =\\
= -12(x_1+1)\, {b}_1\,{b}_2 + 2 {b}_2\, {\mathrm{Re}}(\overline{{\kappa}}\,G({\beta}_0+{\varepsilon}\,{b})) \leq 4\sqrt3 | {b}|^2 + 2 C_2 h^4 | {b}|.\end{gathered}$$ Therefore $\tfrac{\dr}{\dr t} \left(| {b}|\right) \leq 2\sqrt3 | {b}| + C_2 h^4,\quad |{b}| (0) = 0$ and the Gronwall inequality [@har] gives the bound (\[bndF\]).
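Spelled out, the Gronwall step reads: from $\tfrac{\dr}{\dr t}|{b}|\leq 2\sqrt3\,|{b}|+C_2h^4$ with $|{b}|(0)=0$ one gets
$$|{b}|(t)\;\leq\;\frac{C_2\,h^4}{2\sqrt3}\Big(e^{2\sqrt3\,t}-1\Big)\;\leq\;C\,h^4\,e^{(2\sqrt3+\nu)t},$$
which is exactly (\[bndF\]).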
Since the difference of flows ${\varepsilon}\,{b}$ after the Hamiltonian period $T_0$ is $\leq \widetilde{C}|h|^{3}$ and the “velocity” $|V_{\varepsilon}|\sim 1$, the difference of periods $|T_{\varepsilon}-T_0|$ satisfies (\[bndtt\]).
The derivative $\tfrac{\partial {b}}{\partial_h}$ satisfies the respective linear variation equation related to (\[auxF\]) $$\tfrac{\dr}{\dr t}\Big(\tfrac{\partial {b}}{\partial_h}\Big) = \Big(\widetilde{\dr X}_H+{\varepsilon}\,(\ldots)\Big)\,\tfrac{\partial {b}}{\partial_h} + h^3\,(\ldots)\,(\tfrac{\partial {\beta}_0}{\partial_h}),\qquad \tfrac{\partial {b}}{\partial_h}(0,h;{\varepsilon})=0,$$ where we denoted by $(\ldots)$ the bounded terms (see the estimates (\[bndF\],\[bndtt\],\[Ggprim\],\[gbounds\])). Since the flow variation $\tfrac{\partial {\beta}_0}{\partial_h}$ of the Hamiltonian field satisfies $\left|\tfrac{\partial {\beta}_0}{\partial_h}\right|\leq C\,e^{2\sqrt3\,t}$, the inequality (\[bndFprim\]) holds. This finishes the proof of Lemma \[lem:bound\].\
*We continue the proof of Proposition \[pr:jlead\].*\
We split the difference between the Poincaré map and the linearization ${\varepsilon}\, J(h)$ in two integrals $R_1(h,{\varepsilon})$ and $R_2(h,{\varepsilon})$: $$\begin{aligned}
R_1 &= \int_{\gamma_{\varepsilon}} {\mathrm{Re}}(\overline{{\kappa}}\,(G-H^4g))\; \dr x_1,\\
R_2 &= \int_{\gamma_{\varepsilon}} {\mathrm{Re}}(\overline{{\kappa}}\,H^4g)\; \dr x_1 - \int_{\gamma_0} {\mathrm{Re}}(\overline{{\kappa}}\,H^4g)\; \dr x_1.\end{aligned}$$ We shall show that the estimates (\[dist\]) and (\[distprim\]) hold for both $R_1$ and $R_2$.
The inequality (\[dist\]) is a direct consequence of the bound (\[Gg\]). The difference of $R_1$ at close values of $h$ takes the form $$R_1(h+\delta)- R_1(h) = \int_{\gamma_{\varepsilon}(h,h+\delta)} \partial_{x_2}{\mathrm{Re}}(\overline{{\kappa}}\,(G-H^4g))\;\dr x_2\wedge\dr x_1 + O({\varepsilon}|h-2|^5),$$ where the integral is taken over the strip $\gamma_{\varepsilon}(h,h+\delta)$ between $\gamma_{\varepsilon}(h)$ and $\gamma_{\varepsilon}(h+\delta)$. The area of this strip is of the same order as the area of the domain $\{x:\ h<H(x)<h+\delta\}$, i.e. $\sim \delta\cdot I_0\sim C\,\delta\,|\log h|$ (see the proof of Lemma \[lem:abi\]). So the estimate (\[distprim\]) follows from (\[Ggprim\]).
To prove the estimate for $R_2$ we use (\[auxbounds\]) and (\[gbounds\]): $$\begin{gathered}
|R_2|\leq C_1 \int_0^{T_0}{\mathrm{Re}}(\overline{{\kappa}}\,H^4g) ({\beta}_0+ {\varepsilon}{b}) - {\mathrm{Re}}(\overline{{\kappa}}\,H^4g) ({\beta}_0) + \int_{T_0}^{T_{\varepsilon}} C_2\,{\varepsilon}\,|h|^4 \leq \\
\leq C_3\,{\varepsilon}\, |h|^7 \int_0^{T_0} e^{(2\sqrt3 +\nu)t}\;\dr t + C_4\,|h|^4\,|T_{\varepsilon}-T_0|\leq C_5\,{\varepsilon}\, |h|^{6-2\nu} \leq C {\varepsilon}|h|^5.\end{gathered}$$
Similarly, using the following formula for the derivative of the integral $$\frac{\partial}{\partial h} \int_{{\beta}_{\varepsilon}} \omega = \int_{{\beta}_{\varepsilon}} i_{\partial {\beta}_{\varepsilon}/\partial_h}\;\dr \omega + \omega \left(\left.\tfrac{\partial{\beta}_{\varepsilon}}{\partial_h}\right|_{t=0}\right) - \omega \left(\left.\tfrac{\partial{\beta}_{\varepsilon}}{\partial_h}\right|_{t=T_{\varepsilon}}\right)$$ and the bounds (\[auxbounds\]), (\[gbounds\]), we get $$|R_2|\leq C\,{\varepsilon}\, |h|^5,\qquad |\partial_h R_2|\leq C\,{\varepsilon}\, |h|^4.$$
Now we can finish the proof of the Main Theorem. The restriction of system (\[system\]) to its invariant surface $L_{\varepsilon}=\{y={\varepsilon}\,G(x,{\varepsilon})\}$ has the form (\[plsystem\]). The increment $\Delta H={\mathcal{P}}(h)-h$, associated with the Poincaré map ${\mathcal{P}}(h)$ (on a section transversal to $\gamma_0$), equals (see Proposition \[pr:jlead\]) $$\begin{aligned}
\Delta H(h) &= {\varepsilon}J(h) + O(|{\varepsilon}|\,|h|^5),\\
(\Delta H)'(h) &= {\varepsilon}J'(h) + o(|{\varepsilon}|\,|h|^{4-1/4}).
\intertext{Since we also have (see proof of Corollary \ref{cor:zeroes})}
J(h) &= R h^{4+1/2} \cos(\log\sqrt h -\alpha_0) + o(h^{4-1/4}),\end{aligned}$$ any simple zero of $J(h)$ which is sufficiently close to $0$ generates, by the Implicit Function Theorem, a simple zero of $\Delta H$. Thus the sequence $h_n\to 0^+$ of simple zeroes of $J(h)$ (see Corollary \[cor:zeroes\]) guarantees the existence of an *infinite sequence* $\widetilde{h}_n\to 0^+$, $n\geq N_0$, of simple zeroes of the increment $\Delta H$. Any simple zero of $\Delta H$ corresponds to a limit cycle of (\[system\]).
The proof of Main Theorem is now complete.\
Arnold V. I. and Il’yashenko Yu. S., *Ordinary differential equations*, in: “Ordinary Differential Equations and Smooth Dynamical Systems”, Springer–Verlag, New York, 1997, pp. 1–148; (Russian: Fundamental Directions, v. **1**, VINITI, Moscow, 1985, pp. 1–146).
Belitskii H., “Normal forms, invariant and local mappings”, Naukova Dumka, Kiev, 1979 (in Russian).
Bobieński M., *Contour integrals as ${\mathrm{Ad}}$-invariant functions on the fundamental group* (submitted).
Bobieński M. and Żołdek H., *Limit cycles for multidimensional vector field. The elliptic case*, J. Dynam. Control Systems **9** (2003), No 2, 265–310.
Bobieński M. and Żołdek H., *Limit cycles of three dimensional polynomial vector fields*, Nonlinearity **18** (2005), No 1, 175–209.
Ecalle J., “Introduction aux fonctions analysables et preuve constructive de la conjecture de Dulac”, Actualités Mathématiques, Herman, Paris, 1992.
Gradshteyn I. S., Ryzhik I. M., “Table of Integrals, Series and Products. Fifth edition”, Academic Press, Inc., 1994.
Guckenheimer J. and Holmes P., “Nonlinear oscillations, dynamical systems, and bifurcations of vector fields”, Applied Mathematical Sciences **42**, Springer-Verlag, 1983.
Hartman P., “Ordinary differential equations”, Classics in Applied Mathematics **38**, Philadelphia, PA, 2002.
Hirsch M., Pugh C., Shub M., “Invariant manifolds”, Lect. Notes in Math. **583**, Springer-Verlag, New York, 1977.
Il’yashenko Yu. S., *Centennial history of Hilbert’s 16th problem*, Bull. Amer. Math. Soc. **39** (2002), No 3, 301–354.
Melnikov V. K., *On the stability of a center for time-periodic perturbations*, Trans. Moscow Math. Soc. **12** (1963), 1–57.
Mischaikow K., Mrozek M., *Chaos in the Lorenz equations: a computer-assisted proof*, Bull. Amer. Math. Soc. (N.S.) **32** (1995), No 1, 66–72.
Nitecki Z., “Differentiable dynamics. An introduction to the orbit structure of diffeomorphisms”, MIT Press, Cambridge, 1971.
Pontryagin L. S., *On dynamical systems close to Hamiltonian systems*, in: “Selected Works”, v. 1, Gordon & Breach, New York, 1986; \[Russian: Zh. Ekper. Teoret. Fiziki 4 (1934), 234–238\].
Leszczyński P., Żołdek H., *Limit cycles appearing after perturbation of certain multi-dimensional vector fields*, J. Dynam. Diff. Equat. **13** (2001), No 4, 689–709.
Yakovenko S., *On functions and curves defined by ordinary differential equations*, The Arnoldfest (Toronto, ON, 1997), Fields Inst. Commun. **24**, Amer. Math. Soc., Providence, 1999, pp. 497–525.
Żołdek H., “The Monodromy Group”, Monografie Matematyczne, Birkhäuser, Basel, 2006.
[^1]: This research was supported by the KBN Grant No 2 P03A 015 29
|
---
abstract: 'We investigate optimal resource allocation for delay-limited cooperative communication in time varying wireless networks. Motivated by real-time applications that have stringent delay constraints, we develop a dynamic cooperation strategy that makes optimal use of network resources to achieve a target outage probability (reliability) for each user subject to average power constraints. Using the technique of Lyapunov optimization, we first present a general framework to solve this problem and then derive quasi-closed form solutions for several cooperative protocols proposed in the literature. Unlike earlier works, our scheme does not require prior knowledge of the statistical description of the packet arrival, channel state and node mobility processes and can be implemented in an online fashion.'
author:
- 'Rahul Urgaonkar, Michael J. Neely'
title: 'Delay-Limited Cooperative Communication with Reliability Constraints in Wireless Networks'
---
Cooperative Communication, Delay-Limited Communication, Mobile Ad-Hoc Networks, Reliability, Resource Allocation, Lyapunov Optimization
Introduction {#section:intro}
============
There is growing interest in the idea of utilizing cooperative communication [@NOW_survey; @laneman_survey; @Laneman1; @Laneman2; @Sendonaris1; @Sendonaris2] to improve the performance of wireless networks with time varying channels. The motivation comes from the work on MIMO systems [@tsebook] which shows that employing multiple antennas on a wireless node can offer substantial benefits. However, this may be infeasible in small-sized devices due to space limitations. Cooperative communication has been proposed as a means to achieve the benefits of traditional MIMO systems using *distributed single antenna* nodes. Much recent work in this area promises significant gains in several metrics of interest (such as diversity [@Laneman1][@Laneman2], capacity [@Sendonaris1; @Sendonaris2; @gastpar1; @kramer1; @host-madsen], energy efficiency [@alouini; @wanjen_survey], etc.) over conventional methods. We refer the interested reader to a recent comprehensive survey [@NOW_survey] and its references.
The main idea behind cooperative communication can be understood by considering a simple $2$-hop network consisting of a source $s$, its destination $d$ and a set of $m$ relay nodes as shown in Fig. \[fig:one\]. Suppose $s$ has a packet to send to $d$ in timeslot $t$. The channel gains for all links in this network are shown in the figure. In direct communication, $s$ uses the full slot to transmit its packet to $d$ over link $s-d$ as shown in Fig. \[fig:one\](a). In conventional multi-hop relaying, $s$ uses the first half of the slot to transmit its packet to a particular relay node $i$ over link $s-i$ as shown in Fig. \[fig:one\](b). If $i$ can successfully decode the packet, it re-encodes and transmits it to $d$ in the second half of the slot over link $i-d$. In both scenarios, to ensure reliable communication, the source and/or the relay must transmit at high power levels when the channel quality of any of the links involved is poor. However, note that due to the broadcast nature of wireless transmissions, other relay nodes may receive the signal from the transmission by $s$ and can cooperatively relay it to $d$. The destination now receives multiple copies/signals and can use all of them jointly to decode the packet. Since these signals have been transmitted over independent paths, the probability that all of them have poor quality is significantly smaller. Cooperative communication protocols take advantage of this *spatial diversity gain* by making use of multiple relays for cooperative transmissions to increase reliability and/or reduce energy costs. This is different from traditional multi-hop relaying in which only one node is responsible for forwarding at any time and in which the destination does not use multiple signals to decode a packet.
![Example $2$-hop network with source, destination and relays. The time slot structures for different transmission strategies are also shown. Due to the half-duplex constraint, cooperative protocols need to operate in two phases. Hence, there is an inherent loss in the multiplexing gain under any such cooperative transmission strategy over direct transmission.[]{data-label="fig:one"}](one_2){width="9cm"}
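The diversity effect described above can be illustrated with a toy Monte Carlo experiment (purely illustrative; the threshold, relay count and unit-mean Rayleigh fading below are made-up parameters, and the rate loss and combining details of an actual cooperative protocol are ignored):

```python
import numpy as np

rng = np.random.default_rng(1)
trials, m = 200_000, 3            # m relay paths (illustrative)
threshold = 1.0                   # a link "fails" if its fading power gain drops below this

g_sd = rng.exponential(1.0, size=trials)            # Rayleigh fading => exponential power gain
g_relay = rng.exponential(1.0, size=(trials, m))    # independent relay paths

p_direct = np.mean(g_sd < threshold)                                        # direct link fails
p_coop = np.mean((g_sd < threshold) & np.all(g_relay < threshold, axis=1))  # all paths fail
print(f"direct outage ~ {p_direct:.3f}, cooperative outage ~ {p_coop:.4f}")
```

With unit-mean fading and this threshold, the single-link outage probability is $1-e^{-1}\approx 0.63$, while the probability that all $m+1=4$ independent paths are simultaneously poor is close to $(1-e^{-1})^4\approx 0.16$, which is the spatial diversity gain that cooperative protocols exploit.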
Because of the half-duplex nature of wireless devices, a relay node cannot send and receive on the same channel simultaneously. Therefore, such cooperative communication protocols typically operate over a two phase slot structure as shown in Figs. \[fig:one\](c) and \[fig:one\](d). In the first phase, $s$ transmits its packet to the set of relay nodes. In the second phase, a subset of these relays transmit their signals to $d$. Note that the destination may receive the source signal from the first phase as well. At the end of the second phase, the destination appropriately combines all of these received signals to decode the packet. The exact slot structure as well as the signals transmitted by the relays depend on the cooperative protocol being used.[^1] For example, Fig. \[fig:one\](c) shows the slot structure under a cooperative scheme that transmits over orthogonal channels. Specifically, the time slot is divided into $m+1$ equal mini-slots. In phase one, the source transmits its packet in the first mini-slot. In the second phase, the relays transmit one after the other in their own mini-slots. Fig. \[fig:one\](d) shows the slot structure under a cooperative scheme in which the cooperating relays use distributed space-time codes (DSTC) or a beamforming technique to transmit simultaneously in the second phase. It should be noted that due to this half-duplex constraint, there is an inherent loss in the multiplexing gain under any such cooperative transmission strategy over direct transmission. Therefore, it is important to develop algorithms that cooperate opportunistically.
In this work, we consider a mobile ad-hoc network with *delay-limited* traffic and cooperative communication. Many real-time applications (e.g., voice) have stringent delay constraints and fixed rate requirements. In slow fading environments (where decoding delay is of the order of the channel coherence time), it may not be possible to meet these delay constraints for every packet. However, these applications can often tolerate a certain fraction of lost packets or outages. A variety of techniques are used to combat fading and meet this target outage probability (including exploiting diversity, channel coding, ARQ, power control, etc.). Cooperative communication is a particularly attractive technique to improve reliability in such delay-limited scenarios since it can offer significant spatial diversity gains in addition to these techniques.
Much prior work on cooperative communication considers physical layer resource allocation for a static network, particularly in the case of a single source. Objectives such as minimizing sum power, minimizing outage probability, meeting a target SNR constraint, etc., are treated in this context [@host-madsen; @alouini; @wanjen_survey; @Maric1; @Maric2; @Adve1; @gunduz; @minchen]. We draw on this work in the development of *dynamic* resource allocation in a stochastic network with fading channels, node mobility, and random packet arrivals, where *opportunistic cooperation decisions* are required. Dynamic cooperation was also considered in the prior work [@yeh] which investigates throughput optimality and queue stability in a multi-user network with static channels and randomly arriving traffic using the framework of Lyapunov drift. Our formulation is different and does not involve issues of queue stability. Rather, we consider a delay-limited scenario where each packet must either be transmitted in one slot, or dropped. This is similar to the concept of *delay-limited capacity* [@tse2]. Also related to such scenarios is the notion of *minimum outage probability* [@caire1]. These quantities are also investigated in the recent work [@gunduz] that considers a $3$ node static network with Rayleigh fading and shows that opportunistic cooperation significantly improves the delay-limited capacity.
In this work, we use techniques of both Lyapunov drift and Lyapunov optimization [@neely-NOW] to develop a control algorithm that takes dynamic decisions for each new slot. Different from most work that applies this theory, our solution involves a $2$-stage stochastic shortest path problem due to the cooperative relaying structure. This problem is non-convex and combinatorial in nature and does not admit closed form solutions in general. However, under several important and well known classes of physical layer cooperation models, we develop techniques for reducing the problem exactly to an $m$-stage set of convex programs. The convex programs themselves are shown to have quasi-closed form solutions and can be computed in real time for each slot, often involving simple water-filling strategies that also arise in related static optimization problems.
Basic Network Model {#section:basic}
===================
We consider a mobile ad-hoc network with delay-limited communication over time varying fading channels. The network contains a set $\mathcal{N}$ of nodes, all potentially mobile. All nodes are assumed to be within range of each other, and any node pair can communicate either through direct transmission or through a $2$-phase cooperative transmission that makes use of other nodes as relays. The system operates in slotted time and the channel coefficient between nodes $i$ and $j$ in slot $t$ is denoted by $h_{ij}(t)$. We assume a block fading model [@tsebook] for the channel coefficients so that their value remains fixed during a slot and changes from one slot to the other according to the distribution of the underlying fading and mobility processes.
For simplicity, we assume that the set $\mathcal{N}$ contains a single source node $s$ and its destination node $d$ and that all other nodes act simply as cooperative relays. This is similar to the single-source assumption treated in [@Maric1; @Maric2; @gunduz; @minchen; @Adve1] for static networks. We derive a dynamic cooperation strategy for this single source problem in Sec. \[section:CNC\] that optimizes a weighted sum of reliability and power expenditure subject to individual reliability and average power constraints at the source and at all relays. This highlights the decisions involved from the perspective of a source node, and these decisions and the resulting solution structure are similar to the multi-source scenario operating under an orthogonal medium access scheme (such as TDMA or FDMA) studied later in Sec. \[section:extensions\]. In the following, we denote the set of relay nodes by $\mathcal{R}$ and the set $\{s\}
\cup \mathcal{R}$ by $\mathcal{\widehat{R}}$. All nodes $i \in \mathcal{\widehat{R}}$ have both long term average and instantaneous peak power constraints given by $P_i^{avg}$ and $P_i^{max}$ respectively.
We consider two models for the availability of the channel state information (CSI). The first is the *known channels, unknown statistics* model. Under this model, we assume that the channel gains between the source node and its relay set and destination as well as the channel gains between the relays and the destination are known every slot. These could be obtained by sending pilot signals and via feedback. This model has also been considered in prior works [@Maric1; @Maric2; @gunduz; @minchen] on power allocation in static networks where, in addition to the current channel gains, a knowledge of the distribution governing the fading process is assumed. In our work, under this *known channels, unknown statistics* model, we do not assume any knowledge of the distributions governing the evolution of the channel states, mobility processes, or traffic. Thus, our algorithm and its optimality properties hold for a very general class of channel and mobility models that satisfy certain ergodicity requirements (to be made precise later). We note that the channel gain could represent just the amplitude of the channel coefficient if an orthogonal cooperative scheme is being used. However, in case of cooperative schemes such as beamforming, this could represent the complete description of the fading coefficient that includes the phase information.
The second model we consider is the *unknown channels, known statistics* model. In this case, we assume that the current set of potential relay nodes is known on each slot $t$, but the exact channel realizations between the source and these relays, and the relays and the destination, are unknown. Rather, we assume only that the *statistics* of the fading coefficients are known between the source and current relays, and the current relays and destination. However, we still do not require knowledge of the distributions governing the arriving traffic or the mobility pattern (which affects the set of relays we will see in future slots). This is in contrast to prior works that have considered resource allocation in the presence of partial CSI only for static networks.
For both models, we use $\mathcal{T}(t)$ to represent the collection of all channel state information known on slot $t$. For the known channels, unknown statistics model, $\mathcal{T}(t)$ represents the collection of channel coefficients $h_{ij}(t)$ between the source and relays and relays and destination. For the unknown channels, known statistics model, $\mathcal{T}(t)$ represents the set of all nodes that are available on slot $t$ for relaying and the distribution of the fading coefficients. We assume that $\mathcal{T}(t)$ lies in a space of finite but arbitrarily large size and evolves according to an ergodic process with a well defined steady state distribution. This variation in channel state information affects the reliability and power expenditure associated with the direct and cooperative transmission modes that are discussed in Sec. \[section:options\].
Example of Channel State Information Models {#section:example}
-------------------------------------------
As an example of these models, suppose the nodes move in a cell-partitioned network according to a Markovian random walk (see also Fig. \[fig:cell\] in Sec. \[section:sim\] on Simulations). Each slot, a node may decide to stay in its current cell or move to an adjacent cell according to the probability distribution governing the random walk. Suppose that each slot, the set of potential relays consists only of nodes in either the same or an adjacent cell of the source. Suppose channel gains between nodes in the same cell are distributed according to a Rayleigh fading model with a particular mean and variance, while gains for nodes in adjacent cells are Rayleigh with a different mean and variance. Under the [known channels, unknown statistics]{} model, the $\mathcal{T}(t)$ information is the set of current gains $h_{ij}(t)$, and the Rayleigh distribution is not needed. Under the [unknown channels, known statistics]{} model, the $\mathcal{T}(t)$ information is the set of nodes currently in the same and adjacent cells of the source, and we assume we know that the fading distribution is Rayleigh, and we know the corresponding means and variances. However, neither model requires knowledge of the mobility model or the traffic rates.
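A minimal sketch of how the two CSI models differ in this example is given below (all names and parameter values are illustrative assumptions, not taken from the paper): under the first model the controller sees the realized gains $h_{ij}(t)$, while under the second it sees only the eligible relay set and the fading statistics.

```python
import numpy as np

rng = np.random.default_rng(2)

def observe(relay_cells, src_cell, model, var_same=1.0, var_adj=0.5):
    """Return T(t) for the cell-partitioned example; variances are made-up values."""
    # relays in the source's cell or an adjacent cell are eligible this slot (1-D cells for simplicity)
    eligible = {r: (var_same if c == src_cell else var_adj)
                for r, c in relay_cells.items() if abs(c - src_cell) <= 1}
    if model == "unknown_channels_known_stats":
        return eligible                                   # relay set + Rayleigh variances only
    # known channels, unknown statistics: draw complex Gaussian coefficients (Rayleigh envelope)
    return {r: np.sqrt(v / 2) * (rng.normal() + 1j * rng.normal()) for r, v in eligible.items()}

relay_cells = {"relay1": 0, "relay2": 1, "relay3": 4}     # hypothetical cell indices
print(observe(relay_cells, src_cell=0, model="known_channels_unknown_stats"))
print(observe(relay_cells, src_cell=0, model="unknown_channels_known_stats"))
```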
Control Options {#section:options}
---------------
Suppose the slot size is normalized to integer slots $t \in \{0, 1, 2, \ldots, \}$. In each slot, the source $s$ receives new packets for its destination $d$ according to an i.i.d. Bernoulli process $A_s(t)$ of rate $\lambda_s$. Each packet is assumed to be $R$ bits long and has a *strict* delay constraint of $1$ slot. Thus, a packet not served within $1$ slot of its arrival is dropped. Further, packets that are not successfully received by their destinations due to channel errors are not retransmitted. The source node has a minimum time-average reliability requirement specified by a fraction $\rho_s$ which denotes the fraction of packets that were transmitted successfully. In any slot $t$, if source $s$ has a new packet for transmission, it can use one of the following transmission modes (Fig. \[fig:one\]):
1. Transmit directly to $d$ using the full slot
2. Transmit to $d$ using traditional relaying over two hops
3. Transmit cooperatively with the set $\mathcal{R}$ of relay nodes using the two phase slot structure
4. Stay idle (so that the packet gets dropped)
We consider all of these transmission modes because, depending on the current channel conditions and energy costs in slot $t$, it might be better to choose one over the other. For example, due to the half-duplex constraint, direct transmission using the full slot might be preferable to cooperative transmission over two phases on slots when the source-destination link quality is good. Note that this is similar to the much studied framework of opportunistic transmission scheduling in time varying channels. Further, even in the special case of static channels, the optimal strategy may involve a mixture of these modes of operation to meet the target reliability and average power constraints.
Let $\mathcal{I}^\eta(t)$ denote the collective control action in slot $t$ under some policy $\eta$ that includes the choice of the transmission mode at the source, power allocations for the source and all relevant relays, and any additional physical layer choices such as modulation and coding. Specifically, we have: $$\begin{aligned}
\mathcal{I}^\eta(t) = [\textrm{mode choice}, \textbf{\emph{P}}^\eta(t), \textrm{other PHY layer choices}]\end{aligned}$$ where the mode choice refers to one of the $4$ transmission modes for the source, and where $\textbf{\emph{P}}^\eta(t)$ is the collection of coefficients $P_i^\eta(t)$ representing power allocations for each node $i \in \mathcal{\widehat{R}}$. Note that $P_i^\eta(t) = 0$ for all $i$ under transmission mode $4$ (idle). If the source $s$ chooses mode $1$, we have $P_i(t) = 0$ for all relay nodes $i \in \mathcal{R}$, whereas if $s$ chooses mode $2$, we have $P_i(t) > 0$ for at most one relay $i \in \mathcal{R}$. Note that under any feasible policy $\eta$, $P_i^\eta(t)$ must satisfy the instantaneous peak power constraint every slot for all $i$. Also note that under the cooperative transmission option, the power allocation for the source node and the relays corresponds to the first and second phase respectively. Thus, the source is active in the first phase while the relays are active in the second phase. We denote the set of all valid power allocations by $\mathcal{P}$ and define $\mathcal{C}$ as the set of all valid control actions: $$\begin{aligned}
\mathcal{C} = \{1, 2, 3, 4\} \times \{ \mathcal{P} \} \times \{ \textrm{other PHY layer choices} \}\end{aligned}$$
The success/failure outcome of the control action is represented by an indicator random variable $\Phi_s(\mathcal{I}^\eta(t), \mathcal{T}(t))$ that depends on the current control action and channel state. Successful transmission of a packet is usually a complicated function of the transmission mode chosen, the associated power allocations and channel states, as well as physical layer details like modulation, coding/decoding scheme, etc. In this work, the particular physical layer actions are included in the $\mathcal{I}^\eta(t)$ decision variable. Specifically, given a control action $\mathcal{I}^\eta(t)$ and a channel state $\mathcal{T}(t)$, the outcome is defined as follows: $$\begin{aligned}
\Phi_s(\mathcal{I}^\eta(t),
\mathcal{T}(t)) {\mbox{\raisebox{-.3ex}{$\overset{\vartriangle}{=}$}}}\left\{ \begin{array}{lll} 1 & \textrm{if a packet transmitted by $s$ in slot} \\
& \textrm{$t$ is successfully received by $d$} \\
0 & \textrm{else}
\end{array} \right.
\label{eq:phi}\end{aligned}$$ Note that $\Phi_s(\mathcal{I}^\eta(t), \mathcal{T}(t))$ is a random variable, and its conditional expectation given $(\mathcal{I}^\eta(t), \mathcal{T}(t))$ is equal to the success probability under the given physical layer channel model. Use of this abstract indicator variable allows a unified treatment that can include a variety of physical layer models. Under the known channels, unknown statistics model (where $\mathcal{T}(t)$ includes the full channel realizations between the source and the relays, and between the relays and the destination on slot $t$), $\Phi_s(\mathcal{I}^\eta(t), \mathcal{T}(t))$ can be a deterministic $0/1$ function based on the known channel state and control action. Specific examples for this model are considered in Sec. \[section:2stage\]. Under the unknown channels, known statistics model (where $\mathcal{T}(t)$ represents only the set of current possible relays and the fading statistics), we assume we know the value of $Pr[\Phi_s(\mathcal{I}^\eta(t), \mathcal{T}(t))=1]$ under each possible control action $\mathcal{I}^\eta(t)$. This model is considered in Sec. \[section:stats\]. Under both models, we assume that explicit ACK/NACK information is received at the end of each slot, so that the source knows the value of $\Phi_s(\mathcal{I}^\eta(t), \mathcal{T}(t))$. For notational convenience, in the rest of the paper, we use $\Phi_s^\eta(t)$ instead of $\Phi_s(\mathcal{I}^\eta(t), \mathcal{T}(t))$, noting that the dependence on $(\mathcal{I}^\eta(t), \mathcal{T}(t))$ is implicit.
Discussion of Basic Model {#section:discuss}
-------------------------
The basic model described above extends prior work on $2$-phase cooperation in static networks to a mobile environment, and treats the important example scenario where a team of nodes move in a tight cluster but with possible variation in the relative locations of nodes within the cluster. We note that our model and results are applicable to the special case of a static network as well. Another example scenario captured by our model is an OFDMA-based cellular network with multiple users that have both inter-cell and intra-cell mobility. In each slot, a set of transmitters is determined in each orthogonal channel (for example, based on a predetermined TDMA schedule, or dynamically chosen by the base station). The remaining nodes can potentially act as cooperative relays in that slot. The basic model treats scenarios in which a source node can transmit to its destination, possibly with the help of multiple relay nodes, in $2$ stages. While this is a simplifying assumption, the framework developed here can be applied to more general scenarios in which, in a single slot, cooperative relaying over $K$ stages is performed (for some $K > 2$) using multi-hop cooperative techniques (e.g., [@scaglione; @shashi]).
Control Objective {#section:objective}
=================
Let $\alpha_s$ and $\beta_i$ for $i \in \mathcal{\widehat{R}}$ be a collection of non-negative weights. Then our objective is to design a policy $\eta$ that solves the following *stochastic optimization problem*:
$$\begin{aligned}
\textrm{Maximize:} \qquad & \alpha_s \bar{r}^\eta_s - \sum_{i\in \mathcal{\widehat{R}}} \beta_i \bar{e}^\eta_i\nonumber \\
\textrm{Subject to:} \qquad & \bar{r}^\eta_s \geq \rho_s \lambda_s \nonumber \\
& \bar{e}^\eta_i \leq P_i^{avg} \; \forall \; i \in \mathcal{\widehat{R}} \nonumber \\
& 0 \leq P_i^\eta(t) \leq P_i^{max} \; \forall \; i \in \mathcal{\widehat{R}}, \; \forall t \nonumber \\
& \mathcal{I}^\eta(t) \in \mathcal{C} \; \forall t
\label{eq:obj1}\end{aligned}$$
where $\bar{r}_s^\eta$ is the time average reliability for source $s$ under policy $\eta$ and is defined as: $$\begin{aligned}
&\bar{r}_s^\eta {\mbox{\raisebox{-.3ex}{$\overset{\vartriangle}{=}$}}}\lim_{t\rightarrow\infty}
\frac{1}{t}\sum_{\tau=0}^{t-1}{\mathbb{E}\left\{\Phi_s^\eta(\tau)\right\}}
\label{eq:ri}\end{aligned}$$
and $\bar{e}_i^\eta$ is the time average power usage of node $i$ under $\eta$:
$$\begin{aligned}
&\bar{e}_i^\eta {\mbox{\raisebox{-.3ex}{$\overset{\vartriangle}{=}$}}}\lim_{t\rightarrow\infty}
\frac{1}{t}\sum_{\tau=0}^{t-1}{\mathbb{E}\left\{P_i^\eta(\tau)\right\}} \label{eq:ei}\end{aligned}$$
Here, the expectation is with respect to the possibly randomized control actions that policy $\eta$ might take. The $\alpha_s$ and $\beta_i$ weights allow us to consider several different objectives. For example, setting $\alpha_s = 0$ and $\beta_i = 1$ for all $i$ reduces (\[eq:obj1\]) to the problem of minimizing the average sum power expenditure subject to minimum reliability and average power constraints. This objective can be important in the multiple source scenario when the resources of the relays must be shared across many users. Setting all of these weights to $0$ reduces (\[eq:obj1\]) to a feasibility problem where the objective is to provide minimum reliability guarantees subject to average power constraints. Problem (\[eq:obj1\]) is similar to the general stochastic utility maximization problem presented in [@neely-NOW]. Suppose (\[eq:obj1\]) is feasible and let $r^*_s$ and $e^*_i \; \forall i
\in \mathcal{\widehat{R}}$ denote the optimal value of the objective function, potentially achieved by some arbitrary policy. Using the techniques developed in [@neely-NOW; @neely-energy], it can be shown that it is sufficient to consider only the class of stationary, randomized policies that take control decisions purely as a (possibly random) function of the channel state $\mathcal{T}(t)$ every slot to solve (\[eq:obj1\]). However, computing the optimal stationary, randomized policy explicitly can be challenging and often impractical as it requires knowledge of arrival distributions, channel probabilities and mobility patterns in advance. Further, as pointed out earlier, even in the special case of a static channel, the optimal strategy may involve a mixture of direct transmission, multi-hop, and cooperative modes of operation, and the relaying modes must select different relay sets over time to achieve the optimal time average mixture.
However, the technique of Lyapunov optimization [@neely-NOW] can be used to construct an alternate dynamic policy that overcomes these challenges and is provably optimal. Unlike the stationary, randomized policy, this policy does not need to be computed beforehand and can be implemented in an online fashion. In the known channels model, it does not need a-priori statistics of the traffic, channels, or mobility. In the unknown channels model, it does not need a-priori statistics of the traffic or mobility. We present this policy in the next section.
Optimal Control Algorithm {#section:CNC}
=========================
In this section, we present a dynamic control algorithm that achieves the optimal solution $r^*_s$ and $e^*_i \; \forall
i \in \mathcal{\widehat{R}}$ to the stochastic optimization problem presented earlier. This algorithm is similar in spirit to the backpressure algorithms proposed in [@neely-NOW; @neely-energy] for problems of throughput and energy optimal networking in time varying wireless ad-hoc networks.
The algorithm makes use of a “reliability queue” $Z_s(t)$ for source $s$. Specifically, let $Z_s(t)$ be a value that is initialized to zero (so that $Z_s(0) = 0$), and that is updated at the end of every slot $t$ according to the following equation: $$\begin{aligned}
Z_s(t+1) = \max[Z_s(t)- \Phi_s(t),0] + \rho_s A_s(t)\label{eq:p1u1}\end{aligned}$$ where $A_s(t)$ is the number of arrivals to source $s$ on slot $t$ (being either $0$ or $1$), and $\Phi_s(t)$ is $1$ if and only if a packet that arrived was successfully delivered (recall that ACK/NACK information gives the value of $\Phi_s(t)$ at the end of every slot $t$). Additionally, it also uses the following virtual power queues $\forall i \in \mathcal{\widehat{R}}$: $$\begin{aligned}
X_i(t+1)=\max[X_i(t) - P_i^{avg},0] + P_i(t) \label{eq:p1x1}\end{aligned}$$ All these queues are also initialized to $0$ and updated at the end of every slot $t$ according to the equation above. We note that these queues are virtual in that they do not represent any real backlog of data packets. Rather, they facilitate the control algorithm in achieving the time average reliability and energy constraints of (\[eq:obj1\]) as follows. If a policy $\eta$ stabilizes (\[eq:p1u1\]), then we must have that its service rate is no smaller than the input rate, i.e., $$\begin{aligned}
\bar{r}^\eta_s = \lim_{t\rightarrow\infty}
\frac{1}{t}\sum_{\tau=0}^{t-1}{\mathbb{E}\left\{\Phi^\eta_s(\tau)\right\}} \geq
\lim_{t\rightarrow\infty}\frac{1}{t}\sum_{\tau=0}^{t-1}{\mathbb{E}\left\{\rho_s
A_s(\tau)\right\}} = \rho_s \lambda_s\end{aligned}$$
Similarly, stabilizing (\[eq:p1x1\]) yields the following: $$\begin{aligned}
\bar{e}_i^\eta = \lim_{t\rightarrow\infty}
\frac{1}{t}\sum_{\tau=0}^{t-1}{\mathbb{E}\left\{P_i^\eta(\tau)\right\}} \leq P_i^{avg}\end{aligned}$$ where we have used definitions (\[eq:ri\]), (\[eq:ei\]). This technique of turning time-average constraints into queueing stability problems was first used in [@neely-energy].
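For concreteness, a minimal Python sketch of the virtual queue updates (\[eq:p1u1\]) and (\[eq:p1x1\]) is given below; the function and variable names are illustrative, not the paper's notation.

```python
def update_reliability_queue(Z, phi, A, rho):
    """Z(t+1) = max[Z(t) - Phi_s(t), 0] + rho_s * A_s(t)   (eq:p1u1)."""
    return max(Z - phi, 0.0) + rho * A

def update_power_queue(X, P, P_avg):
    """X_i(t+1) = max[X_i(t) - P_i^avg, 0] + P_i(t)        (eq:p1x1)."""
    return max(X - P_avg, 0.0) + P

# Example slot: an arriving packet is dropped (phi = 0) and node i spends
# 1.7 units of power against an average budget of 1.0.
Z = update_reliability_queue(Z=0.0, phi=0, A=1, rho=0.98)
X = update_power_queue(X=0.0, P=1.7, P_avg=1.0)
print(Z, X)   # 0.98 1.7
```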
To stabilize these virtual queues and optimize the objective function in (\[eq:obj1\]), the algorithm operates as follows. Let $\textbf{\emph{Q}}(t) = (Z_s(t), X_i(t)) \; \forall i \in \mathcal{\widehat{R}}$ denote the collection of these queues in timeslot $t$. Every slot $t$, given $\textbf{\emph{Q}}(t)$ and the current channel state $\mathcal{T}(t)$, it chooses a control action $\mathcal{I}^*(t)$ that minimizes the following stochastic metric (for a given control parameter $V \geq 0$): $$\begin{aligned}
\textrm{Minimize:} \qquad & (X_s(t) +V\beta_s){\mathbb{E}\left\{P_s(t)|\textbf{\emph{Q}}(t), \mathcal{T}(t)\right\}} + \nonumber \\
& \sum_{i \in \mathcal{R}}(X_i(t)+V\beta_i){\mathbb{E}\left\{P_i(t)|\textbf{\emph{Q}}(t), \mathcal{T}(t)\right\}} - \nonumber \\
& (Z_s(t)+V \alpha_s){\mathbb{E}\left\{\Phi_s(t)|\textbf{\emph{Q}}(t), \mathcal{T}(t)\right\}} \nonumber\\
\textrm{Subject to:} \qquad & 0 \leq P_i(t) \leq P_i^{max} \; \forall i \in \mathcal{\widehat{R}} \nonumber \\
& \mathcal{I}(t) \in \mathcal{C}
\label{eq:p1ssp3}\end{aligned}$$
After implementing $\mathcal{I}^*(t)$ and observing the outcome, the virtual queues are updated using (\[eq:p1u1\]), (\[eq:p1x1\]). Recall that there are no actual queues in the system. Our algorithm enforces a strict $1$-slot delay constraint so that $\Phi_s(t) = 0$ if the packet is not successfully delivered after $1$ slot. The virtual queues $X_i(t), Z_s(t)$ are maintained only in software and act as known weights in the optimization (\[eq:p1ssp3\]) that guide decisions towards achieving our time average power and reliability goals. The control action $\mathcal{I}^*(t)$ that optimizes (\[eq:p1ssp3\]) affects the powers $P_i(t)$ allocated and the $\Phi_s(t)$ value according to (\[eq:phi\]).
The above optimization is a $2$-stage *stochastic shortest path* problem [@bertsekas] where the two stages correspond to the two phases of the underlying cooperative protocol. Specifically, when $s$ decides to use the option of transmitting cooperatively, the cost incurred in the first stage is given by the first term $(X_s(t) +V\beta_s){\mathbb{E}\left\{P_s(t)|\textbf{\emph{Q}}(t), \mathcal{T}(t)\right\}}$. The cost incurred during the second stage is given by $\sum_{i \in \mathcal{R}}(X_i(t)+ V\beta_i){\mathbb{E}\left\{P_i(t)|\textbf{\emph{Q}}(t), \mathcal{T}(t)\right\}}$ and at the end of this stage, we get a reward of $(Z_s(t)+V \alpha_s){\mathbb{E}\left\{\Phi_s(t)|\textbf{\emph{Q}}(t), \mathcal{T}(t)\right\}}$. The transmission outcome $\Phi_s(t)$ depends on the power allocation decisions in *both* phases which makes this problem different from greedy strategies (e.g., [@yeh], [@neely-energy]). In order to determine the optimal strategy in slot $t$, the source $s$ computes the minimum cost of (\[eq:p1ssp3\]) for all transmission modes described earlier and chooses one with the least cost.
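The per-slot structure of the algorithm can be sketched as follows. The mode solver callables stand in for the subproblem solutions derived in the following sections; all names are illustrative and not part of the paper's notation.

```python
# Sketch of the per-slot control loop around (eq:p1ssp3): evaluate the
# drift-plus-penalty cost of every transmission mode, implement the
# cheapest one, observe the ACK/NACK, then update the virtual queues.

def run_slot(queues, channel_state, mode_solvers, transmit):
    """mode_solvers maps mode id (1, 2, 3) to a callable returning
    (cost, power_allocation); transmit(mode, powers) carries out the
    chosen action and returns the indicator Phi_s(t) from the ACK/NACK."""
    candidates = {4: (0.0, {})}              # idle: zero cost, no power used
    for mode, solver in mode_solvers.items():
        candidates[mode] = solver(queues, channel_state)
    mode = min(candidates, key=lambda m: candidates[m][0])
    cost, powers = candidates[mode]
    phi = transmit(mode, powers)             # observed at the end of the slot
    return mode, powers, phi                 # then apply (eq:p1u1), (eq:p1x1)
```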
Note that this problem is unconstrained since the long term time average reliability and power constraints do not appear explicitly as in the original problem. These are implicitly captured by the virtual queue values. Further, its solution uses the value of the *current* channel state $\mathcal{T}(t)$ and does not require knowledge of the statistics that govern the evolution of the channel state process. Thus, the control strategy involves implementing the solution to the sequence of such unconstrained problems every slot and updating the queue values according to (\[eq:p1u1\]), (\[eq:p1x1\]). Assuming i.i.d. $\mathcal{T}(t)$ states, the following theorem characterizes the performance of this dynamic control algorithm. A similar statement can be made for more general Markov modulated $\mathcal{T}(t)$ using the techniques of [@neely-NOW]. For simplicity, here we consider the i.i.d. case.
*Theorem 1*: (Algorithm Performance) Suppose all queues are initialized to $0$. Then, implementing the dynamic algorithm (\[eq:p1ssp3\]) every slot stabilizes all queues, thereby satisfying the minimum reliability and time-average power constraints, and guarantees the following performance bounds (for some $\epsilon > 0$ that depends on the slackness of the feasibility constraints): $$\begin{aligned}
&\lim_{t \rightarrow \infty} \frac{1}{t}
\sum_{\tau=0}^{t-1}{\mathbb{E}\left\{Z_s(\tau)\right\}} \leq \frac{B + V(\alpha_s +
\sum_{i\in \mathcal{\widehat{R}}} \beta_iP_i^{max})}{\epsilon} \\
&\lim_{t \rightarrow \infty} \frac{1}{t}
\sum_{\tau=0}^{t-1}\sum_{i\in \mathcal{\widehat{R}}}{\mathbb{E}\left\{X_i(\tau)\right\}}
\leq \frac{B + V(\alpha_s + \sum_{i\in \mathcal{\widehat{R}}}
\beta_iP_i^{max})}{\epsilon}\end{aligned}$$ Further, the time average utility achieved for any $V \geq 0$ satisfies: $$\begin{aligned}
\lim_{t \rightarrow \infty} \frac{1}{t} \sum_{\tau=0}^{t-1} {\mathbb{E}\left\{\alpha_s\Phi_s(\tau) - \sum_{i\in \mathcal{\widehat{R}}} \beta_i P_i(\tau)\right\}}
\geq \zeta^* - \frac{B}{V}\end{aligned}
$$ where $$\begin{aligned}
& \zeta^* {\mbox{\raisebox{-.3ex}{$\overset{\vartriangle}{=}$}}}\alpha_s r^*_s- \sum_{i\in \mathcal{\widehat{R}}} \beta_ie^*_i \\
& B {\mbox{\raisebox{-.3ex}{$\overset{\vartriangle}{=}$}}}\frac{1 + \lambda_s^2 \rho_s^2 + \sum_{i\in \mathcal{\widehat{R}}} (P_i^{avg})^2 + (P_i^{max})^2}{2}\end{aligned}$$
*Proof*: Appendix A. $\Box$
Thus, one can get within $O(1/V)$ of the optimal values by increasing $V$ at the cost of an $O(V)$ increase in the virtual queue backlogs. The size of these queues affects the time required for the time average values to converge to the desired performance.
In the following sections, we investigate the basic $2$-stage resource allocation problem (\[eq:p1ssp3\]) in detail and present solutions for two widely studied classes of cooperative protocols proposed in the literature: Decode-and-Forward (DF) and Amplify-and-Forward (AF) [@Laneman1; @Laneman2]. These protocols differ in the way the transmitted signal from the first phase is processed by the cooperating relays. In DF, a relay fully decodes the signal. If the packet is received correctly, it is re-encoded and transmitted in the second phase. In AF, a relay simply retransmits a scaled version of the received analog signal. We refer to [@Laneman1; @Laneman2] for further details on the working of these protocols as well as derivation of expressions for the mutual information achieved by them. Let $m = |\mathcal{R}|$. In the following, we assume a Gaussian channel model with a total bandwidth $W$ and unit noise power per dimension. We use the information theoretic definition of a transmission failure (an outage event) as discussed in [@tse2], [@caire1]. Here, an outage occurs when the total instantaneous mutual information is smaller than the rate $R$ at which data is being transmitted.
We first consider the case when the channel gains are known at the source (Sec. \[section:2stage\]). In this scenario, (\[eq:p1ssp3\]) becomes a $2$-stage *deterministic shortest path problem* because the outcome $\Phi_s(t)$ due to any control decision and its power allocation can be computed beforehand. Specifically, $\Phi_s(t) = 1$ when the resulting total mutual information exceeds $R$ and $\Phi_s(t) = 0$ otherwise. Further, this outcome is a function of control actions taken over two stages when cooperative transmission is used. The resulting problem is combinatorial and non-convex and does not admit closed-form solutions in general. However, for these protocols, we can reduce it to a set of simpler convex programs for which we can derive quasi-closed form solutions. Then in Sec. \[section:stats\], we consider the case when only the statistics of the channel gains are known. In this case, the outcome $\Phi_s(t)$ is a random function of the control actions (taken over the two stages in case of cooperative transmission) and (\[eq:p1ssp3\]) becomes a $2$-stage *stochastic dynamic program*. While standard dynamic programming techniques can be used to compute the optimal solution, they are typically computationally intensive. Therefore, for this case, we present a Monte Carlo simulation based technique to efficiently solve the resulting dynamic program.
$2$-Stage Resource Allocation Problem with Known Channels, Unknown Statistics {#section:2stage}
=============================================================================
Recall that in order to determine the optimal control action in any slot $t$, we must choose between the four modes of operation as discussed in Sec. \[section:basic\]: $(1)$ direct transmission, $(2)$ multi-hop relay, $(3)$ cooperative, and $(4)$ idle. Let $c_i(t)$ and $I_i(t)$ denote the optimal cost of the metric (\[eq:p1ssp3\]), and the corresponding action that achieves that metric, assuming that mode $i \in \{1, 2, 3, 4\}$ is chosen in slot $t$. Every slot, the algorithm computes $c_i(t)$ and $I_i(t)$ for each mode and then implements the mode $i$ and the resulting action $I_i(t)$ that minimizes cost. Note that the cost $c_4(t)$ for the idle mode is trivially $0$. The minimum cost for direct transmission can be computed as follows. When the source transmits directly, we have $P_i(t) = 0 \; \forall i \in
\mathcal{R}$. The minimum cost $c_1(t)$ associated with a *successful* direct transmission ($\Phi_s(t) = 1$) can be obtained by solving the following convex problem [^2]: $$\begin{aligned}
\textrm{Minimize:}\qquad & \Big(X_s(t)+ V \beta_s\Big){P_s(t)} -Z_s(t) - V \alpha_s \nonumber \\
\textrm{Subject to:} \qquad &{W} \log\Big(1 +
\frac{P_s(t)}{W}|h_{sd}(t)|^2\Big) \geq R \nonumber \\
& 0 \leq P_s(t) \leq P_s^{max}
\label{eq:p1ssp4}\end{aligned}$$ where the constraint ${W} \log\Big(1 +
\frac{P_s(t)}{W}|h_{sd}(t)|^2\Big) \geq R$ represents the fact that to get $\Phi_s(t) = 1$, the mutual information must exceed $R$. It is easy to see that if there is a feasible solution to the above, then for minimum cost, this constraint must be met with equality. Using this, the minimum cost corresponding to the direct transmission mode is given by: $\Big(X_s(t)+ V \beta_s\Big){P_s^{dir}(t)} -Z_s(t) - V\alpha_s$ if $P_s^{dir}(t) = \frac{W}{|h_{sd}(t)|^2}(2^{R/W} - 1) \leq
P_s^{max}$. Otherwise, direct transmission is infeasible and so we set $c_1(t) = +\infty$. In this case, direct transmission will not be considered as the idle mode cost $c_4(t) = 0$ is strictly better, but we must also compare with the costs $c_2(t)$ and $c_3(t)$.
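A small sketch of this closed-form computation of $c_1(t)$, assuming base-2 logarithms and rate/bandwidth in consistent units; the function name and argument list are illustrative.

```python
import math

def direct_cost(X_s, Z_s, V, alpha_s, beta_s, h_sd_sq, W, R, P_s_max):
    """Closed-form minimum cost c_1(t) of a successful direct transmission,
    following (eq:p1ssp4) with the rate constraint met with equality.
    Returns (+inf, None) if the required power exceeds the peak constraint
    (the mode is then infeasible)."""
    P_dir = (W / h_sd_sq) * (2.0 ** (R / W) - 1.0)
    if P_dir > P_s_max:
        return math.inf, None
    return (X_s + V * beta_s) * P_dir - Z_s - V * alpha_s, P_dir
```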
To compute the minimum cost $c_2(t)$ associated with multi-hop transmission, note that in this case, the slot is divided into two parts (Fig. \[fig:one\](b)) and $P_i(t) > 0$ for at most one $i \in \mathcal{R}$. This strategy is a special case of the Regenerative DF protocol (to be discussed next) that uses only $1$ relay and in which the destination does not use signals received from the first stage for decoding. Therefore, the optimal cost for this can be calculated using the procedure for the Regenerative DF case by imposing the single relay constraint and setting $h_{sd}(t) = 0$. Below we present the computation of the minimum cost $c_3(t)$ for the cooperative transmission mode under several protocols. In what follows, we drop the time subscript $(t)$ for notational convenience.
Regenerative DF, Orthogonal Channels {#section:df_regen}
------------------------------------
Here, the source and relays are each assigned an orthogonal channel of equal size. An example slot structure is shown in Fig. \[fig:one\](c) in which the entire slot is divided into $m+1$ equal mini-slots. In the first phase of the protocol, $s$ transmits the packet in its slot using power $P_s$. In the second phase, a subset $\mathcal{U} \subset \mathcal{R}$ of relays that were successful in reliably decoding the packet, re-encode it using the *same* code book and transmit to the destination on their channels with power $P_i$ (where $i \in \mathcal{U}$). Given such a set $\mathcal{U}$, the total mutual information under this protocol is given by [@Laneman1]: $$\begin{aligned}
\frac{W}{m} \log\Big(1 + \frac{mP_s}{W}|h_{sd}|^2 + \sum_{i\in \mathcal{U}} \frac{mP_i}{W}|h_{id}|^2\Big)\end{aligned}$$ This is derived by assuming that the receiver uses Maximal Ratio Combining to process the signals. As seen in the expression for the mutual information, such an orthogonal structure increases the SNR, but utilizes only a fraction of the available degrees of freedom leading to reduced multiplexing gain.
Define binary variables $x_i$ to be $1$ if relay $i$ can reliably decode the packet after the first stage and $0$ else. Then, for this protocol, (\[eq:p1ssp3\]) is equivalent to the following optimization problem: $$\begin{aligned}
\textrm{Minimize:} & (X_s + V \beta_s) P_s + \sum_{i \in \mathcal{R}} (X_i+V \beta_i)P_i - Z_s - V \alpha_s\nonumber \\
\textrm{Subject to:} & \frac{W}{m} \log\Big(1 + \frac{mP_s}{W}|h_{sd}|^2 + \sum_{i \in \mathcal{R}}x_i\frac{mP_i}{W}|h_{id}|^2\Big) \geq R \nonumber \\
& \frac{W}{m} \log\Big(1 + \frac{mP_s}{W}|h_{si}|^2\Big) \geq x_iR \nonumber \\
& 0 \leq P_s \leq P_s^{max} \nonumber \\
& 0 \leq P_i \leq P_i^{max}, x_i\in\{0,1\} \; \forall i \in \mathcal{R}
\label{eq:dfortho1}\end{aligned}$$
The variables $x_i$ capture the requirement that a relay can cooperatively transmit in the second stage only if it was successful in reliably decoding the packet using the first stage transmission. A similar setup is considered in [@Maric1] but it treats the limiting case when $W$ goes to infinity. Because of the integer constraints on $x_i$, (\[eq:dfortho1\]) is non-convex. However, we can exploit the structure of this protocol to reduce the above to a set of $m+1$ subproblems as follows. We first order the relays in decreasing order of their $|h_{si}|^2$ values. Define $\mathcal{U}_k$ as the set that contains the first $k$ (where $0 \leq k \leq m$) relays from this ordering. Let $P_s^{\mathcal{U}_k}$ denote the minimum source power required to ensure that all relays in $\mathcal{U}_k$ can reliably decode the packet after the first stage. We note that for all values of $P_s$ in the range $(P_s^{\mathcal{U}_k}, P_s^{\mathcal{U}_{k+1}})$, the relay set that can reliably decode remains the same, i.e., $\mathcal{U}_k$. Thus, we need to consider only $m+1$ subproblems, one for each $\mathcal{U}_k$. The subproblem for any set $\mathcal{U}_k$ is given by: $$\begin{aligned}
\textrm{Minimize:} \; & (X_s + V \beta_s) P_s + \sum_{i\in \mathcal{U}_k} (X_i+V \beta_i)P_i - Z_s- V \alpha_s\nonumber \\
\textrm{Subject to:} \; & \frac{W}{m} \log\Big(1 + \frac{mP_s}{W}|h_{sd}|^2 + \sum_{i\in \mathcal{U}_k} \frac{mP_i}{W}|h_{id}|^2\Big) \geq R \nonumber \\
& P_s^{\mathcal{U}_k} \leq P_s \leq P_s^{max} \nonumber \\
& 0 \leq P_i \leq P_i^{max} \qquad \forall i \in \mathcal{U}_k
\label{eq:dfortho2}\end{aligned}$$ This can easily be expressed as the following LP: $$\begin{aligned}
\textrm{Minimize:} \; & (X_s + V \beta_s) P_s + \sum_{i\in \mathcal{U}_k} (X_i+V \beta_i)P_i - Z_s- V \alpha_s\nonumber \\
\textrm{Subject to:} \; & P_s|h_{sd}|^2 + \sum_{i\in \mathcal{U}_k}P_i|h_{id}|^2 \geq \theta \nonumber \\
& P_s^{\mathcal{U}_k} \leq P_s \leq P_s^{max} \nonumber \\
& 0 \leq P_i \leq P_i^{max} \qquad \forall i \in \mathcal{U}_k
\label{eq:dfortho3}\end{aligned}$$ where $\theta = \frac{W}{m} (2^{Rm/W} - 1)$. The solution to the LP above has a greedy structure: power is allocated to the nodes (including $s$) in decreasing order of $\frac{|h_{id}|^2}{(X_i+V\beta_i)}$ (where $i \in \mathcal{U}_k \cup \{s\}$), filling each node up to its peak power, until the mutual information constraint is satisfied with equality.
Therefore, for this protocol, the optimal solution to finding the cost $c_3(t)$ associated with the cooperative transmission mode in (\[eq:p1ssp3\]) can be computed by solving (\[eq:dfortho3\]) for each $\mathcal{U}_k$ and picking the one with the least cost. It is interesting to note that if we impose a constraint on the sum total power of the relays instead of individual node constraints, then due to the greedy nature of the solution to (\[eq:dfortho3\]), it is optimal to select at most $1$ relay for cooperation. Specifically, this relay is the one that has the highest value of $\frac{|h_{id}|^2}{(X_i+V\beta_i)}$.
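The following Python sketch implements this procedure (ordering the relays, forming each decode set $\mathcal{U}_k$, and solving the LP (\[eq:dfortho3\]) greedily). It assumes base-2 logarithms, strictly positive weights $X_i + V\beta_i$, and an illustrative dictionary layout for the relay data; it is a sketch, not the paper's implementation.

```python
import math

def regen_df_coop_cost(relays, h_sd_sq, X_s, Z_s, V, alpha_s, beta_s,
                       W, R, P_s_max):
    """Cooperative-mode cost c_3(t) for Regenerative DF over orthogonal
    channels: solve the LP (eq:dfortho3) greedily for every decode set U_k
    and keep the cheapest. `relays` is a list of dicts with illustrative
    keys 'h_si_sq', 'h_id_sq', 'X', 'beta', 'P_max'."""
    m = len(relays)
    if m == 0:
        return math.inf, None
    theta = (W / m) * (2.0 ** (R * m / W) - 1.0)
    order = sorted(relays, key=lambda r: r['h_si_sq'], reverse=True)
    best = (math.inf, None)
    for k in range(m + 1):
        U = order[:k]
        # Minimum source power so every relay in U_k decodes the packet.
        P_s_min = 0.0 if k == 0 else theta / min(r['h_si_sq'] for r in U)
        if P_s_min > P_s_max:
            continue
        alloc = {'s': P_s_min}
        residual = theta - P_s_min * h_sd_sq
        # Greedy fill in decreasing order of |h_id|^2 / (X + V*beta).
        nodes = [('s', h_sd_sq, X_s + V * beta_s, P_s_max - P_s_min)]
        nodes += [(i, r['h_id_sq'], r['X'] + V * r['beta'], r['P_max'])
                  for i, r in enumerate(U)]
        nodes.sort(key=lambda n: n[1] / max(n[2], 1e-12), reverse=True)
        for name, gain, weight, headroom in nodes:
            if residual <= 0 or gain <= 0:
                break
            extra = min(headroom, residual / gain)
            alloc[name] = alloc.get(name, 0.0) + extra
            residual -= extra * gain
        if residual > 1e-9:
            continue                     # U_k cannot meet the rate constraint
        cost = (X_s + V * beta_s) * alloc['s'] - Z_s - V * alpha_s
        cost += sum((order[i]['X'] + V * order[i]['beta']) * alloc.get(i, 0.0)
                    for i in range(k))
        if cost < best[0]:
            best = (cost, alloc)
    return best
```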
Non-Regenerative DF, Orthogonal Channels {#section:df_nonregen}
----------------------------------------
This protocol is similar to the Regenerative DF protocol discussed in Sec. \[section:df\_regen\]. The only difference is that here, in the second stage, the subset $\mathcal{U} \subset \mathcal{R}$ of relays that were successful in reliably decoding the packet re-encode it using *independent* code books. In this case, the total mutual information is given by [@Laneman2]: $$\begin{aligned}
\frac{W}{m}\log\Big(1 + \frac{mP_s}{W}|h_{sd}|^2\Big) + \sum_{i\in
\mathcal{R}} \frac{W}{m}\log\Big(1 +
x_i\frac{mP_i}{W}|h_{id}|^2\Big)\end{aligned}$$ Using the same definition of binary variables $x_i$ as in Sec. \[section:df\_regen\], we can express (\[eq:p1ssp3\]) for this protocol as an optimization problem that resembles (\[eq:dfortho1\]). Similar to the Regenerative DF case, we can then reduce this to a set of $m+1$ subproblems, one for each $\mathcal{U}_k$. The subproblem for set $\mathcal{U}_k$ is given by: $$\begin{aligned}
&\textrm{Minimize:}\; \; (X_s + V \beta_s) P_s + \sum_{i\in \mathcal{U}_k} (X_i+V \beta_i)P_i - Z_s- V \alpha_s \nonumber \\
&\textrm{Subject to:} \nonumber \\
&\log\Big(1 +
\frac{mP_s}{W}|h_{sd}|^2\Big) + \sum_{i\in \mathcal{U}_k} \log\Big(1
+ \frac{mP_i}{W}|h_{id}|^2\Big)
\geq \frac{mR}{W}\nonumber \\
& P_s^{\mathcal{U}_k} \leq P_s \leq P_s^{max} \nonumber \\
& 0 \leq P_i \leq P_i^{max} \qquad \forall i \in \mathcal{U}_k
\label{eq:nonregen_dfortho2}\end{aligned}$$ The above problem is convex and we can use the KKT conditions to get the optimal solution (see Appendix B for details). Define $[x]_a^{b}
{\mbox{\raisebox{-.3ex}{$\overset{\vartriangle}{=}$}}}\min[\max(x, a), b]$. Then the solution to the subproblem for set $\mathcal{U}_k$ is given by: $$\begin{aligned}
&P_s^*(\mathcal{U}_k) = \Big[\frac{\nu^*}{X_s+V\beta_s} - \frac{W}{m|h_{sd}|^2}\Big]_{P_s^{\mathcal{U}_k}}^{P_s^{max}} \nonumber \\
& P_i^*(\mathcal{U}_k) = \Big[\frac{\nu^*}{X_i+V\beta_i} - \frac{W}{m|h_{id}|^2}\Big]_0^{P_i^{max}} \forall i \in \mathcal{U}_k
\label{eq:nonregen_dfortho_sol}\end{aligned}$$ where $\nu^* \geq 0$ is chosen so that the total mutual information constraint is met with equality. Therefore, the optimal solution for the cost $c_3(t)$ in (\[eq:p1ssp3\]) for this protocol can be computed by solving (\[eq:nonregen\_dfortho\_sol\]) for each $\mathcal{U}_k$ and picking one with the least cost. We note that the solution above has a water-filling type structure that is typical of related resource allocation problems in static settings.
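A sketch of how $\nu^*$ can be found by bisection for one decode set, using the clamped water-filling form (\[eq:nonregen\_dfortho\_sol\]); base-2 logarithms and the tuple layout for the node data are assumptions of this sketch. The subproblem cost is then the weighted sum of the returned powers minus $Z_s + V\alpha_s$.

```python
import math

def nonregen_df_powers(nodes, W, m, target, tol=1e-9):
    """Clamped water-filling solution (eq:nonregen_dfortho_sol) for one
    decode set U_k. `nodes` is a list of (weight, gain_sq, p_lo, p_hi)
    tuples: the source first (p_lo = P_s^{U_k}), then the relays in U_k
    (p_lo = 0); weight = X + V*beta and gain_sq = |h|^2 towards d.
    `target` is mR/W. Returns None if the rate target is infeasible even
    at peak powers."""
    def powers(nu):
        return [min(max(nu / w - W / (m * g), lo), hi)
                for (w, g, lo, hi) in nodes]

    def mutual_info(P):
        return sum(math.log2(1.0 + m * p * g / W)
                   for p, (_, g, _, _) in zip(P, nodes))

    if mutual_info([hi for (_, _, _, hi) in nodes]) < target:
        return None
    # Grow the bracket, then bisect on nu (mutual info is nondecreasing in nu).
    nu_lo, nu_hi = 0.0, 1.0
    while mutual_info(powers(nu_hi)) < target:
        nu_hi *= 2.0
    while nu_hi - nu_lo > tol:
        nu = 0.5 * (nu_lo + nu_hi)
        if mutual_info(powers(nu)) < target:
            nu_lo = nu
        else:
            nu_hi = nu
    return powers(nu_hi)
```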
AF, Orthogonal Channels {#section:af_regen}
-----------------------
In this protocol, the source and relays are again assigned an orthogonal channel of equal size. An example slot structure is shown in Fig. \[fig:one\](c). However, instead of trying to decode the packet, the relays amplify and forward the received signal from the first stage. The total mutual information under this protocol is given by [@Maric2] [@Adve1]: $$\begin{aligned}
\frac{W}{m} \log\Bigg(1 + \frac{mP_s}{W}\Big(|h_{sd}|^2 + \sum_{i \in \mathcal{R}} \psi_i \Big)\Bigg)\end{aligned}$$ where $\psi_i {\mbox{\raisebox{-.3ex}{$\overset{\vartriangle}{=}$}}}\frac{P_i|h_{si}|^2|h_{id}|^2}{P_s|h_{si}|^2 + P_i|h_{id}|^2 + W/m}$. Using this, we can express (\[eq:p1ssp3\]) for this model as follows. $$\begin{aligned}
\textrm{Minimize:} \; & (X_s + V \beta_s) P_s + \sum_{i \in \mathcal{R}} (X_i+V \beta_i)P_i - Z_s - V \alpha_s\nonumber \\
\textrm{Subject to:} \; & \frac{W}{m} \log\Bigg(1 + \frac{mP_s}{W}\Big(|h_{sd}|^2 + \sum_{i \in \mathcal{R}} \psi_i \Big)\Bigg)\geq R \nonumber \\
& 0 \leq P_s \leq P_s^{max} \nonumber \\
& 0 \leq P_i \leq P_i^{max} \; \forall i \in \mathcal{R}
\label{eq:afortho1}\end{aligned}$$ This problem is non-convex. However, if we fix the source power $P_s$, then it becomes convex in the other variables. This reduction has been used in [@Adve1] as well, although it considers a static scenario with the objective of minimizing instantaneous outage probability. After fixing $P_s$, we can compute the optimal relay powers for this value of $P_s$ by solving the following: $$\begin{aligned}
\textrm{Minimize:} \qquad & \sum_{i \in \mathcal{R}} (X_i+V \beta_i)P_i - Z_s - V \alpha_s\nonumber \\
\textrm{Subject to:} \qquad & P_s|h_{sd}|^2 + \sum_{i \in
\mathcal{R}} P_s\psi_i \geq \theta \nonumber \\
\qquad &0 \leq P_i \leq P_i^{max} \qquad \forall i \in \mathcal{R}
\label{eq:afortho2}\end{aligned}$$ where $\theta = \frac{W}{m} (2^{Rm/W} - 1)$. The first constraint can be simplified as: $P_s|h_{sd}|^2 + \sum_{i \in \mathcal{R}} P_s\psi_i = P_s(|h_{sd}|^2 + \sum_{i \in \mathcal{R}} |h_{si}|^2)
- \sum_{i \in \mathcal{R}} \frac{P_s^2|h_{si}|^4 + P_s|h_{si}|^2W/m}{P_s|h_{si}|^2 + P_i|h_{id}|^2 +W/m}$.
Since we have fixed $P_s$, we can express (\[eq:afortho2\]) as: $$\begin{aligned}
\textrm{Minimize:} \qquad & \sum_{i \in \mathcal{R}} (X_i+V \beta_i)P_i - Z_s - V \alpha_s\nonumber \\
\textrm{Subject to:} \qquad & \sum_{i\in
\mathcal{R}}\frac{P_s^2|h_{si}|^4 +
P_s|h_{si}|^2W/m}{P_s|h_{si}|^2 + P_i|h_{id}|^2 +W/m} \leq \theta' \nonumber \\
\qquad &0 \leq P_i \leq P_i^{max} \qquad \forall i \in
\mathcal{R}\label{eq:afortho3}\end{aligned}$$ where $\theta' = P_s(|h_{sd}|^2 + \sum_{i \in \mathcal{R}}
|h_{si}|^2) - \theta$. Using the KKT conditions, the solution to the above convex optimization problem is given by (see Appendix C for details): $P_i^* = \Big[\sqrt{\frac{\nu^*(P_s^2|h_{si}|^4 + P_s|h_{si}|^2W/m)}{(X_i+V\beta_i)|h_{id}|^2}}
- \frac{P_s|h_{si}|^2+ W/m}{|h_{id}|^2}\Big]_0^{P_i^{max}}$ where $\nu^* \geq 0$ is chosen so that the total mutual information constraint is met with equality. We note that this solution has a water-filling type structure as well. Therefore, to compute the optimal solution to (\[eq:p1ssp3\]) for this protocol, we would have to solve the above for each value of $P_s \in [0, P_s^{max}]$. In practice, this computation can be simplified by considering only a discrete set of values for $P_s$. Because we have derived a simple closed form expression for each $P_s$, it is easy to compare these values over, say, a discrete list of $100$ options in $[0, P_s^{max}]$ to pick the best one, which enables a very accurate approximation to optimality in real time.
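A sketch of this procedure: an outer grid over $P_s$ and, for each grid point, a bisection on $\nu^*$ using the closed form above. The tuple layout for the relay data, the grid size, and the assumption of base-2 logarithms are illustrative choices of this sketch.

```python
import math

def af_ortho_cost(relays, h_sd_sq, X_s, Z_s, V, alpha_s, beta_s,
                  W, R, P_s_max, n_grid=100, tol=1e-9):
    """Grid over P_s; for each P_s solve (eq:afortho3) by bisection on nu,
    using the water-filling form derived in Appendix C. `relays` holds
    (h_si_sq, h_id_sq, weight, P_max) tuples with weight = X_i + V*beta_i."""
    m = len(relays)
    if m == 0:
        return math.inf, None, None
    theta = (W / m) * (2.0 ** (R * m / W) - 1.0)

    def lhs(P_s, P):
        # Left-hand side of the first constraint in (eq:afortho3).
        return sum((P_s**2 * hsi**2 + P_s * hsi * W / m) /
                   (P_s * hsi + p * hid + W / m)
                   for p, (hsi, hid, w, pmax) in zip(P, relays))

    def powers(P_s, nu):
        # Clamped water-filling relay powers for a given multiplier nu.
        out = []
        for (hsi, hid, w, pmax) in relays:
            num = P_s**2 * hsi**2 + P_s * hsi * W / m
            p = math.sqrt(nu * num / (w * hid)) - (P_s * hsi + W / m) / hid
            out.append(min(max(p, 0.0), pmax))
        return out

    best = (math.inf, None, None)
    for k in range(1, n_grid + 1):
        P_s = k * P_s_max / n_grid
        theta_p = P_s * (h_sd_sq + sum(hsi for hsi, _, _, _ in relays)) - theta
        peak = [pmax for (_, _, _, pmax) in relays]
        if lhs(P_s, peak) > theta_p:
            continue                   # infeasible even at peak relay powers
        nu_lo, nu_hi = 0.0, 1.0
        while lhs(P_s, powers(P_s, nu_hi)) > theta_p:
            nu_hi *= 2.0               # lhs is decreasing in nu
        while nu_hi - nu_lo > tol:
            nu = 0.5 * (nu_lo + nu_hi)
            if lhs(P_s, powers(P_s, nu)) > theta_p:
                nu_lo = nu
            else:
                nu_hi = nu
        P = powers(P_s, nu_hi)
        cost = ((X_s + V * beta_s) * P_s - Z_s - V * alpha_s
                + sum(w * p for p, (_, _, w, _) in zip(P, relays)))
        if cost < best[0]:
            best = (cost, P_s, P)
    return best
```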
DF with DSTC {#section:df_dstc}
------------
In this protocol, all the cooperating relays in the second stage use an appropriate distributed space-time code (DSTC) [@Laneman2] so that they can transmit simultaneously on the same channel. The slot structure under this scheme is shown in Fig.\[fig:one\](d). Suppose in the first phase of the protocol, $s$ transmits the packet in the first half of the slot using power $P_s$. In the second phase, a subset $\mathcal{U} \subset \mathcal{R}$ of relays that were successful in reliably decoding the packet, re-encode it using a DSTC and transmit to the destination with power $P_i$ (where $i \in \mathcal{U}$) in the second half of the slot. Given such a set $\mathcal{U}$, the total mutual information under this protocol is given by [@Laneman1]: $$\begin{aligned}
\frac{W}{2} \log\Big(1 + \frac{2P_s}{W}|h_{sd}|^2 + \sum_{i\in \mathcal{U}} \frac{2P_i}{W}|h_{id}|^2\Big)\end{aligned}$$ The factor of $2$ appears because only half of the slot is being used for transmission. As seen in the expression above, unlike the earlier examples, this protocol does not suffer from reduced multiplexing gains due to orthogonal channels.
We can now express (\[eq:p1ssp3\]) for this protocol as follows. Define binary variables $x_i$ to be $1$ if relay $i$ can reliably decode the packet after the first stage and $0$ else. Then, for this protocol, (\[eq:p1ssp3\]) is equivalent to the following optimization problem: $$\begin{aligned}
\textrm{Minimize:} \; & (X_s + V \beta_s) P_s + \sum_{i \in \mathcal{R}} (X_i+V \beta_i)P_i - Z_s - V \alpha_s\nonumber \\
\textrm{Subject to:} \; & \frac{W}{2} \log\Big(1 + \frac{2P_s}{W}|h_{sd}|^2 + \sum_{i \in \mathcal{R}}x_i\frac{2P_i}{W}|h_{id}|^2\Big) \geq R \nonumber \\
& \frac{W}{2} \log\Big(1 + \frac{2P_s}{W}|h_{si}|^2\Big) \geq x_iR \nonumber \\
& 0 \leq P_s \leq P_s^{max} \nonumber \\
& 0 \leq P_i \leq P_i^{max}, x_i\in\{0,1\} \; \forall i \in \mathcal{R}
\label{eq:dfdstc1}\end{aligned}$$ By comparing the above with (\[eq:dfortho1\]), it can be seen that the computation of minimum cost under this protocol follows the same procedure as described in Sec. \[section:df\_regen\] of solving $m+1$ subproblems, each an LP, by ordering the relays greedily and hence we do not repeat it.
AF with DSTC {#section:af_dstc}
------------
Here, all cooperating relays use amplify and forward along with DSTC. The total mutual information under this protocol is given by: $$\begin{aligned}
\frac{W}{2} \log\Bigg(1 + \frac{2P_s}{W}\Big(|h_{sd}|^2 + \sum_{i \in \mathcal{R}} \psi_i \Big)\Bigg)\end{aligned}$$ where $\psi_i = \frac{P_i|h_{si}|^2|h_{id}|^2}{P_s|h_{si}|^2 + P_i|h_{id}|^2 + W/2}$. Using this, we can express (\[eq:p1ssp3\]) for this model as follows. $$\begin{aligned}
\textrm{Minimize:} \; & (X_s + V \beta_s) P_s + \sum_{i \in \mathcal{R}} (X_i+V \beta_i)P_i - Z_s - V \alpha_s\nonumber \\
\textrm{Subject to:} \; & \frac{W}{2} \log\Bigg(1 + \frac{2P_s}{W}\Big(|h_{sd}|^2 + \sum_{i \in \mathcal{R}} \psi_i \Big)\Bigg)\geq R \nonumber \\
& 0 \leq P_s \leq P_s^{max} \nonumber \\
& 0 \leq P_i \leq P_i^{max} \; \forall i \in \mathcal{R}
\label{eq:afdstc1}\end{aligned}$$ This is similar to (\[eq:afortho1\]) and thus, we fix $P_s$ and use a similar reduction to get a convex optimization problem whose solution can be derived using KKT conditions and is given by:
$P_i^* = \Big[\sqrt{\frac{\nu^*(P_s^2|h_{si}|^4 + P_s|h_{si}|^2W/2)}{(X_i+V\beta_i)|h_{id}|^2}}
- \frac{P_s|h_{si}|^2+ W/2}{|h_{id}|^2}\Big]_0^{P_i^{max}}$ where $\nu^* \geq 0$ is chosen so that the constraint on the total mutual information at the destination is met with equality.
$2$-Stage Resource Allocation Problem with Unknown Channels, Known Statistics {#section:stats}
=============================================================================
We next consider the solution to (\[eq:p1ssp3\]) when the source does not know the current channel gains and is only aware of their statistics. In this case, (\[eq:p1ssp3\]) becomes a $2$-stage stochastic dynamic program. For brevity, here we focus on its solution for the cooperative transmission mode.
Suppose the source uses power $P_s$ in the first stage. Let $\omega$ denote the outcome of this transmission. This lies in a space $\Omega$ of possible network states which is assumed to be of a finite but arbitrarily large size. For example, in the DF protocol, $\omega$ might represent the set of relay nodes that received the packet successfully after the first stage as well as the mutual information accumulated so far at the destination. For AF, $\omega$ can represent the SNR value at each relay node and at the destination. Let $J_1^*(P_s, \omega)$ be the optimal cost-to-go function for the $2$-stage dynamic program (\[eq:p1ssp3\]) given that the source uses power $P_s$ in the first stage and the network state is $\omega$ at the beginning of the second stage. Let $J_0^*$ denote the optimal cost-to-go function starting from the first stage. Also, let $\mathcal{R}(\omega)$ denote the set of relay nodes that can take part in cooperative transmission when the network state is $\omega$. We define the following probabilities. Let $f(P_s, \omega)$ be the probability that the outcome of the first stage is $\omega$ when the source uses power $P_s$. Also, let $g(\overrightarrow{P}_{\mathcal{R}(\omega)},
P_s, \omega)$ be the probability that the receiver gets the packet successfully when relays in $\mathcal{R}(\omega)$ use a power allocation $\overrightarrow{P}_{\mathcal{R}(\omega)}$ and the source uses power $P_s$. Note that these probabilities are obtained by taking expectation over all channel state realizations. We assume these are obtained from the knowledge of the channel statistics.
Using these definitions, we can now write the Bellman optimality equations [@bertsekas] for this dynamic program $\forall \omega
\in \Omega$: $$\begin{aligned}
&J_0^* = \min_{P_s} \Big[(X_s + V \beta_s) P_s + \sum_{\omega \in \Omega}f(P_s,\omega)J_1^*(P_s, \omega)\Big] \label{eq:dp1} \\
&J_1^*(P_s, \omega) = \min_{\overrightarrow{P}_{\mathcal{R}(\omega)}} \Big[\sum_{i \in \mathcal{R}(\omega)} (X_i + V \beta_i) P_i \nonumber \\
&\qquad \qquad \qquad - (Z_s + V \alpha_s) g(\overrightarrow{P}_{\mathcal{R}(\omega)},P_s, \omega)\Big]
\label{eq:dp2}\end{aligned}$$
While this can be solved using standard dynamic programming techniques, it has a computational complexity that grows with the state space size $\Omega$ and can be prohibitive when this is large. We therefore present an alternate method based on the idea of Monte Carlo simulation.
Simulation Based Method
-----------------------
Suppose the transmitter performs the following simulation. Fix a source power $P_s$. Define $J^*_0(P_s)$ as the optimal cost-to-go function *given* that the source uses power $P_s$. Note that this is simply the expression on the right hand side of (\[eq:dp1\]) with $P_s$ fixed. Simulate the outcome of a transmission at this power $n$ times independently using the values of $f(P_s, \omega)$. Let $\omega_j \in \Omega$ denote the outcome of the $j^{th}$ simulation. For each generated outcome $\omega_j$, compute the optimal cost-to-go function $J_1^*(P_s,
\omega_j)$ by solving (\[eq:dp2\]) (this could be done using the knowledge of $g(\overrightarrow{P}_{\mathcal{R}(\omega)}, P_s, \omega)$ either analytically or numerically). Use this to update $J_0^{est}(P_s,
n)$, which is an *estimate* of $J^*_0(P_s)$ for a given $P_s$ after $n$ iterations and is defined as follows: $$\begin{aligned}
J_0^{est}(P_s, n) = (X_s + V \beta_s)P_s + \frac{1}{n} \sum_{j=1}^n
J_1^*(P_s, \omega_j) \label{eq:est1}\end{aligned}$$
We now show that, for a given $P_s$, $J_0^{est}(P_s, n)$ can be pushed arbitrarily close to the optimal cost-to-go function $J_0^*(P_s)$ by increasing $n$. Since we have fixed $P_s$, from (\[eq:dp1\]), we have: $$\begin{aligned}
&J_0^*(P_s) = (X_s + V \beta_s) P_s + \sum_{\omega \in \Omega}f(P_s,
\omega)J_1^*(P_s, \omega)\end{aligned}$$
Define the following indicator random variables for each simulation $j$ and $\forall \omega \in \Omega$: $$1_{\omega}(P_s, j) = \left\{ \begin{array}{lll} 1 & \textrm{if the outcome of simulation $j$ is $\omega$}\\
0 & \textrm{else}
\end{array} \right.$$
Note that by definition ${\mathbb{E}\left\{1_{\omega}(P_s, j)\right\}} = f(P_s,
\omega)$. Therefore, we can express $J_0^{est}(P_s, n)$ in terms of these indicator variables as follows: $$\begin{aligned}
J_0^{est}(P_s, n) = &(X_s + V \beta_s) P_s + \frac{1}{n} \sum_{j=1}^n \sum_{\omega \in \Omega} 1_{\omega}(P_s,j)J_1^*(P_s, \omega)
\end{aligned}$$
We note that $\Big(\sum_{\omega \in \Omega}
1_{\omega}(P_s,j)J_1^*(P_s, \omega)\Big)$ are i.i.d. random variables with mean $\mu = \sum_{\omega \in \Omega}f(P_s,
\omega)J_1^*(P_s, \omega)$ and variance $\sigma^2 = \sum_{\omega \in
\Omega}f(P_s, \omega)(J_1^*(P_s, \omega))^2 - \mu^2$. Using Chebyshev’s inequality, we get for any $\epsilon > 0$: $$\begin{aligned}
Pr\Big[|\frac{1}{n} \sum_{j=1}^n \Big(\sum_{\omega \in \Omega}
1_{\omega}(P_s,j)J_1^*(P_s, \omega)\Big) - \mu| \geq \epsilon \Big]
\leq \frac{\sigma^2}{n\epsilon^2}\end{aligned}$$
This shows that the value of the estimate quickly converges to the optimal cost-to-go value. Thus, this method can be used to get a good estimate of the optimal cost-to-go function for a fixed value of $P_s$ in a reasonable number of steps.
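A compact sketch of the estimator (\[eq:est1\]); the callables that sample first-stage outcomes from $f(P_s,\cdot)$ and solve the second-stage problem (\[eq:dp2\]) are placeholders supplied by the channel-statistics model, and all names are illustrative.

```python
def estimate_J0(P_s, n, X_s, V, beta_s, sample_outcome, solve_second_stage):
    """Monte Carlo estimate J_0^est(P_s, n) of (eq:est1) for a fixed P_s.
    sample_outcome(P_s) draws a first-stage outcome omega from f(P_s, .);
    solve_second_stage(P_s, omega) returns J_1^*(P_s, omega) by solving
    (eq:dp2)."""
    total = 0.0
    for _ in range(n):
        omega = sample_outcome(P_s)
        total += solve_second_stage(P_s, omega)
    return (X_s + V * beta_s) * P_s + total / n
```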
Multi-Source Extensions {#section:extensions}
=======================
In this section, we extend the basic model of Sec. \[section:basic\] to the case when there are multiple sources in the network. Let the set of source nodes be given by $\mathcal{S}$. We consider the case when all source nodes have orthogonal channels.[^3] In particular, we assume that in each slot, a medium access process $\chi(t)$ determines which source nodes get transmission opportunities. For simplicity, we assume that at most one source transmits in a slot. This models situations where there might be a pseudo-random TDMA schedule that determines a unique transmitter node every slot. It also models situations where the source nodes use a contention-resolution mechanism such as CSMA. Our model can be extended to scenarios where more than one source node can transmit, potentially over orthogonal frequency channels.
Let $s(t) = s(\chi(t)) \in \mathcal{S}$ be the source node that gets a transmission opportunity in slot $t$. Then, the optimal resource allocation framework developed in Sec. \[section:CNC\] can be applied as follows. A virtual reliability queue is defined for each source node $s \in \mathcal{S}$ and is updated as in (\[eq:p1u1\]). Note that in slots where a source node $s$ does not get a transmission opportunity, $\Phi_s(t) = 0$. We assume that each incoming packet gets one transmission opportunity so that the delay constraint of $1$ slot per packet only measures the transmission delay and not the queueing delay that would be incurred due to contention. Similarly, a virtual power queue is maintained for each node as in (\[eq:p1x1\]) including the source nodes and relay nodes. Note that in this model, it is possible for a source node to act as a relay for another source node when it is not transmitting its own data. We denote the set of relay nodes (that includes such source nodes) in slot $t$ as $\mathcal{R}(t)$.
Then the optimal control algorithm operates as follows. Let $\textbf{\emph{Q}}(t)$ denote the collection of all virtual queues in timeslot $t$. Every slot, given $\textbf{\emph{Q}}(t)$ and any channel state $\mathcal{T}(t)$, it chooses a control action $\mathcal{I}_{s(t)}$ that minimizes the following stochastic metric (for a given control parameter $V \geq 0$): $$\begin{aligned}
\textrm{Minimize:} \qquad & (X_{s(t)} +V\beta_{s(t)}){\mathbb{E}\left\{P_{s(t)}|\textbf{\emph{Q}}(t), \mathcal{T}(t)\right\}} \nonumber \\
&+ \sum_{i \in \mathcal{R}(t)}(X_i(t)+V\beta_i){\mathbb{E}\left\{P_i(t)|\textbf{\emph{Q}}(t), \mathcal{T}(t)\right\}}\nonumber \\
&- (Z_{s(t)}+V \alpha_{s(t)}){\mathbb{E}\left\{\Phi_{s(t)}|\textbf{\emph{Q}}(t), \mathcal{T}(t)\right\}} \nonumber \\
\textrm{Subject to:} \qquad & 0 \leq P_{s(t)} \leq P_{s(t)}^{max} \nonumber \\
& 0 \leq P_i(t) \leq P_i^{max} \; \forall i \in \mathcal{R}(t) \nonumber \\
& \mathcal{I}_{s(t)} \in \mathcal{C}
\label{eq:p1ssp3_ms}\end{aligned}$$ This problem can be solved using the techniques described for the single source case.
Simulations {#section:sim}
===========
We simulate the dynamic control algorithm (\[eq:p1ssp3\]) in an ad-hoc network with $3$ stationary sources and $7$ mobile relays as shown in Fig. \[fig:cell\]. Every slot, the sources receive new packets destined for the base station according to an i.i.d. Bernoulli process of rate $\lambda$ and each packet has a delay constraint of $1$ slot. The sources are assumed to have orthogonal channels and can transmit either directly or cooperatively with a subset of the relays in their vicinity. We impose a cell-partitioned structure so that a source can only cooperate with the relays that are in the same cell in that slot. The relays move from one cell to the other according to a Markovian random walk. In the simulation, at the end of every slot, a relay decides to stay in its current cell with probability $0.8$, else decides to move to an adjacent cell with probability $0.2$ (where any of the feasible adjacent cells are equally likely).
We assume a Rayleigh fading model. The squared amplitudes of the instantaneous channel gains on the links between a source, the set of relays in its cell in that slot, and the base station are exponentially distributed random variables with mean $1$. All power values are normalized with respect to the average noise power. All nodes have an average power constraint of $1$ unit and a maximum power constraint of $10$ units.
![A snapshot of the example network used in simulation.[]{data-label="fig:cell"}](cell_2){width="8cm"}
We consider the Regenerative DF cooperative protocol over orthogonal channels and implement the optimal resource allocation strategy as computed in (\[eq:dfortho3\]) for this network. In the first experiment, we consider the objective of minimizing the average sum power expenditure in the network given a minimum reliability constraint $\rho_s = 0.98$ and input rate $\lambda_s = 0.5$ packets/slot for all sources. For this, we set $\alpha_s =0$ and $\beta_i = 1$. Fig. \[fig:fig3b\] shows the average sum power for different values of the control parameter $V$. It is seen that this value converges to $2.6$ units for increasing values of $V$, as predicted by the performance bounds on the time average utility in Theorem $1$. Fig. \[fig:fig3c\] shows the resulting average reliability queue occupancy. It is seen to increase linearly in $V$, again as predicted by the bound on the time average queue backlog in Theorem $1$. We emphasize again that there are no actual queues in the system, and all successfully delivered packets have a delay exactly equal to $1$ slot. The fact that all reliability queues are stable ensures that we are indeed meeting or exceeding the $98\%$ reliability constraint. Indeed, in our simulations we found reliability to be almost exactly equal to the $98\%$ constraint, as expected in an algorithm designed to minimize average power subject to this constraint. We further note that the instantaneous reliability queue value $Z(t)$ represents the worst case “excess” packets that did not meet the reliability constraints over any interval ending at time $t$, so that maintaining small $Z(t)$ (with a small $V$) makes the timescales over which the time average reliability constraints are satisfied smaller.
![Average Sum Power vs. V.[]{data-label="fig:fig3b"}](avg_power_vs_V_lambda_005){width="8cm"}
In the second experiment, we choose both $\alpha_s =0$ and $\beta_i
= 0$ so that (\[eq:obj1\]) becomes a feasibility problem. We fix the average and peak power values to $1$ and $10$ respectively and implement (\[eq:dfortho3\]) for different rate-reliability pairs. In Table \[table:notation\], we show whether these are feasible or not under three resource allocation strategies: direct transmission, always cooperative transmission and dynamic cooperation (that corresponds to implementing the solution to (\[eq:dfortho3\]) every slot). It can be seen that dynamic cooperation significantly increases the feasible rate-reliability region over direct transmission as well as static cooperation. For example, it is impossible to achieve $95\%$ reliability using direct transmission alone, even if the traffic rate is only $0.2$ packets/slot. This can be achieved by an algorithm that uses the cooperation mode (mode $3$) always, but optimizes over the power allocation decisions of this cooperation mode as specified in previous sections. However, always using cooperation fails if we desire $98\%$ reliability, but using our optimal policy that dynamically mixes between the different modes, and chooses efficient power allocation decisions in each mode, can achieve $98\%$ reliability, even at increased rates up to $0.6$ packets/slot.
| (rate, reliability) = $(\lambda_s, \rho_s)$ | (0.1, 0.9) | (0.2, 0.9) | (0.2, 0.95) | (0.5, 0.95) | (0.5, 0.98) | (0.6, 0.98) | (0.7, 0.99) |
|---------------------------------------------|------------|------------|-------------|-------------|-------------|-------------|-------------|
| direct transmission                         |            |            | **x**       | **x**       | **x**       | **x**       | **x**       |
| always cooperate                            |            |            |             |             | **x**       | **x**       | **x**       |
| optimal strategy                            |            |            |             |             |             |             | **x**       |

(**x** marks rate-reliability pairs that are infeasible under the corresponding strategy.)
Conclusions {#section:conclu}
===========
In this paper, we considered the problem of optimal resource allocation for delay-limited cooperative communication in a mobile ad-hoc network. Using the technique of Lyapunov optimization, we developed dynamic cooperation strategies that make optimal use of network resources to achieve a target outage probability (reliability) for each user subject to average power constraints. Our framework is general enough to be applicable to a large class of cooperative protocols. In particular, in this paper, we derived quasi-closed form solutions for several variants of the Decode-and-Forward and Amplify-and-Forward strategies.
Appendix A: Proof of Theorem $1$ {#appendix-a-proof-of-theorem-1 .unnumbered}
================================
Here, we prove Theorem $1$ by comparing the Lyapunov drift of the dynamic control algorithm (\[eq:p1ssp3\]) with that of an optimal stationary, randomized policy. Let $r^*_s$ and $e^*_i \; \forall i
\in \mathcal{\widehat{R}}$ denote the optimal value of the objective in (\[eq:obj1\]). Then we have the following fact[^4]:
![Average Reliability Queue Occupancy vs. V.[]{data-label="fig:fig3c"}](avg_z_vs_V_lambda_005){width="8cm"}
*Fact*: Assuming i.i.d. $\mathcal{T}(t)$ states, there exists a stationary randomized policy $\pi$ that chooses a feasible control action $\mathcal{I}^{\pi}(t)$ and power allocations ${P}_i^{\pi}(t)$ for all $i \in \mathcal{\widehat{R}}$ every slot purely as a function of the current channel state $\mathcal{T}(t)$ and yields the following for some $\epsilon > 0$: $$\begin{aligned}
&{\mathbb{E}\left\{\Phi_s^{\pi}(t)\right\}} \geq \rho_s \lambda_s + \epsilon \label{eq:stat0}\\
&{\mathbb{E}\left\{P_i^{\pi}(t)\right\}} + \epsilon \leq P_i^{avg}
\label{eq:stat1}\\
&\alpha_s {\mathbb{E}\left\{\Phi_s^{\pi}(t)\right\}} - \sum_{i\in \mathcal{\widehat{R}}} \beta_i
{\mathbb{E}\left\{P_i^{\pi}(t)\right\}} = \alpha_s r^*_s - \sum_{i\in \mathcal{\widehat{R}}}
\beta_ie^*_i \label{eq:stat2}\end{aligned}$$ Let $\textbf{\emph{Q}}(t) = (Z_s(t), X_i(t)) \; \forall i \in \mathcal{\widehat{R}}$ represent the collection of these queue backlogs in timeslot $t$. We define a quadratic Lyapunov function: $$\begin{aligned}
L(\textbf{\emph{Q}}(t)) {\mbox{\raisebox{-.3ex}{$\overset{\vartriangle}{=}$}}}\frac{1}{2}\Big[ Z^2_s(t) +
\sum_{i \in \mathcal{\widehat{R}}} X_i^2(t)\Big]\end{aligned}$$
Also define the conditional Lyapunov drift $\Delta(\textbf{\emph{Q}}(t))$ as follows: $$\begin{aligned}
\Delta(\textbf{\emph{Q}}(t)) {\mbox{\raisebox{-.3ex}{$\overset{\vartriangle}{=}$}}}{\mathbb{E}\left\{L(\textbf{\emph{Q}}(t+1)) - L(\textbf{\emph{Q}}(t))|\textbf{\emph{Q}}(t)\right\}}\end{aligned}$$
Using queueing dynamics (\[eq:p1u1\]), (\[eq:p1x1\]), the Lyapunov drift under any control policy can be computed as follows: $$\begin{aligned}
\Delta(\textbf{\emph{Q}}(t)) \leq \; B &- Z_s(t) {\mathbb{E}\left\{\Phi_s(t) - \rho_s A_s(t) |\textbf{\emph{Q}}(t)\right\}} \nonumber \\
&- \sum_{i\in \mathcal{\widehat{R}}}X_i(t) {\mathbb{E}\left\{P_i^{avg} - P_i(t)|\textbf{\emph{Q}}(t)\right\}}
\label{eq:p1lpdrift2}\end{aligned}$$ where $B = \frac{1 + \lambda_s^2 \rho_s^2 + \sum_{i\in \mathcal{\widehat{R}}} (P_i^{avg})^2 + (P_i^{max})^2}{2}$.
For a given control parameter $V \geq 0$, we subtract a “reward” metric $V{\mathbb{E}\left\{\alpha_s\Phi_s(t) - \sum_{i\in \mathcal{\widehat{R}}} \beta_i P_i(t)|\textbf{\emph{Q}}(t)\right\}}$ from both sides of the above inequality to get the following: $$\begin{aligned}
\Delta(\textbf{\emph{Q}}(t)) &- V {\mathbb{E}\left\{\alpha_s\Phi_s(t) - \sum_{i\in \mathcal{\widehat{R}}} \beta_i P_i(t)|\textbf{\emph{Q}}(t)\right\}} \leq \; B \nonumber \\
&- Z_s(t) {\mathbb{E}\left\{\Phi_s(t) - \rho_s A_s(t) |\textbf{\emph{Q}}(t)\right\}} \nonumber \\
&- \sum_{i\in \mathcal{\widehat{R}}}X_i(t) {\mathbb{E}\left\{P_i^{avg} - P_i(t)|\textbf{\emph{Q}}(t)\right\}} \nonumber \\
&- V {\mathbb{E}\left\{ \alpha_s\Phi_s(t) - \sum_{i\in \mathcal{\widehat{R}}} \beta_i P_i(t)|\textbf{\emph{Q}}(t)\right\}}
\label{eq:p1lpdrift3}\end{aligned}$$
From the above, it can be seen that the dynamic control algorithm (\[eq:p1ssp3\]) is designed to take a control action that minimizes the right hand side of (\[eq:p1lpdrift3\]) over all possible options every slot, including the stationary policy $\pi$. Thus, using (\[eq:stat0\]), (\[eq:stat1\]), (\[eq:stat2\]), we can write the above as: $$\begin{aligned}
\Delta(\textbf{\emph{Q}}(t)) &- V {\mathbb{E}\left\{\alpha_s\Phi_s(t) - \sum_{i\in \mathcal{\widehat{R}}} \beta_i P_i(t)|\textbf{\emph{Q}}(t)\right\}} \leq B \nonumber\\
&- Z_s(t)\epsilon - \sum_{i\in \mathcal{\widehat{R}}}X_i(t) \epsilon - V \Big(\alpha_s r^*_s - \sum_{i\in \mathcal{\widehat{R}}} \beta_i e^*_i\Big)
\label{eq:p1lpdrift4}\end{aligned}$$ Theorem $1$ now follows by a direct application of the Lyapunov optimization Theorem [@neely-NOW].
Appendix B – Solution to Non-Regenerative DF orthogonal using KKT conditions {#appendix-b-solution-to-non-regenerative-df-orthogonal-using-kkt-conditions .unnumbered}
============================================================================
We ignore the constant terms in the objective. It is easy to see that the first constraint in (\[eq:nonregen\_dfortho2\]) must be met with equality. The Lagrangian is given by: $$\begin{aligned}
\mathcal{L} = &(X_s + V \beta_s)P_s + \sum_{i\in \mathcal{U}_k} (X_i + V \beta_i) P_i - \lambda_s (P_s - P_s^{\mathcal{U}_k})
\\ & - \sum_{i\in \mathcal{U}_k}\lambda_iP_i + \beta_s(P_s - P_s^{max})+ \sum_{i\in \mathcal{U}_k} \beta_i(P_i - P_i^{max})
\\ & + \nu\Big[\log(1+{\theta_sP_s}) + \sum_{i\in \mathcal{U}_k} \log(1+\theta_iP_i) - \frac{mR}{W}\Big]\end{aligned}$$ where $\theta_s = \frac{m}{W}|h_{sd}|^2, \theta_i =
\frac{m}{W}|h_{id}|^2$. The KKT conditions for all $i\in
\mathcal{U}_{k}$ are: $$\begin{aligned}
& \lambda_s^*(P_s^* - P_s^{\mathcal{U}_k}) = 0 \qquad \lambda_i^*P_i^* = 0 \\
& \beta_s^*(P_s^* - P_s^{max}) = 0 \qquad \beta_i^* (P_i^* - P_i^{max}) = 0 \\
& \lambda_s^*, \lambda_i^*, \beta_s^*, \beta_i^* \geq 0 \\
& (X_s + V \beta_s) - \lambda_s^* + \beta_s^* + \frac{\nu^* \theta_s}{1 + \theta_s P_s^*} = 0 \\
& (X_i + V \beta_i) - \lambda_i^* + \beta_i^* + \frac{\nu^* \theta_i}{1 + \theta_i P_i^*} = 0\end{aligned}$$ If $\nu^* > 0$, then we must have that $\lambda_s^* - \beta_s^* > 0$ and $\lambda_i^* - \beta_i^* > 0$ for all $i$. This would mean that $P_s^* = P_s^{\mathcal{U}_k}$ and $P_i^* = 0$. For some $\nu^* \leq
0$, we have three cases:
1. If $\lambda_i^* = \beta_i^*$, we get $P_i^* = \frac{-\nu^*}{X_i + V \beta_i} - \frac{1}{\theta_i}$
2. If $\lambda_i^* > \beta_i^*$, then we must have $\lambda_i^* > 0$ and we get $P_i^* = 0$
3. If $\lambda_i^* < \beta_i^*$, then we must have $\beta_i^* > 0$ and we get $P_i^* = P_i^{max}$
Similar results can be obtained for $P_s^*$. Combining these, we get:
$P_s^* = \Big[\frac{-\nu^*}{X_s + V \beta_s} - \frac{1}{\theta_s}\Big]_{P_s^{\mathcal{U}_k}}^{P_s^{max}}$, $\quad P_i^* = \Big[\frac{-\nu^*}{X_i + V \beta_i} - \frac{1}{\theta_i}\Big]_0^{P_i^{max}}$,
where $[X]_a^{b}$ denotes $\min[\max(X, a), b]$.
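As an illustration, the clipped expressions above can be evaluated numerically by a bisection search on $t = -\nu^* \geq 0$ chosen so that the rate constraint is met with equality, under the assumption that the target rate is feasible at the power limits. All channel gains, weights and limits in this sketch are made-up placeholders, not values from the paper.

```python
import math

def clip(x, lo, hi):
    return min(max(x, lo), hi)

# Hypothetical problem data (placeholders).
W, m, R = 1.0, 1.0, 2.0                  # bandwidth, slot length, rate target
h_sd, h_id = 0.8, [1.2, 0.6]             # channel gains
theta_s = m / W * abs(h_sd) ** 2
theta = [m / W * abs(h) ** 2 for h in h_id]
Xs, Vbs = 2.0, 1.0                       # X_s and V*beta_s
Xi, Vbi = [1.0, 3.0], [1.0, 1.0]         # X_i and V*beta_i
Ps_lo, Ps_max, Pi_max = 0.1, 5.0, 5.0    # P_s^{U_k}, P_s^max, P_i^max

def powers(t):
    """Clipped water-filling levels for a given t = -nu* >= 0."""
    Ps = clip(t / (Xs + Vbs) - 1.0 / theta_s, Ps_lo, Ps_max)
    Pi = [clip(t / (Xi[i] + Vbi[i]) - 1.0 / theta[i], 0.0, Pi_max)
          for i in range(len(theta))]
    return Ps, Pi

def rate(Ps, Pi):
    return math.log(1 + theta_s * Ps) + sum(math.log(1 + th * P)
                                            for th, P in zip(theta, Pi))

# Bisection on t = -nu*: the rate is non-decreasing in t, so the multiplier
# meeting the constraint with equality (if feasible) is found monotonically.
lo, hi = 0.0, 1e3
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if rate(*powers(mid)) < m * R / W:
        lo = mid
    else:
        hi = mid

Ps, Pi = powers(hi)
print("P_s* =", Ps, " P_i* =", Pi, " achieved rate =", rate(Ps, Pi))
```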
Appendix C – Solution to AF orthogonal using KKT conditions {#appendix-c-solution-to-af-orthogonal-using-kkt-conditions .unnumbered}
===========================================================
It is easy to see that the first constraint in (\[eq:afortho3\]) must be met with equality. The Lagrangian is given by: $$\begin{aligned}
\mathcal{L} = &\sum_{i\in \mathcal{R}_{s}} (X_i + V\beta_i) P_i - \sum_{i\in \mathcal{R}_{s}}\lambda_iP_i
+ \sum_{i\in \mathcal{R}_{s}} \beta_i(P_i - P_i^{max}) \\
& + \nu \Big[\sum_{i\in \mathcal{R}_{s}} \frac{P_s^2|h_{si}|^4 + P_s|h_{si}|^2W/m}{|h_{si}|^2P_s + |h_{id}|^2P_i + W/m}- \theta' \Big]\end{aligned}$$ The KKT conditions for all $i\in \mathcal{R}_{s}$ are: $$\begin{aligned}
& \lambda_i^* P_i^* = 0 \qquad \beta_i^* (P_i^* - P_i^{max}) = 0 \qquad \lambda_i^*, \beta_i^* \geq 0 \\
& (X_i + V \beta_i) - \lambda_i^* + \beta_i^* = \frac{\nu^*|h_{id}|^2(P_s^2|h_{si}|^4 + P_s|h_{si}|^2 W/m)}{(|h_{si}|^2P_s
+ |h_{id}|^2P_i^* + W/m)^2} \end{aligned}$$ If $\nu^* < 0$, then we must have that $\lambda_i^* - \beta_i^* > 0$ for all $i$. This would mean that $P_i^* = 0$. For some $\nu^* \geq
0$, we have three cases:
1. If $\lambda_i^* = \beta_i^*$, we get $P_i^* = \sqrt{\frac{\nu^*(P_s^2|h_{si}|^4 + P_s|h_{si}|^2 W/m)}{(X_i + V \beta_i)|h_{id}|^2}} -
\frac{P_s|h_{si}|^2 + W/m}{|h_{id}|^2}$
2. If $\lambda_i^* > \beta_i^*$, then we must have $\lambda_i^* > 0$ and we get $P_i^* = 0$
3. If $\lambda_i^* < \beta_i^*$, then we must have $\beta_i^* > 0$ and we get $P_i^* = P_i^{max}$
Combining these, we get:
$P_i^* = \Big[\sqrt{\frac{\nu^*(P_s^2|h_{si}|^4 + P_s|h_{si}|^2W/m)}{(X_i + V \beta_i)|h_{id}|^2}}
- \frac{P_s|h_{si}|^2 + W/m}{|h_{id}|^2}\Big]_0^{P_i^{max}}$, where $[X]_0^{P_{max}}$ denotes $\min[\max(X, 0), P_{max}]$.
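The same recipe gives a hedged numerical sketch for the AF case: for a fixed $\nu^*\geq 0$ the clipped expression above is evaluated directly, and since the constraint sum is decreasing in each $P_i$ while $P_i^*$ is increasing in $\nu^*$, the multiplier can again be located by bisection. The numbers below are placeholders, not values from the paper.

```python
import math

def clip(x, lo, hi):
    return min(max(x, lo), hi)

# Hypothetical data (placeholders).
W, m, P_s = 1.0, 1.0, 2.0
h_si, h_id = [0.9, 0.5], [1.1, 0.7]
Xi, Vbi = [1.0, 2.0], [0.5, 0.5]          # X_i and V*beta_i
Pi_max, theta_prime = 5.0, 0.8            # assumed constraint level theta'

def num(i):  # P_s^2 |h_si|^4 + P_s |h_si|^2 W/m
    g = abs(h_si[i]) ** 2
    return P_s ** 2 * g ** 2 + P_s * g * W / m

def Pi_star(nu):
    out = []
    for i in range(len(h_si)):
        gd = abs(h_id[i]) ** 2
        p = (math.sqrt(nu * num(i) / ((Xi[i] + Vbi[i]) * gd))
             - (P_s * abs(h_si[i]) ** 2 + W / m) / gd)
        out.append(clip(p, 0.0, Pi_max))
    return out

def constraint(P):
    return sum(num(i) / (abs(h_si[i]) ** 2 * P_s + abs(h_id[i]) ** 2 * P[i] + W / m)
               for i in range(len(P)))

lo, hi = 0.0, 1e4
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if constraint(Pi_star(mid)) > theta_prime:
        lo = mid          # need larger relay powers, i.e. larger nu*
    else:
        hi = mid
print("P_i* =", Pi_star(hi))
```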
[1]{}
G. Kramer, I. Maric, and R. D. Yates. Cooperative communications. *Foundations and Trends in Networking*, NOW Publishers, vol. 1, no. 3-4, 2006.
A. Scaglione, D. Goeckel, and J. N. Laneman. Cooperative communications in mobile ad-hoc networks: Rethinking the link abstraction. *IEEE Signal Processing Magazine*, vol. 23, no. 5, pp. 18-29, Sept. 2006.
J. N. Laneman, D. N. C. Tse, and G. W. Wornell. Cooperative diversity in wireless networks: Efficient protocols and outage behavior. *IEEE Trans. on Inform. Theory*, vol. 50, no. 12, pp. 3062-3080, Dec. 2004.
J. N. Laneman and G. W. Wornell. Distributed space-time coded protocols for exploiting cooperative diversity in wireless networks. *IEEE Trans. on Inform. Theory*, vol. 49, no. 10, pp. 2415-2425, Oct. 2003.
A. Sendonaris, E. Erkip, and B. Aazhang. User cooperation-Part 1: System description. *IEEE Trans. on Communications*, vol. 51, no. 11, pp. 1927-1938, Nov. 2003.
A. Sendonaris, E. Erkip, and B. Aazhang. User cooperation-Part 2: Implementation aspects and performance analysis. *IEEE Trans. on Communications*, vol. 51, no. 11, pp. 1939-1948, Nov. 2003.
M. Gastpar and M. Vetterli. On the capacity of large gaussian relay networks. *IEEE Trans. on Inform. Theory*, vol. 51, no. 3, pp. 765-779, March 2005.
G. Kramer, M. Gastpar, and P. Gupta. Cooperative strategies and capacity theorems for relay networks. *IEEE Trans. on Inform. Theory*, vol. 51, no. 9, pp. 3037-3063, Sep. 2005.
A. Høst-Madsen and J. Zhang. Capacity bounds and power allocation for wireless relay channels. *IEEE Trans. on Inform. Theory*, vol. 51, no. 6, pp. 2020-2050, June 2005.
M. O. Hasna and M.-S. Alouini. Optimal power allocation for relayed transmissions over rayleigh-fading channels. *IEEE Trans. on Wireless Comm.*, vol. 3, no. 6, pp. 1999-2004, Nov. 2004.
Y.-W. Hong, W.-J. Huang, F.-H. Chiu, and C.-C. J. Kuo. Cooperative communications in resource-constrained wireless networks. *IEEE Signal Processing Magazine*, vol. 24, pp. 47-57, May 2007.
I. Maric and R. Yates. Forwarding strategies for parallel-relay networks. *Proc. of CISS*, Mar. 2004.
I. Maric and R. D. Yates. Bandwidth and power allocation for cooperative strategies in gaussian relay networks. *38th Asilomar Conference On Signals, Systems and Computers*, Pacific Grove, CA, Nov. 2004.
D. Gündüz and E. Erkip. Opportunistic cooperation by dynamic resource allocation. *IEEE Trans. on Wireless Comm.*, vol. 6, no. 4, Apr. 2007.
M. Chen, S. Serbetli, and A. Yener. Distributed power allocation strategies for parallel relay networks. *IEEE Trans. on Wireless Comm.*, vol. 7, no. 2, pp. 552-561, Feb. 2008.
Y. Zhao, R. S. Adve, and T. J. Lim. Improving amplify-and-forward relay networks: Optimal power allocation versus selection. *IEEE Trans. on Wireless Comm.*, vol. 6, no. 8, pp. 3114-3123, Aug. 2007.
R. U. Nabar, H. Bölcskei, and F. W. Kneubühler. Fading relay channels: Performance limits and space-time signal design. *IEEE Journal on Selected Areas in Comm.*, vol. 22, no. 6, pp. 1099-1109, Aug. 2004.
E. Yeh and R. Berry. Throughput optimal control of cooperative relay networks. *IEEE Trans. on Inform. Theory*, vol. 53, no. 10, pp. 3827-3833, Oct. 2007.
S. V. Hanly and D. N. Tse. Multiple-access fading channels-Part II: Delay-limited capacities. *IEEE Trans. on Inform. Theory*, vol. 44, no. 7, pp. 2816-2831, Nov. 1998.
G. Caire, G. Taricco, and E. Biglieri. Optimum power control over fading channels. *IEEE Trans. on Inform. Theory*, vol. 45, no. 5, pp. 1468-1489, July 1999.
B. Sirkeci-Mergen, A. Scaglione, and G. Mergen. Asymptotic analysis of multistage cooperative broadcast in wireless networks. *IEEE Trans. on Inform. Theory*, vol. 52, no. 6, pp. 2531-2550, June 2006.
S. Borade, L. Zheng, and R. Gallager. Amplify and forward in wireless relay networks: Rate, diversity and network size. *IEEE Trans. on Inform. Theory, Special Issue on Relaying and Cooperation in Communication Networks*, vol. 53, no. 10, pp. 3302-3318, Oct. 2007.
M. J. Neely. Energy optimal control for time varying wireless networks. *IEEE Trans. on Inform. Theory*, vol. 52, no. 7, pp. 2915-2934, July 2006.
L. Georgiadis, M. J. Neely, and L. Tassiulas. Resource allocation and cross-layer control in wireless networks. *Foundations and Trends in Networking*, vol. 1, no. 1, pp. 1-149, 2006.
D. Tse and P. Viswanath. *Fundamentals of Wireless Communication*. Cambridge University Press, 2005.
D. P. Bertsekas. *Dynamic Programming and Optimal Control*. vol. 1&2, Athena Scientific, 2007.
S. Boyd and L. Vandenberghe. *Convex Optimization*. Cambridge University Press, 2004.
[^1]: We consider several protocol examples in Sec. \[section:2stage\]
[^2]: Note that the term $-Z_s(t) - V \alpha_s$ in the objective is a constant in any given slot and does not affect the solution. However, we keep it to compare the net cost between all modes of operation.
[^3]: For the non-orthogonal scenario, there will be two sources of outages: transmission failure at the physical layer and delay violation due to contention in medium access. Hence, MAC scheduling in addition to physical layer resource allocation must be considered. This is not the focus of the current work.
[^4]: This can be shown using the techniques developed in [@neely-energy].
|
---
abstract: 'Whether or not the solution to the 2D resistive MHD equations is globally smooth remains open. This paper establishes the global regularity of solutions to the 2D almost resistive MHD equations, whose dissipative operator $\mathcal{L}$ is weaker than any power of the fractional Laplacian. The result improves that of Fan et al. (Global Cauchy problem of 2D generalized MHD equations, Monatsh. Math., 175 (2014), pp. 127-131), which requires $\alpha>0, \beta=1$.'
author:
- |
Baoquan Yuan and Jiefeng Zhao[^1]\
School of Mathematics and Information Science,\
Henan Polytechnic University, Henan, 454000, China.\
(Email: bqyuan@hpu.edu.cn,zhaojiefeng001@163.com)
title: Global Regularity of 2D almost resistive MHD Equations
---
Almost resistive MHD equations, global regularity.
Introduction
============
Consider the Cauchy problem of the two-dimensional generalized magnetohydrodynamic equations: $$\begin{aligned}
\left\{
\begin{array}{llll}\label{eq}
u_t + u \cdot \nabla u = - \nabla p + b \cdot \nabla b - \nu \Lambda^{
2\alpha} u, \\
b_t + u \cdot \nabla b = b \cdot \nabla u - \kappa \Lambda^{2\beta} b,\\
\nabla \cdot u = \nabla \cdot b = 0, \\
u\left(x,0\right)=u_0\left(x\right),\,\,\, b\left(x,0\right)=b_0\left(x\right)
\end{array}\right.\end{aligned}$$ for $x\in \mathbb{R}^2$ and $t>0$, where $ u=u\left(x,t\right) $ is the velocity, $ b =b\left(x,t\right) $ the magnetic field, $ p =p\left(x,t\right) $ the pressure, and $
u_0\left(x\right),\,b_0\left(x\right) $ with $\mathrm{div}
u_0\left(x\right)=\mathrm{div} b_0\left(x\right)=0$ are the initial velocity and magnetic field, respectively. Here $\nu, \kappa,
\alpha, \beta \ge 0$ are nonnegative constants and $\Lambda=\sqrt{-\Delta}$.
The global regularity of the $d$-D GMHD equations (\[eq\]) has attracted a lot of attention and progress has been made in the last few years (see \[1-8, 12-20, 22, 23, 25, 26, 28-36\]). In the 2D case, it follows from [@CWY2014; @JiZ2015; @FMMNZ2014] that the problem (\[eq\]) has a unique global regular solution if $\alpha = 0, \ \beta > 1$ or $\alpha>0,\ \beta=1$. In the 2D or 3D case, there have been various results on partial regularity, Serrin type regularity criteria for weak solutions, and blow-up criteria for smooth solutions to the usual MHD equations, for example [@CKS1997; @CaW2010; @ChMZ2008; @ChMZ-2010; @HeX2005-1; @HeX2005-2; @LeZ2009]. Recently, some important progress has been made on the global well-posedness of the non-resistive MHD equations ($\kappa=0,\,\alpha=1$) near an equilibrium (see [@HuL2014; @LiXZ2015; @LiZ2014; @ReWXZ2014; @XuZ2015; @Zh2014]). Local existence for the 2D non-resistive MHD equations in rough spaces has been obtained in [@JiN2006; @FMRR2014; @CMRR2016; @FMRR2016]. For results on the global regularity of the 2D MHD equations with partial viscosity and resistivity we refer to [@CaRW2013; @CaW2011]. To the best of our knowledge, whether or not there exists a global regular solution for the 2D resistive MHD equations ($\nu=0,\,\beta=1$) is still an open problem.
In this paper, motivated by [@CoV2012], we are concerned with the following 2D GMHD $$\begin{aligned}
\left\{
\begin{array}{llll}\label{eq2}
u_t + u \cdot \nabla u +\mathcal{L}u = - \nabla p + b \cdot \nabla b, \\
b_t + u \cdot \nabla b -\Delta b= b \cdot \nabla u ,\\
\nabla \cdot u = \nabla \cdot b = 0, \\
u\left(x,0\right)=u_0\left(x\right),\,\,\,
b\left(x,0\right)=b_0\left(x\right).
\end{array}\right.\end{aligned}$$ where $\mathcal{L}$ is the dissipative operator with $$\label{c1}
\mathcal{L}u(x)=P.V.\int_{\mathbb{R}^2} \frac{u(x)-u(x-y)}{|y|^2m(|y|)} \mathrm dy.$$ Here $m:[0,\infty)\rightarrow [0,\infty)$ is a smooth, non-decreasing function that behaves like $\frac{1}{(-\log{r})^{1+\varepsilon_1}}$ for sufficiently small $r$ with $\varepsilon_1>0$ and that grows at least at the rate of $(\log{r})^{1+\varepsilon_2}$ for sufficiently large $r$ with $\varepsilon_2>0$, satisfying $$\label{c2}
\int_0^1\frac{m(r)}{r} \mathrm dr<\infty$$ and the doubling condition $$\label{c3}
m(2r)<cm(r)$$ for some positive constant $c$.
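As a quick sanity check, condition (\[c2\]) can be verified numerically for a model profile with the stated logarithmic behaviour near the origin and compared with the power-law profile $m(r)=r^{2\alpha}$ corresponding to the fractional Laplacian; the two profiles below are illustrative choices, not the precise $m$ of the paper.

```python
import math

def m_log(r, eps=0.5):        # ~ 1/(-log r)^{1+eps} near r = 0 (the operator L)
    return 1.0 / (-math.log(r)) ** (1.0 + eps)

def m_frac(r, alpha=0.1):     # ~ r^{2 alpha} (the fractional Laplacian case)
    return r ** (2.0 * alpha)

def integral_c2(m_fun, n=200000, ds=0.001):
    """Approximate int_0^{1/2} m(r)/r dr via the substitution r = exp(-s)."""
    s0 = math.log(2.0)
    return sum(m_fun(math.exp(-(s0 + (k + 0.5) * ds))) for k in range(n)) * ds

print("log-type m  :", integral_c2(m_log))    # finite, so (c2) holds
print("power-type m:", integral_c2(m_frac))   # also finite, but m_frac -> 0 much
# faster as r -> 0, so its kernel 1/(|y|^2 m(|y|)) is larger near the origin:
# the fractional Laplacian dissipates more strongly than L.
```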
The main result of this paper is stated as follows.
\[thm1\] Let $m(r)$ satisfy (\[c2\])–(\[c3\]) and $\rho \geqslant 4$. Assume that $u_0, b_0 \in H^\rho(\mathbb R^2)$ with $\mathrm{div}
u_0=\mathrm{div} b_0=0$. Then for any $T>0$, the Cauchy problem (\[eq2\]) has a unique regular solution $$(u,b)\in C([0,T];H^\rho(\mathbb R^2)) \,\,\mbox{and}\,\, b\in L^2([0,T];H^{\rho+2}(\mathbb R^2)).$$
The existence and uniqueness are standard, so we omit their proofs and only give the a priori estimates.
Due to $$\Lambda^{2\alpha}u(x)=c_\alpha P.V.\int_{\mathbb{R}^2} \frac{u(x)-u(x-y)}{|y|^{2+2\alpha}} \mathrm dy$$ for $\alpha\in(0,1)$ (see [@CoC2004]), the dissipative operator $\mathcal{L}$ defined in Theorem \[thm1\] is weaker than any power of the fractional Laplacian. Thus we improve the results in [@FMMNZ2014] for the equations (\[eq\]), which require $\alpha>0, \beta=1$.
Inspired by the works [@CoV2012; @KN2010], if (\[c2\]) is replaced by the weaker condition $\lim_{r\rightarrow 0^+}m(r)=0$, then we can obtain the global regularity of solutions to (\[eq2\]) with arbitrarily weak dissipation $\mathcal{L}$ (see Remark \[rmk3\]).
By virtue of Remark \[mr1\] and Section 3, it suffices to require $u_0, b_0 \in H^\rho(\mathbb R^2)$ with $\rho > 3$.
For the 2D GMHD equations (\[eq2\]), it remains an open problem whether there exists a global smooth solution without the dissipative operator $\mathcal{L}$.
Preliminaries {#sec:the preparations}
=============
Let us first consider the heat equation $$\begin{aligned}
\left\{
\begin{array}{llll}
v_t -\Delta v = f,
\\
v\left(x,0\right)=v_0(x).
\end{array}\right.\end{aligned}$$ As is well known, $$\begin{aligned}
v\left(x,t\right)&=& e^{t\Delta}v_0+\int^t_0 e^{(t-s)\Delta}f(\cdot,s)\mathrm{d}s\nonumber\\
&=&h(\cdot,t)*v_0+\int^t_0 h(\cdot,t-s)*f(\cdot,s)\mathrm{d}s,\end{aligned}$$ where $h(x,t)=\frac{1}{(4\pi t)^{\frac{d}{2}}}e^{\frac{-|x|^2}{4t}}$.
Recall the following maximal $L^p(L^q)$ regularity theorem for the heat kernel.
[([@Le2002])]{}\[lem1\] Assume $f\in L^p((0,T),L^q(\mathbb{R}^d))$ $(1<p,q<\infty)$. Let $$A:f\mapsto A f(x,t)=\int^t_0 e^{(t-s)\Delta}\Delta f(\cdot,s)\mathrm{d}s,$$ then $$\left\|Af\right\|_{L^p((0,T),L^q(\mathbb{R}^d))}\leqslant C \left\|f\right\|_{L^p((0,T),L^q(\mathbb{R}^d))}$$ for every $T\in(0,\infty]$ and some positive constant $C$ (independent of $T$).
\[lem2\] Let $u_0, b_0 \in L^2(\mathbb R^2)$. Then for any $T>0$ and $0<t<T$, we have $$\begin{aligned}
&& \left\| u \right\|_{L^2}^2 \left( t \right) + \left\| b
\right\|_{L^2}^2 \left( t \right)\\ && +\frac{1}{2}\int_0^t \int_{\mathbb{R}^2}\int_{\mathbb{R}^2} \frac{\left|u(x,\tau)-u(x-y,\tau)\right|^2}{|y|^2m(|y|)} \mathrm dx\mathrm dy\mathrm{d} \tau +\int_0^t \left\| \nabla b
\right\|_{L^2}^2 \mathrm{d} \tau \leqslant \left\| u_0 \right\|_{L^2}^2 + \left\| b_0
\right\|_{L^2}^2 .
\end{aligned}$$
Due to $$\int_{\mathbb{R}^2}u\mathcal{L}u\mathrm{d}x=\frac{1}{2}\int_{\mathbb{R}^2}\int_{\mathbb{R}^2} \frac{\left|u(x,t)-u(x-y,t)\right|^2}{|y|^2m(|y|)} \mathrm dx\mathrm dy,$$ we get the Lemma \[lem2\] easily by the standard $L^2$-energy estimates.
Denote by $\omega = \nabla^{\bot} \cdot u = - \partial_2 u_1 +
\partial_1 u_2$ the vorticity of the velocity field and by $j = \nabla^{\bot} \cdot b = - \partial_2 b_1 + \partial_1 b_2$ the current of the magnetic field. Applying $\nabla^{\bot} \cdot$ to both sides of the equations (\[eq2\]), we obtain the following equations for $\omega$ and $j$: $$\begin{aligned}
\omega_t + u \cdot \nabla \omega + \mathcal{L}\omega & = & b \cdot \nabla j, \label{eq:omega-L2}\\
j_t + u \cdot \nabla j-\triangle j & = & b \cdot \nabla \omega + T \left( \nabla u,
\nabla b \right) , \label{eq:j-L2}
\end{aligned}$$ where $$T \left( \nabla u, \nabla b \right) = {\color{black} 2 \partial_1 b_1
\left( \partial_1 u_2 + \partial_2 u_1 \right) + 2 \partial_2 u_2 \left(
\partial_1 b_2 + \partial_2 b_1 \right)} .$$
\[lem3\] Let $u_0, b_0 \in H^1(\mathbb R^2)$. Then for any $T>0$ and $0<t<T$, we have $$\begin{aligned}
&&\left\| \omega \right\|_{L^2}^2 \left( t \right) + \left\| j
\right\|_{L^2}^2 \left( t \right) \\ \nonumber &&+\int_0^t \int_{\mathbb{R}^2}\int_{\mathbb{R}^2} \frac{\left|\omega(x,\tau)-\omega(x-y,\tau)\right|^2}{|y|^2m(|y|)} \mathrm dx\mathrm dy\mathrm{d} \tau +\int_0^t \left\| \nabla j
\right\|_{L^2}^2 \mathrm d\tau \leqslant C \left( T
\right).
\end{aligned}$$
Multiplying (\[eq:omega-L2\]) by $\omega$ and (\[eq:j-L2\]) by $j$ respectively, integrating and adding together, we have $$\begin{aligned}
& & \frac{1}{2} \frac{\mathrm{d}}{\mathrm{d} t} (\left\|\omega\right\|_{L^2}^2+ \left\|j\right\|_{L^2}^2)+\frac{1}{2}\int_{\mathbb{R}^2}\int_{\mathbb{R}^2} \frac{\left|\omega(x,t)-\omega(x-y,t)\right|^2}{|y|^2m(|y|)} \mathrm dx\mathrm dy +\left\|\nabla j\right\|_{L^2}^2\\
&=& \int_{\mathbb{R}^2} b \cdot\nabla j \, \omega \mathrm{d} x
+\int_{\mathbb{R}^2} b\cdot\nabla \omega \, j\mathrm{d} x+ \int_{\mathbb{R}^2} T \left( \nabla u, \nabla b \right)j\mathrm{d} x\\
&=&\int_{\mathbb{R}^2} T \left( \nabla u, \nabla b \right)j\mathrm{d} x\\
&\leqslant&C\left\|\nabla u\right\|_{L^2}\left\|j\right\|_{L^4}^2\\
& \leqslant& C\left\|\omega\right\|_{L^2}^2\left\|j\right\|_{L^2}^2+\frac{1}{2}\left\|\nabla j\right\|_{L^2}^2,\end{aligned}$$ where the Gagliardo-Nirenberg inequality has been used in the last inequality.
Thus, we have $$\begin{aligned}
& &\frac{\mathrm{d}}{\mathrm{d} t} (\left\|\omega\right\|_{L^2}^2+ \left\|j\right\|_{L^2}^2)+\int_{\mathbb{R}^2}\int_{\mathbb{R}^2} \frac{\left|\omega(x,t)-\omega(x-y,t)\right|^2}{|y|^2m(|y|)} \mathrm dx\mathrm dy +\left\|\nabla j\right\|_{L^2}^2\\
& \leqslant& C\left\|\omega\right\|_{L^2}^2\left\|j\right\|_{L^2}^2.\end{aligned}$$ By taking advantage of Gronwall inequality and Lemma \[lem2\], we complete the proof of Lemma \[lem3\].
\[lem4\] Let $u_0, b_0 \in H^2(\mathbb R^2)$. Then for any $T>0$ and $0<t<T$, we have $$b\in L^{\infty}((0,T);L^{\infty}(\mathbb{R}^2)),\,\, \nabla b\in L^{p}((0,T);L^{q}(\mathbb{R}^2)).$$ for any $p,q\in(2,\infty)$.
$(\ref{eq2})_2$ can be written as $$\label{6.4mhd}
b_t-\Delta b=\sum_{i=1}^2\partial_i ( b_i u-u_i b).$$ Due to $b_i u-u_i b\in L^{\infty}((0,T);L^{p}(\mathbb{R}^2))$ and Lemma \[lem1\], we obtain Lemma \[lem4\].
\[lem5\] Let $u_0, b_0 \in H^2(\mathbb R^2)$. Then for any $T>0$ and $0<t<T$, we have $$\omega\in L^{\infty}((0,T);L^{p}(\mathbb{R}^2))$$ for any $p\in(2,\infty)$.
By virtue of $$\int_{\mathbb{R}^2}|\omega|^{p-2}\omega(x)\mathcal{L}\omega(x)\mathrm{d}x\geqslant 0$$ for all $2\leqslant p<\infty$ (see [@CoV2012]), the proof of Lemma \[lem5\] can be obtained similarly to [@FMMNZ2014].
Equation (\[eq:j-L2\]) can be rewritten as $$j_t-\Delta j=\sum_{i=1}^2\partial_i (b_i \omega-u_i j)+T(\nabla u,\nabla b).$$ Similar to Lemma \[lem4\], we have the following lemma.
\[lem6\] Let $u_0, b_0 \in H^2(\mathbb R^2)$. Then for any $T>0$ and $0<t<T$, we have $$j\in L^{\infty}((0,T);L^{r}(\mathbb{R}^2)),\,\,\nabla j\in L^{q}((0,T);L^{p}(\mathbb{R}^2))$$ for any $p,q\in(2,\infty)$ and $r\in(2,\infty]$.
Exploiting the structure of the equations, we can obtain further estimates.
\[lem7\] Let $p,q\in[2,\infty)$, $r\in[2,\infty]$. Assume $u_0,b_0\in H^4(\mathbb{R}^2)$, then for any $T>0$, we have $$\begin{aligned}
& &\nabla j\in L^{\infty}((0,T);L^{p}(\mathbb{R}^2)),\,\,\Delta b+b \cdot \nabla u\in L^{\infty}((0,T);L^{r}(\mathbb{R}^2)),\label{ic1}\\
& &\nabla(\Delta b+b \cdot \nabla u)\in L^{q}((0,T);L^{p}(\mathbb{R}^2))\label{ic2}.\end{aligned}$$
\[mr1\] In fact, the estimates (\[ic1\])–(\[ic2\]) need only $u_0\in H^{2+\epsilon_3}(\mathbb{R}^2),\,b_0\in H^{2+\epsilon_3}(\mathbb{R}^2)$ with $\epsilon_3>0$.
Concerning the 2D resistive MHD equations, we still obtain the estimates (\[ic1\]) and (\[ic2\]).
Consider a 2D Euler equation with a nonlocal force, $$\omega_t+u\cdot\nabla\omega=-\partial_yu_1=\mathcal{R}_{22}\omega,$$ where $\mathcal{R}_{ij}\omega$ denotes the Riesz transform $\partial_{ij}\Lambda^{-2}\omega$. Elgindi and Masmoudi [@ElM2014] proved that it is mildly ill-posed in $L^\infty$.
Similarly to the vorticity equation (\[eq:omega-L2\]), we have $$\begin{aligned}
\label{eq:omega1}
\omega_t + u \cdot \nabla \omega &=& b_1( \Delta b_2+b \cdot \nabla u_2)-b_2( \Delta b_1+b \cdot \nabla u_1)-b_1 b \cdot \nabla u_2+b_2 b \cdot \nabla u_1\nonumber\\
&=& f+b_1 \sum _{i=1}^2 b_i\mathcal{R}_{i2}\omega -b_2 \sum_{i=1}^{2}b_i\mathcal{R}_{i1}\omega,
\end{aligned}$$ where $f=b_1( \Delta b_2+b \cdot \nabla u_2)-b_2( \Delta b_1+b \cdot \nabla u_1)\in L^\infty(0,T;L^\infty(\mathbb{R}^2))$. So the results in [@ElM2014] suggest that it may be mildly ill-posed in $L^\infty$ in the case of 2D resistive MHD.
Applying $b\cdot \nabla$ and $\Delta$ to $\eqref{eq2}_1$ and $\eqref{eq2}_2$ respectively, and multiplying $\eqref{eq2}_2$ by $\nabla u$, then adding the resulting equations together we obtain $$\begin{aligned}
& &(\Delta b+b \cdot \nabla u)_t-\Delta(\Delta b+b \cdot \nabla u)\nonumber\\
&=&-b\cdot\nabla(u\cdot\nabla u)+b\cdot\nabla(b\cdot\nabla b)-(u\cdot\nabla b)\cdot\nabla u+(b\cdot\nabla u)\cdot\nabla u\nonumber\\
& &+\Delta b\cdot\nabla u
-b\cdot\nabla(\nabla p) -\Delta(u \cdot \nabla b)-b\cdot\nabla\mathcal{L}u.\label{equb}\end{aligned}$$ Firstly, we give the following estimates $$\Delta b+b \cdot \nabla u\in L^{\infty}((0,T);L^{2}(\mathbb{R}^2))\bigcap L^{2}((0,T);H^{1}(\mathbb{R}^2)).$$ Multiplying (\[equb\]) by $\Delta b+b \cdot \nabla u$ and integrating over $\mathbb{R}^2$, we have $$\begin{aligned}
&&\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d} t} \left\| \Delta b+b \cdot \nabla u \right\|_{L^2}^2+
\left\|\nabla(\Delta b+b \cdot \nabla u)\right\|_{L^2}^2 \\
&=&-\int_{\mathbb{R}^2} b\cdot\nabla(u\cdot\nabla u)(\Delta b+b \cdot \nabla u)\mathrm{d}x
+\int_{\mathbb{R}^2} b\cdot\nabla(b\cdot\nabla b)(\Delta b+b \cdot \nabla u)\mathrm{d}x\\
& &-\int_{\mathbb{R}^2} (u\cdot\nabla b)\cdot\nabla u(\Delta b+b \cdot \nabla u)\mathrm{d}x
+\int_{\mathbb{R}^2} (b\cdot\nabla u)\cdot\nabla u(\Delta b+b \cdot \nabla u)\mathrm{d}x\\
& &+\int_{\mathbb{R}^2} \Delta b\cdot\nabla u(\Delta b+b \cdot \nabla u)\mathrm{d}x
-\int_{\mathbb{R}^2} b\cdot\nabla(\nabla p)(\Delta b+b \cdot \nabla u)\mathrm{d}x\\
& &-\int_{\mathbb{R}^2} \Delta(u \cdot \nabla b)(\Delta b+b \cdot \nabla u)\mathrm{d}x
-\int_{\mathbb{R}^2} b\cdot\nabla\mathcal{L}u(\Delta b+b \cdot \nabla u)\mathrm{d}x\\
&=&RHS.\end{aligned}$$ Thanks to $$\begin{aligned}
\label{iequb}
&&\left\|b_i\mathcal{L}u\right\|_{L^2}\nonumber\\
&=&\left(\int_{\mathbb{R}^2}\left|b_i(x)P.V.\int_{\mathbb{R}^2} \frac{u(x)-u(x-y)}{|y|^2m(|y|)}
\mathrm dy\right|^2\mathrm{d}x\right)^{\frac{1}{2}}\nonumber\\
&=&\left(\int_{\mathbb{R}^2}\left|b_i(x)P.V.\int_{|y|\leq1} \frac{u(x)-u(x-y)}{|y|^2m(|y|)}
\mathrm dy+b_i(x)P.V.\int_{|y|\geq1} \frac{u(x)-u(x-y)}{|y|^2m(|y|)}
\mathrm dy\right|^2\mathrm{d}x\right)^{\frac{1}{2}}\nonumber\\
&\leqslant&C\left(\int_{\mathbb{R}^2}\left|b_i(x)\int_{|y|\leq1}\int_0^1 \frac{|(\nabla u)(x-(1-t)y)|}{|y|m(|y|)}\mathrm dt
\mathrm dy\right|^2\mathrm{d}x\right)^{\frac{1}{2}}\nonumber\\
&&+C\left\|b\right\|_{L^2}\left\|u\right\|_{L^\infty}\int_{|y|\geq1} \frac{1}{|y|^2m(|y|)}
\mathrm dy\nonumber\\
&\leqslant&C (\left\|b\right\|_{L^\infty}\left\|\nabla u\right\|_{L^2}+\left\|b\right\|_{L^2}\left\|u\right\|_{L^\infty})\end{aligned}$$ and Lemmas 4-6, the right hand side above can be simply estimated as follows $$\begin{aligned}
RHS &\leqslant&(\left\| b\right\|_{L^6}\left\| u\right\|_{L^6}\left\| \nabla u\right\|_{L^6}+\left\| b\right\|_{L^6}^2\left\| \nabla b\right\|_{L^6})\left\|\nabla(\Delta b+b \cdot \nabla u)\right\|_{L^2}\\
& &+(\left\| u\right\|_{L^6}\left\|\nabla b\right\|_{L^6}+\left\| b\right\|_{L^6}\left\|\nabla u\right\|_{L^6})\left\|\nabla u\right\|_{L^6}\left\|\Delta b+b \cdot \nabla u\right\|_{L^2}\\
& &+\left\|\Delta b\right\|_{L^4}\left\|\nabla u\right\|_{L^4}\left\|\Delta b+b \cdot \nabla u\right\|_{L^2}
+\left\|b\right\|_{L^4}\left\|\nabla p\right\|_{L^4}\left\|\nabla(\Delta b+b \cdot \nabla u)\right\|_{L^2}\\
& &+(\left\|\nabla u\right\|_{L^4}\left\|\nabla b\right\|_{L^4}+\left\|u\right\|_{L^4}\left\|\nabla^2 b\right\|_{L^4})\left\|\nabla(\Delta b+b \cdot \nabla u)\right\|_{L^2}\\
& &+C (\left\|b\right\|_{L^\infty}\left\|\nabla u\right\|_{L^2}+\left\|b\right\|_{L^2}\left\|u\right\|_{L^\infty})\left\|\nabla(\Delta b+b \cdot \nabla u)\right\|_{L^2}\\
&\leqslant& c(t)(\left\|\Delta b+b \cdot \nabla u\right\|_{L^2}+C(T))+\frac{1}{2}\left\|\nabla(\Delta b+b \cdot \nabla u)\right\|_{L^2}^2\end{aligned}$$ where $c(t)\in L^p(0,T)$ $(2\leqslant p<\infty)$. Taking advantage of the Gronwall inequality, we get the result.
Secondly, we prove the following estimates $$\label{eqi}
\Delta b+b \cdot \nabla u\in L^{\infty}((0,T);L^{p}(\mathbb{R}^2))\,\,(2<p<\infty).$$ Thus, $\Delta b\in L^{\infty}((0,T);L^{p}(\mathbb{R}^2))\,\,(2<p<\infty).$
Equation (\[equb\]) can be written as $$\begin{aligned}
\label{6.16mhd}
(\Delta b+b \cdot \nabla u)\left(x,t\right)\triangleq I_1+I_2+I_3+I_4+I_5+I_6+I_7+I_8+I_9,\end{aligned}$$ where\
$I_1=h(\cdot,t)*(\Delta b_0+b_0 \cdot \nabla u_0)$, $I_2=-\int^t_0 h(\cdot,t-s)*(b\cdot\nabla(u\cdot\nabla u))(\cdot,s)\mathrm{d}s$,\
$I_3=\int^t_0 h(\cdot,t-s)*(b\cdot\nabla(b\cdot\nabla b))(\cdot,s)\mathrm{d}s$, $I_4=-\int^t_0 h(\cdot,t-s)*((u\cdot\nabla b)\cdot\nabla u ) (\cdot,s)\mathrm{d}s$, $I_5=\int^t_0 h(\cdot,t-s)*((b\cdot\nabla u)\cdot\nabla u)(\cdot,s)\mathrm{d}s$, $I_6=\int^t_0 h(\cdot,t-s)*(\Delta b\cdot\nabla u)(\cdot,s)\mathrm{d}s$, $I_7=-\int^t_0 h(\cdot,t-s)*(b\cdot\nabla(\nabla p))(\cdot,s)\mathrm{d}s$, $I_8=-\int^t_0 h(\cdot,t-s)*(\Delta(u \cdot \nabla b))(\cdot,s)\mathrm{d}s$, $I_9=-\int^t_0 h(\cdot,t-s)*( b\cdot\nabla\mathcal{L}u)(\cdot,s)\mathrm{d}s$.
Then $$\begin{aligned}
\label{esI}
\left\| I_1\right\|_{L^{\infty}((0,T);L^{p}(\mathbb{R}^2))}&=&\left\|h(\cdot,t)*(\Delta b_0+b_0 \cdot \nabla u_0)\right\|_{L^{\infty}((0,T);L^{p}(\mathbb{R}^2))}\nonumber\\
&\leqslant& C\left\| h\right\|_{L^{\infty}((0,T);L^{1}(\mathbb{R}^2))}\left\|\Delta b_0+b_0 \cdot \nabla u_0\right\|_{L^{p}(\mathbb{R}^2)}\nonumber\\
&\leqslant& C\left\|\nabla^2 b_0\right\|_{L^{p}(\mathbb{R}^2)}
+C\left\| b_0\right\|_{L^{2p}(\mathbb{R}^2)}\left\|\nabla u_0\right\|_{L^{2p}(\mathbb{R}^2)}\nonumber\\
&\leqslant&C(\left\| u_0\right\|_{H^{\rho}(\mathbb{R}^2)}+\left\| b_0\right\|_{H^{\rho}(\mathbb{R}^2)}).\end{aligned}$$ $$\begin{aligned}
\left\| I_2\right\|_{L^{\infty}((0,T);L^{p}(\mathbb{R}^2))}&=&\left\|\int^t_0 h(\cdot,t-s)*(b\cdot\nabla(u\cdot\nabla u))(\cdot,s)\mathrm{d}s\right\|_{L^{\infty}((0,T);L^{p}(\mathbb{R}^2))}\\
&\leqslant& C\left\|\nabla h\right\|_{L^{1}((0,T);L^{1}(\mathbb{R}^2))}\left\|bu\nabla u\right\|_{L^{\infty}((0,T);L^{p}(\mathbb{R}^2))}\\
&\leqslant& C\left\|b\right\|_{L^{\infty}((0,T);L^{\infty}(\mathbb{R}^2))}\left\|u\right\|_{L^{\infty}((0,T);L^{\infty}(\mathbb{R}^2))}
\left\|\nabla u\right\|_{L^{\infty}((0,T);L^{p}(\mathbb{R}^2))}\\
&\leqslant& C(T).\end{aligned}$$ Arguing similarly to the above, one can derive $\left\| I_3\right\|_{L^{\infty}((0,T);L^{p}(\mathbb{R}^2))}\leqslant C(T)$, $\left\| I_7\right\|_{L^{\infty}((0,T);L^{p}(\mathbb{R}^2))}\leqslant C(T)$. Using the argument that yields the estimate (\[iequb\]), we have $\left\|b_i\mathcal{L}u\right\|_{L^{\infty}((0,T);L^{p}(\mathbb{R}^2))}\leqslant C(T)$, so $\left\| I_9\right\|_{L^{\infty}((0,T);L^{p}(\mathbb{R}^2))}\leqslant C(T)$.
For $I_4$, we obtain $$\begin{aligned}
\left\| I_4\right\|_{L^{\infty}((0,T);L^{p}(\mathbb{R}^2))}&=&\left\| \int^t_0 h(\cdot,t-s)*((u\cdot\nabla b)\cdot\nabla u ) (\cdot,s)\mathrm{d}s\right\|_{L^{\infty}((0,T);L^{p}(\mathbb{R}^2))}\\
&\leqslant& C\left\| h\right\|_{L^{1}((0,T);L^{1}(\mathbb{R}^2))}\left\|(u\cdot\nabla b)\cdot\nabla u\right\|_{L^{\infty}((0,T);L^{p}(\mathbb{R}^2))}\\
&\leqslant& C\left\|u\right\|_{L^{\infty}((0,T);L^{3p}(\mathbb{R}^2))}\left\|\nabla b\right\|_{L^{\infty}((0,T);L^{3p}(\mathbb{R}^2))}
\left\|\nabla u\right\|_{L^{\infty}((0,T);L^{3p}(\mathbb{R}^2))}\\
&\leqslant& C(T).\end{aligned}$$ Similarly, $\left\| I_5\right\|_{L^{\infty}((0,T);L^{p}(\mathbb{R}^2))}\leqslant C(T)$.
Choosing $2<q<\infty$, one has $$\begin{aligned}
\left\| I_6\right\|_{L^{\infty}((0,T);L^{p}(\mathbb{R}^2))}&=&\left\| \int^t_0 h(\cdot,t-s)*(\Delta b\cdot\nabla u)(\cdot,s)\mathrm{d}s\right\|_{L^{\infty}((0,T);L^{p}(\mathbb{R}^2))}\\
&\leqslant& C\left\| h\right\|_{L^{q'}((0,T);L^{1}(\mathbb{R}^2))}\left\|\Delta b\cdot\nabla u\right\|_{L^{q}((0,T);L^{p}(\mathbb{R}^2))}\\
&\leqslant& C\left\|\Delta b\right\|_{L^{2q}((0,T);L^{2p}(\mathbb{R}^2))}
\left\|\nabla u\right\|_{L^{2q}((0,T);L^{2p}(\mathbb{R}^2))}\\
&\leqslant& C(T),\end{aligned}$$ $$\begin{aligned}
\left\| I_8\right\|_{L^{\infty}((0,T);L^{p}(\mathbb{R}^2))}&=&\left\| \int^t_0 h(\cdot,t-s)*(\Delta(u \cdot \nabla b))(\cdot,s)\mathrm{d}s\right\|_{L^{\infty}((0,T);L^{p}(\mathbb{R}^2))}\\
&\leqslant& C\left\|\nabla h\right\|_{L^{q'}((0,T);L^{1}(\mathbb{R}^2))}\left\|\nabla(u\nabla b)\right\|_{L^{q}((0,T);L^{p}(\mathbb{R}^2))}\\
&\leqslant& C\left\|\nabla u\right\|_{L^{2q}((0,T);L^{2p}(\mathbb{R}^2))}
\left\|\nabla b\right\|_{L^{2q}((0,T);L^{2p}(\mathbb{R}^2))}\\
&& +C\left\|u\right\|_{L^{2q}((0,T);L^{2p}(\mathbb{R}^2))}
\left\|\nabla^2 b\right\|_{L^{2q}((0,T);L^{2p}(\mathbb{R}^2))}\\
&\leqslant& C(T),\end{aligned}$$ where $q$ and $q'$ satisfy $\frac{1}{q}+\frac{1}{q'}=1$ and $q'<2$. So we arrive at (\[eqi\]).
Thirdly, we prove $$\label{eqi1}
\Delta b+b \cdot \nabla u\in L^{\infty}((0,T);L^{\infty}(\mathbb{R}^2)).$$ For $I_1$, similarly to (\[esI\]), we have $$\left\| I_1\right\|_{L^{\infty}((0,T);L^{\infty}(\mathbb{R}^2))}
\leqslant C(\left\| u_0\right\|_{H^{\rho}(\mathbb{R}^2)}+\left\| b_0\right\|_{H^{\rho}(\mathbb{R}^2)}).$$
Let $2<p_1<\infty$ and $\frac{1}{p_1}+\frac{1}{p'_1}=1$. $$\begin{aligned}
\left\| I_2\right\|_{L^{\infty}((0,T);L^{\infty}(\mathbb{R}^2))}&=&\left\|\int^t_0 h(\cdot,t-s)*(b\cdot\nabla(u\cdot\nabla u))(\cdot,s)\mathrm{d}s\right\|_{L^{\infty}((0,T);L^{\infty}(\mathbb{R}^2))}\\
&\leqslant& C\left\|\nabla h\right\|_{L^{1}((0,T);L^{p'_1}(\mathbb{R}^2))}\left\|bu\nabla u\right\|_{L^{\infty}((0,T);L^{p_1}(\mathbb{R}^2))}\\
&\leqslant& C\left\|b\right\|_{L^{\infty}((0,T);L^{\infty}(\mathbb{R}^2))}\left\|u\right\|_{L^{\infty}((0,T);L^{\infty}(\mathbb{R}^2))}
\left\|\nabla u\right\|_{L^{\infty}((0,T);L^{p_1}(\mathbb{R}^2))}\\
&\leqslant& C(T).\end{aligned}$$ Similarly, $\left\| I_3\right\|_{L^{\infty}((0,T);L^{\infty}(\mathbb{R}^2))}\leqslant C(T)$, $\left\| I_7\right\|_{L^{\infty}((0,T);L^{\infty}(\mathbb{R}^2))}\leqslant C(T)$, $\left\| I_8\right\|_{L^{\infty}((0,T);L^{\infty}(\mathbb{R}^2))}\leqslant C(T)$, $\left\| I_9\right\|_{L^{\infty}((0,T);L^{\infty}(\mathbb{R}^2))}\leqslant C(T)$.
For $I_4$, we have $$\begin{aligned}
\left\| I_4\right\|_{L^{\infty}((0,T);L^{\infty}(\mathbb{R}^2))}&=&\left\| \int^t_0 h(\cdot,t-s)*((u\cdot\nabla b)\cdot\nabla u ) (\cdot,s)\mathrm{d}s\right\|_{L^{\infty}((0,T);L^{\infty}(\mathbb{R}^2))}\\
&\leqslant& C\left\| h\right\|_{L^{1}((0,T);L^{p'_1}(\mathbb{R}^2))}\left\|(u\cdot\nabla b)\cdot\nabla u\right\|_{L^{\infty}((0,T);L^{p_1}(\mathbb{R}^2))}\\
&\leqslant& C\left\|u\right\|_{L^{\infty}((0,T);L^{3p_1}(\mathbb{R}^2))}\left\|\nabla b\right\|_{L^{\infty}((0,T);L^{3p_1}(\mathbb{R}^2))}
\left\|\nabla u\right\|_{L^{\infty}((0,T);L^{3p_1}(\mathbb{R}^2))}\\
&\leqslant& C(T),\end{aligned}$$ Similarly, $\left\| I_5\right\|_{L^{\infty}((0,T);L^{\infty}(\mathbb{R}^2))}\leqslant C(T)$, $\left\| I_6\right\|_{L^{\infty}((0,T);L^{\infty}(\mathbb{R}^2))}\leqslant C(T)$. Thus (\[eqi1\]) is proved.
Finally, we prove (\[ic2\]).
For $\nabla I_1$, we have $$\begin{aligned}
\left\|\nabla I_1\right\|_{L^{q}((0,T);L^{p}(\mathbb{R}^2))}&=&\left\|\nabla(h(\cdot,t)*(\Delta b_0+b_0 \cdot \nabla u_0))\right\|_{L^{q}((0,T);L^{p}(\mathbb{R}^2))}\\
&\leqslant& C\left\| h\right\|_{L^{q}((0,T);L^{1}(\mathbb{R}^2))}\left\|\nabla(\Delta b_0+b_0 \cdot \nabla u_0)\right\|_{L^{p}(\mathbb{R}^2)}\\
&\leqslant& C\left\|\nabla^3 b_0\right\|_{L^{p}(\mathbb{R}^2)}
+C\left\|\nabla b_0\right\|_{L^{2p}(\mathbb{R}^2)}\left\|\nabla u_0\right\|_{L^{2p}(\mathbb{R}^2)}\\
& &+C\left\| b_0\right\|_{L^{2p}(\mathbb{R}^2)}\left\|\nabla^2 u_0\right\|_{L^{2p}(\mathbb{R}^2)}\leqslant C(T).\end{aligned}$$
Thanks to Lemma \[lem1\], we obtain $$\begin{aligned}
\left\|\nabla I_2\right\|_{L^{q}((0,T);L^{p}(\mathbb{R}^2))}&=&\left\|\nabla \int^t_0 h(\cdot,t-s)*(b\cdot\nabla(u\cdot\nabla u))(\cdot,s)\mathrm{d}s\right\|_{L^{q}((0,T);L^{p}(\mathbb{R}^2))}\\
&\leqslant& C\left\|bu\nabla u\right\|_{L^{q}((0,T);L^{p}(\mathbb{R}^2))}\\
&\leqslant& C\left\|b\right\|_{L^{\infty}((0,T);L^{\infty}(\mathbb{R}^2))}\left\|u\right\|_{L^{\infty}((0,T);L^{\infty}(\mathbb{R}^2))}
\left\|\nabla u\right\|_{L^{q}((0,T);L^{p}(\mathbb{R}^2))}\\
&\leqslant& C(T).\end{aligned}$$ Similarly, $\left\|\nabla I_3\right\|_{L^{q}((0,T);L^{p}(\mathbb{R}^2))}\leqslant C(T)$, $\left\|\nabla I_7\right\|_{L^{q}((0,T);L^{p}(\mathbb{R}^2))}\leqslant C(T)$, $\left\|\nabla I_8\right\|_{L^{q}((0,T);L^{p}(\mathbb{R}^2))}\leqslant C(T)$, $\left\|\nabla I_9\right\|_{L^{q}((0,T);L^{p}(\mathbb{R}^2))}\leqslant C(T)$.
For $\nabla I_4$, we get $$\begin{aligned}
\left\|\nabla I_4\right\|_{L^{q}((0,T);L^{p}(\mathbb{R}^2))}&=&\left\|\nabla \int^t_0 h(\cdot,t-s)*((u\cdot\nabla b)\cdot\nabla u ) (\cdot,s)\mathrm{d}s\right\|_{L^{q}((0,T);L^{p}(\mathbb{R}^2))}\\
&\leqslant& C\left\|\nabla h\right\|_{L^{1}((0,T);L^{1}(\mathbb{R}^2))}\left\|(u\cdot\nabla b)\cdot\nabla u\right\|_{L^{q}((0,T);L^{p}(\mathbb{R}^2))}\\
&\leqslant& C\left\|u\right\|_{L^{3q}((0,T);L^{3p}(\mathbb{R}^2))}\left\|\nabla b\right\|_{L^{3q}((0,T);L^{3p}(\mathbb{R}^2))}
\left\|\nabla u\right\|_{L^{3q}((0,T);L^{3p}(\mathbb{R}^2))}\\
&\leqslant& C(T).\end{aligned}$$ Similarly, $\left\|\nabla I_5\right\|_{L^{q}((0,T);L^{p}(\mathbb{R}^2))}\leqslant C(T)$, $\left\|\nabla I_6\right\|_{L^{q}((0,T);L^{p}(\mathbb{R}^2))}\leqslant C(T)$.
Therefore, we obtain (\[ic2\]) and finish the proof of Lemma \[lem7\].
The Proof of Theorem 1.1 {#sec:the proof of theorem 1.1}
========================
Due to $$\partial_1 j=\Delta b_2,\,\,\,\partial_2 j=-\Delta b_1,$$ (\[eq:omega-L2\]) can be changed into $$\begin{aligned}
\label{eq:omega1}
\omega_t + u \cdot \nabla \omega +\mathcal{L}\omega&=& b_1( \Delta b_2+b \cdot \nabla u_2)-b_2( \Delta b_1+b \cdot \nabla u_1)-b_1 b \cdot \nabla u_2+b_2 b \cdot \nabla u_1\nonumber\\
&=& f-b_1 b \cdot \nabla u_2+b_2 b \cdot \nabla u_1,
\end{aligned}$$ where $f=b_1( \Delta b_2+b \cdot \nabla u_2)-b_2( \Delta b_1+b \cdot \nabla u_1)$.
Multiplying (\[eq:omega1\]) by $\omega(x,t)$, we obtain $$\frac{1}{2}(\partial_t+u\cdot\nabla)\left|\omega(x,t)\right|^2+\omega(x,t)\mathcal{L}\omega(x,t)=
(f-b_1 b \cdot \nabla u_2+b_2 b \cdot \nabla u_1)(x,t)\omega(x,t).$$ Using the pointwise identity $$\omega(x,t)\mathcal{L}\omega(x,t)=\frac{1}{2}\mathcal{L}(\left|\omega(x,t)\right|^2)+\frac{D(x,t)}{2}$$ (see [@CoV2012]), where $$D(x,t)=P.V.\int_{\mathbb{R}^2} \frac{(\omega(x,t)-\omega(x-y,t))^2}{|y|^2m(|y|)} \mathrm dy,$$ we get $$\label{eq:omega2}
\frac{1}{2}(\partial_t+u\cdot\nabla+\mathcal{L})\left|\omega(x,t)\right|^2+\frac{D(x,t)}{2}=
(f-b_1 b \cdot \nabla u_2+b_2 b \cdot \nabla u_1)(x,t)\omega(x,t).$$ Choose a non-negative radial smooth cut-off function $\chi_1(x)$ supported in $|x|\leqslant1$, identically equal to 1 for $|x|\leqslant\frac{1}{2}$ and with $|\nabla\chi_1(x)|\leqslant C$, and let $\chi_2(x)=1-\chi_1(x)$.
By the Biot-Savart law [@MaB2002], $$\label{bse}
u(x,t)=\frac{1}{2\pi}\int_{\mathbb{R}^2}(-\frac{y_2}{|y|^2},\frac{y_1}{|y|^2})\omega(x-y,t)\mathrm dy,$$ so $$\begin{aligned}
\label{ieqomega}
&&\left|(b_1b\cdot\nabla u_2\omega)(x,t)\right|\nonumber\\
&=&\left|b_1(x,t)\omega(x,t)b(x,t)\cdot \frac{1}{2\pi}\int_{\mathbb{R}^2}\frac{y_1}{|y|^2}\nabla_y\omega(x-y,t)\mathrm dy\right|\nonumber\\
&\leqslant&\left|b_1(x,t)\omega(x,t)b(x,t)\cdot \frac{1}{2\pi}\int_{|y|\leq1}\frac{y_1}{|y|^2}\nabla_y(\omega(x,t)-\omega(x-y,t))\chi_1(y)\mathrm dy\right|\nonumber\\
&+&\left|b_1(x,t)\omega(x,t)b(x,t)\cdot \frac{1}{2\pi}\int_{|y|\geq\frac{1}{2}}\frac{y_1}{|y|^2}\nabla_y\omega(x-y,t)\chi_2(y)\mathrm dy\right|\nonumber\\
&\leqslant&c_1 \left\|b\right\|_{L^\infty}^2|\omega(x,t)|\int_{|y|\leq1}\frac{1}{|y|^2}|\omega(x,t)-\omega(x-y,t)|\mathrm dy\nonumber\\
&+&c_2 \left\|b\right\|_{L^\infty}^2|\omega(x,t)|\int_{|y|\geq\frac{1}{2}}\frac{1}{|y|^2}|\omega(x-y,t)|\mathrm dy\nonumber\\
&\leqslant&c_1 \left\|b\right\|_{L^\infty}^2|\omega(x,t)|\int_{|y|\leq1}\frac{|\omega(x,t)-\omega(x-y,t)|}{|y|\sqrt{m(|y|)}}
\frac{\sqrt{m(|y|)}}{|y|}\mathrm dy+c_3\left\|b\right\|_{L^\infty}^2\left\|\omega\right\|_{L^2}|\omega(x,t)|\nonumber\\
&\leqslant&c_4 \left\|b\right\|_{L^\infty}^2|\omega(x,t)|\sqrt{D(x,t)}(\int_0^1\frac{m(r)}{r} \mathrm dr)^{\frac{1}{2}}+c_3\left\|b\right\|_{L^\infty}^2\left\|\omega\right\|_{L^2}|\omega(x,t)|\nonumber\\
&\leqslant &\frac{D(x,t)}{8}+c_5\left\|b\right\|_{L^\infty}^4|\omega(x,t)|^2+c_3\left\|b\right\|_{L^\infty}^2\left\|\omega\right\|_{L^2}|\omega(x,t)|.\end{aligned}$$ Similarly, $$\begin{aligned}
&&\left|(b_2b\cdot\nabla u_1\omega)(x,t)\right|\nonumber\\
&\leqslant&\frac{D(x,t)}{8}+c_5\left\|b\right\|_{L^\infty}^4|\omega(x,t)|^2+c_3\left\|b\right\|_{L^\infty}^2\left\|\omega\right\|_{L^2}|\omega(x,t)|.\end{aligned}$$ Thus, thanks to Lemma \[lem7\], (\[eq:omega2\]) and (\[ieqomega\]) give $$\begin{aligned}
&&\frac{1}{2}(\partial_t+u\cdot\nabla+\mathcal{L})\left|\omega(x,t)\right|^2+\frac{D(x,t)}{4}\nonumber\\
&\leqslant&2c_5\left\|b\right\|_{L^\infty}^4|\omega(x,t)|^2+
(2c_3\left\|b\right\|_{L^\infty}^2\left\|\omega\right\|_{L^2}+\left\|f\right\|_{L^\infty})|\omega(x,t)|\nonumber\\
&\leqslant&2c_5\left\|b\right\|_{L^\infty}^4|\omega(x,t)|^2+
(2c_3\left\|b\right\|_{L^\infty}^2\left\|\omega\right\|_{L^2}+2\left\|b\right\|_{L^\infty}\left\|\Delta b+b \cdot \nabla u\right\|_{L^\infty})|\omega(x,t)|\nonumber\\
&\leqslant& C_1(T)(|\omega(x,t)|+|\omega(x,t)|^2).\end{aligned}$$ Since $$D(x,t)\geqslant \frac{c_6}{m(1)}|\omega(x,t)|^2\log\frac{1}{\delta}-c_7|\omega(x,t)|\left\|\omega\right\|_{L^2}\frac{1}{\delta m(\delta)}$$ (see (5.18) in [@CoV2012]), where $\delta<1$, we may pick $\delta=\delta(m,T)\in(0,1)$ such that $$\frac{c_6}{8m(1)}\log\frac{1}{\delta}>C_1(T).$$ Hence, $$\begin{aligned}
\label{ieqomega1}
&&\frac{1}{2}(\partial_t+u\cdot\nabla+\mathcal{L})\left|\omega(x,t)\right|^2+C_2(T)|\omega(x,t)|^2\nonumber\\
&\leqslant&(C_1(T)+C_3(T)\left\|\omega\right\|_{L^2})|\omega(x,t)|\nonumber\\
&\leqslant&C_4(T)|\omega(x,t)|.\end{aligned}$$ Let $\varphi(r)$ be a non-decreasing, non-negative, convex smooth function which is identically 0 on $0\leqslant r\leqslant \mathrm{max}\{\left\|\omega_0\right\|_{L^\infty}^2,(\frac{C_4(T)}{C_1(T)})^2\}$. Multiplying (\[ieqomega1\]) by $\varphi^{'}(|\omega(x,t)|^2)$ gives $$\label{ieqomega2}
(\partial_t+u\cdot\nabla+\mathcal{L})\varphi(|\omega(x,t)|^2)\leqslant 0$$ for all $x$ and all $t\in[0,T)$. Thanks to $$\int_{\mathbb{R}^2}|\omega(x)|^{p-2}\omega(x)\mathcal{L}\omega(x)\mathrm{d}x\geqslant 0$$ for all $2\leqslant p<\infty$ (see [@CoV2012]), from (\[ieqomega2\]) we obtain $$\left\|\varphi(|\omega(x,t)|^2)\right\|_{L^\infty}\leqslant\left\|\varphi(|\omega_0|^2)\right\|_{L^\infty}=0.$$ This gives that $\left\|\omega(\cdot,t)\right\|_{L^\infty}\leqslant \mathrm{max}\{\left\|\omega_0\right\|_{L^\infty},\frac{C_4(T)}{C_1(T)}\}$ for all $t\in [0,T)$.
By taking advantage of the BKM type criterion for global regularity (see [@CKS1997]), we finish the proof of Theorem \[thm1\].
\[rmk3\] Similar to Remark 5.4 in [@CoV2012], if (\[c2\]) is replaced by $\lim_{r\rightarrow 0^+}m(r)=0$, we can still obtain the global regularity of (\[eq2\]). We only show $\omega\in L^\infty([0,T);L^\infty(\mathbb{R}^2))$. A sketch of the proof is as follows. We may assume that $\sup_{x\in\mathbb{R}^2}\omega(x,t)$ is attained at some $\bar{x}(t)$ for $t\in[0,T)$; if not, we only need to consider $\omega$ multiplied by a smooth cut-off function. Then, at $\bar{x}$ the convection term in (\[eq:omega1\]) vanishes and we have $$\partial_t\omega(\bar{x},t)+ \mathcal{L}\omega(\bar{x},t)= ( f-b_1 b \cdot \nabla u_2+b_2 b \cdot \nabla u_1)(\bar{x},t).$$ Choose a non-negative radial smooth cut-off function $\chi_3(x)$ supported in $|x|\leqslant \eta\,(\eta>0)$, identically equal to 1 for $|x|\leqslant\frac{1}{2}\eta$ and with $|\nabla\chi_3(x)|\leqslant \frac{C}{\eta}$, and let $\chi_4(x)=1-\chi_3(x)$. Then by (\[bse\]), we have $$\begin{aligned}
&&\left|(b_1b\cdot\nabla u_2)(\bar{x},t)\right|\nonumber\\
&=&\left|b_1(\bar{x},t)b(\bar{x},t)\cdot \frac{1}{2\pi}\int_{\mathbb{R}^2}\frac{y_1}{|y|^2}\nabla_y\omega(\bar{x}-y,t)\mathrm dy\right|\nonumber\\
&\leqslant&\left|b_1(\bar{x},t)b(\bar{x},t)\cdot \frac{1}{2\pi}\int_{|y|\leqslant\eta}\frac{y_1}{|y|^2}\nabla_y(\omega(\bar{x},t)-\omega(\bar{x}-y,t))\chi_3(y)\mathrm dy\right|\nonumber\\
&+&\left|b_1(\bar{x},t)b(\bar{x},t)\cdot \frac{1}{2\pi}\int_{|y|\geqslant\frac{1}{2}\eta}\frac{y_1}{|y|^2}\nabla_y\omega(\bar{x}-y,t)\chi_4(y)\mathrm dy\right|\nonumber\\
&\leqslant&C \left\|b\right\|_{L^\infty}^2\int_{|y|\leqslant\eta}\frac{1}{|y|^2}|\omega(\bar{x},t)-\omega(\bar{x}-y,t)|\mathrm dy\nonumber\\
&+&C\left\|b\right\|_{L^\infty}^2\int_{|y|\geqslant\frac{1}{2}\eta}\frac{1}{|y|^2}|\omega(\bar{x}-y,t)|\mathrm dy.\end{aligned}$$ Similarly, we have $$\begin{aligned}
&&\left|(b_2b\cdot\nabla u_1)(\bar{x},t)\right|\nonumber\\
&\leqslant&C \left\|b\right\|_{L^\infty}^2\int_{|y|\leqslant\eta}\frac{1}{|y|^2}|\omega(\bar{x},t)-\omega(\bar{x}-y,t)|\mathrm dy\nonumber\\
&+&C \left\|b\right\|_{L^\infty}^2\int_{|y|\geqslant\frac{1}{2}\eta}\frac{1}{|y|^2}|\omega(\bar{x}-y,t)|\mathrm dy.\end{aligned}$$ Hence, $$\begin{aligned}
\partial_t\omega(\bar{x},t)&\leqslant&\int_{\mathbb{R}^2} \frac{\omega(\bar{x}-y,t)-\omega(\bar{x},t)}{|y|^2m(|y|)} \mathrm dy+C \left\|b\right\|_{L^\infty}^2\int_{|y|\leqslant\eta}\frac{1}{|y|^2}|\omega(\bar{x},t)-\omega(\bar{x}-y,t)|\mathrm dy\nonumber\\
&+&C \left\|b\right\|_{L^\infty}^2\int_{|y|\geqslant\frac{1}{2}\eta}\frac{1}{|y|^2}|\omega(\bar{x}-y,t)|\mathrm dy+f(\bar{x},t)\\
&\leqslant&\int_{|y|\leqslant\eta}\frac{\omega(\bar{x}-y,t)-\omega(\bar{x},t)}{|y|^2}
\left(\frac{1}{m(|y|)}-C\left\|b\right\|_{L^\infty}^2\right)\mathrm dy\\
&&-\omega(\bar{x},t)\int_{|y|\geqslant\eta} \frac{1}{|y|^2m(|y|)}\mathrm dy
+C \frac{1}{\eta}(\left\|b\right\|_{L^\infty}^2+\frac{1}{m(\eta)})\left\|\omega\right\|_{L^2}+\left\|f\right\|_{L^\infty}.\end{aligned}$$ Thanks to Lemma \[lem4\], we can choose $\eta$ (dependent on $T$) small so that $\frac{1}{m(|y|)}-C\left\|b\right\|_{L^\infty}^2>0$. Due to Lemma \[lem7\], we obtain $$\partial_t\omega(\bar{x},t)\leqslant C_5(T)-C_6(T)\omega(\bar{x},t).$$ Therefore $\omega(\bar{x},t)\leqslant C(T)$ for $t\in[0,T)$. A similar argument can be applied to the minimum and we obtain $\left\|\omega(\cdot,t)\right\|_{L^\infty}\leqslant C(T) $ for all $t\in [0,T).$
**Acknowledgement** The research of B Yuan was partially supported by the National Natural Science Foundation of China (No. 11471103).
[10]{}
R. E. Caflisch, I. Klapper, G. Steele, Remarks on singularities, dimension and energy dissipation for ideal hydrodynamics and MHD, Comm. Math. Phys. 184 (1997) 443–455.
J. Y. Chemin, D. S. McCormick, J. C. Robinson, J. L. Rodrigo, Local existence for the non-resistive MHD equations in Besov spaces, Adv. Math. 286 (2016) 1–31
C. Cao, D. Regmi, J. Wu, The 2D MHD equations with horizontal dissipation and horizontal magnetic diffusion, J. Differential Equations 254 (7) (2013) 2661–2681.
C. Cao, J. Wu, Global regularity for the 2D MHD equations with mixed partial dissipation and magnetic diffusion, Adv. Math. 226 (2) (2011) 1803–1822.
C. Cao, J. Wu, Two regularity criteria for the 3D MHD equations, J. Differential Equations 248 (9) (2010) 2263–2274.
C. Cao, J. Wu, B. Yuan, The 2D incompressible magnetohydrodynamics equations with only magnetic diffusion, SIAM J. Math. Anal. 46 (1) (2014) 588–602.
Q. Chen, C. Miao, Z. Zhang, On the regularity criterion of weak solution for the 3D viscous magneto-hydrodynamics equations, Comm. Math. Phys. 284 (3) (2008) 919–930.
Q. Chen, C. Miao, Z. Zhang, On the well-posedness of the ideal MHD equations in the Triebel-Lizorkin spaces, Arch. Ration. Mech. Anal. 195 (2) (2010) 561–578.
P. Constantin, V. Vicol, Nonlinear maximum principles for dissipative linear nonlocal operators and applications, Geom. Funct. Anal. 22 (5) (2012) 1289–1321.
A. Córdoba, D. Córdoba, A maximum principle applied to quasi-geostrophic equations, Comm. Math. Phys. 249 (3) (2004) 511–528.
T. M. Elgindi, N. Masmoudi, $L^\infty$ Ill-posedness for a class of equations arising in hydrodynamics, arXiv:1405.2478v2\[math.AP\]24 Jun 2014.
J. Fan, H. Malaikah, S. Monaquel, G. Nakamura, Y. Zhou, Global Cauchy problem of 2D generalized MHD equations, Monatsh. Math. 175 (2014) 127–131.
C. L. Fefferman, D. S. McCormick, J. C. Robinson, J. L. Rodrigo, Higher order commutator estimates and local existence for the non-resistive MHD equations and related models, J. Funct. Anal. 267 (2014) 1035–1056.
C. L. Fefferman, D. S. McCormick, J. C. Robinson, J. L. Rodrigo, Local existence for the non-resistive MHD equations in nearly optimal Sobolev spaces, arXiv: 1602.02588v1\[math.AP\] 2 Feb 2015.
C. He, Z. Xin, On the regularity of weak solutions to the magnetohydrodynamic equations, J. Differential Equations 213 (2) (2005) 234–254.
C. He, Z. Xin, Partial regularity of suitable weak solutions to the incompressible magnetohydrodynamic equations, J. Funct. Anal. 227 (2005) 113–152.
X. Hu, F. Lin, Global existence for two dimensional incompressible magnetohydrodynamic flows with zero magnetic diffusivity, arXiv: 1405.0082v1\[math.AP\] 1 May 2014.
Q. Jiu, D. Niu, Mathematical results related to a two-dimensional magneto-hydrodynamic equations, Acta Math. Sci. Ser. B English. Ed. 26 (2006) 744–756.
Q. Jiu, J. Zhao, A remark on global regularity of 2D generalized magnetohydrodynamic equations, J. Math. Anal. Appl. 412 (2014) 478–484.
Q. Jiu, J. Zhao, Global regularity of 2D generalized MHD equations with magnetic diffusion, Z. Angew. Math. Phys. 66 (2015) 677–687
A. Kiselev, F. Nazarov, Global regularity for the critical dispersive dissipative surface quasi-geostrophic equation, Nonlinearity 23 (2010) 549–554.
Z. Lei, On axially symmetric incompressible magnetohydrodynamics in three dimensions, J. Differential Equations. 259 (2015) 3202–3215
Z. Lei, Y. Zhou, BKM’s Criterion and global weak solutions for magnetohydrodynamics with zero viscosity, Discrete Contin. Dyn. Syst. 25 (2009) 575–583.
P. G. Lemarié-Rieusset, Recent developments in the Navier-Stokes problem, Chapman $\&$ Hall/CRC Research Notes in Mathematics 431, Chapman $\&$ Hall/CRC, Boca Raton, FL (2002).
F. Lin, L. Xu, P. Zhang, Global small solutions of 2-D incompressible MHD system, J. Differential Equations. 259 (2015) 5440–5485
F. Lin, P. Zhang, Global small solutions to an MHD-type system: the three-dimensional case, Comm. Pure Appl. Math. 67 (2014) 531–580.
A. Majda, A. Bertozzi, Vorticity and incompressible flow, Cambridge University Press (2002).
X. Ren, J. Wu, Z. Xiang, Z. Zhang, Global existence and decay of smooth solution for the 2-D MHD equations without magnetic diffusion, J. Funct. Anal. 267 (2) (2014) 503–541.
M. Sermange, R. Temam, Some mathematical questions related to the MHD equations, Comm. Pure Appl. Math. 36 (1983) 635–664.
C.V. Tran, X. Yu, Z. Zhai, On global regularity of 2D generalized magnetohydrodynamics equations, J. Differential Equations 254 (2013) 4194–4216.
J. Wu, Generalized MHD equations, J. Differential Equations. 195 (2) (2003) 284–312
J. Wu, Global regularity for a class of generalized magnetohydrodynamic equations, J. Math. Fluid Mech. 13 (2011) 295–305.
L. Xu, P. Zhang, Global small solutions to three-dimensional incompressible MHD system, SIAM J.Math Anal. 47 (2015) 26–65
K. Yamazaki, On the global regularity of two-dimensional generalized magnetohydrodynamics system, J. Math. Anal. Appl. 416 (2014) 99–111.
B. Yuan, L. Bai, Remarks on global regularity of 2D generalized MHD equations, J. Math. Anal. Appl. 413 (2014) 633–640.
T. Zhang, An elementary proof of the global existence and uniqueness theorem to 2D incompressible non-resistive MHD system, arXiv: 1404.5681v2\[math.AP\] 23 Oct 2014.
[^1]: Corresponding author.
|
---
abstract: 'We establish an axiomatic framework for indistinguishability of quantum particles in terms of hidden variables, which gives an ontology for microscopic particles. Such an axiomatic framework is set-theoretical. We also discuss the quantum distribution functions with the help of our axioms.'
author:
- 'Adonai S. Sant’Anna'
- |
Décio Krause\
\
Dep. Matemática\
Universidade Federal do Paraná\
C.P. 19081, Curitiba, PR, 80.530-900, Brasil
title: Indistinguishable particles and hidden variables
---
Introduction
============
In classical physics it is possible to label individual particles, even in the case that they look alike. But in quantum mechanics, it is not possible, using the language of the physicist, to keep track of individual particles in order to distinguish ‘identical’ particles. It is not possible to label electrons, for example, even in principle. The reason is that it is not possible to specify more than a complete set of commuting observables for each quantum particle. Yet, we cannot “follow the trajectory because that would entail a position measurement at each instant of time, which necessarily disturbs the system” [@Sakurai-94]. We consider that this is true for a quantum theory with no ontological picture. We suggest in this paper a description for quantum mechanics that allows, in principle, to distinguish particles that are physically indistinguishable. There is no contradiction in our words, since we distinguish those particles at the ontological level.
The search for axioms like those of set theories for dealing with collections of indistinguishable elementary particles was posed by Yu. Manin [@Manin-74], in 1974, as one of the important problems of present day researches on the foundations of mathematics. As he said:
> *I would like to point out that it is rather an extrapolation of common-place physics, whether we can distinguish things, count them, put them in some order, etc.. New quantum physics has shown us models of entities with quite different behaviour. Even [*sets*]{} of photons in a looking-glass box, or of electrons in a nickel piece are much less Cantorian than the [*sets*]{} of grains of sand.*
>
> The twentieth century return to Middle Age scholastics taught us a lot about formalisms. Probably it is time to look outside again. Meaning is what really matters.
>
> [@Manin-74]
Other authors [@Krause-92] [@Chiara-93] [@Krause-95] have considered that standard set theories are not adequate to represent microphysical phenomena as they are presented by the standard formulation of quantum mechanics. It is argued that the ontology of microphysics apparently does not reduce to that of usual sets. In this paper we present a negative answer to this conjecture. We show that it is possible to give a set-theoretical framework for the indistinguishability of quantum particles, especially for the ontology of quantum physics[^1]. Our main tool is the use of hidden variables. In this sense our solution for dealing with physically indistinguishable particles is different from the approach proposed by Manin. We could consider this use of hidden variables as an attempt to complete the usual description of quantum particles. As van Fraassen remarks [@vanFraassen-85]:
> [*if two particles are of the same kind, and have the same state of motion, nothing in the quantum mechanical description distinguishes them.*]{}
In this sense, quantum mechanics needs something more to distinguish particles, in order to keep the classical mathematics used to describe the theory. We propose that this something more could be hidden variables.
The approach in terms of quasi-set theories to deal with indistinguishable objects is not appropriate for labeling quantum particles in order to obtain Bose-Einstein or Fermi-Dirac statistics if we are interested in following the same mathematical techniques used by the physicist. In the hidden variables picture such a problem does not exist. We can easily label particles which are physically indistinguishable, since we assume that each particle has a different value for its hidden variable. Hence, with this approach it is possible to justify the quantum distribution functions as well as the symmetrical and antisymmetrical states of collections of quantum particles.
The use of hidden variables in physics is well known, especially in the description of quantum mechanics due to D. Bohm. Bohm [@Bohm-94] considered that the electron, e.g., “has more properties than can be described in terms of the so-called ‘observables’ of the quantum theory.” He used hidden variables to give a deterministic picture to the ontology of quantum mechanics, although quantum systems behave in a probabilistic fashion, from the experimental point of view. Here, we preserve the concept of hidden variable as something that corresponds to inner properties of physical objects[^2] that, at present, are not measured in laboratories. But our use of hidden variables is quite different, in principle, from that of Bohm, since it has nothing to do with any explanation of the probabilistic behaviour of quantum phenomena.
Our approach is out of the range of the proofs on the impossibility of hidden variables in the quantum theory, like von Neumann’s theorem [@vonNeumann-55], Gleason’s work [@Gleason-57], Kochen and Specker results [@Kochen-67] or Bell’s inequalities [@Bell-87]. There are other works which claim to show that no distribution of hidden variables can account for the statistical predictions of the quantum theory. But in our ontological description of particles, specially quantum particles, we are not interested in the statistical aspects of quantum theory. Our concern is only with the so-called indistinguishability among particles.
It is also well known that systems containing $n$ indistinguishable quantum particles are either totally symmetrical under the interchange of any pair (bosons) or are totally antisymmetrical (fermions). Our question is: if we have a system of $n$ indistinguishable particles, how can we label them in order to make the mentioned interchange of any pair? Usually it is said that we can mathematically label the particles. But if that is the case, we have an important mathematical concept that does not correspond to any physical interpretation: the label of physically indistinguishable particles. In the present paper we say that we can ontologically label each particle by the use of hidden variables which correspond to inner properties that are not characterized by the observables. This means that we can establish two kinds of identity: the physical and the ontological. Two physically indistinguishable particles (they have the same physical properties, in a sense to be made clearer in the text) are always ontologically different or distinguishable. In other words, a system of $n$ quantum particles never has two particles that are ontologically indistinguishable, or even two particles with the same value for their respective hidden variables.
Lowe [@Lowe-94] has suggested that quantum particles are genuinely (in a fundamentally ontological sense) [*vague*]{} objects. He considers a situation in which a free electron $a$ is captured by an atom to form a negative ion which then emits an electron labeled $b$ and notes that,
> [*according to currently accepted quantum mechanical principles there may be no objective fact of the matter as to whether or not $a$ is identical with $b$. It should be emphasized that what is being proposed here is not merely that we have no way of telling whether or not $a$ and $b$ are identical, which would imply only an epistemic indeterminacy. It is well known that the sort of indeterminacy pressuposed by orthodox interpretations of quantum theory is more than merely epistemic - it is ontic.*]{}
According to our ontological picture, each electron has a well defined hidden variable, which allows to attribute a label. But in our axiomatic treatment we are not able to describe the dynamics of the process remarked by Lowe. We only know that if electron $b$ has the same hidden variable of electron $a$, then they are identical in the sense that they are the same particle. But if $a$ and $b$ have different values for their hidden variables, then we are really talking about [*two*]{} electrons. In this case, they are two indistinguishable particles, but they still are two electrons (ontologically distinguishable).
Dalla Chiara [@Chiara-85] develops a quantum logical semantics for identical particles in which proper names and definite descriptions may lack a precise [*denotatum*]{} within some possible worlds. In [@Chiara-93] Dalla Chiara and Toraldo di Francia conclude, as a philosophical consequence of this semantics, that there is no [*trans-world*]{} identity. But it is obvious that this inexistence of a trans-world identity is a consequence of the hypothesis that there is no trans-world identity. The semantics developed by Dalla-Chiara comes from the observation that the world of [*identical particles*]{} in microphysics gives rise to examples of uncertain and ambiguous denotation relations. It is clear that Dalla-Chiara did not consider the possibility of ontological denotation relations.
In the next section we present an axiomatic framework for ontologically distinguishable particles in terms of a set-theoretical predicate. This predicate allows us to treat collections of physically indistinguishable particles as sets. Then, in section 3 we present the physical consequences of this picture, with special attention to the quantum distribution functions.
Set-Theoretical Predicate for Ontologically Distinguishable Particles
=====================================================================
We are not interested in giving an axiomatic framework for quantum physics, quantum mechanics, or even mechanics. Our concern is with the process of labeling indistinguishable particles, which is so widely used by physicists.
Our system has seven primitive notions: $\lambda$, $X$, $P$, $m$, $M$, $\equiv$, and $\doteq$. $\lambda$ is a function $\lambda:N\rightarrow$[**R**]{}, where $N$ is the set $\{1,2,3,\ldots,n\}$, $n$ is a nonnegative integer, and [**R**]{} is the set of real numbers; $X$ and $P$ are finite sets; $m$ and $M$ are predicates defined on elements of $P$; and $\equiv$ and $\doteq$ are binary relations between elements of $P$. Intuitively, the images $\lambda_{i}$ of the function $\lambda$, where $i\in N$, correspond to the so-called hidden variables. $X$ is to be interpreted as a set each of whose elements corresponds to measurements of physical observables of one particle. Such measurements can be precisely characterized by the [*generalized operational definition of a physical quantity*]{}[^3]. Basically, a physical quantity is defined by a union $C = \bigcup\{ C_{k}\}$ over a set $\{ C_{k}\}$ of equivalence classes of measuring procedures, such that the set $\{ C_{k}\}$ is connected and each $C_{k}$ is defined over a well-determined class $\Sigma_{k}$ of physical systems, where $\Sigma_{k}\neq\Sigma_{l}$ for $k\neq l$. For details see [@Chiara-79]. The elements of $X$ are denoted by $x$, $y$, etc. $P$ is to be physically interpreted as the set of particles. $m(p)$, where $p\in P$, means that $p$ is a microscopic particle. $M(p)$ means that $p\in P$ is a macroscopic particle. $\equiv$ corresponds to the ontological identity between particles and $\doteq$ corresponds to the physical identity between particles.
$\Lambda$ is the set of images of the function $\lambda$.
${\cal D_{O}} = \langle\lambda,X,P,m,M,\equiv,\doteq\rangle$ is a system of ontologically distinguishable particles if and only if the following axioms are satisfied:
D1
: $\lambda:N\rightarrow {\bf R}$ is an injective function.
D2
: $P\subset X\times\Lambda$.
D3
: $x\neq y\rightarrow\neg (\langle x,\lambda_{i}\rangle\in P \wedge \langle y,\lambda_{i}\rangle\in P)$.
D4
: $\langle x,\lambda_{i}\rangle\equiv \langle y,\lambda_{j}\rangle \leftrightarrow x=y\wedge i=j$.
D5
: $\langle x,\lambda_{i}\rangle\doteq \langle y,\lambda_{j}\rangle\leftrightarrow x=y$.
D6
: $\langle x,\lambda_{i}\rangle\doteq \langle y,\lambda_{j}\rangle\rightarrow m(\langle x,\lambda_{i}\rangle)\wedge m(\langle y,\lambda_{j}\rangle)$.
D7
: $m(\langle x,\lambda_{i}\rangle)\vee M(\langle x,\lambda_{i}\rangle)\rightarrow\neg(m(\langle x,\lambda_{i}\rangle)\wedge M(\langle x,\lambda_{i}\rangle))$.
Axiom [**D1**]{} says that the cardinality of $\Lambda$ coincides with the cardinality of $N$ ($\#\Lambda = \# N$). Axiom [**D2**]{} just says that particles are represented by ordered pairs[^4], where the first element corresponds to the physical properties measurable in the laboratory, and the second element corresponds to the hidden inner property that allows us to distinguish particles at the ontological level. Moreover, axioms [**D2**]{} and [**D3**]{} guarantee that $\# P=\# N=\#\Lambda$, which corresponds to the number of particles of the system. In other words, two particles in a system of ontologically distinguishable particles never have the same hidden variable. Axiom [**D4**]{} says that two particles are ontologically indistinguishable if and only if they have the same physical properties and the same hidden variables. Axiom [**D5**]{} means that two particles are physically indistinguishable if and only if they have the same physical properties. Axiom [**D6**]{} says that if two particles are physically indistinguishable, then both of them are microscopic, or quantum, particles. Axiom [**D7**]{} means that a particle cannot be both microscopic and macroscopic.
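To illustrate the axioms, consider a simple system with two particles (the particular numerical values of the hidden variables are chosen by us merely as an example and play no special role): let $N=\{1,2\}$, $\lambda_{1}=0.3$, $\lambda_{2}=0.7$, $X=\{x\}$, and $P=\{\langle x,\lambda_{1}\rangle ,\langle x,\lambda_{2}\rangle\}$. Axiom [**D1**]{} is satisfied because $\lambda$ is injective, and axioms [**D2**]{} and [**D3**]{} hold trivially. By axiom [**D5**]{}, $\langle x,\lambda_{1}\rangle\doteq\langle x,\lambda_{2}\rangle$, so the two particles are physically indistinguishable; by axiom [**D4**]{} they are not ontologically identical, since $\lambda_{1}\neq\lambda_{2}$; and, by axiom [**D6**]{}, both are microscopic. This is exactly the situation covered by theorem \[unitaryX\] below.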
One could argue that the function $\lambda$ is unnecessary, since we could interpret the elements of $N$ as the hidden variables that allow us to label particles even when they are physically indistinguishable. We consider that this is not a satisfactory assumption, since we wish to emphasize that the hidden variables correspond to inner properties of all particles, macroscopic or microscopic, that are not measurable in the laboratory, at least at present. Our hidden variables are not just a mathematical tool for labeling particles. We mean that it is possible that some day an experimental physicist will discover a new physical property of quantum particles that allows them to be labeled. Such a physical observable would correspond to our hidden variables. To interpret the images of $\lambda$ as the hidden variables means that the measurements of this possible future observable would assume values in the set of real numbers. Obviously, our concept of hidden variable could be extended to a function $\lambda:N\rightarrow V$, where $V$ is a vector space.
The theorem given below says that two macroscopic particles cannot be physically indistinguishable or, in other words, that we can always label macroscopic particles in the laboratory.
$M(\langle x,\lambda_{i}\rangle)\wedge M(\langle y,\lambda_{j}\rangle)\rightarrow \neg(\langle x,\lambda_{i}\rangle\doteq \langle y,\lambda_{j}\rangle)$.\[macro\]
[**Proof**]{}: If $M(\langle x,\lambda_{i}\rangle)\wedge M(\langle y,\lambda_{j}\rangle)$, then, by axiom [**D7**]{},\
$\neg(m(\langle x,\lambda_{i}\rangle)\wedge m(\langle y,\lambda_{j}\rangle))$. Hence, by axiom [**D6**]{},\
$\neg(\langle x,\lambda_{i}\rangle\doteq \langle y,\lambda_{j}\rangle)$.$\Box$\
The theorem given below is relevant for the discussions about quantum distribution functions in the next section.
If $X$ is a unitary set and $\# N\geq 2$, then the system of ontologically distinguishable particles has only microscopic particles.\[unitaryX\]
[**Proof**]{}: If $\# N\geq 2$, then $\# P\geq 2$, by axioms [**D1**]{}, [**D2**]{}, and [**D3**]{}. This means that we have a system with more than just one particle. But all these particles have the same physical properties, since we assume, by hypothesis, that $X$ is unitary. Hence, all particles are physically indistinguishable, by axiom [**D5**]{}. So, all particles are microscopic, by axiom [**D6**]{}.$\Box$
Distribution Functions for Quantum Particles
============================================
Our main objective in this section is to show how to establish sufficient conditions for obtaining the quantum distribution functions in our picture of indistinguishable particles in terms of hidden variables.
To obtain the quantum distribution functions in the standard way it is necessary to assume that the quantum particles are indistinguishable. In the case of fermions, we also assume the [*Pauli exclusion principle*]{}. Bosons do not satisfy such a principle. But the fundamental assumption of indistinguishability between quantum particles means that either we deal with this collection of particles as a quasi-set or we assume the existence of hidden variables. The second alternative allows us to deal with collections of physically indistinguishable particles as sets. In this section we present an interpretation of Bose-Einstein and Fermi-Dirac statistics in set-theoretical terms.
The Pauli exclusion principle states that two or more fermions cannot occupy the same state. This occurs because a state like $\mid k'\rangle\mid k'\rangle$ is necessarily symmetrical, which is not possible for a fermion. But different states cannot be used to label fermions, since a fermion can change its state. In the case of bosons, the situation is more dramatic, since we can have several bosons occupying the same single state. If we have a collection of indistinguishable bosons or indistinguishable fermions, is this collection a set? In our picture the answer is positive.
The fermion case will be discussed first. To cope with a collection of fermions we consider, as a first assumption, a ${\cal D_{O}}$-system with a unitary set $X$. We know that if $X$ is unitary in a system with more than one particle, then all particles are microscopic, according to theorem \[unitaryX\]. So, fermions are microscopic particles because they are physically indistinguishable. It must be emphasized that, to deal with fermions, we consider that the unique element $x$ of $X$ corresponds to the measurements of a complete set of commuting observables; otherwise it would be impossible to satisfy axiom [**D5**]{}, since non-commuting observables obey the Heisenberg uncertainty principle. Our second assumption is the Pauli exclusion principle written in terms of our language. But before that, we need to establish the meaning of symmetrical and antisymmetrical states.
For the sake of simplicity, we consider a system of two physically indistinguishable particles, ontologically labeled particle $\lambda_{1}$ and particle $\lambda_{2}$. Suppose that, in the Hilbert space formalism, particle $\lambda_{1}$ is characterized by the state vector $\mid k'_{\lambda_{1}}\rangle$, where $k'$ corresponds to a collective index for a complete set of observables (commuting or not), or, in other words, $k'$ contains more physical information in terms of observables than $x$. Actually, if we were concerned with a rigorous notation, we should denote the state of particle $\lambda_{1}$ as $\mid k'-x,\langle x,\lambda_{1}\rangle\rangle$, where $k'-x$ corresponds to the extra physical information that is not available in $x$. But, in practice, we are abbreviating the notation. Likewise, we denote the ket of the remaining particle $\mid k''_{\lambda_{2}}\rangle$. The state ket for the two particles system is
$$\mid k'_{\lambda_{1}}\rangle\mid k''_{\lambda_{2}}\rangle.$$
If a measurement is performed on this system, we may obtain $k'$ for one particle and $k''$ for the other. But, in the laboratory, it is not possible to know whether the state ket of the system is $\mid k'_{\lambda_{1}}\rangle\mid k''_{\lambda_{2}}\rangle$, $\mid k''_{\lambda_{1}}\rangle\mid k'_{\lambda_{2}}\rangle$ or any linear combination $c_{1}\mid k'_{\lambda_{1}}\rangle\mid k''_{\lambda_{2}}\rangle + c_{2}\mid k''_{\lambda_{1}}\rangle\mid k'_{\lambda_{2}}\rangle$. This is called the exchange degeneracy: determining the eigenvalues of a complete set of observables does not uniquely specify the state ket.
Using a notation similar to Sakurai’s [@Sakurai-94] we define the permutation operator $P_{12}$ by
$$P_{12}\mid k'_{\lambda_{1}}\rangle\mid k''_{\lambda_{2}}\rangle = \mid k''_{\lambda_{1}}\rangle\mid k'_{\lambda_{2}}\rangle.\label{interchange}$$
It is obvious that $P_{21} = P_{12}$ and $P_{12}^{2} = 1$. In the case we are discussing:
$$P_{12}\mid k'_{\lambda_{1}}\rangle\mid k''_{\lambda_{2}}\rangle = -\mid k'_{\lambda_{1}}\rangle\mid k''_{\lambda_{2}}\rangle,$$
or, in the more general situation: $$P_{ij}\mid\mbox{$n$ physically indistinguishable fermions}\rangle =$$ $$-\mid\mbox{$n$ physically indistinguishable fermions}\rangle,\label{fermions}$$ where $P_{ij}$ is the permutation operator that interchanges the particle ontologically labeled as $\lambda_{i}$ and the particle ontologically labeled as $\lambda_{j}$, with $i$ and $j$ arbitrary but distinct elements of $N$. We must recall again that in equation (\[fermions\]) the sentence “$n$ physically indistinguishable fermions” means that each arbitrary pair of fermions has the same values for measurements of a complete set of commuting observables.
In our picture it is possible to count fermions, since we can label them and thus deal with collections of fermions as sets. These sets could be called “ontological sets”. It is also clear what it means to say that a system of fermions is totally antisymmetrical under the interchange of any pair, since the meaning of the word “interchange” is now clear, according to equation (\[interchange\]). With this in mind we observe that, by equation (\[interchange\]), $$P_{12}\mid k'_{\lambda_{1}}\rangle\mid k'_{\lambda_{2}}\rangle = \mid k'_{\lambda_{1}}\rangle\mid k'_{\lambda_{2}}\rangle,$$ which contradicts equation (\[fermions\]). Hence, as expected, fermions cannot occupy the same physical state, which is a translation of the exclusion principle into our language of hidden variables.
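To make the argument fully explicit, notice that the physical state consistent with equation (\[fermions\]) is the familiar antisymmetrized combination (a standard construction, recalled here only for illustration), $$\mid\psi\rangle = \frac{1}{\sqrt{2}}{\left(\mid k'_{\lambda_{1}}\rangle\mid k''_{\lambda_{2}}\rangle - \mid k''_{\lambda_{1}}\rangle\mid k'_{\lambda_{2}}\rangle\right)}, \qquad P_{12}\mid\psi\rangle = -\mid\psi\rangle\,,$$ and for $k' = k''$ this combination vanishes identically, which is the content of the exclusion principle in the Hilbert space language.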
The discussion about bosons is very similar, and we leave this case as an exercise for the reader.
Since we have characterized the permutation operator, symmetrical and antisymmetrical states, the Pauli exclusion principle, and the labeling of quantum particles, we can now easily deduce the quantum distribution functions by standard methods. For details see, for example, [@Garrod-95].
In texts like [@Sakurai-94] other physical consequences of the indistinguishability among quantum particles are cited. But all these effects are consequences of the symmetrical or antisymmetrical properties of quantum particles, which we have already discussed.
[99]{} Bell, J.S., 1987, [*Speakable and Unspeakable in Quantum Mechanics*]{}, Cambridge University Press. Bohm, D., 1994, [*Wholeness and the Implicate Order*]{}, Ark Paperbacks. Bohm, D. and B.J. Hiley, 1995, [*The Undivided Universe*]{}, Routledge. Dalla Chiara, M.L., 1985, “Names and Descriptions in Quantum Logics”, in P. Mittelstaedt and E.-W. Stachow (eds.) [*Recent Developments in Quantum Logics*]{}, Mannheim, Bibliographisches Institut, 189-202. Dalla Chiara, M.L., and G. Toraldo di Francia, 1979, “Formal Analysis of Physical Theories”, in G. Toraldo di Francia (ed.) [*Problems in the Foundations of Physics*]{}, North-Holland, pp. 134-201. Dalla Chiara, M.L., and G. Toraldo di Francia, 1993, “Individuals, kinds and names in physics”, in G. Corsi et al. (eds.), [*Bridging the gap: philosophy, mathematics, physics*]{}, Dordrecht, Kluwer, Ac. Press, pp. 261-283. Reprint from [*Versus*]{} [**40**]{}, 1985, 29-50. da Costa, N.C.A., and D. Krause, 1994, “Schrödinger Logics”, [*Studia Logica*]{} [**53**]{} 533-550. da Costa, N.C.A., and D. Krause, 1996, “Set-Theoretical Models for Quantum Systems”, to appear. van Fraassen, B., 1985, “Statistical Behaviour of Indistinguishable Particles: Problems and Interpretation”, in P. Mittelstaedt and E.-W. Stachow (eds.) [*Recent Developments in Quantum Logics*]{}, Mannheim, Bibliographisches Institut, 161-187. Garrod, C., 1995, [*Statistical Mechanics and Thermodynamics*]{}, Oxford University Press. Gleason, A.M., 1957, [*J. Math. Mech.*]{}, [**6**]{} 885-893. Kochen, S., and E.P. Specker, 1967, [*J. Math. Mech.*]{} [**17**]{} 59-87. Krause, D., 1992, “On a quasi-set theory”, [*Notre Dame Journal of Formal Logic*]{} [**33**]{} 3, 402-411. Krause, D. and S. French, 1995, “A formal framework for quantum non-individuality”, [*Synthese*]{} [**102**]{} 1 pp. 195-214. Lowe, E.J., 1994, “Vague Identity and Quantum Indeterminacy”, [*Analysis*]{} [**54**]{}, pp. 110-114. Manin, Yu. I., 1976, “Problems of present day mathematics: I (Foundations)”, in Browder, F.E. (ed.) [*Proceedings of Symposia in Pure Mathematics*]{} [**28**]{} American Mathematical Society, Providence, p. 36. von Neumann, J., 1955, [*Mathematical Foundations of Quantum Mechanics*]{}, Princeton University Press. Sakurai, J.J., 1994, [*Modern Quantum Mechanics*]{}, Addison-Wesley. Scerri, E.R., 1995, “The exclusion principle, chemistry and hidden variables”, [*Synthese*]{} [**102**]{} 1 pp. 165-169.
[^1]: In [@daCosta-96], da Costa and Krause show that it is possible to establish set-theoretical models for quantum systems, since quasi-set theory can be translated into the usual Zermelo-Fraenkel set theory with the Axiom of Choice. Such a translation is related to Heisenberg’s paradox: “The Copenhagen interpretation of quantum theory starts from a paradox. Any experiment in physics, whether it refers to the phenomena of daily life or to atomic events, is to be described in terms of classical physics.”
[^2]: We do not intend to discuss the concept of physical object. In the present text we consider that this concept is intuitively established.
[^3]: Although this definition for physical quantity receives some criticisms by science philosophers, we consider, as Dalla-Chiara and Toraldo di Francia [@Chiara-79], that such a definition reflects a methodology that is largely accepted by physicists.
[^4]: In [@daCosta-94] da Costa and Krause discuss the possible representation of a quantum particle in terms of an ordered pair $\left< E,L\right>$, where $E$ corresponds to a predicate which in some way characterizes the particle in terms, e.g., of its rest mass, its charge, and so on. $L$ denotes an appropriate label, which could be, for example, the spatio-temporal location of the particle. Even in the case that the particles (in a system) have the same $E$, they might be distinguished by their labels. In this case, we are dealing with a classical representation of the particles. But if the particles have the same label, the tools of classical mathematics cannot be applied. In our picture, according to axioms [**D1**]{}-[**D3**]{}, a system in which two particles have the same (ontological) label is prohibited.
---
abstract: 'In this letter, we generalize the convolutional NMF by taking the $\beta$-divergence as the contrast function and present the correct multiplicative updates for its factors in closed form. The new updates unify the $\beta$-NMF and the convolutional NMF. We state why almost all of the existing updates are inexact and approximative w.r.t. the convolutional data model. We show that our updates are stable and that their convergence performance is consistent across the most common values of $\beta$.'
author:
- 'Pedro J. Villasana T., Stanislaw Gorlow, and Arvind T. Hariraman'
bibliography:
- 'IEEEabrv.bib'
- 'references.bib'
title: 'Multiplicative Updates for Convolutional NMF Under $\beta$-Divergence'
---
Convolution, nonnegative matrix factorization, multiplicative update rules, $\beta$-divergence.
Introduction
============
Nonnegative matrix factorization finds its application mostly in the field of machine learning and in connection with inverse problems. It became immensely popular after Lee and Seung derived multiplicative update rules that made the until then additive steps in the direction of the negative gradient obsolete [@Lee1999]. In [@Lee2001], they gave empirical evidence of their convergence to a stationary point, using (a) the squared Euclidean distance, and (b) the generalized Kullback–Leibler divergence as the contrast function. The factorization’s origins can be traced back to [@Paatero1994; @Paatero1997]. A convolutional variant of the factorization is introduced in [@Smaragdis2004] based on the Kullback–Leibler divergence. The main idea is to exploit temporal dependencies in the neighborhood of a point in the time-frequency plane. In their original form, the updates result in a biased factorization. To provide a remedy, multiple coefficient matrices are updated in [@Smaragdis2007], one for each translation, and the final update is obtained by taking the average over all coefficient matrices. A nonnegative matrix factor deconvolution in 2D based on the Kullback–Leibler divergence is found in [@Schmidt2006]. Not only do the authors give a derivation of the update rules, they also show a simple way of making the update rules multiplicative. It may be pointed out that their update rule for the coefficient matrix is different from the one in [@Smaragdis2007]. The same guiding principles are applied to derive the convolutional factorization based on the (squared) Euclidean distance in [@Wang2009]. But in the attempt to give a formal proof for their update rules, the authors largely reformulate a biased factorization comparable to [@Smaragdis2004].
In [@Cichocki2006], nonnegative matrix factorization is generalized to a family of $\alpha$-divergences under the constraints of sparsity and smoothness, while the unconstrained $\beta$-divergence is brought into focus in [@Fevotte2011]. For both cases multiplicative update rules were given. The properties of the $\beta$-divergence are discussed in detail in [@Cichocki2010; @Fevotte2011]. The combined $\alpha$-$\beta$-divergence together with the corresponding multiplicative updates can be found in [@Cichocki2011].
In this letter, we provide multiplicative update rules for the factorial deconvolution under the $\beta$-divergence. Furthermore, we argue that the updates in [@Smaragdis2004; @Smaragdis2007] and in [@Schmidt2006] are empirical and/or inexact w.r.t. the convolutional data model. According to our simulation, the updates in [@Wang2009] do not yield any additional improvement over [@Smaragdis2007] despite the extra computational load. Finally, we show that the exact updates are stable and that their behavior is consistent for $\beta \in {\left\{0, 1, 2\right\}}$.
Nonnegative Matrix Factorization {#sec:nmf}
================================
The nonnegative matrix factorization (NMF) is an umbrella term for a low-rank matrix approximation of the form $${\mathbf{V}} \simeq {\mathbf{W}} \, {\mathbf{H}}
\label{eq:nmf}$$ with ${\mathbf{V}} \in {\mathbb{R}}_{\geqslant 0}^{K \times N}$, ${\mathbf{W}} \in {\mathbb{R}}_{\geqslant 0}^{K \times I}$, and ${\mathbf{H}} \in {\mathbb{R}}_{\geqslant 0}^{I \times N}$, where $I$ is the predetermined rank of the factorization. The letters above help distinguish between visible ($v$) and hidden variables ($h$) that are put in relation through weights ($w$). The factorization is usually formulated as a convex minimization problem with a dedicated cost function $C$ according to $$\underset{{\mathbf{W}},\,{\mathbf{H}}}{\text{minimize}}\ {C{\left({\mathbf{W}}, {\mathbf{H}}\right)}} \qquad \text{subject to}\ w_{ki}, h_{in} \geqslant 0$$ with $$C{\left({\mathbf{W}}, {\mathbf{H}}\right)} = L{\left({\mathbf{V}}, {\mathbf{W}}\,{\mathbf{H}}\right)} \text{,}
\label{eq:cost}$$ where $L$ is a loss function that assesses the error between ${\mathbf{V}}$ and the factorization ${\mathbf{W}}\,{\mathbf{H}}$.
$\beta$-Divergence
------------------
The loss in can be expressed by means of a contrast or distance function between the elements of ${\mathbf{V}}$ and ${\mathbf{W}}\,{\mathbf{H}}$. Due to its robustness with respect to outliers for certain values of the input parameter $\beta \in {\mathbb{R}}$, we resort to the $\beta$-divergence [@Basu1998] as a subclass of the Bregman divergence [@Bregman1967; @Cichocki2010], which for the points $p$ and $q$ is given by [@Cichocki2010] $$d_\beta{\left(p, q\right)} = \begin{dcases}
\frac{p^\beta + {\left(\beta - 1\right)} \, q^\beta - \beta \, p \, q^{\beta - 1}}{\beta\,{\left(\beta - 1\right)}} \text{,} & \beta \neq 0, 1 \text{,} \\
p \, \log\frac{p}{q} - p + q \text{,} & \beta = 1 \text{,} \\
\frac{p}{q} - \log\frac{p}{q} - 1 \text{,} & \beta = 0 \text{.}
\end{dcases}$$ Accordingly, the $\beta$-divergence for matrices ${\mathbf{V}}$ and ${\mathbf{W}}\,{\mathbf{H}}$ can be defined entrywise, as $$D_\beta{\left({\mathbf{V}} \parallel {\mathbf{W}} \, {\mathbf{H}}\right)} {\stackrel{\text{\tiny def}}{=}}\sum_{k=1}^K\sum_{n=1}^N{d_\beta{\left(v_{kn}, \sum\nolimits_i{w_{ki} \, h_{in}}\right)}}\text{,}
\label{eq:entrywise_divergence}$$ which can further be viewed as the $\beta$-divergence between two (unnormalized) marginal probability mass functions with $k$ as the marginal variable. Note that the $\beta$-divergence has a single global minimum for $\sum_n v_{kn} = \sum_{i, n} w_{ki} \, h_{in}$, $\forall k$, although it is strictly convex only for $\beta \in {\left[1, 2\right]}$ [@Cichocki2010; @Fevotte2011].
Discrete Convolution
--------------------
As can be seen explicitly in (\[eq:entrywise\_divergence\]), the weight $w_{ki}$ for the $i$th hidden variable $h_{in}$ at point $\left(k, n\right)$ is applied using the scalar product. Given that $h_i$ evolves with $n$, we can assume that $h_i$ is correlated with its past and future states. We can take this into account by extending the dot product to a convolution in our model. Postulating causality and letting the weights have finite support of cardinality $M$, the convolution reads $$u(n) = {\left(w_{ki} \ast h_i\right)}(n) {\stackrel{\text{\tiny def}}{=}}\sum_{m = 0}^{M - 1}{w_{ki}(m) \, h_{i, n - m}} \text{.}
\label{eq:conv}$$ The operation can be converted to a matrix multiplication by lining up the states $h_{in}$ in a truncated Toeplitz matrix: $${\begin{bmatrix}
h_{i, 1} & h_{i, 2} & h_{i, 3} & \cdots & h_{i, N - 1} & h_{i, N} \\
0 & h_{i, 1} & h_{i, 2} & \cdots & h_{i, N - 2} & h_{i, N - 1} \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & 0 & \cdots & h_{i, N - M} & h_{i, N - M + 1}
\end{bmatrix}} \text{.}
\label{eq:toeplitz}$$ In accordance with , the convolutional NMF (CNMF) can be formulated as follows to accommodate the structure given in , see also [@Smaragdis2004; @Smaragdis2007]: $${\mathbf{V}} \simeq {\mathbf{U}} = \sum_{m = 0}^{M - 1}{{\mathbf{W}}_m \, {{{\mathbf{H}}}_{\stackrel{m\:}{\longrightarrow}}}} \text{,}
\label{eq:cnmf}$$ where ${{\cdot}_{\stackrel{m\:}{\longrightarrow}}}$ is a columnwise right-shift operation (similar to a logical shift in programming languages) that shifts all the columns of ${\mathbf{H}}$ by $m$ positions to the right, and fills the vacant positions with zeros. The operation is size-preserving. It can be seen that the CNMF has $M$ times as many weights as the regular NMF, whereas the number of hidden states is equal.
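For illustration, the size-preserving shift and the convolutional reconstruction can be written as the following NumPy sketch (the helper names are ours):

```python
import numpy as np

def shift_right(H, m):
    """Columnwise right-shift of H by m positions, zero-filling the vacated columns."""
    if m == 0:
        return H.copy()
    S = np.zeros_like(H)
    S[:, m:] = H[:, :-m]
    return S

def cnmf_reconstruct(W, H):
    """Convolutional model U = sum_m W[m] @ shift_right(H, m).

    W is a list of M nonnegative K-by-I matrices and H is an I-by-N matrix.
    """
    return sum(Wm @ shift_right(H, m) for m, Wm in enumerate(W))
```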
$\beta$-CNMF {#sec:scnmf}
============
Ensuing from the preliminary considerations in Section \[sec:nmf\], we adopt the formulation of the CNMF from (\[eq:cnmf\]) and derive multiplicative update rules for gradient descent, while taking the entrywise $\beta$-divergence from (\[eq:entrywise\_divergence\]) as the loss function. The result is referred to as the $\beta$-CNMF.
Problem Statement
-----------------
Under the premise that ${\mathbf{V}}$ is factorizable into $\left\{{\mathbf{W}}_m\right\}$ and ${\mathbf{H}}$, $m = 0, 1, \dots, M - 1$, and given the cost function $$C{\left({\left\{{\mathbf{W}}_m\right\}}, {\mathbf{H}}\right)} = D_\beta{\left({\mathbf{V}} \parallel {\mathbf{U}}\right)} \text{,}$$ we seek to find the multiplicative equivalents of the iterative update rules for gradient descent:
$$\begin{aligned}
{\mathbf{W}}_m^{t + 1} &= {\mathbf{W}}_m^t - \kappa \, \frac{\partial}{\partial {\mathbf{W}}_m^t} \, C{\left({\left\{{\mathbf{W}}_m^t\right\}}, {\mathbf{H}}^t\right)} \text{,} \label{eq:w} \\
{\mathbf{H}}^{t + 1} &= {\mathbf{H}}^t - \mu \, \frac{\partial}{\partial {\mathbf{H}}^t} \, C{\left({\left\{{\mathbf{W}}_m^t\right\}}, {\mathbf{H}}^t\right)} \text{,} \label{eq:h}\end{aligned}$$
\[eq:gradient\_descent\]
where (\[eq:w\]) and (\[eq:h\]) alternate at each iteration ($t \geqslant 0$). The step sizes $\kappa$ and $\mu$ are allowed to change at every iteration.
Multiplicative Updates Rules
----------------------------
Computing the partial derivatives of $C$ w.r.t. ${\mathbf{W}}_m$ and ${\mathbf{H}}$, and choosing appropriate values for $\kappa$ and $\mu$, the iterative update rules from (\[eq:gradient\_descent\]) become multiplicative in the form
$$\begin{aligned}
{\mathbf{W}}_m^{t + 1} &= {\mathbf{W}}_m^t \circ {\left[{{\mathbf{U}}^{t^{\circ{\left(\beta - 1\right)}}} \, {{{\mathbf{H}}}_{\stackrel{m\:}{\longrightarrow}}}^{t^{\mathsf{T}}}}\right]}^{\circ{-1}} \nonumber \\
&\qquad{} \circ {\left[{{\mathbf{V}} \circ {\mathbf{U}}^{t^{\circ{\left(\beta - 2\right)}}}}\right]} \, {{{\mathbf{H}}}_{\stackrel{m\:}{\longrightarrow}}}^{t^{\mathsf{T}}} \text{,} \label{eq:mur_wm} \\
{\mathbf{H}}^{t + 1} &= {\mathbf{H}}^t \circ {\left[{\sum\nolimits_m{{\mathbf{W}}_m^{t^{\mathsf{T}}} \, {{\widetilde{{\mathbf{U}}}}_{\stackrel{\:m}{\longleftarrow}}}^{t^{\circ{\left(\beta - 1\right)}}}}}\right]}^{\circ{-1}} \nonumber \\
&\qquad{} \circ \sum\nolimits_m{{\mathbf{W}}_m^{t^{\mathsf{T}}} \, {\left[{{{{\mathbf{V}}}_{\stackrel{\:m}{\longleftarrow}}} \circ {{\widetilde{{\mathbf{U}}}}_{\stackrel{\:m}{\longleftarrow}}}^{t^{\circ{\left(\beta - 2\right)}}}}\right]}} \text{,} \label{eq:mur_h} \end{aligned}$$
\[eq:multiplicative\]
for $m = 0, 1, \dots, M - 1$, with $$\widetilde{{\mathbf{U}}}^{t} = \sum\nolimits_m{{\mathbf{W}}_m^{t+1} \, {{{\mathbf{H}}}_{\stackrel{m\:}{\longrightarrow}}}^t} \text{,}
\label{eq:mur_u}$$ where $\circ$ denotes the Hadamard, i.e. entrywise, product, ${\cdot}^{\circ p}$ denotes entrywise exponentiation, and $\cdot^{\circ{-1}}$ stands for the entrywise inverse. The ${{\cdot}_{\stackrel{\:m}{\longleftarrow}}}$ operator is the left-shift counterpart of the right-shift operator. The details of the derivation of (\[eq:multiplicative\]) can be found in the Appendix.
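The updates in (\[eq:multiplicative\]), together with (\[eq:mur\_u\]), translate almost literally into code. The following NumPy sketch reuses `shift_right` and `cnmf_reconstruct` from the earlier sketch; the left-shift helper, the small constant `eps` that guards against division by zero, and all variable names are ours, and strictly positive entries are assumed:

```python
import numpy as np

def shift_left(A, m):
    """Columnwise left-shift of A by m positions, zero-filling the vacated columns."""
    if m == 0:
        return A.copy()
    S = np.zeros_like(A)
    S[:, :-m] = A[:, m:]
    return S

def beta_cnmf_step(V, W, H, beta, eps=1e-12):
    """One iteration of the multiplicative beta-CNMF updates.

    V is the K-by-N data matrix, W a list of M K-by-I weight matrices and
    H the I-by-N coefficient matrix; the updated (W, H) pair is returned.
    """
    M = len(W)
    U = cnmf_reconstruct(W, H) + eps              # U^t, built from W^t and H^t
    W_new = []
    for m in range(M):                            # every W_m uses the same H^t and U^t
        Hm = shift_right(H, m)
        num = (V * U**(beta - 2)) @ Hm.T
        den = (U**(beta - 1)) @ Hm.T + eps
        W_new.append(W[m] * num / den)
    # U-tilde^t is rebuilt from the *updated* weights and the old H
    Ut = cnmf_reconstruct(W_new, H) + eps
    # H update following the equations above (W_m^t together with U-tilde^t);
    # exponentiation precedes the left shift so that the zero-filled columns
    # contribute nothing to the sums running past n = N
    num = sum(W[m].T @ shift_left(V * Ut**(beta - 2), m) for m in range(M))
    den = sum(W[m].T @ shift_left(Ut**(beta - 1), m) for m in range(M)) + eps
    H_new = H * num / den
    return W_new, H_new
```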
Relation to Previous Works
--------------------------
Several multiplicative updates for the CNMF can be found in the existing literature using different loss functions. In [@Smaragdis2004], the loss function is stated as $$L{\left({\mathbf{V}}, {\mathbf{U}}\right)} = \sqrt{\sum_{k=1}^K\sum_{n=1}^N{{{{\left|{v_{kn} \, \log{\frac{v_{kn}}{u_{kn}}} - v_{kn} + u_{kn}}\right|}}}^2}}
\label{eq:frobenius_loss}$$ and the corresponding update rules for ${\mathbf{W}}_m$ and ${\mathbf{H}}$ are
$$\begin{aligned}
{\mathbf{W}}_m^{t + 1} &= {\mathbf{W}}_m^t \circ {\left[{{\mathbf{1}} \, {{{\mathbf{H}}}_{\stackrel{m\:}{\longrightarrow}}}^{t^{\mathsf{T}}}}\right]}^{\circ{-1}} \circ {\left[{{\mathbf{V}} \circ {\mathbf{U}}^{t^{\circ{-1}}}}\right]} \, {{{\mathbf{H}}}_{\stackrel{m\:}{\longrightarrow}}}^{t^{\mathsf{T}}} \text{,} \label{eq:mu_1_wm} \\
{\mathbf{H}}^{t + 1} &= {\mathbf{H}}^t \circ {\left[{{{\mathbf{W}}_m^{t^{\mathsf{T}}} \, {\mathbf{1}}}}\right]}^{\circ{-1}} \circ {{\mathbf{W}}_m^{t^{\mathsf{T}}} \, {\left[{{{{\mathbf{V}}}_{\stackrel{\:m}{\longleftarrow}}} \circ {{\widetilde{{\mathbf{U}}}}_{\stackrel{\:m}{\longleftarrow}}}^{t^{\circ{-1}}}}\right]}} \text{,} \label{eq:mu_1_h} \end{aligned}$$
\[eq:mu\_1\]
where the ${\mathbf{1}}$-matrix is of size $K$-by-$N$ with ${\begin{bmatrix}{\mathbf{1}}\end{bmatrix}}_{kn} = 1$. At $t$, for every $m$, ${\mathbf{H}}$ is updated first using ${\mathbf{W}}_m^{t}$ and subsequently ${\mathbf{W}}_m^{t + 1}$ is computed from the updated ${\mathbf{H}}$, or vice versa. In [@Smaragdis2004] it is mentioned and more explicitly stated in [@Smaragdis2007] that to avoid bias in ${\mathbf{H}}$ towards ${\mathbf{W}}_{M - 1}$ it is best to first update all $\left\{{\mathbf{W}}_m\right\}$ using ${\mathbf{H}}^t$ and to update ${\mathbf{H}}$ according to $$\begin{aligned}
\overline{{\mathbf{H}}}^{t + 1} &= \frac{1}{M} \sum_{m=0}^{M-1}{\mathbf{H}}^t \circ {\left[{{{\mathbf{W}}_m^{{t+1}^{\mathsf{T}}} \, {\mathbf{1}}}}\right]}^{\circ{-1}} \\
&\qquad{} \circ {{\mathbf{W}}_m^{{t+1}^{\mathsf{T}}} \, {\left[{{{{\mathbf{V}}}_{\stackrel{\:m}{\longleftarrow}}} \circ {{\widetilde{{\mathbf{U}}}}_{\stackrel{\:m}{\longleftarrow}}}^{t^{\circ{-1}}}}\right]}} \text{.}
\end{aligned}
\label{eq:mu_2_h}$$ Comparing (\[eq:mu\_1\]) and (\[eq:mu\_2\_h\]) with (\[eq:multiplicative\]), it can be noted that both update rules have the same factors for $\beta = 1$. The respective loss function in this case is the generalized Kullback–Leibler divergence $D_1{\left({\mathbf{V}} \parallel {\mathbf{U}}\right)}$ and not (\[eq:frobenius\_loss\]). Moreover, the ${\mathbf{1}}$-matrix in (\[eq:mu\_1\]) is not aligned with the ${\mathbf{U}}$-matrix. Given that $\beta = 1$, (\[eq:mu\_1\]) and (\[eq:multiplicative\]) are identical for $M = 1$, i.e. when the NMF is nonconvolutional. For $M > 1$, (\[eq:mu\_1\]) is equal to the updates in [@Lee2001] for $D_1{\left({\mathbf{V}} \parallel {\mathbf{W}} \, {\mathbf{H}}\right)}$ for different ${\mathbf{W}}_m$-matrices where the ${\mathbf{H}}$-matrix is time-aligned via $m$. Eqs. (\[eq:mu\_1\_h\]) and (\[eq:mur\_h\]) are also different for $M > 1$ because, unlike (\[eq:mur\_h\]), (\[eq:mu\_1\_h\]) is an update derived from a nonconvolutional model. $\overline{{\mathbf{H}}}$ in (\[eq:mu\_2\_h\]) brings out the central tendency of the elements in ${\mathbf{H}}$ but does not make the factorization in the original loss convolutional. For all the reasons given above, the updates in (\[eq:mu\_1\_h\]) and (\[eq:mu\_2\_h\]) are at best an approximation of the update in (\[eq:mur\_h\]) for $\beta = 1$.
In [@Schmidt2006], multiplicative updates are given for a CNMF in 2D (time and frequency) with the (generalized) Kullback–Leibler divergence and the squared Euclidean distance as the loss or cost function. In the dimension of time, the updates are very much the same as the updates in (\[eq:multiplicative\]) for $\beta = 2$. For $\beta = 1$, there is the minor difference that the ${\mathbf{1}}$-matrix is not aligned with the ${\mathbf{U}}$-matrix, just like in (\[eq:mu\_1\]).
Other multiplicative updates for a CNMF with the squared Euclidean distance can be found in [@Wang2009]. In essence, they are derived in the exact same manner as the updates in (\[eq:mu\_1\]), but for a different loss function, which is $D_2{\left({\mathbf{V}} \parallel {\mathbf{U}}\right)}$. Thus, the updates are equal to the ones in [@Lee2001] with ${\mathbf{W}} = {\mathbf{W}}_m$, ${\mathbf{U}}$ as in (\[eq:cnmf\]) and ${\mathbf{H}}$ time-aligned. For the same reasons that the updates in (\[eq:mu\_1\_h\]) and (\[eq:mu\_2\_h\]) are approximative of the update in (\[eq:mur\_h\]) for $\beta = 1$, the updates in [@Wang2009] are approximative of the update in (\[eq:mur\_h\]) for $\beta = 2$. Beyond that, $\widetilde{{\mathbf{U}}}$ is updated more efficiently as $$\widetilde{{\mathbf{U}}}^t = \widetilde{{\mathbf{U}}}^t + {\left({\mathbf{W}}_m^{t+1} - {\mathbf{W}}_m^{t}\right) {{{\mathbf{H}}}_{\stackrel{m\:}{\longrightarrow}}}^t}$$ in between the updates of ${\mathbf{W}}_m$ and ${\mathbf{W}}_{m+1}$.
Interpretation
--------------
The two update rules given in (\[eq:multiplicative\]) are of significant value because, apart from being exact:
- They are multiplicative, and thus, they converge fast and are easy to implement.
- Eq. (\[eq:mur\_wm\]) extends the update rule of the $\beta$-NMF for ${\mathbf{W}}$ to a set of $M$ weight matrices that are linked through a convolution operation. It also extends the corresponding update rule of the existing convolutional NMFs with the squared Euclidean distance or the generalized Kullback–Leibler divergence to the family of $\beta$-divergences.
- Eq. (\[eq:mur\_h\]) is even more important, as it yields a complete update of the hidden states at every iteration, taking all $M$ weight matrices into account at once.
The update rule in (\[eq:mur\_h\]) can be viewed as equivalent to a descent in the direction of the Reynolds gradient, which is to take the average over the partial derivatives under the group of time translations (in $m$). The average operator reduces the gradient spread as a function of ${\mathbf{W}}_m$ at each iteration $t$, such that the loss function converges to a single most likely point.
The $\beta$-NMF being referred to here is the heuristic $\beta$-NMF derived in [@Cichocki2006]. It was shown in [@Fevotte2011] to converge faster than a computationally equivalent majorization-minimization (MM) algorithm for $\beta \not\in {\left[1, 2\right]}$ and equally fast in the opposite case. The heuristic updates were proven to converge for $\beta \in {\left[0, 2\right]}$, which is the interval of practical value.
![image](simulation.pdf){width="\textwidth"}
Simulation
==========
In this section, we compare our proposed updates with the existing ones in terms of convergence behavior and run time for 1000 iterations. To that end, we generate 100 distinct ${\mathbf{V}}$-matrices from $M = 16$ $\chi^2$-distributed ${\mathbf{W}}_m$-matrices, $$w_{ki}(m) = \sum_{p = 1}^2{w_{{ki}_p}^2(m)} \sim \chi^2_2 \qquad w_{{ki}_p}(m) \sim \mathcal{N}{(0, 1)} \text{,}$$ and a uniformly distributed ${\mathbf{H}}$-matrix, $$h_{in} \sim \mathcal{U}{(0, 1)} \text{.}$$ The factorizations are repeated with 10 random initializations of $\left\{{\mathbf{W}}_{m}^{t_0}\right\}$ and ${\mathbf{H}}^{t_0}$ with non-zero entries. The results shown in Fig. \[fig:simulation\] thus are computed over ensembles of 1000 losses at each iteration. The number of visible and hidden variables is $K = 1000$ and $I = 10$, and the number of realizations (time samples) is $N = 100$. The run time was measured on an Intel Xeon E5-2637 v3 CPU at 3.5 GHz with 16 GB of RAM. In Table \[tab:runtime\], the figures represent the averages over 1000 runs put into relation to the average run time of the proposed updates for 1000 iterations.
  ------------- ---------- ---------- ------------------ ---------------
                 Biased     Average    Schmidt *et al.*   Wang *et al.*
  $\beta = 0$    3.42       1.05       1                  2.11
  $\beta = 1$    2.47       0.72       0.70               0.93
  $\beta = 2$    2.74       1.04       1                  1.15
  ------------- ---------- ---------- ------------------ ---------------
: Average Run Time of the Existing Convolutional Updates Relative to the Proposed Updates for Different Betas[]{data-label="tab:runtime"}
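For reference, the data and the factorization loop described above can be reproduced along the following lines (a sketch reusing the helper functions from the previous sketches; the seed, the initialization ranges, and all names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)
K, I, N, M = 1000, 10, 100, 16

# M chi-squared (two degrees of freedom) weight matrices and uniform hidden states
W_true = [rng.standard_normal((K, I))**2 + rng.standard_normal((K, I))**2
          for _ in range(M)]
H_true = rng.uniform(0.0, 1.0, size=(I, N))
V = cnmf_reconstruct(W_true, H_true)          # synthetic data matrix

# strictly positive random initialization, then 1000 multiplicative iterations
W = [rng.uniform(0.1, 1.0, size=(K, I)) for _ in range(M)]
H = rng.uniform(0.1, 1.0, size=(I, N))
for _ in range(1000):
    W, H = beta_cnmf_step(V, W, H, beta=1)
print(beta_divergence(V, cnmf_reconstruct(W, H) + 1e-12, beta=1))
```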
In Fig. \[fig:simulation\] it can be seen that the biased updates are clearly least stable under the divergence for which they were meant in the first place ($\beta = 1$). Already in [@Schmidt2006], convergence issues with these updates were reported. For $\beta \in {\left\{0, 2\right\}}$, at fewer than 100 iterations they can converge faster because ${\mathbf{H}}$ and ${\mathbf{U}}$ are updated $M$ times per iteration, which explains the significant increase in run time. Between 100 and 1000 iterations, other updates show better performance. Wang’s updates are similar in performance to Smaragdis’ average updates in spite of the additional intermediate updates of ${\mathbf{U}}$ for $\beta = 1$, and slightly worse otherwise. As stated above, Schmidt’s updates are the same as ours for $\beta \neq 1$, and so is their behavior. For $\beta = 1$, our updates show the smallest variance overall and yield the lowest cost below 100 iterations, which is a typical upper bound in practice. The longer run time is due to the shifting of the ${\mathbf{1}}$-matrix. For $\beta \in {\left\{0, 2\right\}}$, the loss distributions of our, Wang’s, and Smaragdis’ average updates look like they have the same mean but a slightly different standard deviation. To test the hypothesis that the costs are statistically equivalent with respect to the mean, we employ Welch’s $t$-test. The $p$-values suggest that the null hypothesis can be rejected almost surely for $\beta = 0$, whereas for $\beta = 2$ in general it cannot.
Conclusion
==========
To the best of our knowledge, our letter is the only one to provide a complete and exact derivation of the multiplicative updates for the convolutional NMF. Above, the cost function is generalized to the family of $\beta$-divergences. It is shown by simulation that the updates are stable and that their behavior is consistent for $\beta \in {\left\{0, 1, 2\right\}}$.
Let $u_{kn} = \sum\nolimits_{i, m}{w_{ki}(m) \, h_{i, n - m}}$ and ${\mathbf{U}} = {\begin{bmatrix} u_{kn} \end{bmatrix}} \in {\mathbb{R}}^{K \times N}$. Then, for any $p \in {\left\{1, 2, \dots, K\right\}}$, $q \in {\left\{1, 2, \dots, I\right\}}$, and $r \in {\left\{0, 1, \dots, M - 1\right\}}$: $$\begin{aligned}
&\frac{\partial}{\partial {w}_{pq}(r)} \, C{\left({\left\{{\mathbf{W}}_m\right\}}, {\mathbf{H}}\right)} \nonumber \\
&\qquad{} = \frac{\partial}{\partial {w}_{pq}(r)} \, \sum\nolimits_{n}{\left(\frac{u_{pn}^{\beta}}{\beta} - \frac{v_{pn} \, u_{pn}^{\beta - 1}}{\beta - 1} \right)} \nonumber \\
&\qquad{} = \sum\nolimits_n{{\left(u^{\beta - 1}_{pn} - v_{pn} \, u_{pn}^{\beta - 2}\right)} \, h_{q, n - r} } \text{.} \end{aligned}$$ Choosing $\kappa$ from (\[eq:w\]) as $$\kappa = \frac{w_{pq}(r)}{\sum\nolimits_n{u_{pn}^{\beta - 1} \, h_{q, n - r}}} \text{,}$$ leads to the first update rule $$w_{ki}^{t + 1}(m) = w_{ki}^t(m) \, \frac{\sum\nolimits_n{v_{kn} \, u_{kn}^{t^{\beta - 2}} \, h_{i, n - m}^t}}{\sum\nolimits_n{u_{kn}^{t^{\beta - 1}} \, h_{i, n - m}^t}} \text{.} \qquad \blacksquare$$ Further, for any $p \in {\left\{1, 2, \dots, I\right\}}$ and $q \in {\left\{1, 2, \dots, N\right\}}$: $$\begin{aligned}
&\frac{\partial}{\partial h_{pq}} \, C{\left({\left\{{\mathbf{W}}_m\right\}}, {\mathbf{H}}\right)} \nonumber \\
&\qquad{} = \frac{\partial}{\partial h_{pq}} \, \sum\nolimits_{k, n}{\left(\frac{u_{kn}^{\beta}}{\beta} - \frac{v_{kn} \, u_{kn}^{\beta - 1}}{\beta - 1}\right)} \text{.}
\label{eq:partial_C_h}\end{aligned}$$ It is straightforward to show that $$\frac{\partial}{\partial h_{pq}} \, u_{kn} = w_{kp}{\left(n - q\right)}
\label{eq:partial_u_h}$$ by setting $n - m = q \leadsto m = n - q$. As a result, plugging in $q + m$ for $n$ in (\[eq:partial\_C\_h\]) and using (\[eq:partial\_u\_h\]), we finally obtain $$\begin{aligned}
&\frac{\partial}{\partial h_{pq}} \, C{\left({\left\{{\mathbf{W}}_m\right\}}, {\mathbf{H}}\right)} \nonumber \\
&\qquad{} = \sum\nolimits_{k, m}{w_{kp}(m) \, {\left(u^{\beta - 1}_{k, q + m} - v_{k, q + m} \, u_{k, q + m}^{\beta - 2}\right)}} \text{.}
\end{aligned}$$ Choosing $\mu$ from (\[eq:h\]) as $$\mu = \frac{h_{pq}}{\sum\nolimits_{k, m}{w_{kp}(m) \, u_{k, q + m}^{\beta - 1}}} \text{,}$$ leads to the second update rule $$h_{in}^{t + 1} = h_{in}^t \, \frac{\sum\nolimits_{k, m}{w_{ki}^t(m) \, v_{k, n + m} \, u_{k, n + m}^{t^{\beta - 2}}}}{\sum\nolimits_{k, m}{w_{ki}^t(m) \, u_{k, n + m}^{t^{\beta - 1}}}} \text{.} \qquad \blacksquare$$
---
abstract: 'The article presents calculated dissociative recombination (DR) rate coefficients for H$_3^+$. The previous theoretical work on H$_3^+$ was performed using the adiabatic hyperspherical approximation to calculate the target ion vibrational states and it considered just a limited number of ionic rotational states. In this study, we use accurate vibrational wave functions and a larger number of possible rotational states of the H$_3^+$ ground vibrational level. The DR rate coefficient obtained is found to agree better with the experimental data from storage-ring experiments than the previous theoretical calculation. We present evidence that excited rotational states could be playing an important role in those experiments for collision energies above 10 meV. The DR rate coefficients calculated separately for ortho- and para-H$_3^+$ are predicted to differ significantly at low energy, a result consistent with a recent experiment. We also present DR rate coefficients for vibrationally-excited initial states of H$_3^+$, which are found to be somewhat larger than the rate coefficient for the ground vibrational level.'
author:
- 'Samantha Fonseca dos Santos$^\dag$, Viatcheslav Kokoouline$^\dag$, and Chris H. Greene$^\ddag$'
title: 'Dissociative recombination of H$_3^+$ in the ground and excited vibrational states.'
---
Introduction
============
Dissociative recombination (DR) of the simplest polyatomic ion H$_3^+$ $$\mathrm{H}_3^+ + e^- \longrightarrow \mathrm{H}_2+\mathrm{H}\ \mathrm{or}\ \mathrm{H}+\mathrm{H}+\mathrm{H}$$ has been studied for several decades both in experiment and theory [@oka_general; @larsson00; @kokoouline01; @kokoouline03b]. The measured rate of the reaction is relatively fast [@larsson00; @sundstrom94; @jensen01; @larsson93; @mccall03; @mccall04; @kreckel02; @kreckel05] for electron energies below 1 eV, which was eventually attributed to the strong Jahn-Teller coupling between vibrational motion of the ion and the incident $p$-wave in the electronic continuum [@kokoouline01; @kokoouline03b; @kokoouline03a]. Currently, there is general agreement between theory [@kokoouline03b; @kokoouline03a; @orel93] and most recent experiments [@sundstrom94; @larsson00; @jensen01; @larsson93; @mccall03; @mccall04; @kreckel02; @kreckel05] for the DR rate coefficient in H$_3^+$. However, the detailed energy dependence of the theoretical rate coefficient [@kokoouline03b; @kokoouline03a] in the range 0-2 eV exhibits differences from the rate coefficient measured in recent high resolution storage ring experiments [@mccall03; @mccall04; @kreckel05]. One prominent point of disagreement is the much more pronounced resonance structure in the theoretical rate coefficient; this plethora of resonances is associated with Rydberg states of the neutral molecule H$_3^*$. Although similar resonance structure is visible in the experimental data, it is much less pronounced.
Dissociative recombination of H$_3^+$ is a four-body problem: Because the DR process starts with an electron-H$_3^+$ collision, one must account for the motion of the electron and its coupling with the molecular degrees of freedom. In order to represent Jahn-Teller coupling between electronic and vibrational motion, one must take into account at least two degrees of freedom of vibrational motion (two hyperangles in our approach). Finally, the third vibrational coordinate (the hyper-radius) should be included in order to describe the dissociative channel. Therefore, the theoretical approach includes several ingredients [@kokoouline03b; @kokoouline03a] and a number of approximations were made to simplify the calculations in our previous theoretical study [@kokoouline03b; @kokoouline03a].
One of the important approximations used in Refs. [@kokoouline03b; @kokoouline03a] is the adiabatic hyperspherical approximation: motion in the hyper-radial coordinate was treated as adiabatic compared to motion in the hyperangles, i.e. motion in the hyperangles was considered to be much faster than motion in the hyper-radius. Correspondingly, vibrational states $\Phi_v$ of the ion have been represented as simple products of hyperangular and hyper-radial functions. Somewhat surprisingly, this approximation describes reasonably well (at about the 1% level) vibrational energies of the ion for the low vibrational states in low-lying hyperspherical potential curves (see Table \[table:vibr\_energies\]). For excited vibrational states with energies higher than 1 eV above the ground rovibrational level, the error increased to the vicinity of 10 meV. Such highly excited vibrational states may support Rydberg states of the neutral H$_3^*$ molecule in the energy region corresponding to the low-energy DR process under consideration. Accordingly, the absolute positions of such Rydberg states were presumably calculated with an error of about 10 meV. We should also mention that the highest vibrational levels that must be included in our theoretical treatment (in order to represent the initial dissociation of the neutral molecule) have around 4 eV of vibrational energy. Another source of error in the position of Rydberg states is the neglected energy-dependence of quantum defects used in Refs. [@kokoouline03b; @kokoouline03a]. Analysis of an [*ab initio*]{} calculation [@mistrik00] shows that the quantum defects are in fact only weakly energy-dependent close to the equilibrium geometry of the molecular ion. The maximum error in positions of Rydberg states for $n\approx 2-3$ associated with this approximation is estimated to be around 10-15 meV, at least for Rydberg states close to that equilibrium configuration. Certain Rydberg states appear in the DR spectrum as resonances and, therefore, play an important role in the detailed comparison of the theoretical DR rate coefficient with the high-resolution experimental data. On the other hand, the DR rate coefficient that has been thermally averaged over a Maxwell-Boltzmann distribution should be comparatively insensitive to the detailed positions of Rydberg states, as long as the average level density and the resonance widths are described correctly. Indeed, the theoretical thermal rate coefficient calculated in the previous study [@kokoouline03b] agrees with the experimentally-measured thermal rate coefficient.
The adiabatic hyperspherical approximation implemented previously for the ionic vibrational eigenstate calculation neglected all non-adiabatic coupling between different adiabatic channels. This approximation could also adversely affect the calculated DR rate coefficient since the main DR mechanism in H$_3^+$ is indirect: neglect of some non-adiabatic effects might therefore be expected to cause an underestimation of the theoretical DR rate coefficient. However, the most important coupling responsible for the high DR rate coefficient of H$_3^+$ is non-Born-Oppenheimer Jahn-Teller coupling, which has been accounted for in the previous study.
Finally, in the previous study only a few rotational levels of the initial state of the ion were included up to $N^+=3$. (The rotational state (3,1) was not included. Here and below we use the generally accepted notations for rotational states of the ion [@lindsay01], where the two numbers in () are the total angular momentum $N^+$ of the ion and its projection $K^+$ on the ionic symmetry axis.) The inclusion of a larger number of rotational states has since been recognized as possibly important due to the following: If the electron energy is high enough, there is a large probability that the H$_3^+$ ion is excited into a higher rotational level [@kokoouline03b; @faure06]. In fact, the probability of this rotational excitation is high enough to be competitive with the DR process. Thus, for electron energies above 10 meV, higher rotational states of the ion could be populated in the storage ring even though, initially, the ionic beam has been prepared in the ground rovibrational state. Excitations can happen, for instance, when the ions pass through the toroidal region or the electron cooler [@mccall04].
The present study improves upon two of the three approximations discussed above. First, we calculate accurate vibrational states of the ion instead of relying on the adiabatic hyperspherical approximation; the adiabatic hyperspherical [*representation*]{} is still utilized, but since channel coupling is now included, the calculated eigenspectrum can be made arbitrarily accurate, in principle. Second, we account for a larger number of initial rotational states. Finally, we calculate the rate coefficients of DR processes that start from two different excited vibrational states of H$_3^+$.
The article is organized as follows. Section II briefly summarizes our theoretical approach and, in particular, the method to obtain accurate vibrational energies and wave functions of the ion. Section \[sec:averaging\] discusses the various averaging procedures that must be performed on the raw theoretical DR rate coefficient in order to compare theory with the data from existing storage ring experiments. Section \[sec:results\] presents our results and Section \[sec:conclusion\] gives our conclusions.
Theoretical approach
====================
The procedure for calculating the cross section is very similar to that described in Ref. [@kokoouline05]. Here we briefly summarize the main steps.
The treatment is based on the construction of the multi-channel electron-H$_3^+$ scattering matrix $\cal S$. The asymptotic channels $|i\rangle$ of the matrix correspond to different rovibrational levels of the ion. After a collision with the electron the rovibrational state of the ion may change: $$\label{eq: collision process}
e^- + {\rm H}_{3}^{+}(i) \longrightarrow e^- + {\rm H}_{3}^{+}(i')\,.$$ However, the collision conserves the overall symmetry of the system. (That is, $\cal S$ is diagonal in the irreducible representation $\Gamma_{tot}$ of the symmetry group $D_{3h}$ of the Hamiltonian.) We assume that the electronic state of the ion is the ground $^1A_1'$ state. Since protons are fermions, for H$_3^+$ in the ground electronic state, the allowed irreducible representations of the total nuclear wave function (including space and nuclear spin coordinates) are $A_2'$ and $A_2''$ of the $D_{3h}$ symmetry group. When the scattering matrix ${\cal S}_{i,i'}$ is constructed, highly excited vibrational levels of H$_3^+$ are included. These vibrational levels in fact represent the discretized vibrational continuum, and they have finite lifetimes with respect to dissociation. The energy of the incident electron is not high enough for these levels to represent open channels for dissociative ionization, but the electron can be captured into the Rydberg states attached to the highly excited levels. If this happens, the system dissociates instead of ionizing, because such low principal quantum number Rydberg states are locally open for dissociation. This causes the electron-ion scattering matrix $S$ to be non-unitary, and the ‘defect’ from unitarity of the relevant columns of $S$ can be identified with the dissociation probability.
The total wave function of the ion-electron system is constructed by taking into account all symmetry restrictions determined by the two allowed irreducible representations of the system, which is discussed in detail in Refs.[@kokoouline03b; @kokoouline03a]. The wave functions of the $A_2'$ and $A_2''$ irreducible representations of the channel states $|i\rangle$ are constructed from the product of the rotational, vibrational, nuclear, and electronic degrees of freedom of the system (see also Eqs. (2) of Ref. [@kokoouline03a]): $$\begin{aligned}
\label{eq:Total Wave Function}
|i\rangle=\hat P^{sym}\Phi_{total}\,,\nonumber\\
\Phi_{total} = \Phi_{rot}\Phi_{vib}\Phi_{ns}\Phi_{el}\,.\end{aligned}$$ The operator $\hat P^{sym}$ projects the product $\Phi_{total}$ on the corresponding irreducible representation, $A_2'$ or $A_2''$. Each factor in the second equation is calculated by the diagonalization of the respective Hamiltonian, except $\Phi_{el}$. In the above equation, $\Phi_{rot}$ is the rotational wave function of the ion, $\Phi_{vib}$ is the vibrational wave function, $\Phi_{ns}$ is the nuclear spin wave function, and $\Phi_{el}$ represents the wave function of the incoming electron. The functions $\Phi_{el}$ do not diagonalize completely the clamped-nucleus electronic Hamiltonian; the matrix of the electronic Hamiltonian represented by $\Phi_{el}$ has non-diagonal elements responsible for Jahn-Teller coupling in H$_3$ [@mistrik00; @staib90a; @staib90b; @stephens94; @stephens95].
The rotational, nuclear spin, and electronic functions in the product $\Phi_{total}$ are represented in the same way as in our previous study, Ref. [@kokoouline03b]. But the vibrational part $\Phi_{vib}$ is obtained differently here, because we no longer utilize the hyperspherical adiabatic approximation [@Macek68; @Fano_review; @Lin_review]. In that approximation, the non-adiabatic couplings between different hyperspherical adiabatic states of the same vibrational symmetry were entirely neglected, an approximation that works reasonably well for low vibrational levels of the H$_3^+$ ion. In this study we use more accurate vibrational states calculated using the slow variable discretization (SVD) approach [@tolstikhin96; @kokoouline2006], where non-adiabatic couplings are taken into account. Table \[table:vibr\_energies\] compares the accuracy of the present approach with that of the hyperspherical adiabatic approximation. The calculations of H$_3^+$ vibrational states by Jaquet [*et al.*]{} [@jaquet98], obtained for the same Born-Oppenheimer ionic surface that we use here [@cencek98], are taken as an ‘exact’ reference.
Once $\Phi_{total}$ are known, the matrix elements ${\cal S}_{i,i'}$ are calculated from the scattering matrix $S_{\Lambda,\Lambda'}(\cal Q)$, which depends on the three distances between the protons in H$_3^+$ and describes the $e^-$+H$_3^+$ collision in the molecular frame; the appropriate quantum numbers are the projection $\Lambda$ of the electronic orbital momentum on the ionic principal axis and the set of internuclear coordinates $(\cal Q)$. As in the previous study [@kokoouline03b; @kokoouline03a], we consider only the ‘$p$-wave’ of the electron when it moves beyond the range of the ionic core. The nonspherical nature of the electron-ion interaction potential undoubtedly mixes other electronic orbital momenta when the electron is at short range, but scattering calculations have shown that the probability for an incident $p$-wave electron to scatter into an $s$ or $d$ orbital momentum is quite low in this near-threshold energy range. The matrix $S_{\Lambda,\Lambda'}(\cal Q)$ is obtained from the reaction matrix $K^0_{\Lambda,\Lambda'}(\cal Q)$ given by formulas in Refs. [@staib90a; @staib90b]. As was mentioned above, the electronic Hamiltonian and, correspondingly, the matrices related to body-frame scattering, i.e. $S$ and $K$, are not diagonal. The nonzero off-diagonal elements $K^0_{1,-1}$/$S_{1,-1}$ are due to the Jahn-Teller coupling.
When the total energy of the $e^-$+H$_3^+$ system is not high enough for all the channels $|i\rangle$ to be energetically open for ionization, the usual situation, the physically meaningful scattering matrix ${\cal S}^{phys}(E)$ is obtained from ${\cal S}_{i,i'}$ by the standard closed-channel elimination procedure of multi-channel quantum defect theory (MQDT) [@seaton_review; @aymar96]. The dissociative recombination rate coefficient is then calculated using the unitarity ‘defect’ of the corresponding columns of ${\cal S}^{phys}$ [@kokoouline03b; @kokoouline03a]. In order to compare our results with the data from storage ring experiments, we carry out a number of averaging procedures in order to model the experimental conditions, as detailed below.
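For illustration, the closed-channel elimination can be coded along the following lines, using the standard MQDT expression ${\cal S}^{phys} = {\cal S}^{oo} - {\cal S}^{oc}\left[{\cal S}^{cc}-e^{-2i\beta(E)}\right]^{-1}{\cal S}^{co}$ with $\beta_i(E)=\pi/\sqrt{2(E_i-E)}$ (atomic units) for each closed channel $i$ with threshold $E_i$; the sketch is generic, written in NumPy, and all names are ours rather than taken from Refs. [@seaton_review; @aymar96]:

```python
import numpy as np

def eliminate_closed_channels(S, E, thresholds, open_mask):
    """Physical scattering matrix from the short-range S-matrix by the
    standard MQDT closed-channel elimination (atomic units).

    S          : short-range scattering matrix (n x n, complex)
    E          : total energy
    thresholds : channel threshold energies E_i (length n)
    open_mask  : boolean array, True for channels open at energy E
    """
    thresholds = np.asarray(thresholds, dtype=float)
    o = np.where(open_mask)[0]
    c = np.where(~np.asarray(open_mask))[0]
    Soo, Soc = S[np.ix_(o, o)], S[np.ix_(o, c)]
    Sco, Scc = S[np.ix_(c, o)], S[np.ix_(c, c)]
    # accumulated phases beta_i = pi * nu_i of the closed (Rydberg) channels
    nu = 1.0 / np.sqrt(2.0 * (thresholds[c] - E))
    beta = np.pi * nu
    return Soo - Soc @ np.linalg.solve(Scc - np.diag(np.exp(-2j * beta)), Sco)
```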
$v_1v_2^{l_2}$, irrep. adiab. approx. SVD calc. Jaquet [*et al*]{} [@jaquet98]
------------------------ ---------------- ----------- --------------------------------
$00^0\,A_1$ 0 0 0
$10^0\,A_1$ 3188 3177.5 3178.15
$02^0\,A_1$ 4754 4777.9 4778.01
$20^0\,A_1$ 6273 6260.6 6261.81
$03^0\,A_1$ 7382 7275.8 7285.32
$12^0\,A_1$ 7648 7772.9 7769.06
$04^0\,A_1$ 8979 8995.6 9000.58
$30^0\,A_1$ 9248 9255.5 9252.08
$13^3\,A_1$ 10129 9958.6 9963.98
$22^0\,A_1$ 10420 10598. 10590.51
$05^3\,A_1$ 10912. 10915.47
$01^1\,E$ 2516 2521.1 2521.20
$02^2\,E$ 5001 4996.6 4997.73
$11^1\,E$ 5554 5552.9 5553.95
$03^1\,E$ 6978 6999.2 7005.81
$12^2\,E$ 7897 7865.2 7869.82
$21^1\,E$ 8478 8487.3 8487.53
$04^2\,E$ 9131 9096.6 9112.90
$13^1\,E$ 9736 9649.2 9653.42
$04^4\,E$ 9802 9999.2 9996.72
$22^2\,E$ 10677 10646. 10644.59
$05^1\,E$ 10916 10827. 10862.46
$31^1\,E$ 11265 11349. 11322.31
$14^2\,E$ 11739 11656. 11657.69
$05^5\,E$ 12078. 12078.43
$03^3\,A_2$ 7482 7493.2 7491.89
$13^3\,A_2$ 10243 10209.7 10209.55
: Accuracy test of the adiabatic hyperspherical approximation and the improved coupled-channels hyperspherical calculation adopted for the computations presented in this paper. Specifically, this table compares several vibrational energies in cm$^{-1}$ calculated in the present approach with the older adiabatic approximation results and those taken from a full three-dimensional diagonalization [@jaquet98].[]{data-label="table:vibr_energies"}
Calculation of the raw DR rate coefficient and its average {#sec:averaging}
==========================================================
The rate coefficient for dissociative recombination depends on the initial rovibrational state $|rv\rangle$ of the ion. Using the defect of unitarity of the physical scattering matrix ${\cal S}^{phys}$, the DR rate coefficient $\alpha_{rv}$ for a particular rovibrational state $|rv\rangle$ is given by [@kokoouline03b] $$\begin{aligned}
\label{eq:CS}
\alpha_{rv} (E_{el})=\frac{\pi}{\sqrt{2E_{el}}}\sum_{N} \frac{2N+1}{2N^++1}\left(1-\sum_{i=1}^{N_o}{\cal S}_{i, i'}^{phys}(E_{el}){\cal S}_{i', i}^{\dagger phys}(E_{el})\right)\,,\end{aligned}$$ where $E_{el}$ is the kinetic energy of the electron at infinity, $N_o$ is the number of open ionization channels, and the channel index $i'$ of ${\cal S}$ corresponds to the initial $|rv\rangle$ state. The scattering matrix ${\cal S}$ is calculated separately for each total angular momentum $N$ of the ion-electron system, and $N^+$ is the angular momentum of the initial $|rv\rangle$ state. Notice that several values of $N$ may contribute to the DR rate coefficient for the given state $|rv\rangle$.
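A direct numerical transcription of Eq. (\[eq:CS\]) is straightforward once ${\cal S}^{phys}$ is available for each total angular momentum $N$; the sketch below (our own illustration, in atomic units, with hypothetical container names) simply accumulates the unitarity defect of the entrance-channel column.

```python
import numpy as np

def dr_rate_single_state(E_el, N_plus, S_phys_by_N):
    """Schematic evaluation of Eq. (eq:CS), atomic units.

    E_el        : asymptotic electron kinetic energy
    N_plus      : angular momentum N+ of the initial rovibrational state |rv>
    S_phys_by_N : dict {N: (S_phys, i_entrance)} giving the physical scattering matrix
                  and the column index of the entrance channel for that N
    """
    total = 0.0
    for N, (S, i_entrance) in S_phys_by_N.items():
        # Unitarity defect of the entrance column: flux that is not re-emitted into
        # any open ionization channel is counted as dissociative recombination.
        p_dr = 1.0 - np.sum(np.abs(S[:, i_entrance])**2)
        total += (2*N + 1) / (2*N_plus + 1) * p_dr
    return np.pi / np.sqrt(2.0 * E_el) * total
```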
We now address the way we account for the experimental conditions in storage ring experiments, especially the experimental distribution over relative velocities of the ion and electron. In the storage ring experiments, the distribution is not uniform: the parallel component $u_{\parallel}$ of the $e^--$H$_3^+$ relative velocity $\vec u$ has a smaller distribution width than the perpendicular component $u_{\perp}$. The rate coefficient $\alpha_{rv} (E_{el})$ also depends on the rotational level. In the experiments, the initial vibrational state is usually the ground state $\{00^0\}$, but several rotational levels are typically populated. The population of different rotational levels is accounted for by introducing a finite rotational temperature $T_{rv}$ of H$_3^+$, though it should be remembered that this assumption that the ions are in thermodynamic equilibrium at some $T$ has not been explicitly confirmed experimentally. Present generation storage ring experiments measure the DR rate coefficient as a function of the parallel component $E_{\parallel}$ of the total relative energy $E_{el}$ of the ion and electron. The average over the non-uniform electron velocity distribution is then given by the following formula [@kokoouline05] $$\begin{aligned}
\label{eq:averaging_final}
\alpha_{sr}(E_\parallel)=\frac{1}{N_{sr}}\sum_{rv}\int_{-\infty}^{\infty} du_{\parallel}\int_{0}^{\infty} dE_\perp \alpha_{rv}\left[(v_{\parallel}+u_{\parallel})^2/2+E_\perp\right]w_{sr}(rv,T_{rv})\,,\end{aligned}$$ where the normalization constant $N_{sr}$ and the statistical factor $w_{sr}(rv,T_{rv})$ are $$\begin{aligned}
\label{eq:Weights_sr}
N_{sr}=\sum_{rv}\int_{-\infty}^{\infty} du_{||}\int_{0}^{\infty} dE_\perp w_{sr}(rv,T_{rv})\,,\nonumber\\
w_{sr}(rv,T_{rv})=(2I+1)(2N^++1)\exp\left(-\frac{E_{rv}}{kT_{rv}}\right)\exp\left(-\frac{u_{||}^2}{2\Delta E_{\parallel}}\right)\exp\left(-\frac{E_\perp}{\Delta E_{\perp}}\right)\,.\end{aligned}$$ In the above equations, $\Delta E_{\perp}$ and $\Delta E_{||}$ are distribution widths (measured in energy units) for the parallel $\vec u_{\parallel}$ and perpendicular $\vec u_{\perp}$ components of the relative velocity $\vec v=\vec v_{\parallel}+\vec u_{\parallel}+\vec u_{\perp}$; $\vec v_{\parallel}$ represents the center of the velocity distribution, i.e. velocity at which the actual measurements are made in the storage ring experiments: $E_{\parallel}=v_{\parallel}^2/2$. The perpendicular component of the energy is $E_{\perp}=u_{\perp}^2/2$. $I$ is the total nuclear spin, which can be $\frac{1}{2}$ or $\frac{3}{2}$ depending on the rovibrational state $|rv\rangle$; $E_{rv}$ is the energy of the H$_3^+$ rotational state (assuming that the vibrational state is always the same). The sums in the above equations are over all possible rovibrational states $|rv\rangle$ including all symmetries of the rovibrational states that can be populated at a given rotational temperature $T_{rv}$.
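Equations (\[eq:averaging\_final\]) and (\[eq:Weights\_sr\]) amount to a two-dimensional convolution over the flattened (anisotropic) electron velocity distribution combined with a Boltzmann sum over rotational levels. A simple quadrature sketch is shown below; it is our own illustration (atomic units, vectorized callables assumed), not the averaging code actually used for the figures.

```python
import numpy as np

def averaged_rate(alpha_rv, weights, E_rot, kT_rv, E_par, dE_par, dE_perp):
    """Numerical sketch of Eqs. (eq:averaging_final)-(eq:Weights_sr) on simple grids.

    alpha_rv : dict {rv: vectorized callable alpha_rv(E)} raw rate coefficients
    weights  : dict {rv: (2I+1)(2N^+ +1)} rovibrational degeneracy factors
    E_rot    : dict {rv: rotational energy}
    """
    v_par = np.sqrt(2.0 * E_par)
    u = np.linspace(-6*np.sqrt(dE_par), 6*np.sqrt(dE_par), 201)   # parallel velocity grid
    Ep = np.linspace(0.0, 20*dE_perp, 201)                        # perpendicular energy grid
    U, T = np.meshgrid(u, Ep, indexing="ij")
    dist = np.exp(-U**2/(2*dE_par)) * np.exp(-T/dE_perp)          # flattened velocity distribution
    num = den = 0.0
    for rv, a in alpha_rv.items():
        w = weights[rv] * np.exp(-E_rot[rv]/kT_rv)
        num += w * np.trapz(np.trapz(a((v_par + U)**2/2.0 + T) * dist, Ep, axis=1), u)
        den += w * np.trapz(np.trapz(dist, Ep, axis=1), u)
    return num / den
```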
To compare with the storage ring experiments, one also must take into account the so-called toroidal effect, which is due to the geometry of the merged electron and ion beams, as there are two regions where the electrons are bent into or out of a trajectory that is parallel to the ion beam. The experimentally observed rate coefficient with the toroidal effect correction is [@kokoouline05]: $$\begin{aligned}
\label{eq:Toroidal_Correction}
\alpha_{tor}(E_{\parallel})=\alpha_{sr}(E_{\parallel}) + \frac{2}{L}\int_{0}^{l_{bend}}\alpha_{sr}(\tilde{E}_{\parallel}(x))dx\,.\end{aligned}$$ The function $\tilde{E}_{\parallel}(x)$ and the length $l_{bend}$ account for the geometry of the merged electron and ion beams. A detailed discussion can be found in Ref. [@kokoouline05]. After the averaging procedures described above, the DR rate coefficient $\alpha_{tor}(E_{\parallel})$ can be compared with the raw experimental data from the storage ring experiments [@mccall03; @mccall04; @kreckel05].
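Given $\alpha_{sr}(E_\parallel)$ and a model of the detuning function $\tilde E_\parallel(x)$ along the bending regions, Eq. (\[eq:Toroidal\_Correction\]) is a single additional quadrature. The following lines sketch it; `E_tilde` is a hypothetical user-supplied function encoding the geometry of the specific storage ring.

```python
import numpy as np

def toroidal_corrected_rate(alpha_sr, E_par, L, l_bend, E_tilde, npts=401):
    """Sketch of Eq. (eq:Toroidal_Correction); alpha_sr and E_tilde are vectorized callables."""
    x = np.linspace(0.0, l_bend, npts)
    return alpha_sr(E_par) + (2.0 / L) * np.trapz(alpha_sr(E_tilde(x)), x)
```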
The thermally averaged DR rate coefficient relevant to a situation in which the ions and electrons are in common thermal equilibrium at temperature $T$ is calculated from the following integral: $$\begin{aligned}
\label{eq:Thermal Rate}
\alpha_{th}(kT)=\frac{1}{N_{th}}\int_{0}^{\infty}\sum_{rv}\alpha_{rv}(E_{el})w_{th}(rv,kT)\sqrt{E_{el}}dE_{el}\end{aligned}$$ where the normalization constant $N_{th}$ and the statistical factor $w_{th}(rv,kT)$ are $$\begin{aligned}
\label{eq:Weights}
N_{th}=\int_{0}^{\infty}\sum_{rv}w_{th}(rv,kT)\sqrt{E_{el}}dE_{el},\nonumber\\
w_{th}(rv,kT)=(2I+1)(2N^+ +1)e^{-E_{rv}/kT}e^{-E_{el}/kT}\,.\end{aligned}$$
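Numerically, the thermal average of Eqs. (\[eq:Thermal Rate\]) and (\[eq:Weights\]) is a one-dimensional Maxwellian integral combined with a Boltzmann sum over initial states; a minimal sketch (our own, atomic units, vectorized callables assumed) is given below.

```python
import numpy as np

def thermal_rate(alpha_rv, weights, E_rot, kT, E_max_factor=40.0, npts=4000):
    """Sketch of Eqs. (eq:Thermal Rate)-(eq:Weights): thermal average of the raw rate coefficients."""
    E = np.linspace(1e-9, E_max_factor * kT, npts)     # electron-energy grid
    num = den = 0.0
    for rv, a in alpha_rv.items():
        w = weights[rv] * np.exp(-E_rot[rv]/kT) * np.exp(-E/kT)
        num += np.trapz(a(E) * w * np.sqrt(E), E)
        den += np.trapz(w * np.sqrt(E), E)
    return num / den
```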
Results {#sec:results}
=======
![\[fig:alpha\_SR\] Comparison of experimental [@mccall03; @mccall04; @kreckel05] (black circles and red diamonds) and present theoretical (solid line) dissociative recombination rate coefficients. In the theoretical calculation, the rotational temperature is $T_{rv}=$1000 K, the width $\Delta E_{\parallel}$ of the parallel component of the electron energy is 0.1 meV, and $\Delta E_{\perp}=$2 meV.](fig1.eps){width="12cm"}
Figures \[fig:alpha\_SR\], \[fig:alpha\_SR\_dif\_rot\], \[fig:alpha\_ortho\_para\_new\_exp\], and \[fig:alpha\_SR\_dif\_vib\] summarize the computational results of the present study. Fig. \[fig:alpha\_SR\] compares the experimental DR rate coefficient from a recent storage ring experiment [@mccall03; @mccall04] with the present calculation. Overall agreement with experiment is better than in the previous theory (see Fig. 6 in Ref. [@kokoouline05]). This is due to two factors: In the present treatment, we use more accurate vibrational wave functions, which are calculated using SVD, i.e. without the hyperspherical adiabatic approximation. The second improvement is due to the larger rotational temperature $T_{rv}$ and the larger number of rotational states that are taken into account in the averaging formula Eq. (\[eq:averaging\_final\]). The energy of the highest rotational level (5,1) included in the present calculation is 1250.3 cm$^{-1}$ [@lindsay01]. The energy of the lowest state (1,1) allowed for H$_3^+$ is 64.1 cm$^{-1}$ above the symmetry-forbidden state (0,0). In Table \[table:rot\_energies\] we show the partial contributions of the rotational states with different $N^+$ to the total DR rate coefficient with $T_{rv}$=300 K for four different energies 0.001, 0.01, 0.1 and 1 eV. Although the relative population of the $N^+=5$ rotational states is about 3% at $T_{rv}$=300 K, there can be accidental cases, at certain energies where other contributions happen to be small, in which these states become important contributors to the observed DR rate. For example, at a collision energy of 0.0997 eV, the states with $N^+=5$ together contribute 14% of the calculated DR rate.
![\[fig:alpha\_SR\_dif\_rot\] This figure presents calculations with different rotational temperatures $T_{rv}$ in Eq. (\[eq:averaging\_final\]). The theoretical DR rate coefficients obtained at higher rotational temperatures, e.g. 300 K-1000 K, agree better with both the CRYRING and TSR experiments.](fig2.eps){width="12cm"}
Energy (eV) Total DR rate coefficient (cm$^{3}$/s) $N^{+}=1$ $N^{+}=2$ $N^{+}=3$ $N^{+}=4$ $N^{+}=5$
------------- ---------------------------------------- ----------- ----------- ----------- ----------- -----------
0.00101 $2.19\times 10^{-7}$ 0.295 0.198 0.364 0.126 0.017
0.0103 $4.79\times 10^{-8}$ 0.388 0.139 0.301 0.136 0.035
0.0997 $6.30\times 10^{-9}$ 0.157 0.238 0.266 0.196 0.143
1.02 $7.10\times 10^{-11}$ 0.324 0.201 0.315 0.111 0.049
: Partial fractional contributions to the DR rate coefficient from individual ionic angular momenta, $N^{+}=1,\cdots ,5$, at four different energies. The DR rate coefficient is calculated for $T_{rv}$=300K. In the table, the DR rate coefficient and partial contributions are calculated without the toroidal averaging. For larger temperatures, higher rotational states can have important contributions at certain electron energies.[]{data-label="table:rot_energies"}
Figure \[fig:alpha\_SR\_dif\_rot\] demonstrates the theoretical DR rate coefficients we obtain for different ionic rotational temperatures $T_{rv}$. The results with higher $T_{rv}$ agree better with the experiments than when the experimentally estimated rotational temperature is adopted. While this better agreement at a higher rotational temperature could be fortuitous, our results suggest that it is worth exploring whether the rotational temperature in both of the recent storage-ring experiments [@mccall03; @mccall04; @kreckel05] might be larger than 40 K or 13 K, respectively. (40 K and 13 K are the estimated rotational temperatures in the two experiments). This conclusion would conflict with another suggestion by Kreckel [*et al.*]{} [@kreckel05], that only the two lowest rotational states are populated in the recent storage ring experiments [@mccall03; @mccall04; @kreckel05] and that, therefore, the rotational temperature $T_{rv}$ is about 13 K to 40 K. In fact, it was demonstrated previously that if the electron energy is high enough, the electron-ion collision might not only cause DR or result in an elastic collision, but can also result in rotational excitation of the ions when they circulate in the storage ring [@kokoouline03b; @faure06]. In all the tests we have carried out for temperatures in the range 13 K to 40 K, the calculated DR rate coefficient has pronounced structure due to Rydberg states present in the raw rate coefficient of Eq. (\[eq:CS\]). The experimental DR rate coefficient has some structure, but it is less pronounced than our averaged and convolved theoretical rate coefficient. Since the resonances due to the Rydberg states are smeared out in the experiments, it suggests the possibility that in the experiment there could be an additional source of broadening. The broadening could arise from a higher rotational temperature, from a broadened electron energy distribution, from additional broadening associated with the toroidal region, or perhaps from something else.
An alternative possibility that cannot be ruled out is that our theoretical treatment might have underestimated the resonance widths. For the resonances that dominate the DR rate, the predissociation partial width is larger than the autoionization partial width, and under those conditions, the calculated DR rate is comparatively insensitive to changes in the predissociation linewidth. Thus, it would be a valuable benchmark for experiments (or other, improved theories) to determine the predissociation partial widths of individual resonances above the ionization threshold, to provide a direct test of our present calculations at the level of spectroscopic accuracy.
One possible source of rotational excitation could be the repeated circulation of the molecular ions through the electron cooler during the ramping of the cathode voltage [@mccall04]. The authors of Ref. [@mccall04] deduced that the rotational temperature is 40 K based on theoretical cross-sections of the rotational excitation of H$_3^+$ given in Ref. [@faure02]. Since then, the cross-sections have been reconsidered and corrected [@faure06]. Correspondingly, we revisit here the arguments of Ref. [@mccall04] based on the new inelastic probabilities determined by Ref. [@faure06]. If we take the (1,1)$\to$(2,1) rotational excitation cross-section to be 710 Å$^2$ from Ref. [@faure06] instead of 210 Å$^2$ from Ref. [@faure02], we obtain a relative population of 7.5% instead of the 2.2% obtained with Ref. [@faure02]. If we take the (1,0)$\to$(3,0) rotational excitation cross-section to be 270 Å$^2$ from Ref. [@faure06] instead of 120 Å$^2$ from Ref. [@faure02], we obtain a relative population of 4.6% instead of the 2.0% obtained with Ref. [@faure02]. The relative population 7.5% of the (2,1) states corresponds to the temperature $T_{rv}$=96 K, and the relative population 4.6% of the (3,0) states gives $T_{rv}$=210 K. Both values of the temperature are significantly higher than the values (40 K and 13 K) quoted in the experimental papers, although not as high as the 1000 K adopted in our present estimation.
The disagreement between the current theory and experimental data around energy $E_{\parallel}=$ 6 meV (see Fig. \[fig:alpha\_SR\]) could be caused by errors in the calculated positions of Rydberg states in that region, which are attached to highly-excited rovibrational levels of the ion. In our calculation, we use energy-independent quantum defects even though they depend weakly on the principal quantum number. In addition, the accuracy of the calculated energies for the highly-excited rovibrational levels may be of the order of 6 meV. Note that there is also a disagreement with experiment in the region of very small energies, below 0.2 meV. This region is well below the net effective energy resolution of the experiment ($\Delta E_{\perp}=$2 meV). The experimental rate coefficient appears to behave there as ${E_{\parallel}}^{-1/2}$, which is the same total energy dependence expected for the raw, unconvolved DR rate coefficient. However, in our theoretical calculation, we of course included the convolution according to the perpendicular energy distribution with the width $\Delta E_{\perp}=$2 meV. This makes the convolved theoretical DR rate coefficient become essentially flat at very low energy $E_{\parallel} \ll 2$ meV even though the raw theoretical rate coefficient also grows as $E^{-1/2}$ at very low energy. This suggests that in the experiment, the distribution of relative electron energies could be even more complicated than discussed above, and in particular, the resolution might be even better at very low energies than the quoted energy resolution. Another possible explanation for the disagreement is that a Rydberg resonance exists in H$_3$ at an energy just above (+0.3 meV) the (1,1) rotational state of the ion. The predissociation linewidth of the resonance must also be of the order of 0.3 meV. We have made a simple test calculation in which we artificially tuned one of the Rydberg resonances to be placed just above the (1,1) ionization threshold. The resulting theoretical DR rate coefficient looks very similar to the experimental DR rate coefficients at energies below 0.5 meV. Therefore, the sharp increase of the experimental DR rate coefficient for energies below 0.3 meV requires additional consideration. Similar discrepancies have been observed between DR theory and experiment at very low parallel energies, in other systems such as LiH$^+$ [@CurikGreene2007], so this could be a systematic issue for theory and experiment to confront that extends beyond the H$_3^+$ system alone.
![\[fig:alpha\_ortho\_para\_new\_exp\] This figure compares the theoretical DR rate coefficient to the high-resolution storage ring experiment of Kreckel [*et al.*]{} [@kreckel05] carried out at TSR. The experimental resolution parameters $\Delta E_{\parallel}$ and $\Delta E_{\perp}$ are 25 $\mu$eV and 0.5 meV, respectively. The theoretical curve shown has been calculated with these parameters and rotational temperature $T_{rv}=$1000 K. The figure also shows the theoretical DR rate coefficients calculated separately for ortho- and para-configurations of H$_3^+$ with the same parameters $\Delta E_{\parallel}$, $\Delta E_{\perp}$, and $T_{rv}$.](fig3.eps){width="12cm"}
Figure \[fig:alpha\_ortho\_para\_new\_exp\] compares the present theoretical DR rate coefficient (dashed grey curve) with the recent TSR storage ring experiment by Kreckel [*et al.*]{} [@kreckel05]. In the TSR experiment, the parallel and perpendicular energy resolution parameters $\Delta E_{\parallel}=$25 $\mu$eV and $\Delta E_{\perp}$=0.5 meV are slightly smaller than in the CRYRING experiment [@mccall03; @mccall04]. Therefore, the theoretical curve shown in the figure has been correspondingly convolved using these parameters. These calculations have also assumed that the target ion rotational temperature $T_{rv}$ is equal to 1000 K. The overall agreement between theory and experiment is good except in the energy region below 0.15 meV already discussed above. As one can see, when the width $\Delta E_{\perp}$ is decreased from 2 meV in Fig. \[fig:alpha\_SR\] to 0.5 meV in Fig. \[fig:alpha\_ortho\_para\_new\_exp\], the theoretical rate coefficient agrees better with the sharp increase of the experimental DR rate coefficient for energies below 0.5 meV.
In the TSR experiment, Kreckel [*et al.*]{} [@kreckel05] observed the dependence of the DR rate coefficient on the nuclear spin of H$_3^+$. They found that para-H$_3^+$ has a larger DR rate coefficient than ortho-H$_3^+$ for low energies ($<0.5$ meV). The previous theory [@kokoouline03b] predicted different DR rate coefficients for para-H$_3^+$ and ortho-H$_3^+$: At low energies the theoretical DR rate coefficient for ortho-H$_3^+$ was larger than for para-H$_3^+$, i.e., opposite to what was observed in the experiment by Kreckel [*et al.*]{} We now revisit this issue in the context of our new and presumably improved theoretical description. Figure \[fig:alpha\_ortho\_para\_new\_exp\] shows the separate ortho-H$_3^+$ and para-H$_3^+$ DR rate coefficients calculated in the present treatment. The para-H$_3^+$ rate coefficient is significantly higher than the rate coefficient obtained for ortho-H$_3^+$. This dramatic difference between the low-energy ortho-H$_3^+$ and para-H$_3^+$ rate coefficients obtained here and those of our previous theoretical study appears to result from slightly different positions of the calculated Rydberg H$_3$ states whose energies lie close to the (1,1) and (1,0) ionic rotational states.
Figure \[fig:alpha\_SR\_dif\_vib\] presents our theoretical DR rate coefficients obtained for a target ion that is initially in an excited vibrational state. These calculations have been carried out for the first $\{01^1\}$ and second $\{10^0\}$ excited ionic vibrational states. (In the braces $\{v_1v_2^{l_2}\}$ we specify the vibrational quantum numbers of the ion using the normal mode notation [@lindsay01].) The energy of the lowest rotational state (0,0) for $\{01^1\}$ is 2521.4 cm$^{-1}$, and the energy of the lowest rotational state (1,1) for $\{10^0\}$ is 3240.7 cm$^{-1}$ [@lindsay01]. The DR rate coefficient for the two excited vibrational levels of the ion is higher than the DR rate coefficient for the ground state, which is reasonable, considering that a similar qualitative increase of the DR rate coefficient was previously observed in both theory and experiment for diatomic ions.
![\[fig:alpha\_SR\_dif\_vib\] Comparison between the theoretical DR rate coefficients for the H$_3^+$ ion prepared in the ground and excited vibrational levels. At low energy, the DR rate coefficient for the vibrationally excited ion is significantly larger than for the ion in the ground state. This result is consistent with trends observed for DR rate coefficients in diatomic ions, where the DR rate coefficient typically increases with vibrational excitation. In the legend, the numbers in parentheses have the same meaning as in Fig. \[fig:alpha\_SR\_dif\_rot\].](fig4.eps){width="12cm"}
Finally, Fig. \[fig:ther\_rate\] compares the theoretical and experimental thermal rate coefficients. The theoretical rate coefficients are obtained directly from the raw theoretical data using Eqs. (\[eq:Thermal Rate\]) and (\[eq:Weights\]). Thus, the toroidal effect and the finite widths $\Delta E_{\perp}$, $\Delta E_{\parallel}$, $kT_{rv}$ are not present in these theoretical results. In fact, our calculation shows that the inclusion of the toroidal correction increases the thermal rate coefficient by about 20%, approximately uniformly over all energies. Thus, the agreement with experiment is good. The theoretical thermal rate coefficient at $300$ K is $5.6\times 10^{-8}$ cm$^3$/s.
![\[fig:ther\_rate\] The present theoretical thermal rate coefficient for dissociative recombination of H$_3^+$ is compared with the experimental rate coefficient deduced from the storage ring experiment of McCall [*et al.*]{} [@mccall03; @mccall04].](fig5.eps){width="12cm"}
Conclusions {#sec:conclusion}
===========
In summary, we would like to emphasize the following results from the present study:
We have calculated the rate coefficient of H$_3^+$ dissociative recombination using an improved description of the ionic vibrational states, and including more target rotational states. The resulting theoretical rate coefficient agrees reasonably well with two recent storage ring experiments [@mccall03; @mccall04; @kreckel05]. The agreement with experiment has been improved over that achieved in the previous theoretical study [@kokoouline03b; @kokoouline03a]; in particular, the present results may point to a resolution of the largest previous discrepancy, in the energy range from 0.04 eV to 0.15 eV. However, the improved agreement with the experimental data in that energy range was only obtained when we assumed that the rotational temperature $T_{rv}$ of H$_3^+$ is significantly larger than 40 K and 13 K, the values given in the experimental studies [@mccall03; @mccall04; @kreckel05]. Since no direct measurement of the temperature has actually been made inside the storage ring in those experiments (except indirectly for zero-energy collisions), there is a possibility that the ions get rotationally excited before the DR rate coefficient measurements are conducted. It was shown previously that the probability of rotational excitation of the ion by electrons is comparable to or larger than the DR probability, at energies where rotational excitation is energetically allowed. Thus, the rotational temperature in the experiment could be larger than 40 K, or the ions might not even be in thermal equilibrium at any temperature. It would therefore be desirable to monitor the rotational temperature during the DR measurement. Another possible way to explore this effect is to artificially increase the temperature of the electron cooler or the width $\Delta E_\perp$ during the DR measurements and to ascertain the temperature at which the DR rate coefficient starts to become sensitive to the temperature.
We have calculated the DR rate coefficients for separate ortho- and para-configurations of H$_3^+$. At energies below 10 meV, the DR rate coefficient for para-H$_3^+$ is an order of magnitude larger than for ortho-H$_3^+$. The experiment also shows that the para-H$_3^+$ DR rate coefficient is larger. However, since the ortho/para ratio in the experiment is not known, it is not clear what the experimental DR rate coefficients for pure para-H$_3^+$ and ortho-H$_3^+$ are. Our previous calculations stressed that the ortho-para ratio of DR rates at very low collision energies should be used with some caution, because the rates at energies below 100 K begin to get very sensitive to the specific resonance positions at the meV level. Those cautionary remarks are still applicable to the present results for the ratio of ortho and para DR rates at low energy. However, if the present order-of-magnitude difference of the low-energy DR rates survives future improvements in theory and experiment, it will be interesting to explore possible implications of this difference for the chemistry of interstellar clouds.
Finally, the calculated DR rate coefficient for ions prepared in excited vibrational states is larger than in the ground vibrational state. Our new theoretical values for the DR rate coefficients of excited vibrational states will hopefully be tested one day in a storage-ring experiment. Currently, experimental DR measurements are made after the ions have been cooled. In principle, it seems possible to carry out this measurement using vibrationally hot ions and to monitor the DR rate coefficient as a function of the vibrational temperature. Such an experiment might give deeper insights into the energetics and target state dependence of the DR process, and it could then be compared with the present theoretical predictions.
We particularly thank A. Wolf, H. Kreckel, and A. Petrignani for extensive and helpful discussions. We have also benefitted from the ongoing use of B. Esry’s 3-body code that solves the adiabatic hyperspherical eigenvalue problem. This work has been supported by the National Science Foundation under Grant No. PHY-0427460 and Grant No. PHY-0427376, and by an allocation of NERSC and NCSA (project \# PHY-040022) supercomputing resources. The work of CHG has also been partially supported by the Miller Institute for Basic Research in Science and by the Alexander von Humboldt Foundation.
[99]{}
T. Oka, Phil. Trans. R. Soc. Lond. A [**1848**]{}, 2847 (2006); T. R. Geballe, T. Oka, Science [**312**]{}, 1610 (2006); T. Oka, Phil. Trans. R. Soc. Lond. A [**358**]{}, 2363 (2000) and references therein.
M. Larsson, Phil. Trans. R. Soc. Lond. A [**358**]{}, 2433 (2000).
V. Kokoouline, C. H. Greene, and B. D. Esry, Nature (London) [**412**]{}, 891 (2001).
V. Kokoouline and C. H. Greene, Phys. Rev. A [**68**]{}, 012703 (2003).
V. Kokoouline and C. H. Greene, Phys. Rev. Lett. [**90**]{}, 133201 (2003).
G. Sundström, J. R. Mowat, H. Danared, S. Datz, L. Broström, A. Filevich, A. Källberg, S. Mannervik, K. G. Rensfelt, P. Sigray, M. af Ugllas, M. Larsson, Science [**263**]{}, 785 (1994).
M. J. Jensen, H. B. Pedersen, C. P. Safvan, K. Seiersen, X. Urbain, L. H. Andersen, Phys. Rev. A [**63**]{}, 052701 (2001).
M. Larsson, H. Danared, J. R. Mowat, P. Sigray, G. Sundström, L. Broström, A. Filevich, A. Källberg, S. Mannervik, K. G. Rensfelt, S. Datz, Phys. Rev. Lett. [**70**]{}, 430 (1993).
B. J. McCall, A. J. Huneycutt, R. J. Saykally, T. R. Geballe, N. Djuric, G. H. Dunn, J. Semaniak, O. Novotny, A. Al-Khalili, A. Ehlerding, F. Hellberg, S. Kalhori, A. Neau, R. Thomas, F. Österdahl, M. Larsson, Nature (London) [**422**]{}, 500 (2003).
B. J. McCall, A. J. Huneycutt, R. J. Saykally, N. Djuric, G. .H. Dunn, J. Semaniak, O. Novotny, A. Al-Khalili, A. Ehlerding, F. Hellberg, S. Kalhori, A. Neau, R. D. Thomas, A. Paal, F. Osterdahl, M. Larsson, Phys. Rev. A [**70**]{}, 052716 (2004).
H. Kreckel, S. Krohn, L. Lammich, M. Lange, J. Levin, M. Scheffel, D. Schwalm, J. Tennyson, Z. Vager, R. Wester, A. Wolf, D. Zajfman, Phys. Rev. A [**66**]{}, 052509 (2002).
H. Kreckel [*et al.*]{} Phys. Rev. Lett. [**95**]{}, 263201 (2005).
A. E. Orel, K. C. Kulander, Phys. Rev. Lett. [**71**]{}, 4315 (1993).
I. Mistrík, R. Reichle, U. Müller, H. Helm, M. Jungen, J. A. Stephens, Phys. Rev. A [**61**]{}, 033410 (2000).
C. M. Lindsay, B. J. McCall, J. Molec. Spectr. [**210**]{}, 60 (2001).
A. Faure, V. Kokoouline, C. H. Greene, J. Tennyson, J. Phys. B [**39**]{}, 4261 (2006).
V. Kokoouline and C. H. Greene, Phys. Rev. A [**72**]{}, 022712 (2005).
A. Staib, W. Domcke, A. L. Sobolewski, Z. Phys. D [**16**]{}, 49 (1990).
A. Staib, W. Domcke, Z. Phys. D [**16**]{}, 275 (1990).
J. A. Stephens, C. H. Greene, Phys. Rev. Lett. [**72**]{}, 1624 (1994).
J. A. Stephens, C. H. Greene, J. Chem. Phys. [**102**]{}, 1579 (1995).
J. H. Macek, J. Phys. B [**1**]{}, 831 (1968).
U. Fano, Rep. Prog. Phys. [**46**]{}, 97 (1983).
C. D. Lin, Phys. Rep. [**257**]{}, 2 (1995).
O. I. Tolstikhin, S. Watanabe, and M. Matsuzawa, J. Phys. B: At. Mol. Opt. Phys. [**29**]{}, L389 (1996).
V. Kokoouline and F. Masnou-Seeuws, Phys. Rev. A [**73**]{}, 012702 (2006).
R. Jaquet, W. Cencek, W. Kutzelnigg, J. Rychlewski, J. Chem. Phys. [**108**]{}, 2837 (1998).
W. Cencek, J. Rychlewski, R. Jaquet, W. Kutzelnigg, J. Chem. Phys. [**108**]{}, 2831 (1998).
M. J. Seaton, Rep. Prog. Phys. [**46**]{}, 167 (1983).
M. Aymar, C. H. Greene, E. Luc-Koenig, Rev. Mod. Phys. [**68**]{}, 1015 (1996).
A. Faure and J. Tennyson, J. Phys. B [**35**]{}, 3945 (2002).
R. Curik and C. H. Greene, Phys. Rev. Lett. [**98**]{}, 173201 (2007).
|
---
author:
- 'Xuhuan Zhou$^1$ Weiliang Xiao$^{2*}$'
title: '**Well-posedness of a porous medium flow with fractional pressure in Sobolev spaces**'
---
[GBK]{}[song]{}
[^1] [^2]
[**Abstract**]{}. We study the well-posedness of a porous medium flow with fractional pressure in Sobolev spaces. Besides, we correct a mistake in our previous paper [@zhou01].
Introduction
============
In this paper we consider the following porous medium type equation $$\label{equation01}
\partial_t u=\nabla\cdot(u\nabla p),\ \ p=(-\Delta)^{-s}u,\ \ 0<s<1,$$ where $x\in \mathbb{R}^n$, $n\geq 2$, $t>0$, and the initial data satisfies $u(x,0)\geq 0$.
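Although the analysis below is carried out on $\mathbb{R}^n$, the structure of the equation is easy to illustrate numerically. The sketch below is a purely illustrative 1D periodic toy with parameters of our own choosing (not a scheme used in the proofs): it evaluates the fractional pressure $p=(-\Delta)^{-s}u$ with the Fourier multiplier $|\xi|^{-2s}$ and performs one explicit Euler step of $\partial_t u=\nabla\cdot(u\nabla p)$.

```python
import numpy as np

n, L, s, dt = 256, 2*np.pi, 0.75, 1e-3
x = np.linspace(0.0, L, n, endpoint=False)
k = 2*np.pi*np.fft.fftfreq(n, d=L/n)             # Fourier wavenumbers
u = 1.0 + 0.5*np.cos(x)                          # nonnegative initial datum

def fractional_pressure(u, k, s):
    """p = (-Delta)^{-s} u via the multiplier |k|^{-2s}; the zero mode is dropped."""
    uh = np.fft.fft(u)
    mult = np.zeros_like(k)
    nz = k != 0
    mult[nz] = np.abs(k[nz])**(-2*s)
    return np.real(np.fft.ifft(mult*uh))

p = fractional_pressure(u, k, s)
grad_p = np.real(np.fft.ifft(1j*k*np.fft.fft(p)))
div_flux = np.real(np.fft.ifft(1j*k*np.fft.fft(u*grad_p)))
u_new = u + dt*div_flux                          # one forward-Euler step of u_t = div(u grad p)
```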
This model is based on Darcy’s law, and the pressure is given by an inverse fractional Laplacian operator. It was first introduced by Caffarelli and Vázquez [@caffarelli02], who proved the existence of a weak solution when $u_0$ is a bounded function with exponential decay at infinity. For $\alpha=\frac{n}{n+2-2s}$, Caffarelli, Soria and Vázquez [@caffarelli01] proved that the bounded nonnegative solutions are $C^\alpha$ continuous in a strip of space-time for $s\neq1/2$; the same conclusion for the index $s=1/2$ was proved by Caffarelli and Vázquez in [@caffarelli03]. The works [@carrillo01; @caffarelli04; @vazquez01] give a detailed description of the large-time asymptotic behaviour of the solutions of (\[equation01\]). [@biler01; @stan01] consider some degenerate cases and show the existence and properties of self-similar solutions. Allen, Caffarelli and Vasseur [@allen01] studied the equation with an additional fractional time derivative and proved the Hölder continuity of its weak solutions.
In this paper, we study the existence and uniqueness of solutions of (\[equation01\]) in Sobolev spaces. The method we use here is novel: unlike the usual way of considering the weak solution in $L^{\infty}$ or constructing approximate solutions of linear transport systems, we solve equation (\[equation01\]) by constructing linear degenerate diffusion transport systems. The well-posedness and properties of the constructed linear degenerate diffusion transport systems are interesting problems in themselves. In this way we show that for $s\in [\frac12,1)$, $\alpha>\frac n2+1$, and nonnegative $u_0\in H^\alpha(\mathbb{R}^n)$, there exists some $T_0>0$ such that a unique solution of (\[equation01\]) exists in $\mathbb{R}^n\times[0,T_0]$. Besides, using the methods and results in this paper, we correct a mistake in our previous paper [@zhou01].
Preliminaries
=============
Define $\rho(x)\in C_c^\infty(\mathbb{R}^n)$ by $$\rho(x)=
\begin{cases}
c_0\exp(-\frac 1{1-|x|^2}), |x|<1,\\
0, |x|\geq 1,
\end{cases}$$ where $c_0$ is selected such that $\int \rho(x)dx=1$. The operator $J_\epsilon$ is defined by $$J_\epsilon u=\rho_\epsilon*u=\epsilon^{-n}\rho(\frac\cdot\epsilon)*u,$$ and it has the following properties:
\(1) $\Lambda^s J_\epsilon u= J_\epsilon \Lambda^s u$, $s\in \mathbb{R}$.\
(2) For all $f\in L^p(\mathbb{R}^n)$ and $g\in L^q(\mathbb{R}^n)$ with $\frac 1p+\frac 1q=1$, $\int (J_\epsilon f)g=\int f(J_\epsilon g)$.\
(3) For all $u\in H^\alpha(\mathbb{R}^n)$, $$\lim_{\epsilon\rightarrow 0}\|J_\epsilon u-u\|_{H^\alpha}=0,\ \ \|J_\epsilon u-u\|_{H^{\alpha-1}}\leq C\epsilon\|u\|_{H^\alpha}.$$ (4) For all $u\in H^\alpha(\mathbb{R}^n)$, $\alpha\in \mathbb{R}$, $k\in \mathbb{Z}_{\geq 0}$, $$\|J_\epsilon u\|_{H^{\alpha+k}}\leq \frac {C_{\alpha k}}{\epsilon^k}\|u\|_{H^\alpha},\ \
\|J_\epsilon D^ku\|_{L^\infty}\leq \frac {C_{k}}{\epsilon^{\frac n2+k}}\|u\|_{H^\alpha}.$$
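To make the role of $J_\epsilon$ concrete, the following short sketch (a 1D numerical stand-in of our own; it is not needed for the proofs) implements the mollification $J_\epsilon u=\rho_\epsilon * u$ on a grid and illustrates property (3): the mollified function converges to $u$ as $\epsilon\to0$.

```python
import numpy as np

def rho(z):
    """1D analogue of the bump function defined above (normalization handled on the grid)."""
    out = np.zeros_like(z)
    inside = np.abs(z) < 1.0
    out[inside] = np.exp(-1.0/(1.0 - z[inside]**2))
    return out

def mollify(u, x, eps):
    """Numerical stand-in for J_eps u = rho_eps * u on a uniform grid."""
    dx = x[1] - x[0]
    m = int(round(eps/dx))
    zi = np.arange(-m, m + 1) * dx                # symmetric stencil covering [-eps, eps]
    kernel = rho(zi/eps) / eps
    kernel /= np.sum(kernel) * dx                 # enforce unit mass (plays the role of c_0)
    return np.convolve(u, kernel, mode="same") * dx

x = np.linspace(-3.0, 3.0, 2001)
u = np.sin(x)
for eps in (0.2, 0.1, 0.05):
    err = np.max(np.abs(mollify(u, x, eps) - u)[300:-300])   # interior error shrinks with eps
    print(eps, err)
```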
The following propositions can be found in [@cordoba01; @ju01].
Suppose that $s>0$ and $1<p<\infty$. If $f,g\in \mathcal {S}$, the Schwartz class, then we have $$\|\Lambda^s(fg)-f\Lambda^s g\|_{L^p}\leq c\|\nabla f\|_{L^{p_1}}\|g\|_{\dot{H}^{s-1,p_2}}+c\|g\|_{L^{p_4}}\|f\|_{\dot{H}^{s,p_3}}$$ and $$\|\Lambda^s(fg)\|_{L^p}\leq c\|f\|_{L^{p_1}}\|g\|_{\dot{H}^{s,p_2}}+c\|g\|_{L^{p_4}}\|f\|_{\dot{H}^{s,p_3}}$$ with $p_2,p_3\in(1,+\infty)$ such that $\frac 1p=\frac 1{p_1}+\frac 1{p_2}=\frac 1{p_3}+\frac 1{p_4}$.
Let $0\leq s\leq 2$, $f\in \mathcal {S}(\mathbb{R}^n)$, we have the pointwise inequality, $$2f(x)\Lambda^s f(x)\geq \Lambda^s f^2(x).$$
Let $\alpha_1$ and $\alpha_2$ be two real numbers such that $\alpha_1<\frac n2$, $\alpha_2<\frac n2$ and $\alpha_1+\alpha_2>0$. Then there exists a constant $C=C_{\alpha_1,\alpha_2}\geq 0$ such that for all $f\in \dot{H}^{\alpha_1}$ and $g\in \dot{H}^{\alpha_2}$, $$\|fg\|_{\dot{H}^{\alpha}}\leq C\|f\|_{\dot{H}^{\alpha_1}}\|g\|_{\dot{H}^{\alpha_2}},$$ where $\alpha=\alpha_1+\alpha_2-\frac n2$.
Main Results
============
Let $s\in [\frac12,1]$, $T>0$, $\alpha>\frac n2+1$, $u_{0}\in H^\alpha(\mathbb{R}^n)$, $v\in C([0,T];H^\alpha(\mathbb{R}^n))$, and $v\geq 0$. Then there is a unique solution $u\in C^1([0,T];H^\alpha(\mathbb{R}^n))$ to the linear initial value problem $$\begin{cases}
\partial_t{u}=\nabla u\cdot \nabla(-\Delta)^{-s}v-v(-\Delta)^{1-s}u,\\
u(x,0)=u_{0}.
\end{cases}$$ Moreover, if the initial data satisfies $u_0\geq 0$, then $u\geq0$ for all $(x,t)\in \mathbb{R}^n\times[0,T]$.
For any $\epsilon>0$, we consider the following linear problem $$\begin{cases}
\partial_t{u^\epsilon}=F_\epsilon(u^\epsilon)=J_\epsilon(\nabla J_\epsilon u^\epsilon\cdot \nabla(-\Delta)^{-s} v)-J_\epsilon (v(-\Delta)^{1-s}J_\epsilon u^\epsilon),\\
u^\epsilon(x,0)=u_{0}.\end{cases}$$ By Proposition 2.1, Proposition 2.2 and $s\geq \frac 12$ we can estimate $$\begin{aligned}
\|F_\epsilon(u_1^\epsilon)-F_\epsilon(u_2^\epsilon)\|_{H^\alpha}
&=\|J_\epsilon(\nabla J_\epsilon (u_1^\epsilon-u_2^\epsilon)\cdot \nabla(-\Delta)^{-s} v)-J_\epsilon (v(-\Delta)^{1-s}J_\epsilon (u_1^\epsilon-u_2^\epsilon))\|_{H^\alpha}\\
&\leq C(\epsilon,\|v\|_{H^\alpha})\|u_1^\epsilon-u_2^\epsilon\|_{H^\alpha}.\end{aligned}$$ By the Picard theorem, for any $\alpha>\frac n2+1$ and $\epsilon>0$ there exists $T_\epsilon>0$ such that problem (3.2) has a unique solution $u^\epsilon\in C^1([0,T_\epsilon);H^\alpha)$. By Proposition 2.1 and Proposition 2.3, $$\begin{aligned}
\frac 12 \frac {d}{dt}\|u^\epsilon\|^2_{L^2}
&=\int \nabla J_\epsilon u^\epsilon \cdot\nabla(-\Delta)^{-s}vJ_\epsilon u^\epsilon-
\int v(-\Delta)^{1-s}J_\epsilon u^\epsilon J_\epsilon u^\epsilon\\
&\leq \frac 12\int\nabla |J_\epsilon u^\epsilon|^2\cdot\nabla(-\Delta)^{-s}v-\frac 12\int v(-\Delta)^{1-s}|J_\epsilon u^\epsilon|^2\\
&\leq \frac 12\int|J_\epsilon u^\epsilon|^2(-\Delta)^{1-s}v-\frac 12\int|J_\epsilon u^\epsilon|^2(-\Delta)^{1-s}v=0.\end{aligned}$$ Moreover, for any $\alpha>0$, $$\begin{aligned}
\frac 12 \frac {d}{dt}\|\Lambda^\alpha u^\epsilon\|^2_{L^2}
&=\int \Lambda^\alpha(\nabla J_\epsilon u^\epsilon \cdot\nabla(-\Delta)^{-s}v)J_\epsilon\Lambda^\alpha u^\epsilon-
\int \Lambda^\alpha(v(-\Delta)^{1-s}J_\epsilon u^\epsilon) \Lambda^\alpha J_\epsilon u^\epsilon\\
&\leq C\|[\Lambda^\alpha,\nabla(-\Delta)^{-s}v]\nabla J_\epsilon u^\epsilon\|_{L^2}\|\Lambda^\alpha u^\epsilon\|_{L^2}
+\int\nabla(-\Delta)^{-s}v\Lambda^\alpha \nabla J_\epsilon u^\epsilon\Lambda^\alpha J_\epsilon u^\epsilon\\
&+ C\|[\Lambda^\alpha,v](-\Delta)^{1-s}J_\epsilon u^\epsilon\|_{L^2}\|\Lambda^\alpha u^\epsilon\|_{L^2}-\int v\Lambda^\alpha (-\Delta)^{1-s}J_\epsilon u^\epsilon \Lambda^\alpha J_\epsilon u^\epsilon\end{aligned}$$ By Proposition 2.2 and Sobolev embedding, $$\begin{aligned}
\|[\Lambda^\alpha,\nabla(-\Delta)^{-s}v]\nabla J_\epsilon u^\epsilon\|_{L^2}&\leq C\|(-\Delta)^{1-s}v\|_{L^\infty}\|\Lambda^{\alpha-1} \nabla J_\epsilon u^\epsilon\|_{L^2}+
\|\nabla(-\Delta)^{-s}v\|_{\dot{H}^\alpha}\|\nabla J_\epsilon u^\epsilon\|_{L^\infty}\\
&\leq C\|v\|_{H^\alpha}\|u^\epsilon\|_{H^\alpha},\end{aligned}$$ $$\begin{aligned}
\|[\Lambda^\alpha,v](-\Delta)^{1-s}J_\epsilon u^\epsilon\|_{L^2}&\leq C\|\nabla v\|_{L^\infty}\|(-\Delta)^{1-s}J_\epsilon u^\epsilon\|_{\dot{H}^{\alpha-1}}+
\|v\|_{\dot{H}^\alpha}\|(-\Delta)^{1-s}J_\epsilon u^\epsilon\|_{L^\infty}\\
&\leq C\|v\|_{H^\alpha}\|u^\epsilon\|_{H^\alpha}.\end{aligned}$$ By Proposition 2.3, $$\begin{aligned}
&\int\nabla(-\Delta)^{-s}v\Lambda^\alpha \nabla J_\epsilon u^\epsilon\Lambda^\alpha J_\epsilon u^\epsilon-\int v\Lambda^\alpha (-\Delta)^{1-s}J_\epsilon u^\epsilon \Lambda^\alpha J_\epsilon u^\epsilon\\
& \leq\frac12\int \nabla(-\Delta)^{-s}v\nabla(\Lambda^\alpha J_\epsilon u^\epsilon)^2-\frac12\int v(-\Delta)^{1-s}(\Lambda^\alpha J_\epsilon u^\epsilon)^2\\
&\leq C\|v\|_{H^\alpha}\|u^\epsilon\|^2_{H^\alpha}.\end{aligned}$$ Combining the above estimates, $$\frac d {dt}\|u^\epsilon(\cdot,t)\|_{H^{\alpha}}\leq C\|v\|_{H^\alpha}\|u^\epsilon\|_{H^\alpha}.$$ By Gronwall’s inequality, $$\|u^\epsilon(\cdot,t)\|_{H^\alpha}\leq \|u_0\|_{H^\alpha}\exp(CT\sup_{0\leq t\leq T}\|v\|_{H^\alpha}).$$ Hence the solution $u^\epsilon$ exists on the whole interval $[0,T]$. Similarly, $$\frac d {dt}\|u^\epsilon(\cdot,t)\|_{H^{\alpha-1}}\leq C\|v\|_{H^\alpha}\|u^\epsilon\|_{H^\alpha}\leq C(\|v\|_{H^\alpha}, \|u_0\|_{H^\alpha}, T).$$ By the Aubin compactness theorem, there is a subsequence of $\{u^{\frac 1n}\}_{n\geq1}$ that converges strongly to a limit $u$ in $C([0,T];H^\alpha)$. Since $\alpha>\frac n2+1$, $H^\alpha \hookrightarrow C^1$, so $u$ is a solution of (3.1).
If $u,\tilde{u}$ are two solutions of problem (3.1), then $w=u-\tilde{u}$ satisfies $$\begin{cases}
\partial_t{w}=\nabla w\cdot \nabla(-\Delta)^{-s}v-v(-\Delta)^{1-s}w,\\
w(x,0)=0.
\end{cases}$$ Similarly, we can get $\frac d{dt}\|w\|_{L^2}\leq 0$ and $\frac d{dt}\|w\|_{\dot{H}^\alpha}\leq \|v\|_{H^\alpha}\|w\|_{H^\alpha}$, i.e., $\frac d{dt}\|w\|_{H^\alpha}\leq \|v\|_{H^\alpha}\|w\|_{H^\alpha}$. By Gronwall’s inequality we can deduce $w(x,t)=0$ for all $(x,t)\in \mathbb{R}^n\times[0,T]$, i.e., $u=\tilde{u}$.
Since $u_0\geq 0$, if there exists a first time $t_0$ at which $u(x_0,t_0)=0$ for some point $x_0$, then $(x_0,t_0)$ corresponds to a minimum point, and therefore $\nabla u(x_0,t_0)=0$ and $$(-\Delta)^{1-s}u(x_0)=c\int \frac {u(x_0)-u(y)}{|x_0-y|^{n+2-2s}}dy\leq0.$$ Hence $ u_t|_{(x_0,t_0)}\geq 0$. So $u(x,t)\geq 0$ for all $(x,t)\in \mathbb{R}^n\times[0,T]$.
Let $n\geq 2$, $s\in [\frac12,1)$, $\alpha>\frac n2+1$, $u_0\in H^\alpha(\mathbb{R}^n)$, and $u_0\geq 0$. Then for some $T_0>0$ there is a unique solution $u\in C^1([0,T_0],H^\alpha(\mathbb{R}^n))$ to the initial value problem $$\begin{cases}
\partial_t{u}=\nabla\cdot(u\nabla(-\Delta)^{-s}u),\\
u(x,0)=u_{0}.
\end{cases}$$ Moreover, $u\geq0$ for all $(x,t)\in \mathbb{R}^n\times[0,T_0]$.
Set $u^1=u_0$. Noticing that $\partial_t{u}=\nabla\cdot(u\nabla(-\Delta)^{-s}u)=\nabla u\cdot \nabla (-\Delta)^{-s}u-u(-\Delta)^{1-s}u$, we construct a sequence $\{u^{n}\}$ defined by solving the following systems $$\label{3.1}
\begin{cases}
\partial_t{u^{n+1}}=\nabla u^{n+1}\cdot \nabla(-\Delta)^{-s}u^{n}-u^n(-\Delta)^{1-s}u^{n+1},\\
u^{n+1}(x,0)=u_{0}.
\end{cases}$$ Firstly by Theorem 3.1, we get $u^2\in C([0,T);H^\alpha)$, $\forall T<\infty$, and it satisfies $u^2\geq 0$ and $$\sup_{0\leq t\leq T}\|u^2\|_{H^\alpha}\leq \|u_0\|_{H^\alpha}\exp(C\|u^1\|_{H^\alpha}T).$$ If $\exp(2C\|u^1\|_{H^\alpha}T_0)\leq2$, for example $T_0=\frac {\ln 2}{2C(1+\|u_0\|_{H^\alpha})}$, we have $\sup_{0\leq t\leq T_0}\|u^2\|_{H^\alpha}\leq 2\|u_0\|_{H^\alpha}$. By the standard induction argument, if $u^{n}\in C([0,T_0];H^\alpha)$, $u^n\geq 0$, is a solution of (\[3.1\]) with $\|u^n\|_{H^\alpha}\leq 2\|u_0\|_{H^\alpha}$, by Theorem 3.1 we can get $u^{n+1}\in C([0,T_0];H^\alpha)$, $u^{n+1}\geq0$, and $$\sup_{0\leq t\leq T_0}\|u^{n+1}\|_{H^\alpha}\leq \|u_0\|_{H^\alpha}\exp(C\|u^{n}\|_{H^\alpha}T_0)\leq 2\|u_0\|_{H^\alpha},$$ $$\frac d{dt}\|u^{n+1}\|_{H^{\alpha-1}}\leq C\|u^n\|_{H^\alpha}\|u^{n+1}\|_{H^\alpha}\leq C\|u_0\|^2_{H^\alpha}.$$ By the Aubin compactness theorem, there is a subsequence of $\{u^{n}\}$ that converges strongly to a limit $u$ in $C([0,T_0];H^\alpha)$. If $u\geq 0,\tilde{u}\geq0$ are two solutions of problem (3.1), then $w=u-\tilde{u}$ satisfies that $$\begin{cases}
\partial_t{w}=\nabla\cdot(w \nabla(-\Delta)^{-s}u)+\nabla \cdot(\tilde{u}\nabla(-\Delta)^{-s}w),\\
w(x,0)=0.
\end{cases}$$ By Proposition 2.2 we can get $$\begin{aligned}
\frac12\frac d{dt}{\|w\|_{L^2}^2}&=\int w\nabla\cdot(w \nabla(-\Delta)^{-s}u)+\int w\nabla \tilde{u}\cdot\nabla(-\Delta)^{-s}w-\int w \tilde{u}(-\Delta)^{1-s}w\triangleq I_1+I_2+I_3.\end{aligned}$$ $I_1,I_3$ can be estimated as: $$\begin{aligned}
&I_1=\int w\nabla w\cdot \nabla(-\Delta)^{-s}u=\frac12\int\nabla w^2\cdot \nabla(-\Delta)^{-s}u=\frac12\int w^2(-\Delta)^{1-s}u\leq C\|u\|_{H^{\alpha}}\|w\|^2_{L^2},\\
&I_3\leq-\frac 12\int \tilde{u}(-\Delta)^{1-s}w^2=\int-\frac 12(-\Delta)^{1-s}\tilde{u}\cdot w^2\leq C\|\tilde{u}\|_{H^{\alpha}}\|w\|^2_{L^2}.\end{aligned}$$ When $s>\frac 12$, $$\begin{aligned}
I_2&\leq C\|w\|_{L^2}\|\nabla \tilde{u}\cdot\nabla(-\Delta)^{-s}w\|_{L^2} \\
&\leq C\|w\|_{L^2}\|\nabla \tilde{u}\|_{\dot{H}^{\frac n2+1-2s}}\|\nabla(-\Delta)^{-s}w\|_{\dot{H}^{2s-1}}
\leq C\|\tilde{u}\|_{H^{\alpha}}\|w\|^2_{L^2}.\end{aligned}$$ When $s=\frac 12$, the above estimates are still valid. Combining the above estimates, $\frac {d}{dt}\|w\|_{L^2}\leq C(\|u\|_{H^{\alpha}}+\|\tilde{u}\|_{H^{\alpha}})\|w\|_{L^2}$. By Gronwall’s inequality we can deduce $w(x,t)=0$ on $[0,T_0]$.
Correction
==========
In our previous paper [@zhou01], in which we established the well-posedness result in Besov spaces for the equation (\[equation01\]), there is a mistake on page 9 in the estimate of the term $J_4'$ in equation (4.5). To correct the mistake, we modify our proof as follows: First we construct the approximate equation: $$\label{equation41}
\begin{cases}
u_t^{(n+1)}=\nabla u^{(n+1)} \cdot \nabla(-\triangle)^{-s}u_{\epsilon}^{(n)}-u_{\epsilon}^{(n)}(-\triangle)^{1-s}u^{(n+1)}; \\
u^{(n+1)}(0)=\sigma_{\epsilon}*u_0,\quad u^{(1)}=\sigma_{\epsilon}*u_0.
\end{cases}$$ By the argument in section 2, we can always find the sequence $u^{(n)}$ which solves the linear systems (\[equation41\]). Assuming $u_0\geq0$, we prove $u^{(n+1)}\geq0$. Inspired by [@caffarelli02], we assume that $x_0$ is a point of minimum of $u^{(n+1)}$ at time $t=t_0$. This indicates that $\nabla u^{(n+1)}(x_0)=0$, and $$(-\triangle)^{1-s} u^{(n+1)}(x_0)=c\int \frac{u^{(n+1)}(x_0)-u^{(n+1)}(y)}{|x_0-y|^{d+2(1-s)}}dy\leq0.$$ Thus we deduce $\frac{\partial}{\partial t}u^{(n+1)}\big|_{t=t_0}\geq0$, and by induction there holds $$u^{(n+1)}\geq 0.$$ In the same way as in [@zhou01], applying $\triangle_j$ to (\[equation41\]), we obtain $$\begin{aligned}
\partial_t \triangle_j u^{(n+1)}
& =\sum[\triangle_j,\partial_i (-\triangle)^{-s} u_{\epsilon}^{(n)}] \partial_i u^{{n+1}}
+\sum \partial_i(-\triangle)^{-s} u_{\epsilon}^{(n)} \triangle_j (\partial_i u^{(n+1)}) \\
&-[\triangle_j,u_{\epsilon}^{(n)}] (-\triangle)^{1-s} u^{(n+1)}
-u_{\epsilon}^{(n)} \triangle_j (-\triangle)^{1-s}u^{(n+1)}.\end{aligned}$$ Multiplying both sides by $\frac{\triangle_j u^{(n+1)}}{|\triangle_j u^{(n+1)}|}$, and integrating over $\mathbb{R}^d$, then denote the corresponding part in the right side by $J_1', J_2', J_3', J_4'$, respectively. We obtain the estimates, $$\begin{aligned}
J_1' \leq C 2^{-j\alpha}\|u^{(n+1)}\|_{B_{1,\infty}^{\alpha}} \|u^{(n)}\|_{B_{1,\infty}^{\alpha+1-2s}} \\
J_2' \leq C 2^{-j\alpha}\|u^{(n+1)}\|_{B_{1,\infty}^{\alpha}} \|u^{(n)}\|_{B_{1,\infty}^{\alpha+1-2s}} \\
J_3' \leq C 2^{-j\alpha}\|u^{(n)}\|_{B_{1,\infty}^{\alpha}} \|u^{(n+1)}\|_{B_{1,\infty}^{\alpha+1-2s}}.\end{aligned}$$ And the estimate for the term $J_4'$ is replaced by $$\begin{aligned}
J_4' =
&-\int u^{(n)}\triangle_j (-\triangle)^{1-s} u^{(n+1)} \frac{\triangle_j u^{(n+1)}}{|\triangle_j u^{(n+1)}|} \\
&\leq -\int u^{(n)} (-\triangle)^{1-s} |\triangle_j u^{(n+1)}| \\
&\leq -\int ((-\triangle)^{1-s} u^{(n)}) |\triangle_j u^{(n+1)}| \\
&\leq 2^{-j\alpha}\|u^{(n)}\|_{B_{1,\infty}^{r+2-2s}}\|u^{(n+1)}\|_{B_{1,\infty}^{\alpha}}.\end{aligned}$$ Here $r>d$ is an arbitrary real number, and in the first inequality we use the following pointwise estimate:
[@miao01] Set $0\leq \alpha \leq 2$, $p\geq1$. Then for any $f\in\mathcal{S}(\mathbb{R}^d)$, there holds $$p|f(x)|^{p-2}f(x)\Lambda^{\alpha}f(x)\geq \Lambda^{\alpha}|f(x)|^p.$$
Taking $r$ such that $r+2-2s\leq\alpha$, e.g. $r=\alpha-1$, we conclude $$\frac{d}{dt}\|u^{(n+1)}\|_{B_{1,\infty}^\alpha}\leq \|u^{(n)}\|_{B_{1,\infty}^\alpha}\|u^{(n+1)}\|_{B_{1,\infty}^\alpha}.$$ The other parts of the proof are unchanged.
Acknowledgement {#acknowledgement .unnumbered}
===============
We are very grateful to Dr. Mitia Duerinckx for pointing out our mistake and for sharing some good views on this problem. This paper is supported by the NNSF of China under grants No. 11601223 and No. 11626213.
[99]{}
M. Allen, L. Caffarelli, A. Vasseur. *Porous medium flow with both a fractional potential pressure and fractional time derivative.* arXiv preprint arXiv:1509.06325, 2015.
P. Biler, C. Imbert, G. Karch. *The nonlocal porous medium equation: Barenblatt profiles and other weak solutions.* Archive for Rational Mechanics and Analysis, 2015, 215(2): 497-529.
L. Caffarelli, F. Soria, J. L. Vázquez. *Regularity of solutions of the fractional porous medium flow.* arXiv preprint arXiv:1201.6048, 2012.
L. Caffarelli, J. L. Vázquez. *Nonlinear porous medium flow with fractional potential pressure.* Archive for rational mechanics and analysis, 2011, 202(2): 537-565.
L. Caffarelli, J. L. Vázquez. *Regularity of solutions of the fractional porous medium flow with exponent $1/2$.* arXiv preprint arXiv:1409.8190, 2014.
L. Caffarelli, J. L. Vázquez. *Asymptotic behaviour of a porous medium equation with fractional diffusion.* arXiv preprint arXiv:1004.1096, 2010.
J. A. Carrillo, Y. Huang, M. C. Santos, J. L. Vázquez. *Exponential convergence towards stationary states for the 1D porous medium equation with fractional pressure.* Journal of Differential Equations, 2015, 258(3): 736-763.
A. Córdoba, D. Córdoba. *A maximum principle applied to quasi-geostrophic equations.* Communications in mathematical physics, 2004, 249(3): 511-528.
N. Ju. *Existence and uniqueness of the solution to the dissipative 2D quasi-geostrophic equations in the Sobolev space.* Communications in Mathematical Physics, 2004, 251(2): 365-376.
C. Miao, J. Wu, Z. Zhang. *Littlewood-Paley Theory and Applications to Fluid Dynamics Equations.* Monographson Modern Pure Mathematics, No. 142 (Science Press, Beijing, 2012).
D. Stan, F. del Teso, J. L. Vázquez. *Transformations of self-similar solutions for porous medium equations of fractional type.* Nonlinear Analysis: Theory, Methods Applications, 2015, 119: 62-73.
J. L. Vázquez. *Nonlinear diffusion with fractional Laplacian operators.* Nonlinear Partial Differential Equations. Springer Berlin Heidelberg, 2012: 271-298.
X. Zhou, W. Xiao, J. Chen, *Fractional porous medium and mean field equations in Besov spaces*, Electron. J. Differential Equations, 2014 (2014), 1-14.
$^1$ Department of Information Technology, Nanjing Forest Police College, 210023 Nanjing, China\
Email: zhouxuhuan@163.com\
$^{2*}$ School of Applied Mathematics, Nanjing University of Finance and Economics, 210023 Nanjing, China\
Email: xwltc123@163.com\
[^1]: Mathematics Subject Classification(2000): 35K55, 35K65, 76S05.
[^2]: Keywords: fractional porous medium equation, degenerate diffusion transport equation, Sobolev spaces.
|
---
address: |
Department of Physics, Tokyo Metropolitan University,\
1-1 Minami-Osawa, Hachioji, Tokyo 192-0397, Japan\
[E-mail: kitazawa@phys.metro-u.ac.jp]{}
author:
- NORIAKI KITAZAWA
title: ' Dynamical Generation of Yukawa Couplings in Intersecting D-brane Models '
---
Introduction
============
The understanding of masses and mixings of quarks and leptons is one of the most important problems in particle physics. It has been expected that string theory can give a solution as well as a unified description of the fundamental interactions including gravity. Recent developments in model building based on intersecting D-branes (see Refs.[@BGKL; @AFIRU; @CSU] for the essential idea) make explicit and concrete discussions possible. In particular, models with low-energy supersymmetry are interesting, because such models are constructed as stable solutions of the string theory (see Refs.[@CSU; @BGO; @CPS; @Honecker; @Larosa-Pradisi; @Cvetic-Papadimitriou; @Kitazawa; @CLL; @Kitazawa2; @Honecker-Ott; @Kokorelis] for explicit model building).
The structure of the Yukawa coupling matrices in intersecting D-brane models is discussed in Refs.[@CLS; @CIM; @CKL; @ALS; @KKMO]. The problem is that we typically obtain Yukawa coupling matrices of the factorized form, $g_{ij} = a_i b_j,$ or diagonal Yukawa coupling matrices, if the origin of the generations is the multiple intersections of D-branes. Neither structure can give realistic quark or lepton masses and mixings. Efforts in model building, for example including many Higgs doublets [@ALS], may solve the problem, but the origin of the generations may have to be reconsidered.
In this article, we propose another scenario for obtaining the generation structure. The generations may originate not from the multiple intersections of D-branes, but from the repeated existence of many confining forces for “preons”. For example, suppose that we have “preons” with some specific charge under the standard model gauge group which is appropriate to form one generation with the confining USp$(2)$ gauge interaction. If such “preons” are realized in an intersecting D-brane model, and they belong to the fundamental representation of USp$(6)$, then we have three composite generations by the decomposition ${\rm USp}(6) \rightarrow {\rm USp}(2)
\times {\rm USp}(2) \times {\rm USp}(2)$ due to the D-brane splitting. The difference of the positions of the D-branes for each USp$(2)$ gauge symmetry may affect the structure of the Yukawa coupling matrices. In the following we sketch a model in which this idea is explicitly realized (see Ref.[@Kitazawa2] for a complete description). We also give evidence that the resultant Yukawa coupling matrices in this scenario can be realistic.
The Model and Yukawa Coupling Matrices
======================================
The configuration of the intersecting D6-branes in type IIA ${\bf T}^2 \times {\bf T}^2 \times {\bf T}^2
/{\bf Z}_2 \times {\bf Z}_2$ orientifold is given in Table \[config\].
D6-brane winding number multiplicity
---------- ---------------------------------------- --------------
D6${}_1$ $ \quad [(1,-1), (1,1), (1,0)] \quad $ $4$
D6${}_2$ $ \quad [(1,1), (1,0), (1,-1)] \quad $ $6+2$
D6${}_3$ $ \quad [(1,0), (1,-1), (1,1)] \quad $ $2+2$
D6${}_4$ $ \quad [(1,0), (0,1), (0,-1)] \quad $ $12$
D6${}_5$ $ \quad [(0,1), (1,0), (0,-1)] \quad $ $8$
D6${}_6$ $ \quad [(0,1), (0,-1), (1,0)] \quad $ $12$
: Configuration of intersecting D6-branes. All three tori are considered to be rectangular (untilted). Three D6-branes, D6${}_4$, D6${}_5$ and D6${}_6$, are on top of some O6-planes. We also have orientifold image D-brane for each D-brane listed in this table.
\[config\]
Ramond-Ramond tadpoles are canceled out in this configuration, and four-dimensional ${\cal N} = 1$ supersymmetry is realized under the condition of $\chi_1 = \chi_2 = \chi_3 = \chi$, where $\chi_i = R^{(i)}_2 / R^{(i)}_1$ and $R^{(i)}_{1,2}$ are the radii of the three tori, $i = 1,2,3$. The D6${}_2$-brane system consists of two parallel D6-branes with multiplicities six and two which are separated in the second torus in a way consistent with the orientifold projections. The D6${}_3$-brane system consists of two parallel D6-branes with multiplicity two which are separated in the first torus in a way consistent with the orientifold projections. The D6${}_1$, D6${}_2$ and D6${}_3$ branes give gauge symmetries of U$(2)_L=$SU$(2)_L \times$U$(1)_L$, U$(3)_c \times$U$(1) =$SU$(3)_c \times$U$(1)_c \times$U$(1)$ and U$(1)_1 \times$U$(1)_2$, respectively. The hypercharge is defined as $${Y \over 2} = {1 \over 2} \left( {{Q_c} \over 3} - Q \right)
+ {1 \over 2} \left( Q_1 - Q_2 \right),$$ where $Q_c$, $Q$, $Q_1$ and $Q_2$ are charges of U$(1)_c$, U$(1)$, U$(1)_1$ and U$(1)_2$, respectively.
We break three USp$(12)_{{\rm D6}_4}$, USp$(8)_{{\rm D6}_5}$ and USp$(12)_{{\rm D6}_6}$ gauge symmetries to the factors of USp$(2)$ gauge symmetries by appropriately configuring D6-branes of D6${}_4$, D6${}_5$ and D6${}_6$ (see Ref.[@Kitazawa2] for concrete configuration). The resultant gauge symmetries are respectively as follows. $$\mbox{USp}(12)_{{\rm D6}_4} \longrightarrow
{\mathop{\otimes}}_{\alpha=1}^6 \mbox{USp}(2)_{{\rm D6}_4,\alpha}
\label{gauge_D6_4}$$ $$\mbox{USp}(8)_{{\rm D6}_5} \longrightarrow
{\mathop{\otimes}}_{a=1}^4 \mbox{USp}(2)_{{\rm D6}_5,a}
\label{gauge_D6_5}$$ $$\mbox{USp}(12)_{{\rm D6}_6} \longrightarrow
{\mathop{\otimes}}_{\alpha=1}^6 \mbox{USp}(2)_{{\rm D6}_6,\alpha}
\label{gauge_D6_6}$$ All of these USp$(2)$ gauge intersections can be naturally stronger than any other unitary gauge interactions. If we choose $\kappa_4 M_s \sim 1$ and $\chi \sim 0.1$, where $\kappa_4=\sqrt{8 \pi G_N}$ and $M_s = 1 / \sqrt{\alpha'}$, the scales of dynamics of all USp$(2)$ gauge interactions are of the order of $M_s$, and the values of the standard model gauge coupling constants are reasonably of the order of $0.01$ at the string scale.
A schematic picture of the configuration of intersecting D6-branes of this model is given in Figure \[intersec\].
![ Schematic picture of the configuration of intersecting D6-branes. This picture schematically shows the intersections of D6-branes in six-dimensional space, and the relative place of each D6-brane has no meaning. The number at the intersection point between D6${}_a$ and D6${}_b$ branes denotes intersection number $I_{ab}$ with $a<b$. []{data-label="intersec"}](intersec.eps){width="10cm"}
“Preons” are localized on the six apices of the hexagonal area. The sector of D6${}_1$-D6${}_2$-D6${}_4$ intersections gives four generations of left-handed quarks and leptons as two-body bound states of “preons”. There are six composite generations due to the six USp$(2)$ gauge interactions of the D6${}_4$-brane, and two anti-generations due to the two intersections between the D6${}_1$-brane and the D6${}_2$-brane. Two of the six generations become massive with the two anti-generations through the Yukawa couplings among two “preons” and one anti-generation field associated with the triangular area of this sector. The same happens in the sector of D6${}_2$-D6${}_3$-D6${}_6$ intersections, which gives four generations of right-handed quarks and leptons. The sector of D6${}_1$-D6${}_3$-D6${}_5$ intersections gives two pairs of massless Higgs doublets.
The hexagonal area in Figure \[intersec\] indicates the existence of six-point higher-dimensional interactions among “preons”. Since all the quarks, leptons and Higgs fields are two-body bound states of “preons”, the higher-dimensional interactions give Yukawa interactions after the confinement of the USp$(2)$ gauge interactions. The values and structure of the Yukawa coupling matrices are determined by the positions of the six intersections of D-branes in the compact space.
In Ref.[@KKMO] the possibility of obtaining non-trivial Yukawa coupling matrices has been shown. Under a certain condition on the sizes of the three tori, the Yukawa coupling matrices of the two heaviest generations of up-type and down-type quarks are given as follows. $$g^u \sim
\left(
\begin{array}{cc}
1 &
0
\cr
\varepsilon_1^2 \varepsilon_3 &
\varepsilon_1 \varepsilon_3
\end{array}
\right) + {\cal O}(\varepsilon_3^2),
\qquad
g^d \sim
\left(
\begin{array}{cc}
\varepsilon_1 &
0
\cr
\varepsilon_1 \varepsilon_3 &
\varepsilon_3
\end{array}
\right) + {\cal O}(\varepsilon_3^2),$$ where $\varepsilon_i = \exp ( - A_i / 2 \pi \alpha' )$ and $A_i$ is $1/8$ of the area of the $i$-th torus. Note that we can obtain Yukawa couplings of order unity, although it seems difficult to have all six intersection positions coincide. Under a certain assumption about the vacuum expectation values of the Higgs doublet fields, these Yukawa coupling matrices give the mass ratios and Kobayashi-Maskawa mixing angles for the two heavy generations. The results are $$\frac{m_{u,3}}{m_{u,4}} \sim \varepsilon_1 \varepsilon_3,
\qquad
\frac{m_{d,3}}{m_{d,4}} \sim \frac{\varepsilon_3}{ \varepsilon_1}, \qquad
V_{34} \sim \varepsilon_3.$$ By taking $\varepsilon_1 \sim 0.5$ and $\varepsilon_3 \sim 0.01$, we obtain values comparable to the observed $m_c/m_t \simeq 0.0038$, $m_s/m_b \simeq 0.025$, and $V_{cb} \simeq 0.04$.
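Explicitly, with these parameter values the relations above give $$\varepsilon_1 \varepsilon_3 \simeq 5 \times 10^{-3}, \qquad \frac{\varepsilon_3}{\varepsilon_1} \simeq 2 \times 10^{-2}, \qquad \varepsilon_3 \simeq 10^{-2},$$ which agree with the quoted ratios at the order-of-magnitude level.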
It would be very interesting to explore more realistic models of the quark-lepton flavor in this framework.
Acknowledgements
================
The author thanks T. Kobayashi, N. Maru and N. Okada for useful discussions.
[99]{} R. Blumenhagen, L. Görlich, B. Körs and D. Lüst, JHEP [**0010**]{}, 006 (2000). G. Aldazabal, S. Franco, L.E. Ibáñez, R. Rabadán and A.M. Uranga, J. Math. Phys. [**42**]{}, 3103 (2001); JHEP [**0102**]{}, 047 (2001). M. Cvetič, G. Shiu and A.M. Uranga, Phys. Rev. Lett. [**87**]{}, 201801 (2001); Nucl. Phys. [**B615**]{}, 3 (2001). R. Blumenhagen, L. Görlich and T. Ott, JHEP [**0301**]{}, 021 (2003). M. Cvetič, I. Papadimitriou and G. Shiu, Nucl. Phys. [**B659**]{}, 193 (2003). G. Honecker, Nucl. Phys. [**B666**]{}, 175 (2003). M. Larosa and G. Pradisi, Nucl. Phys. [**B667**]{}, 261 (2003). M. Cvetič and I. Papadimitriou, Phys. Rev. [**D67**]{}, 126006 (2003). N. Kitazawa, hep-th/0401096, to be published in Nucl. Phys. B. M. Cvetič, T. Li and T. Liu, hep-th/0403061. N. Kitazawa, hep-th/0403278. G. Honecker and T. Ott, hep-th/0404055. C. Kokorelis, hep-th/0406258. M. Cvetič, P. Langacker and G. Shiu, Nucl. Phys. [**B642**]{}, 139 (2002). D. Cremades, L.E. Ibáñez and F. Marchesano, JHEP [**0307**]{}, 038 (2003). N. Chamoun, S. Khalil and E. Lashin, Phys. Rev. [**D69**]{}, 095011 (2004). S.A. Abel, O. Lebedev and J. Santiago, hep-ph/0312157. N. Kitazawa, T. Kobayashi, N. Maru and N. Okada, hep-th/0406115.
|
---
abstract: 'Software-defined networking offers a device-agnostic programmable framework to encode new network functions. Externally centralized control plane intelligence allows programmers to write network applications and to build functional network designs. OpenFlow is a key protocol widely adopted to build programmable networks because of its programmability, flexibility and ability to interconnect heterogeneous network devices. We simulate the functional topology of a multi-node quantum network that uses programmable network principles to manage quantum metadata for protocols such as teleportation, superdense coding, and quantum key distribution. We first show how the OpenFlow protocol can manage the quantum metadata needed to control the quantum channel. We then use numerical simulation to demonstrate robust programmability of a quantum switch via the OpenFlow network controller while executing an application of superdense coding. We describe the software framework implemented to carry out these simulations and we discuss near-term efforts to realize these applications.'
author:
- |
Venkat R. Dasari, Ronald J. Sadlier, Ryan Prout,\
Brian P. Williams, and Travis S. Humble Army Research Laboratory, Aberdeen Proving Ground, Maryland;\
Quantum Computing Institute, Oak Ridge National Laboratory, Oak Ridge, Tennessee
bibliography:
- 'spieqic2016\_manuscript.bib'
title: 'Programmable Multi-Node Quantum Network Design and Simulation'
---
Introduction
============
A quantum network is composed of interacting nodes that express applications in quantum communication, computation and sensing [@VanMeter2014]. Its nodes and links represent the fundamental structure of the network with nodes containing both quantum and classical resources while links support both quantum and classical communication. Each device within the network has a role in the encoding, decoding, transmitting, receiving, repeating and routing of quantum information. Nodes may be further specialized to accommodate other specific tasks. For example, quantum computers may be viewed as quantum networks over short length scales specialized to perform computational tasks [@Britt2015]. Other specific applications that use quantum networks include quantum key distribution for secure communication [@Gisin2002], blind quantum computing for secure interactive computation [@Broadbent2009], and distributed quantum sensing for infrastructure protection [@Humble2013; @Williams2015]. The advantages offered by quantum networks for solving these problems include greater utility, greater extensibility, and greater application resiliency.
Despite the progress made in deploying specialized quantum networks, a desired feature for future quantum networks is the ability to easily encode a device-agnostic unified control plane capable of brokering communications between various quantum network node types. The advantage of programmable networks is that they can be tasked to support new functionality, for example, by converting between communication and sensing applications. In general, programmable networks reduce the overhead for future network deployment and permit rapid adoption of new functional paradigms [@Campbell1999]. Within the conventional networking community, programmability is a key advantage afforded by software-defined networking (SDN) principles [@Hu]. The SDN paradigm supports nodes within a network that can be managed by an external controller via software interfaces, as opposed to pre-configured hardware constructions. One of the primary advantages of this programmable approach is that the nodes within a network can be programmed during deployment instead of resorting to costly hardware redesign or replacement.
Previously, we have shown how the basic principles of programmable networks can be extended to quantum networks, in which quantum metadata specifies attributes needed by the network to accommodate specific uses of the quantum channel [@HumbleSadlier2014; @Dasari2015]. Quantum metadata is the classical information that moves through the network to characterize how applications make use of quantum data. This represents an initial step toward the goal of network programmability. Network programmability is realized by exchanging metadata about the requested functionality between a network controller and the network nodes, especially the switches and routers responsible for managing traffic movement. The movement of metadata is managed by the SDN controller, which implements policies for different metadata instances. Each type of network device may have different metadata requirements, e.g., a router versus a transmitter [@Dasari2015]. The SDN controller is responsible for distributing the metadata needed by each device to achieve a given functionality. This requires programmable devices within the quantum network.
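For concreteness, quantum metadata can be pictured as a small structured record exchanged over the classical network. The sketch below is illustrative only: the field names are assumptions made for this example and do not define a fixed schema.

```python
from dataclasses import dataclass, asdict
import json
import time

@dataclass
class QuantumMetadata:
    """Hypothetical metadata record describing one use of the quantum channel."""
    source: str          # transmitting host, e.g. 'alice'
    destination: str     # receiving host, e.g. 'bob'
    protocol: str        # e.g. 'superdense_coding', 'teleportation', 'qkd'
    encoding: str        # physical encoding, e.g. 'polarization'
    n_frames: int        # number of quantum transmissions being announced
    timestamp: float     # classical-network time reference for event correlation

    def to_packet_payload(self) -> bytes:
        # Serialized form carried inside an ordinary classical packet.
        return json.dumps(asdict(self)).encode()

meta = QuantumMetadata('alice', 'bob', 'superdense_coding',
                       'polarization', n_frames=64, timestamp=time.time())
```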
In this contribution, we provide the first demonstration of a programmable quantum network by realizing the necessary interfaces for programmable quantum nodes and links. We clarify the role of quantum metadata within the programmable network paradigm by showing compatibility with the OpenFlow protocol, a widely used SDN paradigm for network control. We investigate how quantum networks can be built as programmable networks and how the OpenFlow protocol can be extended to account for metadata specific to quantum networks. Because of its programmability and compatibility with management of optical networks, OpenFlow is highly suitable to control the new attributes defining the quantum channels that carry metadata between various quantum devices. We build on this compatibility by designing models of the devices, including hosts and switches, needed to implement a quantum network. We then use numerical simulation alongside explicit classical networking to mimic the behavior of a switched quantum network performing super-dense coding. We implement quantum-classical communication protocols, including metadata exchanges, and we use numerical simulations that include noisy quantum transmissions. We present details of the quantum network simulator as well as the software infrastructure that can be easily extended for use in experimental test beds.
![Design schematic for a three-node quantum network with SDN controller connected to both the classical and quantum switch.[]{data-label="fig:quantumnetworkdesign"}](QuantumNetworkDesign.png "fig:"){width="0.8\columnwidth"}\
Quantum Network Model
=====================
A schematic design of a 3-node quantum network is shown in Fig. \[fig:quantumnetworkdesign\]. The diagram shows three host nodes, each of which has a classical network interface card (CNIC) and a quantum network interface card (QNIC). These network interfaces connect a host to classical and quantum links that terminate at classical and quantum switches, respectively. We distinguish between classical and quantum network traffic as a natural separation of concerns in the function of a quantum network. Whereas the quantum network layer is responsible for transmitting quantum states between devices, the classical network layer is responsible for controlling those quantum transmissions by passing metadata between the nodes.
As shown in Fig. \[fig:quantumnetworkdesign\], the classical and quantum switches are linked directly with each other as well as with the OpenFlow controller. In particular, the quantum switch must have a CNIC to accept classical metadata from the hosts and to accept programming from the network controller. The controller manages interactions between the classical and quantum network layers by defining how and which metadata within the classical network should be forwarded to the quantum switch. For a quantum switch that supports teleportation or entanglement swapping, i.e., a quantum repeater, the controller may also perform functions that program the teleportation protocol. In general, the QNICs and the quantum switch exchange quantum network traffic, which implies the structured transmission of quantum information carriers. Our implementation focuses on transmissions constructed from quantum optical carriers, especially single- and bi-photon states. The classical network traffic represents information encoded and transmitted using conventional network standards, e.g., TCP.
The role of the SDN controller is highlighted in Fig. \[fig:qnode\], which shows the interaction between a quantum network device and the OpenFlow controller used in our implementation. An agent within each device listens for requests from the controller, which come in the form of periodic polling to monitor changes in device state. In addition, device policies may be set so that updates are pushed asynchronously to the controller. Higher-level networking functions, such as packetization, are set by the local device configuration, but the OpenFlow agent provides an interface for the network controller to overwrite these behaviors and reconfigure the device.
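A minimal sketch of this polling pattern, written against the Ryu controller framework adopted later in the paper, is shown below. It is an illustration rather than our actual controller code; it simply requests port statistics from every connected switch at a fixed interval and logs the replies.

```python
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
from ryu.lib import hub

class DeviceStateMonitor(app_manager.RyuApp):
    """Periodically poll connected switches for port statistics."""

    def __init__(self, *args, **kwargs):
        super(DeviceStateMonitor, self).__init__(*args, **kwargs)
        self.datapaths = {}
        self.monitor_thread = hub.spawn(self._monitor)

    @set_ev_cls(ofp_event.EventOFPStateChange, [MAIN_DISPATCHER])
    def _state_change(self, ev):
        # Track switches as they connect to the controller.
        self.datapaths[ev.datapath.id] = ev.datapath

    def _monitor(self):
        while True:
            for dp in self.datapaths.values():
                parser = dp.ofproto_parser
                dp.send_msg(parser.OFPPortStatsRequest(dp, 0, dp.ofproto.OFPP_ANY))
            hub.sleep(10)    # poll every 10 seconds

    @set_ev_cls(ofp_event.EventOFPPortStatsReply, MAIN_DISPATCHER)
    def _stats_reply(self, ev):
        for stat in ev.msg.body:
            self.logger.info('port %s: rx=%d tx=%d',
                             stat.port_no, stat.rx_packets, stat.tx_packets)
```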
Host Nodes: Software Stack and Middleware Support
-------------------------------------------------
We have previously discussed the design of host nodes based on the principles of software-defined communication. In this approach, each host uses a layered design to separate the concerns necessary for the application-specific transmission and reception of quantum data [@HumbleSadlier2014]. We divide this stack into three distinct layers: software, middleware, and hardware. The software layer consists of the host-specific application as well as the software libraries necessary to access the quantum and classical resources on the node. However, the application need not be aware that communication over a quantum channel occurs, and this design emphasizes the ease of adapting existing applications to use quantum communication. Instead, the middleware layer serves to translate the transmission and reception requests from the software layer into commands that can be parsed by the quantum-enabled hardware. In this regard, the middleware decouples the software from the hardware and removes the requirement that the user and the node manufacturer share a common goal. The price of this design is that the middleware layer must therefore capture knowledge about the range of uses acceptable to the hardware *and* conform to an API that is convenient for the software developer.
The hardware layer exposes access to the low-level physical resources contained within the node. It is responsible for the translation of commands from the middleware layer into hardware-specific commands for equipment such as phase modulators and translation stages. An analog to the hardware layer in the traditional computing environment is the device driver, i.e., software that takes commands from the operating system and controls the hardware for a device. For our quantum networking example, this corresponds to the CNIC and QNIC that each connect to respective network links. The hardware interface depends on the physical encoding scheme and transmission media as well as on environmental controls and actuators. We have previously tested the layered host design against experimental hardware based on biphoton pair sources and detectors [@Pooser2012; @Williams2015]. However, we currently focus on the use of numerical simulators as a diagnostic replacement for experimental hardware. The benefit of this approach is twofold. First, we can emulate the functionality of any desired hardware by using numerical simulators of adjustable fidelity. Second, we can use the numerical simulators to efficiently test the behavior of the quantum network as the number of nodes and switches increases. This is both more efficient with respect to labor and materials and more effective in prototyping next-generation network applications.
Our numerical model of the hardware layer at each node consists of virtual components that emulate the functionality and properties of the expected hardware components. Because we can construct these models to offer the same functionality as the actual hardware layer, we are able to explore the interaction between all three layers prior to experimental testing. In this regard, the middleware is agnostic to whether it interacts with the actual hardware or with its numerical model.
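One way to picture this decoupling is an abstract hardware-layer interface that can be backed either by device drivers or by the numerical model. The class and method names below are hypothetical illustrations, not the actual middleware API.

```python
from abc import ABC, abstractmethod

class QNICBackend(ABC):
    """Hardware-layer interface seen by the middleware (hypothetical names)."""

    @abstractmethod
    def transmit(self, operations):
        """Carry out a requested quantum transmission."""

    @abstractmethod
    def receive(self):
        """Return detection results for a received transmission."""

class SimulatedQNIC(QNICBackend):
    """Backend that forwards requests to a central simulation server."""

    def __init__(self, dispatcher_client):
        self.dispatcher = dispatcher_client

    def transmit(self, operations):
        return self.dispatcher.submit(operations)      # numerical-model path

    def receive(self):
        return self.dispatcher.poll_detections()       # simulated detector output
```

A hardware-backed class would implement the same two methods by driving the physical QNIC, leaving the middleware and software layers unchanged.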
Switches: Software Stack and Hardware Model
-------------------------------------------
The fundamental basis for programmable networks is extracting the control layer of the network away from the data plane that defines how each node operates. The resulting centralized control of the network allows for more powerful management of the networking nodes. In our network design, we use the Ryu OpenFlow controller to define the control layer and to interface with the classical and quantum switches. As shown in Fig. \[fig:qnode\], the communication between the controller and switches is handled by the OpenFlow protocol, which enables the controller to poll for attributes of each switch. Each switch maintains a series of flow tables that are used to inspect and act upon incoming packets. For the classical switch, these packets inform address resolution and service decisions, while the quantum switch uses the flow tables to parse messages forwarded by the classical network. This approach centralizes the control of both switches within the Ryu controller and permits the controller to coherently manage how the classical and quantum networks interact.
![The interactions between the OpenFlow controller and the classical and quantum switches and the host nodes that support quantum networking.[]{data-label="fig:qnode"}](q-node.png "fig:"){width="0.8\columnwidth"}\
As data flows in the network, the switches use their flow tables to make forwarding decisions as packets arrive. The flow tables are built upon flow entries consisting of match fields, counters, and a set of instructions to apply to matching packets. If a matching entry is found, the instructions associated with the specific flow entry are executed at the switch. These instructions include packet forwarding and packet modification. Thus, each flow entry has an action associated with it. The action can forward the packet to a given port, encapsulate and forward the packet to the controller, forward the packet to the next flow table in the pipeline, or drop the packet.
Quantum metadata, inserted into classical packets, defines the quantum transmission paths. Using software defined networking within our network infrastructure allows us to insert the quantum metadata within network flows observable by the switch. The switch can then alert the controller when quantum metadata is detected and the controller can forward this information to a quantum switch, making it aware of what paths to open for quantum transmission.
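For illustration, a minimal Ryu application sketching this pattern — a low-priority flow entry that sends unmatched packets to the controller, and a packet-in handler that looks for quantum metadata — might look as follows. The metadata port `QMETA_PORT` and the `notify_quantum_switch` relay are hypothetical placeholders, not part of OpenFlow or of our actual implementation.

```python
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, MAIN_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_4
from ryu.lib.packet import packet, tcp

QMETA_PORT = 5555   # hypothetical TCP port carrying quantum metadata

class MetadataAwareSwitch(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_4.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def install_table_miss(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        # Flow entry: empty match (matches everything), lowest priority,
        # action = send the packet to the controller for inspection.
        match = parser.OFPMatch()
        actions = [parser.OFPActionOutput(ofp.OFPP_CONTROLLER, ofp.OFPCML_NO_BUFFER)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=0,
                                      match=match, instructions=inst))

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def packet_in(self, ev):
        pkt = packet.Packet(ev.msg.data)
        seg = pkt.get_protocol(tcp.tcp)
        if seg is not None and seg.dst_port == QMETA_PORT:
            # The last parsed element holds any unparsed payload bytes (if present);
            # here it would contain the serialized quantum metadata.
            payload = pkt.protocols[-1]
            self.notify_quantum_switch(payload)   # hypothetical relay to the quantum switch

    def notify_quantum_switch(self, payload):
        self.logger.info('quantum metadata observed: %r', payload)
```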
In order to communicate with the controller, the quantum switch is designed using three layers that are again based on a separation of concerns. A classical networking layer within the quantum switch is responsible for interacting with the SDN controller and with other classical switches or nodes located on the network. We represent the classical networking layer using an OpenFlow-compatible switch called OpenvSwitch. OpenvSwitch communicates with the OpenFlow controller and also interfaces with the switch middleware layer. The purpose of the middleware is to translate classical actions produced by flow entries within OpenvSwitch into hardware configurations of the quantum switch. The hardware layer represents the quantum optical hardware necessary to route quantum states encoded in single- and bi-photon transmissions. Because of the no-cloning principle, the switch cannot measure and resend the incoming state. Instead, the coherence within the quantum state must be preserved while passing through the switch. A direct method of preserving coherence in optical states is to employ linear optical elements for the switch's physical layer. We impose that requirement on our quantum switches; however, other approaches based on non-linear optical phenomena are also possible.
Links: Quantum Optical Characteristics
--------------------------------------
The requirements of a quantum optical network link differ from those of a modern classical link. The current classical protocol of multiplexed distribution and replication utilizing repeaters does not support the transmission of photonic quantum states: a state's fragile coherences and inability to be cloned challenge its inclusion in these networks. The quantum optical hardware must be designed with coherence preservation in mind. For the polarization-entangled quantum states that typify many state-of-the-art experiments, the links within the quantum networking layer must preserve the polarization coherence within a potentially separated photon pair. Our design of the quantum physical layer therefore requires low-loss network components that preserve this type of polarization coherence. However, in general, there are tradeoffs in the choice of hardware components matching this design. For instance, polarization-maintaining fiber is a convenient component for preserving coherence, but it is substantially more lossy than standard optical fiber, which in turn scrambles the state coherence. Successfully demonstrating a multi-node quantum network necessarily requires some trade-offs between data and error rates. Of course, this is a challenge that classical networks share, and some methods applicable in the classical space are also applicable in the quantum space. For instance, both quantum and classical error correction are useful for mitigating the decoherence and loss found in conventional components [@Sadlier2016].
We design our quantum networking links using polarization-maintaining optical fiber, coherence-preserving switches, and precise temporal alignment. Optical fiber links between any two network components are assumed to consist of custom birefringence-compensated optical fibers. This supports the coherence of the state by symmetrizing the travel times of photons with different polarizations while maintaining these polarizations. Novel optical switches are utilized that preserve the quantum state up to a known rotation $X, Y,$ or $Z$, are low loss and non-interferometric, and have the potential to switch photon paths in less than 100 ns. One mechanism enabling superdense coding is Bell state analysis utilizing Hong-Ou-Mandel interference [@hong1987measurement], which requires picosecond timing precision between the members of a photon pair during a Bell state measurement. While transport over large distances challenges this requirement, owing to the temperature and infrastructure dependence of path lengths, simple calibrations are sufficient for local networks in controlled environments.
Quantum Network Simulation
==========================
We use simultaneous simulation of both the classical and quantum network traffic to predict the behavior of a multi-node quantum network. Each network node executes the layered stack described previously using a model for the hardware layer. In this section, we discuss how the simulation environment is constructed using a combination of software tools.
Classical Network Simulation
----------------------------
Mininet is a powerful network simulator that is especially useful for experimenting with software-defined networking applications. Simply put, one passes it a file specifying the topology to be simulated, and it establishes the environment using Linux network namespaces. Our topology consists of two virtual switches, three host nodes and an OpenFlow controller, as shown in Fig. \[fig:quantumnetworkdesign\]. The communication between the switches and the controller is significant, since this is how the OpenFlow protocol extracts the network management plane from the switches and centralizes it at the controller. The controller manages both switches and provides their switching logic by accessing their flow tables.
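A minimal Mininet script realizing such a topology might look like the following sketch. It stands in for the quantum switch with a second OpenvSwitch instance (Mininet only models the classical layer), uses illustrative host names, and assumes a Ryu controller listening locally on the OpenFlow port.

```python
from mininet.net import Mininet
from mininet.node import RemoteController, OVSSwitch
from mininet.topo import Topo

class QuantumSDNTopo(Topo):
    """Three hosts, a classical switch, and a stand-in for the quantum switch's CNIC."""
    def build(self):
        s_classical = self.addSwitch('s1', protocols='OpenFlow14')
        s_quantum = self.addSwitch('s2', protocols='OpenFlow14')  # classical side of the quantum switch
        for name in ('alice', 'bob', 'ed'):       # 'ed' plays the dispatcher host
            self.addLink(self.addHost(name), s_classical)
        self.addLink(s_classical, s_quantum)      # direct classical-quantum switch link

if __name__ == '__main__':
    net = Mininet(topo=QuantumSDNTopo(), switch=OVSSwitch,
                  controller=lambda name: RemoteController(name, ip='127.0.0.1', port=6633))
    net.start()
    net.pingAll()     # basic connectivity check across the classical layer
    net.stop()
```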
Our switches are built using the OpenvSwitch software and are managed through the OpenFlow protocol by the Ryu controller. Both the controller and the switches run version 1.4 of the OpenFlow protocol, which we require in order to access OpenFlow's optical attributes. The program at the switch is a simple learning-switch design that learns MAC addresses and forwards traffic to the proper ports. This behavior is enabled by directing the switch to communicate with the controller, which supplies the switch with its forwarding logic. As the network state changes, the switch asks the controller for instructions and updates its flow tables.
In order to confirm that quantum metadata is transferred between nodes, we use a classical TCP handshake to set up the connection, as shown in Fig. \[fig:handshaking\]. Following acknowledgments between hosts, we transfer a quantum message and close the connection. This protocol treats the existence of the quantum message as metadata communicated by a classical packet and subsequently recognized by the switch flow tables. The switch then sends these specific packets to the controller, and the controller relays the proper information to the quantum switch.
![Interactions between hosts and switch on both quantum and classical channels.[]{data-label="fig:handshaking"}](Handshaking.png "fig:"){width="1.0\columnwidth"}\
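The framing exchange can be sketched with ordinary sockets; the JSON field names and the acknowledgment string below are illustrative assumptions, not the exact wire format used in our implementation.

```python
import json
import socket

def announce_quantum_message(peer_host, peer_port, n_frames):
    """Classical side of the framing: open a TCP connection, announce that a
    quantum message of n_frames transmissions will follow, and wait for an ACK."""
    header = {'type': 'QUANTUM_MSG', 'frames': n_frames}
    with socket.create_connection((peer_host, peer_port)) as sock:
        sock.sendall(json.dumps(header).encode() + b'\n')
        return sock.recv(64) == b'ACK\n'   # receiver acknowledges before the quantum transfer
```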
Quantum Network Simulation
--------------------------
Simulation of the quantum network traffic requires a centralized manager that can monitor the global state of the quantum network physical layer. This is the primary difficulty for quantum simulation, since it requires active monitoring of the classical network traffic to inform how each node and link should behave during the simulation. For example, when a bipartite entangled state is distributed across two nodes, its joint state will depend on the local actions taken by each host application. Therefore, the classically-defined instructions issued at each node must be caught by the global simulation manager. Our approach to state simulation is based on centralizing the storage and simulation of the quantum systems that exist on the quantum network. We monitor only the actions that nodes submit to the hardware layer, which has been virtualized in our design for the express purpose of simulation. We use a server-client architecture that permits nodes to submit operations to, and receive data from, a simulation server. We refer to the simulation server as the *dispatcher*, since it controls and dispatches all interactions with the quantum network layer.
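A sketch of the client side of this server-client architecture is shown below. The JSON request format, port number, and method names are assumptions made for illustration, not the actual dispatcher protocol.

```python
import json
import socket

class DispatcherClient:
    """Middleware-side handle to the central quantum simulation server (the 'dispatcher')."""

    def __init__(self, host='ed', port=9000):   # 'ed' is the dispatcher host in our simulations
        self.addr = (host, port)

    def _request(self, payload):
        with socket.create_connection(self.addr) as sock:
            sock.sendall(json.dumps(payload).encode() + b'\n')
            return json.loads(sock.makefile().readline())

    def submit(self, operations):
        # e.g. operations = [('X', 'q0'), ('CNOT', 'q0', 'q1')]
        return self._request({'action': 'apply', 'ops': operations})

    def poll_detections(self):
        # Returns simulated detector outcomes, including measurement results.
        return self._request({'action': 'poll'})
```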
Because we use simulation, the client middleware is able to connect directly to the dispatcher. This places the responsibility of emulating the physical link in the dispatcher, which agrees with our design goal of centralizing the computation of quantum effects. Our middleware implementation uses a simulation-specific module that provides a means to connect to the server and supplies additional feedback. An example of this feedback is the state of the client's quantum detector, which defines the classical control signals that the client receives following a detection event.
The dispatcher has a holistic view of the quantum network. This is beneficial for modeling channel noise that is time dependent and requires accounting for previous transmissions on a particular quantum channel (memory effects). Centralization also eliminates the need for artificial synchronization between clients. The dispatcher stores the entire state of the quantum network together with its history to a predefined depth, so determining the history of messages on a channel reduces to a simple lookup.
The numerical simulation of quantum states is separated from the modeling of noise. The dispatcher only determines the operations for the quantum simulation, while a separate numerical simulator is used to evaluate these models. Currently, we use the numerical simulation manager *Sabot* to launch quantum circuit simulations and collect their results. Our quantum circuit simulator is based on the [CHP]{} simulator and its stabilizer formalism [@Aaronson2004]. Although CHP is not capable of universal quantum simulation, it does provide an efficient method for simulating the stabilizer circuits that frequently arise in quantum communication protocols. In addition, the Sabot simulation manager provides methods for easily extending these circuit models to other forms of simulation. Sabot also provides several convenience features to improve access and increase functionality, including a dispatcher-facing server, a quantum circuit description interpreter, and a pseudorandom number generator. The Sabot server provides an [API]{} by which the dispatcher can access internal methods for the creation and modification of quantum states. This supports interactive simulations that can be updated based on changes in network state.
An interpreter function translates a circuit description, written in a QASM-based quantum assembly language, into a sequential list of quantum operators. The dispatcher generates the circuit description for simulation after evaluating the appropriate noise model, given the relevant parameters about the classical and quantum state of the network. This design decouples the specification of a noise model from the numerical simulation and thereby increases the flexibility of the system.
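The interpreter step can be illustrated by a minimal parser that turns QASM-like text into an operator list. The grammar here is a simplified stand-in for the actual Sabot interpreter, not its real syntax.

```python
def parse_circuit(text):
    """Translate a QASM-like circuit description into a sequential list of operators."""
    ops = []
    for line in text.splitlines():
        line = line.split('#')[0].strip()   # drop comments and blank lines
        if not line:
            continue
        gate, *targets = line.split()
        ops.append((gate.upper(), targets))
    return ops

# Stabilizer gates (supported by a CHP-style simulator) preparing and measuring a Bell pair.
bell_prep = parse_circuit("""
h q0
cnot q0 q1
measure q0
measure q1
""")
# -> [('H', ['q0']), ('CNOT', ['q0', 'q1']), ('MEASURE', ['q0']), ('MEASURE', ['q1'])]
```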
Simulating Quantum Network Behavior
===================================
We use the quantum network model and the network simulator described above to characterize the behavior of a superdense coding application. The purpose of this demonstration is to present the integrated function of the individual components. Recall that in superdense coding, Alice and Bob initially share a pair of entangled qubits. Alice has a 2-bit message $b_1b_0$ that she wishes to communicate to Bob. She encodes her message by applying one of the four unitary operators $\mathcal{O} \in \{I, X, Z, XZ\}$, each of which maps the shared state to a distinct element of the Bell basis. Alice and Bob establish the specific encoding scheme before transmission; an example of such an encoding is presented in Table \[tab:encoding\]. Alice applies the operator $\mathcal{O}(b_1 b_0)$ to her qubit and transmits it to Bob. Bob receives both entangled qubits and performs a joint measurement that discriminates between the four Bell states. Based upon the outcome of the measurement, Bob decodes the original message from Alice.
$b_1b_0$ $\mathcal{O}$ ${\left| \psi_{A,B} \right>}$
---------- --------------- -------------------------------
00 I ${\left| \Phi^+ \right>}$
01 X ${\left| \Psi^+ \right>}$
10 Z ${\left| \Phi^- \right>}$
11 XZ ${\left| \Psi^- \right>}$
: An encoding between classical bits and Bell states.[]{data-label="tab:encoding"}
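As a self-contained check of the encoding in Table \[tab:encoding\], the following state-vector sketch (independent of the CHP/Sabot stack used in the actual simulation) encodes and decodes all four messages over a noiseless channel.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.diag([1.0, -1.0])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])  # control = first qubit
PHI_PLUS = np.array([1, 0, 0, 1]) / np.sqrt(2)                              # shared Bell state

OPS = {'00': I2, '01': X, '10': Z, '11': X @ Z}   # Table tab:encoding (order only affects a phase)

def encode(bits):
    """Alice applies O(b1 b0) to her qubit (the first tensor factor)."""
    return np.kron(OPS[bits], I2) @ PHI_PLUS

def decode(state):
    """Bob's Bell-state measurement: CNOT, Hadamard on the first qubit, then readout."""
    state = np.kron(H, I2) @ (CNOT @ state)
    return format(int(np.argmax(np.abs(state) ** 2)), '02b')   # deterministic without noise

for message in ('00', '01', '10', '11'):
    assert decode(encode(message)) == message
```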
In simulating this protocol, we first construct a network topology in which Alice and Bob are connected via a single classical switch. We assume Alice is transmitting a string of many bit pairs. She and Bob then use the classical handshaking protocol described above to frame the transmission of quantum states. We use a version of the handshaking protocol in which Alice and Bob repeat the protocol for each transmitted Bell state. However, Alice and Bob need not know that their communications are utilizing superdense coding or the characteristics of the quantum channel.
During the network simulation, Mininet tracks the state of the classical packets and simulates their arrival at the hosts and switches. We use Wireshark, a popular and free network packet analyzer, to monitor the handshaking traffic between Alice and Bob on the classical network within Mininet. Example output from Wireshark is shown in Fig. \[fig:wireshark\], in which the messaging between Alice and Bob is apparent. Also shown are network messages between the host nodes and *ED*, the dispatcher host; this traffic represents the quantum network simulation traffic.
We developed an analogous diagnostic tool for monitoring the state of the quantum network simulated within the dispatcher. An example of this textual output is shown in Fig. \[fig:dspy\]. This diagnostic tool captures events from the quantum simulator dispatcher and transmits them to a remote node for processing within Sabot. The message data presents the simulation metadata as well as the quantum state or measurement results communicated between the network hosts and the dispatcher. For example, note that timestamps are used to account for time correlations between events on the quantum network and the classical network. This is particularly useful for network hosts attempting non-local protocols based on the arrival times of transmissions.
![Events internal to the quantum simulator can be viewed with timestamps to correlate with events on the classical network.[]{data-label="fig:dspy"}](dspy.png "fig:"){width="1.0\columnwidth"}\
After transmission is complete, Bob has the original message Alice intended to communicate. If the simulated quantum channel were to include noise in the form of decoherence or losses, then Bob would receive Alice's original message with some errors. These errors could be mitigated with quantum error correction or, in the case of superdense coding, with classical forward error correction [@Sadlier2016]. The latter is beneficial because it does not require larger quantum states, which are challenging to achieve with current technology and computationally intensive to simulate. Classical forward error correction could take place in the data-processing component of the software stack, allowing the error correction scheme to be changed without affecting the requirements of the components associated with quantum transmission and reception.
While this demonstration has emphasized the interactions between two hosts, Alice and Bob, the classical and quantum network simulators can be easily extended to add additional nodes to the network. It is our hope that these tools can be used to observe the emergent behavior of the network and thereby allow the engineering of robust communication protocols that mitigate the effects of, for example, transmission collisions, heavy network traffic, and noisy communication channels.
Conclusions
===========
We have presented the first design of a programmable quantum network using the principles of software-defined networking. Our approach has realized implementations for hosts and switches that support quantum communication including network interfaces for both the classical and quantum networking layers. We have introduced the OpenFlow controller to manage the interaction between the classical and quantum networking layers by setting the different behaviors of these heterogeneous switches. In addition, we have developed numerical simulators capable of modeling the complete quantum network including its classical metadata and quantum state. We have leveraged the mininet simulation environment for realizing the classical network communication while we have implemented a numerical simulator to track the state of the quantum network layer. Finally, we have applied this simulator to the case of superdense coding using entangled photons and we have discussed how it can be used to test novel ideas for error corrected detection.
Using programmable network principles to manage the behavior of the quantum network offers additional opportunities for managing its heterogeneous nodes. While our design has focused on the development of nodes supporting quantum optical hardware, these designs can be easily modified to accommodate other hardware layers. This is because details regarding the hardware and its physics are abstracted by the separation of concerns and especially the middleware layer, which exposes logical features that can be realized. For quantum optical hardware, transmitting and receiving are natural logical behaviors that may be exposed to the controller. By contrast, trapped ion hardware offers capabilities for memory and logical processing within the node. The controller can manage these devices differently according to the logical features as opposed to the hardware features. This permits more robust design of the network especially with respect to modification or upgrades of the node hardware.
This work was supported by a research collaboration with Oak Ridge National Laboratory and US Army Research Laboratory. VRD expresses his gratitude to the OSD Applied Research for the Advancement of S&T Priorities (ARAP) Program for its partial financial support of this work.
|